Azure Data Factory Region Detection

23 Dec

If you are building an Azure Data Factory (ADF) pipeline and receive an error containing one of these messages when the pipeline is executed:

Failed to detect region of linked service

or

Failed to detect the region for

… then you may be running into a situation where the Data Movement Service (DMS) feature of ADF is either unable to detect the region of your data store, or there is no DMS deployed in that region.

The Data Movement Service of ADF is the Azure-managed cloud service (PaaS) that performs scale-out data movement elastically. Azure handles all of the plumbing for moving Big Data through your data pipelines. You can see the locations available for Data Movement on the Azure Regions page (https://azure.microsoft.com/en-us/regions/services/). In the screenshot of that page below, you’ll see that the Data Factory service has several sub-services. The Data Factory service stores your factory account metadata, while Data Movement, Activity Dispatch and the SSIS IR are separate managed services that have their own region deployments. It is the Data Movement service in those regions that performs the heavy lifting of moving your data, and that is where you should focus to get past the error.

[Screenshot: Azure regions page showing the Data Factory sub-services available in each region]

In the original V1 ADF service, there is a property on the Copy Activity definition that allows you to explicitly tell ADF which region to route the data movement through (executionLocation). This is taken directly from the online Azure documentation for ADF (https://docs.microsoft.com/en-us/azure/data-factory/v1/data-factory-data-movement-activities#global):

For example, to copy between Azure stores in Korea, you can specify "executionLocation": "Japan East" to route through Japan region (see sample JSON as reference).
Note: If the region of the destination data store is not in the preceding list or is undetectable, Copy Activity fails by default instead of routing through an alternative region, unless executionLocation is specified. The supported region list will be expanded over time.
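To make that concrete, here is a minimal sketch of a V1 Copy activity with executionLocation set. The activity and dataset names are placeholders I made up for illustration, so check the sample JSON in the doc above for the exact shape:

{
    "name": "CopyBlobKoreaToKorea",
    "type": "Copy",
    "inputs": [ { "name": "InputBlobDataset" } ],
    "outputs": [ { "name": "OutputBlobDataset" } ],
    "typeProperties": {
        "source": { "type": "BlobSource" },
        "sink": { "type": "BlobSink" },
        "executionLocation": "Japan East"
    }
}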

In the new V2 ADF service, the Integration Runtime (IR) feature is the primary way to move data in the cloud or on-prem. So, you may have to explicitly tell ADF about the location of your data store by creating an IR in your data store’s region and then referencing that IR in your Linked Service definition using the new “connectVia” property. If you do not specify an explicit IR reference, ADF will use a default IR, which may not be able to resolve the location.

First, create an Integration Runtime in the region where your data store is located:

https://docs.microsoft.com/en-us/azure/data-factory/create-azure-integration-runtime#create-azure-ir

Then add the connectVia property to your Linked Service definition with a reference to that new IR:

https://docs.microsoft.com/en-us/azure/data-factory/concepts-datasets-linked-services#linked-service-json
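As a rough sketch (the IR name, storage account and key below are placeholders, not values from the docs), a V2 Linked Service pinned to a specific Azure IR looks something like this:

{
    "name": "AzureStorageLinkedService",
    "properties": {
        "type": "AzureStorage",
        "typeProperties": {
            "connectionString": {
                "type": "SecureString",
                "value": "DefaultEndpointsProtocol=https;AccountName=<myStorageAccount>;AccountKey=<myAccountKey>"
            }
        },
        "connectVia": {
            "referenceName": "MyKoreaCentralAzureIR",
            "type": "IntegrationRuntimeReference"
        }
    }
}

The connectVia block is the important part: it routes the copy through the IR you just created in your data store’s region instead of leaving ADF to resolve a default IR.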


Microsoft Announces Preview of Azure Data Factory (ADF) V2

25 Sep

This week at the Microsoft Ignite conference in Orlando, we announced the public preview of new features in ADF and released them bundled as ADF V2. You can go to the Azure portal now, select Data Factory and choose V1 or V2 from the Version picker in the new Factory blade.

The overview of the new service is on our docs page here, and I’ve compiled a list of new scenarios, use cases and features enabled in ADF V2 here on SQL Pro mag.

Very exciting new features are enabled, like flexible scheduling, control flow, on-demand Spark execution and SSIS package execution in the cloud.

To get started, I recommend the Quickstarts for PowerShell and these tutorials:

Provision SSIS in the Cloud on ADF

ADF Incremental Data Load

Transform Data Inside Virtual Network

Monitoring Azure SQL Data Warehouse from SSMS

13 Apr

For those of us who lived through the Microsoft lifecycle of bringing to market a scale-out MPP data warehouse offering, from DatAllegro to Parallel Data Warehouse (PDW) to the Analytics Platform System, the technology behind that offering has evolved tremendously, and we’re all happy to see it elevated to new heights in the cloud as Azure SQL Data Warehouse.

But it’s important to understand that lineage from the perspective of some of the naming of the DMVs that you’ll use in SSMS. And, yes, those of us who had to evolve from the early PDW v1 days using Nexus, because SSMS didn’t work with PDW, are very excited about the new T-SQL and SSMS compatibility.

Just make sure that when you are using SSMS to monitor your Azure SQL Data Warehouse, you recognize that many of the DMVs from SQL Server land do not work in PDW or ADW, and that the equivalent DMVs will have PDW in their names. Those PDW-named DMVs will work with ADW.

For example, here is the documentation on monitoring your ADW workloads and grabbing the SQL command text, similar to using sys.dm_exec_requests in SQL Server, but with the PDW DMVs: https://docs.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-manage-monitor.

-- Find the waits associated with a running query
-- Replace the request_id value with the QID returned by sys.dm_pdw_exec_requests.

SELECT waits.session_id,
      waits.request_id,  
      requests.command,
      requests.status,
      requests.start_time,  
      waits.type,
      waits.state,
      waits.object_type,
      waits.object_name
FROM   sys.dm_pdw_waits waits
   JOIN  sys.dm_pdw_exec_requests requests
   ON waits.request_id=requests.request_id
WHERE waits.request_id = 'QID####'
ORDER BY waits.object_name, waits.object_type, waits.state;

So now you know why you have to look for PDW in the names of the ADW DMVs!

Advanced Analytics Going Mainstream in 2017

8 Jan

Well, I finally feel comfortable saying it: Advanced Analytics is going mainstream this year. Even the term “Advanced Analytics” is a recent amalgam of long-time analytical disciplines that includes predictive analytics, descriptive analytics, data mining, machine learning and more. And now we refer to these techniques at Big Data scale as “Deep Learning”.

Here is Microsoft’s Joseph Sirosh talking about “Deep Learning in Every Software“. I would probably state it instead as “Advanced Analytics everywhere”. Not all scenarios require Big Data scale techniques, but most every application can gain an advantage by including cognitive capabilities as a natural aspect of the end-user experience.

Having spent years in the wilderness working on projects that included predictive analytics, data mining and machine learning, I wondered what recent technology and business drivers have led us to the current inflection point, in which advanced analytics is finally breaking through into mainstream applications.

At Pentaho, we struggled for years to break through with machine learning projects using the popular Weka ML platform and retrofitted Weka to Big Data platforms Hadoop & Spark. At Microsoft, we had data mining built into the mainstream SQL Server database product for a long time, but it was a niche capability.

To me, these 5 factors have most impacted the recent turn, which is also the next-step result of US businesses focusing a lot of time, attention and resources on hiring, training and mentoring the Data Science role in their organizations.

  1. Open source projects, tools and libraries eliminated the high-cost requirements of advanced analytics tools and made pre-built, trained and tested models available to non-math PhDs.
  2. Open source languages and frameworks: R, Python, CRAN, TensorFlow, Cognitive Toolkit. I’ll also throw in my affinity for Weka because it was a trailblazer in the open source ML market and is still taught in many academic classes.
  3. Data quality and governance maturity: Decades of collecting data for business intelligence by the business and IT communities has raised awareness of the need to curate data, meaning that there are more quality data marts available for advanced analytical projects that can mine and optimize those marts.
  4. Artificial intelligence in everyday life: The more comfortable and familiar people become with AI, the more they will come to expect it in business applications as well. Everyday exposure to AI, e.g. recommendation engines (Amazon, Netflix) and face recognition (Facebook), builds that familiarity.
  5. Cloud Computing: Without needing to put resources into acquiring, standing up and maintaining complex analytics architectures on-prem, I can just build machine learning experiments, explore data sets and operationalize the results as web services from my browser or client tool using Azure Machine Learning, R Studio or Spark/R notebooks on an on-demand Hadoop cluster.

 

 

Azure Big Data Analytics in the Cloud

3 Nov

Hi All … I’m BAAAACK! Now that I’ve settled into my new role in the Microsoft Azure field team as a Data Solution Architect, I’m getting back out on the speaker circuit. Here are my next 2 speaking engagements:

Tampa SQL BI Users Group

Global Big Data Conference Dec 9 Tampa

In each of those, I will be presenting Azure Big Data Analytics in the Cloud with Azure Data Platform overviews, demos and presentations.

I am uploading some of the demo content to my GitHub here, and the presentations to SlideShare here.

 

Pentaho Native Analytics on MongoDB

15 Dec

Pentaho has a very rich and complete business analytics product suite. There is ETL, data integration, data orchestration, operational reporting, dashboards, BI developer tools, predictive analytics, OLAP analytics … and I’m probably missing a few others!

So when you are looking to implement a business intelligence and analytics solution on a Big Data platform using a modern technology outside the traditional RDBMS sphere, like the MongoDB NoSQL database, you have the advantage of a complete BI product set that works out-of-the-box to take advantage of that platform’s strengths.

What I mean by that is that with Pentaho, there are different tools to optimize each aspect of a complete BI solution. For instance, Pentaho Data Integration (PDI) has direct hooks into MongoDB, using its API directly to manipulate and move data as MongoDB documents. The Pentaho Report Designer (PRD) also uses that same direct-access mechanism to provide reporting for your business users directly on MongoDB.

With the Pentaho 5.1 BA Suite release, interactive OLAP analytics using Pentaho Analyzer was introduced. This is Pentaho’s unique capability to translate business users’ slice-and-dice MDX queries directly into MongoDB aggregation pipeline queries.

With these capabilities, Pentaho does not require extracting and staging MongoDB data from documents in collections into traditional RDBMS tables. Instead, analytics is turned into native MongoDB query syntax on the fly, without any SQL requirements. And as I stated above, this allows you to fully leverage and optimize your Big Data source, in this case MongoDB. Pentaho pushes queries down into your MongoDB cluster, so you do not have to establish an entirely separate analytics platform with its own hardware and scalability requirements.
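To give a rough sense of what that looks like on the wire (this is my own illustrative sketch against a hypothetical sales collection, not output captured from Analyzer), a slice-and-dice request like “revenue by product line for the Northeast territory in 2014” would be translated into an aggregation pipeline along these lines:

{
    "aggregate": "sales",
    "pipeline": [
        { "$match": { "year": 2014, "territory": "Northeast" } },
        { "$group": {
            "_id": "$productLine",
            "totalRevenue": { "$sum": "$revenue" },
            "orderCount": { "$sum": 1 }
        } },
        { "$sort": { "totalRevenue": -1 } }
    ]
}

The $match stage carries the slicer filters, $group performs the rollup that an OLAP engine would otherwise do over a star schema, and all of it executes inside the MongoDB cluster itself.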

Big Data Analytics Presentation for SQL Saturday Orlando

28 Sep

Thanks to all for joining my session on Big Data Analytics at Seminole State College in Sanford, FL for the SQL Saturday event. I’ve uploaded my slides to SlideShare here. Thanks again!  Best, Mark
