Click here to download this solution as a pipeline template. Go to the ADF browser UI and click new > “Pipeline from template”. Choose “Import template” and point to this zip file. That will add the data flow and dependencies to your factory.
A common requirement I see from customers processing text files in data lakes with Azure Data Factory is to read files where a variable number of lines precede both the column headers and the data that needs to be processed. ADF already lets you switch headers on or off and specify a parameterized skip line count. However, in many cases, incoming files have a variable number of superfluous lines that need to be skipped.
In ADF, between pipeline activities and data flows, there are several ways to handle this scenario. In this post, I'm going to demonstrate one such technique. I'll first scan the source file with a data flow to find the beginning of the actual data. Then I'll use the Lookup activity in the pipeline to grab that line count from the first data flow and send it as a parameter into the final data flow activity, which actually processes the data in the file.
To demonstrate this technique, I typed up a quick sample file as a CSV that mimics the properties of many of these files that we see every day. Many times, you’ll have descriptions, copyright information, etc. before the row headers and row data:
this is
my file from my vendor
they do strange things! :)
id,name,qty,sales
1,mark,2,4.00,10.00
2,sally,1000,25.00
3,sam,1,250.00
4,judy,22,24.00
5,bill,31,1.50
“Skip lines” works well as an option in the ADF Copy activity and in data flows when you know the number of rows to skip. But what do you do when the number of rows to ignore varies from day to day? You can always go into your pipeline and manually update a parameter that sets the skip line count. You can also read each line without any delimiters or headers, treat every row as text, and then parse the row strings. Or you can combine those approaches into an automated solution.
Data Flow 1: Where is my data?
This data flow will use the technique I mentioned above of reading each line as a string with no delimiters and no headers. You then must choose a row marker to look for to indicate to ADF where your data starts. When I look at my sample data set, I see that finding an integer value for ID in the very first ordinal position will work. So, let’s create a data flow to do this.
- Source: My source is the CSV file that I pasted above sitting in my blob store. I set the dataset for no headers, no schema, and no delimiter. I just want to read in every row as a full string value so I can parse it and find characters in it.
- AddRowNums: This is a Surrogate Key transformation. I use it to create a sequential, unique row number for every row in the file; the new key column is named rownum and is referenced in the formulas below.
- DefineDataLines: This is a Derived Column transformation that holds the important formula you'll use to detect the beginning of the row data. In this case, I'm looking for an integer in the very first position; toInteger() returns NULL when that first character isn't numeric, so data rows get their row number and every other row gets 0:
iif(!isNull(toInteger(left(toString(byPosition(1)),1))),toInteger(rownum),toInteger(0))
- FindFirstLine: This is an Aggregate transformation that finds the lowest rownum matching the criteria from the DefineDataLines step. We use no grouping, since we need to examine the entire dataset for the lowest matching row number, and this formula returns the minimum matching rownum:
minIf(datalines!=0,datalines)
- The final transformation is a Sink that simply writes that single value to a file. I set the sink to output to a single file named “skiplines.csv”, and we'll use that file in the pipeline Lookup activity to pass the value on to the next data flow.
Lookup Activity: What line should I start on?
The Lookup activity will use the dataset that was the output sink for the data flow above. We point it to the skiplines.csv file, which contains a single number: the line number where the data in the source file begins.
Data Flow Activity 2: Process the file with dynamic Skip Lines
Use a pipeline expression formula to pass the lookup value into the next data flow activity:
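For example, assuming the Lookup activity above is named LookupSkipLines and the skiplines.csv dataset has no header (so the single column comes back as Prop_0), a sketch of the expression on the data flow activity's parameter could be:

@sub(int(activity('LookupSkipLines').output.firstRow.Prop_0), 2)

The subtraction of 2 assumes the second data flow reads the file with “First row as header” enabled: the detected value is the row number of the first data row, so skipping two fewer lines leaves the header row in place. If you read the file without headers, subtract 1 instead.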
Now you can set a dynamic Skip Lines value in your source and process that same file above using that same CSV dataset with that parameterized value.
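As a minimal sketch of the data flow side (the parameter name here is my own choice): add an integer parameter, for example skiplines, to the second data flow and set the Source option “Skip line count” to the parameter expression:

$skiplines

The pipeline expression above supplies the value at runtime, so the same CSV dataset works no matter how many junk lines precede the data on a given day.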
Hi, when the source format is Excel, we do not see an option to skip line count. We have a situation where the source data is in Excel and we need to skip the header. Any inputs on this would be of great help.
Thank you,
Satish
Skip lines isn't available today in the ADF connector for Excel. Would you mind filing this as a feature request on Azure User Voice? TY!
https://feedback.azure.com/forums/270578-data-factory
Hi, thank you for showing the solution for dynamically skipping the header lines. What if my source data has footers and I would like to dynamically skip them, is that possible?
If your footer is the last row in your file, you can add a Surrogate Key transformation after your source. Name the new key column something like sk_id. Then use an Aggregate transformation to find maxrow = max(sk_id) and add a Filter transformation with the formula sk_id < maxrow. That will skip the last row.
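As a rough sketch (transformation and column names here are just examples), the expressions involved would be:

max(sk_id)

sk_id < maxrow

The first goes in the Aggregate (with no group-by) as the formula for a column named maxrow; the second is the Filter condition. Note that the Aggregate emits a single-row stream, so you'll need to bring maxrow back alongside the original rows, for example with a join on a constant key or a cached lookup, before the Filter can reference it.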
Hi Mark, thank you for the solution. My colleague suggested using REJECT_VALUE, which also solved the same problem: lines that don't match the schema are rejected, avoiding the need to differentiate headers/footers.
Hi, would it be possible to dynamically skip footers as well?
Hi there,
Thanks for providing the article, it’s very helpful. What if I’m looking for characters or a phrase rather than an integer? Also, I could not get past step D, using the expression provided, minIf(datalines!=0,datalines). Thanks for your help in advance.
You can use any matching criteria you wish. You are essentially looking to return a boolean to determine which lines are the lines with data.
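For example, to look for a phrase instead of an integer, you could swap the null check in the DefineDataLines formula for a string test. Here 'id,name' is just a placeholder for whatever marker text identifies your starting line:

iif(startsWith(toString(byPosition(1)), 'id,name'), toInteger(rownum), toInteger(0))

Functions such as instr() or regexMatch() work the same way if the marker can appear anywhere in the line rather than at the start.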