Azure Data Factory (ADF) lets you load the contents of a delimited text file into a SQL Server table, all without writing a single line of code.

To get started, create an ADF instance via the Azure Portal if you do not already have one. Then select Author & Monitor from the ADF resource:

[Screenshot: the Author & Monitor tile on the ADF resource]
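
If you'd rather script these steps than click through them, everything the wizard does can also be expressed with the azure-mgmt-datafactory Python SDK (`pip install azure-mgmt-datafactory azure-identity`). The snippets that follow are minimal sketches, not the wizard's exact output, and every name in them (subscription, resource group, factory, linked services, datasets, pipeline) is a hypothetical stand-in you'd replace with your own. First, creating the factory itself:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import Factory

# Hypothetical names: substitute your own subscription, resource group, and region.
SUB_ID = "<subscription-id>"
RG, DF = "adf-demo-rg", "adf-demo-factory"

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), SUB_ID)

# Create (or update) the Data Factory instance that the portal step above creates.
factory = adf_client.factories.create_or_update(RG, DF, Factory(location="eastus"))
print(factory.provisioning_state)
```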

Next, select Copy Data:

[Screenshot: the Copy Data tile]

Give the pipeline a descriptive name and an optional description, then choose whether to run it once or on a schedule:

[Screenshot: the pipeline Properties page with name, description, and run cadence]
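
In SDK terms, running once is a manual pipeline run (shown at the end of this post), while running on a schedule is a schedule trigger. A sketch of the latter, reusing `adf_client`, `RG`, and `DF` from above and assuming the pipeline will be named `CopyCsvToSql`:

```python
from datetime import datetime, timezone
from azure.mgmt.datafactory.models import (
    TriggerResource, ScheduleTrigger, ScheduleTriggerRecurrence,
    TriggerPipelineReference, PipelineReference,
)

# Hypothetical daily cadence; adjust frequency and interval to taste.
recurrence = ScheduleTriggerRecurrence(
    frequency="Day", interval=1,
    start_time=datetime(2019, 3, 3, tzinfo=timezone.utc), time_zone="UTC")

trigger = TriggerResource(properties=ScheduleTrigger(
    recurrence=recurrence,
    pipelines=[TriggerPipelineReference(
        pipeline_reference=PipelineReference(
            type="PipelineReference", reference_name="CopyCsvToSql"))],
))

# Create this after the pipeline itself exists; triggers start out stopped.
adf_client.triggers.create_or_update(RG, DF, "DailyCopyTrigger", trigger)
# On recent SDK versions the trigger is then started with:
# adf_client.triggers.begin_start(RG, DF, "DailyCopyTrigger").result()
```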

Click Next and create a connection to your delimited text file via Create new connection. Azure offers dozens of data sources for use in a pipeline, so choose the one that contains your file. Then pick the input file, either by entering its path or by browsing the source and selecting it interactively. If you only have one file, leave Copy file recursively unchecked, and keep Binary copy disabled so ADF parses the file rather than copying it byte for byte. ADF also offers Filter by last modified, which restricts processing to files edited within a specific time frame.

[Screenshot: the source data store and file selection page]
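
In the SDK, the connection you just created is a linked service. A minimal sketch, assuming the file lives in Azure Blob Storage (swap in the linked service type for whichever source you actually chose):

```python
from azure.mgmt.datafactory.models import (
    LinkedServiceResource, AzureBlobStorageLinkedService, SecureString,
)

# Hypothetical connection string: replace <account>/<key> with real values.
blob_ls = LinkedServiceResource(properties=AzureBlobStorageLinkedService(
    connection_string=SecureString(
        value="DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>")))
adf_client.linked_services.create_or_update(RG, DF, "SourceBlobStorage", blob_ls)
```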

After clicking Next you're presented with the File format settings. These are prepopulated based on the data in the file you selected, but correct any settings ADF gets wrong. You can also preview the data and view the schema. Check the schema datatypes carefully, as I found ADF tends to classify integers as strings.

[Screenshot: the File format settings page with data preview]
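
The format settings the wizard infers correspond to properties on a delimited text dataset. A sketch with hypothetical container and file names, spelling out the delimiter and header row explicitly rather than trusting inference:

```python
from azure.mgmt.datafactory.models import (
    DatasetResource, DelimitedTextDataset, AzureBlobStorageLocation,
    LinkedServiceReference,
)

source_ds = DatasetResource(properties=DelimitedTextDataset(
    linked_service_name=LinkedServiceReference(
        type="LinkedServiceReference", reference_name="SourceBlobStorage"),
    # Hypothetical location: an "input" container holding data.csv.
    location=AzureBlobStorageLocation(container="input", file_name="data.csv"),
    column_delimiter=",",
    first_row_as_header=True,
))
adf_client.datasets.create_or_update(RG, DF, "SourceCsv", source_ds)
```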

Next, for the Destination data store step, create a connection to your SQL Server instance. If the connection does not already exist, click Create new connection, search for SQL Server, and provide your credentials. You will then be prompted to select a table from your SQL Server database.
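
The destination side follows the same pattern: a linked service for the SQL Server instance and a dataset pointing at the target table. The connection string and table name below are hypothetical, and note that an on-premises SQL Server also needs a self-hosted integration runtime, which this sketch omits:

```python
from azure.mgmt.datafactory.models import (
    LinkedServiceResource, SqlServerLinkedService, SecureString,
    DatasetResource, SqlServerTableDataset, LinkedServiceReference,
)

sql_ls = LinkedServiceResource(properties=SqlServerLinkedService(
    connection_string=SecureString(
        value="Server=myserver;Database=mydb;User ID=myuser;Password=<password>;")))
adf_client.linked_services.create_or_update(RG, DF, "DestinationSqlServer", sql_ls)

sink_ds = DatasetResource(properties=SqlServerTableDataset(
    linked_service_name=LinkedServiceReference(
        type="LinkedServiceReference", reference_name="DestinationSqlServer"),
    table_name="dbo.MyTable",  # hypothetical target table
))
adf_client.datasets.create_or_update(RG, DF, "SinkSqlTable", sink_ds)
```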

Ensure your table mapping is correct and that the types match. You can ignore the Azure SQL Database sink properties unless you know how to use them and have a reason to:

[Screenshot: the table mapping page]
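
If you want the mapping pinned down in code rather than left to the wizard, recent SDK versions let the copy activity take a tabular translator with explicit source-to-sink mappings. The column names here are hypothetical:

```python
from azure.mgmt.datafactory.models import TabularTranslator

# Explicit column mappings (hypothetical columns Id and Name); used in the
# copy activity assembled at the end of this post.
translator = TabularTranslator(mappings=[
    {"source": {"name": "Id"},   "sink": {"name": "Id"}},
    {"source": {"name": "Name"}, "sink": {"name": "Name"}},
])
```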

Click Next. The only setting I tend to change is Fault tolerance: sometimes the delimiter also appears inside the data, which breaks one or more rows, or a value fails to cast to the destination type. For these cases I choose Skip and log incompatible rows, though this also requires you to provide a storage account to land the log files in.
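
Skip and log incompatible rows corresponds to two copy activity settings: a flag to skip bad rows and redirect settings naming the storage account (via its linked service) and a folder for the logs. A sketch, reusing the hypothetical SourceBlobStorage linked service from earlier:

```python
from azure.mgmt.datafactory.models import (
    RedirectIncompatibleRowSettings, LinkedServiceReference,
)

# Skip rows that fail to parse or cast, and land the log files in Blob Storage.
fault_tolerance = dict(
    enable_skip_incompatible_row=True,
    redirect_incompatible_row_settings=RedirectIncompatibleRowSettings(
        linked_service_name=LinkedServiceReference(
            type="LinkedServiceReference", reference_name="SourceBlobStorage"),
        path="copy-error-logs"),  # hypothetical folder for the logs
)
```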

Click Next, and you're presented with a Summary page. Review your pipeline's configuration to ensure it's correct, then click Next and your pipeline will deploy.
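 
In SDK terms, deployment is a single create_or_update call that ties the earlier pieces together, plus a create_run to execute the pipeline once, roughly:

```python
from azure.mgmt.datafactory.models import (
    PipelineResource, CopyActivity, DatasetReference,
    DelimitedTextSource, SqlServerSink,
)

# Assemble the copy activity from the datasets, translator, and fault-tolerance
# settings sketched above, then deploy the pipeline.
copy = CopyActivity(
    name="CopyCsvToSqlActivity",
    inputs=[DatasetReference(type="DatasetReference", reference_name="SourceCsv")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="SinkSqlTable")],
    source=DelimitedTextSource(),
    sink=SqlServerSink(),
    translator=translator,
    **fault_tolerance,
)
adf_client.pipelines.create_or_update(
    RG, DF, "CopyCsvToSql", PipelineResource(activities=[copy]))

# Run the pipeline once and check on its status.
run = adf_client.pipelines.create_run(RG, DF, "CopyCsvToSql")
print(adf_client.pipeline_runs.get(RG, DF, run.run_id).status)
```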