How to Load Data into an Azure SQL Database

Introduction

BULK INSERT is a very popular method for importing data from a local file into on-premises SQL Server, and the same feature can also import data from a file stored in an Azure storage account. Because Azure is growing each day and SQL Server keeps improving its connectivity features, BULK INSERT can be a powerful import tool: it is fast, and it combines easily with T-SQL code. In this article, I will demonstrate with examples how developers can use the traditional BULK INSERT statement to load data from a local CSV file or an Azure blob into an Azure SQL Database.

Creating an Azure SQL DB

Before you start, make sure SQL Server 2016 or 2017 is already installed. In addition to SQL Server, you also need an active Azure account; log in to the Azure portal with a valid subscription. See Figure 1.

Figure 1: New SQL DB creation

Create a new SQL Database and wait until it’s deployed successfully (see Figure 2).

Figure 2: New SQL DB created successfully

To demonstrate with an example, I have created a database named MyBulkInsertDB and copied the connection string for connecting to it from Visual Studio and SQL Server Management Studio. This is shown in Figure 3.

Figure 3: Noting the server name and connection string
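
For reference, an ADO.NET connection string for an Azure SQL Database typically looks like the following; the server name, database, and login shown here are placeholders:

Server=tcp:mybulkinsertserver.database.windows.net,1433;Initial Catalog=MyBulkInsertDB;User ID=myadmin;Password=<your password>;Encrypt=True;Connection Timeout=30;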

After successful deployment, add an admin account and password, as shown in Figure 4. Also, add the IP address of your local machine in the portal's firewall setup to provide access.

Figure 4: Creating admin user and password
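
The firewall rule can also be managed with T-SQL once you have a working connection (for example, from a machine that is already allowed through). Here is a minimal sketch using the documented sp_set_database_firewall_rule procedure; the rule name and IP address are placeholders:

-- Creates a database-level firewall rule for a single client IP.
-- Run inside the user database; replace the address with your machine's public IP.
EXECUTE sp_set_database_firewall_rule
   @name = N'LocalDevMachine',
   @start_ip_address = '203.0.113.42',
   @end_ip_address = '203.0.113.42';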

Next, I used SQL Server Management Studio to manage my new Azure SQL server. I need to connect to the server before I can start writing Transact-SQL scripts. Choose Database Engine as the server type, enter the fully qualified name of the Azure SQL server, pick SQL Server Authentication as the security option, and supply the login and password of the server administrator. All this is shown in Figure 5.

Figure 5: Connecting to Azure DB locally
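
Once connected, a quick query confirms that you are on the right server and database:

SELECT @@SERVERNAME AS SERVER_NAME,
       DB_NAME()    AS DATABASE_NAME;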

I have to create tables to hold the data. First, the BULKUPLOADTABLE table will contain the final (active) data. The following T-SQL CREATE statement builds the table (see Figure 6).

Figure 6: Creating a New Table in Azure DB

CREATE TABLE [BULKUPLOADTABLE]
(
   ST_ID INT IDENTITY(1, 1) NOT NULL,
   ST_FIELD1 VARCHAR(32) NOT NULL,
   ST_FIELD2 DATE NOT NULL,
   ST_FIELD3 REAL NULL,
   ST_FIELD4 REAL NULL,
   ST_FIELD5 REAL NULL,
   ST_FIELD6 REAL NULL,
   ST_FIELD7 REAL NULL,
   ST_FIELD8 BIGINT NULL,

   CONSTRAINT [PK_BULKUPLOADTABLE_ID] PRIMARY KEY CLUSTERED
      (ST_ID ASC)
);
GO

Second, the [STAGE].[BULKUPLOADTABLE] table is a copy of the active table without the surrogate key column named ST_ID. It will serve as a staging table for intermediate data.

-- The [STAGE] schema must exist before the table can be created in it.
CREATE SCHEMA [STAGE];
GO

CREATE TABLE [STAGE].[BULKUPLOADTABLE]
(
   ST_FIELD1 VARCHAR(32) NOT NULL,
   ST_FIELD2 DATE NOT NULL,
   ST_FIELD3 REAL NULL,
   ST_FIELD4 REAL NULL,
   ST_FIELD5 REAL NULL,
   ST_FIELD6 REAL NULL,
   ST_FIELD7 REAL NULL,
   ST_FIELD8 BIGINT NULL
);
GO

The OPENROWSET T-SQL command can read both text and binary files from Azure Blob Storage. The next T-SQL snippet reads the sample text file. I have provided the path to the blob storage file, the name of the data source, and the large object (LOB) option. There are three valid options:

  • SINGLE_BLOB: Read in the file as a binary object
  • SINGLE_CLOB: Read in the file as a character object
  • SINGLE_NCLOB: Read in the file as a Unicode object

SELECT *
FROM
OPENROWSET
(
   BULK 'MYDATA/SAMPLEDATA.TXT',
   DATA_SOURCE = 'MYDATASOURCE',
   SINGLE_CLOB
) AS RAW_DATA;
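
Both this query and the BULK INSERT statements below assume that an external data source named MYDATASOURCE already exists in the database. Its setup is not shown in the figures, so here is a minimal sketch using the documented BLOB_STORAGE syntax; the master key password, credential name, SAS token, and storage URL are all placeholders:

-- A database master key is required before a database scoped credential can be created.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword123!>';
GO

-- The secret is a Shared Access Signature (SAS) token for the container, without the leading '?'.
CREATE DATABASE SCOPED CREDENTIAL [MYBLOBCREDENTIAL]
WITH
   IDENTITY = 'SHARED ACCESS SIGNATURE',
   SECRET = '<your SAS token>';
GO

-- LOCATION points at the container; paths in OPENROWSET and BULK INSERT are relative to it.
CREATE EXTERNAL DATA SOURCE [MYDATASOURCE]
WITH
(
   TYPE = BLOB_STORAGE,
   LOCATION = 'https://<storageaccount>.blob.core.windows.net/<container>',
   CREDENTIAL = [MYBLOBCREDENTIAL]
);
GO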

The following T-SQL code loads the data into the staging table. (The active BULKUPLOADTABLE table is not a suitable direct target for BULK INSERT because its ST_ID IDENTITY column has no matching field in the CSV file.) FIRSTROW = 2 skips the header row, and CODEPAGE = 65001 tells SQL Server the file is UTF-8 encoded.

BULK INSERT [STAGE].[BULKUPLOADTABLE]
FROM 'MYDATA/SAMPLEDATA.TXT'
WITH
(
   DATA_SOURCE = 'MYDATASOURCE',
   FORMAT = 'CSV',
   CODEPAGE = 65001,
   FIRSTROW = 2,
   TABLOCK
);

In the next T-SQL code, I have added a TRUNCATE TABLE statement to remove all existing rows before loading.

-- Clear data
TRUNCATE TABLE [STAGE].[BULKUPLOADTABLE];

-- Load data
BULK INSERT [STAGE].[BULKUPLOADTABLE]
FROM 'MYDATA/SAMPLEDATA.TXT'
WITH
(
   DATA_SOURCE = 'MYDATASOURCE',
   FORMAT = 'CSV',
   CODEPAGE = 65001,
   FIRSTROW = 2,
   TABLOCK
);

-- Show data
SELECT * FROM [STAGE].[BULKUPLOADTABLE];
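
A single malformed row will abort the whole load, so in practice it can be worth wrapping the truncate-and-load step in TRY/CATCH to report the failure cleanly. A minimal sketch of that pattern:

BEGIN TRY
   TRUNCATE TABLE [STAGE].[BULKUPLOADTABLE];

   BULK INSERT [STAGE].[BULKUPLOADTABLE]
   FROM 'MYDATA/SAMPLEDATA.TXT'
   WITH
   (
      DATA_SOURCE = 'MYDATASOURCE',
      FORMAT = 'CSV',
      CODEPAGE = 65001,
      FIRSTROW = 2,
      TABLOCK
   );

   PRINT CONCAT('Rows staged: ', @@ROWCOUNT);
END TRY
BEGIN CATCH
   -- Report the error, then re-raise it so the caller sees the failure.
   PRINT CONCAT('Load failed: ', ERROR_MESSAGE());
   THROW;
END CATCH;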

At the end, the script imports the files from blob storage into the staging table and then inserts the rows into the active BULKUPLOADTABLE table.

-- Import blob files into stage
EXEC [STAGE].[LOAD_FROM_BLOB_STORAGE]
   @VAR_VERBOSE_FLAG = 'N';
GO

-- Move from stage to active (the column list skips the IDENTITY column ST_ID)
INSERT INTO [BULKUPLOADTABLE]
   (ST_FIELD1, ST_FIELD2, ST_FIELD3, ST_FIELD4,
    ST_FIELD5, ST_FIELD6, ST_FIELD7, ST_FIELD8)
SELECT ST_FIELD1, ST_FIELD2, ST_FIELD3, ST_FIELD4,
       ST_FIELD5, ST_FIELD6, ST_FIELD7, ST_FIELD8
FROM [STAGE].[BULKUPLOADTABLE];
GO
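
The body of LOAD_FROM_BLOB_STORAGE is not shown in this article. As a purely hypothetical sketch, such a wrapper might simply package the truncate-and-load pattern from the previous section behind the verbose flag; the implementation below is my assumption, not the article's code:

-- Hypothetical sketch of the wrapper procedure; the article does not show its body.
CREATE OR ALTER PROCEDURE [STAGE].[LOAD_FROM_BLOB_STORAGE]
   @VAR_VERBOSE_FLAG CHAR(1) = 'N'
AS
BEGIN
   SET NOCOUNT ON;

   -- Reload the staging table from the blob file.
   TRUNCATE TABLE [STAGE].[BULKUPLOADTABLE];

   BULK INSERT [STAGE].[BULKUPLOADTABLE]
   FROM 'MYDATA/SAMPLEDATA.TXT'
   WITH (DATA_SOURCE = 'MYDATASOURCE', FORMAT = 'CSV',
         CODEPAGE = 65001, FIRSTROW = 2, TABLOCK);

   -- Optionally report how many rows were staged.
   IF @VAR_VERBOSE_FLAG = 'Y'
   BEGIN
      DECLARE @ROWS INT = (SELECT COUNT(*) FROM [STAGE].[BULKUPLOADTABLE]);
      PRINT CONCAT('Rows staged: ', @ROWS);
   END;
END;
GO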

Conclusion

BULK INSERT is a very fast option for loading massive amounts of data. It is a popular tool for older versions of SQL Server as well as new ones. SQL Server 2017 added the capability to run BULK INSERT statements that load data from Azure storage accounts, and, as shown above, Azure SQL Database supports the same syntax.

That’s all for today. Happy reading!
