Speed Up Repetitive Insert, Update, and Delete Query Statements

Using SQL Server 2008 Table-Valued Parameters

Welcome to this installment of the .NET Nuts & Bolts column! This particular column harkens back to the style of some of my early columns, where I took a problem faced in my everyday project work and shared the solution with the masses. In this case, the problem at hand is how to boost the performance of a web page that performs a large number of inserts into a database. This article covers using the ADO.NET Framework to take advantage of table-valued parameters.

Table-Valued Parameters

Table-valued parameters are a new parameter type that was quietly introduced as part of SQL Server 2008. You use table-valued parameters to send multiple rows of data to a T-SQL statement, such as a stored procedure, without declaring many individual parameters or resorting to temp tables. This can produce large performance gains when it replaces a loop that performs an operation such as an insert one row at a time. Table-valued parameters are declared by using user-defined table types.

Creating and Using Table-Valued Parameters in SQL Server

The first step in using table-valued parameters is to create your user-defined type. The script statement below creates an example type that I used on a project to import data from a data file. The syntax is very similar to a table definition.

  CREATE TYPE [dbo].[ECR_Import_DataTable] AS TABLE(
  	[LineNumber] [nvarchar](10) NULL,
  	[LineNumberSubscript] [nvarchar](10) NULL,
  	[ColumnNumber] [nvarchar](10) NULL,
  	[ColumnNumberSubscript] [nvarchar](10) NULL,
  	[FiscalYear] [nvarchar](10) NULL,
  	[ProviderNumber] [nvarchar](20) NULL,
  	[in_ReviewId] [int] NULL,
  	[Value] [nvarchar](150) NULL,
  	[WorksheetCode] [nvarchar](10) NULL,
  	[UpdateUser] [nvarchar](50) NULL
  )

Below is an example stored procedure definition that takes our user-defined type as one of its parameters. The actions being performed are pretty common for a file import: it deletes the current contents of the table and replaces them with the new contents. It is all done as a transaction, so the whole thing either succeeds or fails together. The last parameter in the stored procedure is our user-defined type, which must be declared READONLY.

  CREATE PROCEDURE [dbo].[ECRImport_File]
  	@ReviewId int,
  	@fiscalYear nvarchar(10),
  	@ProviderNumber nvarchar(20),
  	@UserName nvarchar(20),
  	@ProfileId uniqueidentifier,
  	@ecrRecords ECR_Import_DataTable READONLY
  AS
  BEGIN
  	DECLARE @PUFFiscalCalId as nvarchar(20);
  	SET @PUFFiscalCalId = cast(@fiscalYear as nvarchar) + '01';

  	BEGIN TRANSACTION;

  	DELETE FROM Analysis.factECR WHERE in_ReviewId = @ReviewId;

  	INSERT INTO Analysis.factECR
  	SELECT ecr.LineNumber, ecr.LineNumberSubscript,
  		ecr.ColumnNumber, ecr.ColumnNumberSubscript,
  		ecr.FiscalYear, ecr.ProviderNumber, ecr.in_ReviewId, ecr.Value,
  		ecr.WorksheetCode, ecr.UpdateUser
  	FROM @ecrRecords as ecr;

  	COMMIT TRANSACTION;
  END
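As a quick sanity check, you can exercise the type and stored procedure entirely from T-SQL by declaring a variable of the table type, inserting a few rows, and passing it to the procedure. The values below are made up purely for illustration:

```sql
-- Declare a variable of the user-defined table type
DECLARE @rows dbo.ECR_Import_DataTable;

-- Populate it like any other table (sample values for illustration only)
INSERT INTO @rows (LineNumber, LineNumberSubscript, ColumnNumber,
	ColumnNumberSubscript, FiscalYear, ProviderNumber, in_ReviewId,
	Value, WorksheetCode, UpdateUser)
VALUES ('00100', '', '0100', '', '2010', '123456', 1,
	'42', 'A000000', 'testuser');

-- Pass the whole table as a single parameter
EXEC dbo.ECRImport_File
	@ReviewId = 1,
	@fiscalYear = '2010',
	@ProviderNumber = '123456',
	@UserName = 'testuser',
	@ProfileId = '00000000-0000-0000-0000-000000000000',
	@ecrRecords = @rows;
```

Note that the table variable, like the READONLY parameter, cannot be modified once it reaches the procedure; the procedure can only read from it.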



Making the Call from the ADO.NET Framework

Now that we have created our user-defined type and a stored procedure to use it, we need to actually call the stored procedure from within our application. The following example code demonstrates how to make the call.

There are a couple of things to note about the example below. It is assumed that the file contents have already been read into a string array available as a variable called ecrContent; I didn't include that code in order to keep the example more compact. Additionally, the table-valued parameter is passed in ADO.NET as a DataTable. As you'll see in the code, I create a strongly typed List that gets populated with the contents of the file, and I then use a helper method to convert the List to a DataTable. It would absolutely be more efficient to read straight into a DataTable instead of a List, but for ease of example and code brevity I've gone this route instead. The code for the helper method is also not included, as it isn't germane to the example.
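For completeness, here is one minimal way such a List-to-DataTable conversion could be written. This is an assumed, reflection-based sketch, not the article's actual helper:

```csharp
// Minimal sketch of a List<T>-to-DataTable converter using reflection.
// Assumed implementation; the article's actual helper is not shown.
public static DataTable ConvertToDataTable<T>(IList<T> items)
{
    DataTable table = new DataTable();
    PropertyInfo[] props = typeof(T).GetProperties();

    // One column per public property; unwrap Nullable<T> column types
    foreach (PropertyInfo prop in props)
    {
        Type colType = Nullable.GetUnderlyingType(prop.PropertyType)
            ?? prop.PropertyType;
        table.Columns.Add(prop.Name, colType);
    }

    // One row per item; nulls become DBNull for the database's sake
    foreach (T item in items)
    {
        object[] values = new object[props.Length];
        for (int i = 0; i < props.Length; i++)
        {
            values[i] = props[i].GetValue(item, null) ?? DBNull.Value;
        }
        table.Rows.Add(values);
    }

    return table;
}
```

Keep in mind that SQL Server matches the DataTable's columns to the table type's columns by position, so the property order of the entity class needs to line up with the column order of the user-defined type with this approach.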

  SqlConnection conn = null;
  string connString = "...";  // connection string omitted for brevity
  conn = new SqlConnection(connString);

  // Convert string list to objects
  string providerNumber = review.Organization.ProviderNumber;
  string fiscalYear = "2010";
  Guid profileId = ProfileUtility.GetProfileId(userName);
  List<EcrRecord> records = new List<EcrRecord>();
  foreach (string line in ecrContent)
  {
      var ecr = new EcrRecord()
      {
          LineNumber = line.Substring(8, 5).Trim(),
          LineNumberSubscript = line.Substring(13, 2).Trim(),
          ColumnNumber = line.Substring(15, 3).Trim(),
          ColumnNumberSubscript = line.Substring(18, 2).Trim(),
          FiscalYear = fiscalYear,
          ProviderNumber = providerNumber,
          in_ReviewId = review.in_Id,
          Value = line.Substring(20).Replace("\r", "").Trim(),
          WorksheetCode = line.Substring(0, 8).Trim(),
          UpdateUser = userName
      };
      records.Add(ecr);
  }

  // Take the objects and convert them to a DataTable
  // (conversion helper not shown)
  DataTable ecrRecords = ConvertToDataTable(records);

  try
  {
      conn.Open();

      SqlCommand cmd = new SqlCommand("ECRImport_File", conn);
      cmd.CommandType = CommandType.StoredProcedure;
      cmd.Parameters.AddWithValue("@ReviewId", review.in_Id);
      cmd.Parameters.AddWithValue("@fiscalYear", fiscalYear);
      cmd.Parameters.AddWithValue("@ProviderNumber", providerNumber);
      cmd.Parameters.AddWithValue("@UserName", userName);
      cmd.Parameters.AddWithValue("@ProfileId", profileId.ToString());

      // The table-valued parameter is passed as a DataTable
      SqlParameter tvp = cmd.Parameters.AddWithValue("@ecrRecords", ecrRecords);
      tvp.SqlDbType = SqlDbType.Structured;
      tvp.TypeName = "dbo.ECR_Import_DataTable";

      cmd.ExecuteNonQuery();
  }
  finally
  {
      if (conn != null)
      {
          if (conn.State != ConnectionState.Closed)
          {
              conn.Close();
          }
      }
  }
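If memory pressure from building an entire DataTable is a concern, ADO.NET will also accept a table-valued parameter as an IEnumerable&lt;SqlDataRecord&gt;, which lets you stream rows to the server without materializing them all at once. The sketch below shows the general shape of that approach; it is an illustrative alternative, not the code from my project, and in real use the SqlMetaData layout must list every column of the user-defined type in order:

```csharp
// Alternative: stream rows for the table-valued parameter using
// SqlDataRecord (from the Microsoft.SqlServer.Server namespace)
// instead of materializing a DataTable.
private static IEnumerable<SqlDataRecord> CreateRecords(
    IEnumerable<EcrRecord> records)
{
    // The metadata must mirror the user-defined table type's columns;
    // only two columns are shown here to keep the sketch short.
    SqlMetaData[] layout = new SqlMetaData[]
    {
        new SqlMetaData("LineNumber", SqlDbType.NVarChar, 10),
        new SqlMetaData("Value", SqlDbType.NVarChar, 150)
    };

    foreach (EcrRecord ecr in records)
    {
        SqlDataRecord row = new SqlDataRecord(layout);
        row.SetString(0, ecr.LineNumber);
        row.SetString(1, ecr.Value);
        yield return row;  // rows are produced one at a time
    }
}

// The parameter is configured the same way as with a DataTable:
// SqlParameter tvp = cmd.Parameters.AddWithValue("@ecrRecords",
//     CreateRecords(records));
// tvp.SqlDbType = SqlDbType.Structured;
// tvp.TypeName = "dbo.ECR_Import_DataTable";
```

Because the enumerator is only walked when the command executes, this keeps memory usage flat no matter how many rows the file contains.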

Reaping the Benefits

The particular scenario I was facing within an application was inserting data from a data file containing thousands of rows. For reasons that I won't debate here, the file is being processed in code rather than through something like DTS or SQL Server Integration Services. The original code used LINQ to Entities to remove all of the existing records and then insert the new ones. The data file can contain as many as 2,000 records, which meant LINQ to Entities generated several thousand calls to the database. The performance actually wasn't too bad, but there were concerns about scalability. The data being imported is specific to a user, and the import is part of a process that happens once a year around the same deadline. Hundreds of users would be importing this data to help pre-populate a process, and that many users generating that many calls raised concern that the solution would not scale. The code was refactored to use table-valued parameters, collapsing those thousands of calls into a single stored procedure call, to ensure that scalability would not be an issue.


Summary

We explored how you can make use of table-valued parameters within your applications: creating the user-defined type, creating a stored procedure that accepts it, and then calling the stored procedure from .NET. This can be an extremely valuable performance improvement for certain scenarios, such as inserting a large number of records into a table.

Future Columns

The topic of the next column is yet to be determined. If there is something in particular that you would like to see explained here, you can reach me through http://markstrawmyer.com.
