Temporary Tables vs. Table Variables and Their Effect on SQL Server Performance

There are three major theoretical differences between temporary tables:

create table #T (…)

and table-variables

declare @T table (…)

The first one is that transaction logs are not recorded for table-variables. Hence, they fall outside the scope of the transaction mechanism, as this example clearly shows:

create table #T (s varchar(128))
   declare @T table (s varchar(128))
   insert into #T select 'old value #'
   insert into @T select 'old value @'
   begin transaction
      update #T set s='new value #'
      update @T set s='new value @'
   rollback transaction
   select * from #T
   select * from @T

s
   ---------------
   old value #

s
   ---------------
   new value @

After declaring the temporary table #T and the table-variable @T, you insert the same “old value” string into each. Then you begin a transaction that updates their contents; at this point, both contain the same “new value” string. But when you roll back the transaction, as you can see, the table-variable @T retains its value instead of reverting to the “old value” string. This happens because, even though the table-variable was updated within the transaction, it is not a part of the transaction itself.

The second major difference is that a procedure that uses a temporary table cannot be pre-compiled, whereas the execution plan of a procedure that uses table-variables can be compiled statically, in advance. Pre-compiling a script gives a major advantage to its speed of execution. This advantage can be dramatic for long procedures, where recompilation can be too costly.
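
To see how often a compiled plan is actually being reused, one option (a sketch that assumes the dynamic management views of SQL Server 2005; on SQL Server 2000, master..syscacheobjects exposes a similar usecounts column) is:

select cp.usecounts, cp.objtype, st.text
   from sys.dm_exec_cached_plans cp
   cross apply sys.dm_exec_sql_text(cp.plan_handle) st
   where cp.objtype = 'Proc' -- restrict to cached stored procedure plans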

Finally, table-variables exist only in the same scope as ordinary variables. Unlike temporary tables, they are not visible inside nested stored procedures or in exec (string) statements. Also, they cannot be used in an insert/exec statement.
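
A minimal sketch of these visibility rules (the exact error text varies slightly between versions):

create table #T (n int)
   declare @T table (n int)
   insert into #T select 1
   insert into @T select 1
   exec ('select * from #T') -- works: the temporary table is visible to the inner batch
   exec ('select * from @T') -- fails: @T is not declared in the inner batch's scope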

Comparing Both in Terms of Performance

First, prepare a test table with one million records:

create table NUM (n int primary key, s varchar(128))
   GO
   set nocount on
   declare @n int
   set @n=1000000
   while @n>0 begin
      insert into NUM
         select @n,"Value: "+convert(varchar,@n)
      set @n=@n-1
      end
   GO
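
Before timing anything, an optional quick sanity check confirms the table was populated as expected:

select count(*), min(n), max(n) from NUM
   -- expected: 1000000, 1, 1000000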

Now, prepare your test procedure T1:

create procedure T1
   @total int
   as
      create table #T (n int, s varchar(128))
      insert into #T select n,s from NUM
         where n%100>0 and n<=@total
      declare @res varchar(128)
      select @res=max(s) from NUM
         where n<=@total and
            not exists(select * from #T
            where #T.n=NUM.n)
   GO

Called with a parameter that you will vary from 10 through 100, 1,000, 10,000, and 100,000 up to 1,000,000, the procedure copies the given number of records into a temporary table (with some exceptions: it skips the records where n is divisible by 100) and then finds the max(s) of those missing records. Of course, the more records you give it, the longer the execution takes.
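
To see what T1 actually computes, here is a standalone equivalent for @total = 1000; the not exists leaves exactly the rows that were skipped, namely those where n is divisible by 100:

select max(s) from NUM
   where n<=1000 and n%100=0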

To measure the execution time precisely, I use the following code:

declare @t1 datetime, @n int
   set @t1=getdate()
   set @n=100 -- (**)
   while @n>0 begin
      exec T1 1000 -- (*)
      set @n=@n-1 end
   select datediff(ms,@t1,getdate())
   GO

(*) is the parameter passed to the procedure; it is varied from 10 to 1,000,000.
(**) If an execution time is too short, I repeat the same loop 10 or 100 times.
I ran the code several times to get the result of a “warm” execution.
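
If you instead wanted to time a “cold” execution, one way (an aside; it requires sysadmin rights and should never be run on a production server) is to flush the caches between runs:

checkpoint -- write dirty pages to disk so the next command can drop them
   dbcc dropcleanbuffers -- remove clean pages from the buffer pool
   dbcc freeproccache -- discard all cached execution plans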

The results can be found in Table 1 (see below).

Now, try to improve your stored procedure by adding a primary key to the temporary table:

create procedure T2
      @total int
   as
      create table #T (n int primary key, s varchar(128))
      insert into #T select n,s from NUM
         where n%100>0 and n<=@total
      declare @res varchar(128)
      select @res=max(s) from NUM
         where n<=@total and
            not exists(select * from #T
         where #T.n=NUM.n)
   GO

Then, create a third one. With a clustered index, it works much better. Note that the index is created AFTER the data is inserted into the temporary table; usually, that order is better:

create procedure T3
      @total int
   as
      create table #T (n int, s varchar(128))
      insert into #T select n,s from NUM
         where n%100>0 and n<=@total
      create clustered index Tind on #T (n)
      declare @res varchar(128)
      select @res=max(s) from NUM
         where n<=@total and
            not exists(select * from #T
            where #T.n=NUM.n)
   GO

Surprise! Not only does it take longer for big amounts of data; merely adding 10 records costs an extra 13 milliseconds or so. The problem is that “create index” statements force SQL Server to recompile the stored procedure, which slows down the execution significantly.
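
On SQL Server 2005 you can observe this directly (a sketch that relies on the 2005 dynamic management views): sys.dm_exec_query_stats exposes plan_generation_num, which increases each time a statement's plan is recompiled:

select qs.plan_generation_num, qs.execution_count, st.text
   from sys.dm_exec_query_stats qs
   cross apply sys.dm_exec_sql_text(qs.sql_handle) st
   where st.text like '%create clustered index Tind%'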

Now, try the same using table-variables:

create procedure V1
      @total int
   as
      declare @V table (n int, s varchar(128))
      insert into @V select n,s from NUM
         where n%100>0 and n<=@total
      declare @res varchar(128)
      select @res=max(s) from NUM
         where n<=@total and
            not exists(select * from @V V
            where V.n=NUM.n)
   GO

To your surprise, this version is not significantly faster than the version with the temporary table. This is the result of a special optimization SQL Server has for create table #T statements at the very beginning of a stored procedure. For the whole range of values, V1 works the same as or better than T1.

Now, try the same with a primary key:

create procedure V2
      @total int
   as
      declare @V table (n int primary key, s varchar(128))
      insert into @V select n,s from NUM
         where n%100>0 and n<=@total
      declare @res varchar(128)
      select @res=max(s) from NUM
         where n<=@total and
            not exists(select * from @V V
            where V.n=NUM.n)
   GO

The result is much better, but T2 still outruns this version on the larger data sets.

Table 1, using SQL Server 2000, time in ms

Records    T1     T2    T3     V1     V2
10         0.7    1     13.5   0.6    0.8
100        1.2    1.7   14.2   1.2    1.3
1000       7.1    5.5   27     7      5.3
10000      72     57    82     71     48
100000     883    480   580    840    510
1000000    45056  6090  15220  20240  12010

The real shock is when you try the same on SQL Server 2005:

Table 2, using SQL Server 2005, time in ms

Records    T1     T2    T3    V1         V2
10         0.5    0.5   5.3   0.2        0.2
100        2      1.2   6.4   61.8       2.5
1000       9.3    8.5   13.5  168        140
10000      67.4   79.2  71.3  17133      13910
100000     700    794   659   Too long!  Too long!
1000000    10556  8673  6440  Too long!  Too long!

In some cases, SQL Server 2005 was much faster than SQL Server 2000. But in many cases, especially with huge amounts of data, the procedures that used table-variables took much longer. In four cases, I simply gave up waiting.

Conclusion

  1. There is no universal rule of when and where to use temporary tables or table variables. Try them both and experiment.
  2. In your tests, cover both ends of the spectrum: small numbers of records and huge data sets.
  3. Be careful when migrating to SQL Server 2005 if you use complicated logic in your stored procedures. The same code can run 10 to 100 times slower on SQL Server 2005!

For other SQL articles, please visit http://www.lakesidesql.com.
