Taking it back to SQL 101 today because I recently saw something that floored me. I’m a big fan of temp tables. I use ’em all over the place to stash intermediate results, break a dataset down into more manageable chunks, or even share them between procedures. And as it turns out, there’s more than one way to create them.
The Traditional Way
If you’re planning out your temp table usage, you’re probably creating them just like any other table, then populating them.
create table #MyTempTable (
    [number] int not NULL,
    [name] nvarchar(35) NULL
);

insert into #MyTempTable ([number],[name])
select [number], [name]
from master.dbo.spt_values;
As you’d expect, this works really well. Even better, it very clearly describes intent. Everything is laid out in detail, and very easy to understand.
The Quick Way
Contestant #2 is pretty easy to follow as well, but comes with a little baggage.
select [number], [name] into #MyTempTable from master.dbo.spt_values;
This is a bit of a shortcut. Rather than declaring our table & the field types before populating it, we’re creating the table on the fly based on the data types that come out of the select statement. If you’re using user-defined types, this can get a little muddy. But generally, this works OK. It still signals intent – I want to take the results of this query and drop them into a temp table.
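To make the muddiness concrete, here’s a hedged sketch. The dbo.ShortName type and dbo.People table are invented for illustration; the point is that because the temp table is created in tempdb, where the alias type doesn’t exist, the column quietly comes out as the base type:

create type dbo.ShortName from nvarchar(35) null;
go
create table dbo.People ([name] dbo.ShortName null);
go
select [name] into #People from dbo.People;

-- Reports nvarchar, not ShortName: the alias type didn't survive the trip
select c.[name] as column_name, t.[name] as type_name
from tempdb.sys.columns c
join tempdb.sys.types t on t.user_type_id = c.user_type_id
where c.[object_id] = object_id('tempdb..#People');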
The “Wait…what? Why!?” Way
The third way is kind of bonkers.
select 0 as [number], cast('' as nvarchar(35)) as [name]
into #MyTempTable
from sys.databases
where 0=1;

insert into #MyTempTable ([number],[name])
select [number], [name]
from master.dbo.spt_values;
This one really gums up the works for the person reading your code in the future (that person may be you!). At first, it looks like it’s selecting from a regular table into a temp table. But it’s doing a CAST(), and not actually selecting any of the fields on the table. But hey, that’s alright! Because thanks to where 0=1, it’ll never return any rows!
Does this create the temp table we want? Yes. Is the intent of the code clear? Not in my opinion. It’s creating an empty data set with the types that we want, but in a roundabout way, and referencing a table that may have nothing to do with the operation we’re performing. If you’re doing this, please leave a note.
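That note doesn’t have to be fancy. Something like this annotated version (my suggestion, not from the original code) would do:

-- Create an empty #MyTempTable with the desired column types.
-- sys.databases is only referenced to satisfy the FROM clause;
-- WHERE 0=1 guarantees no rows come back.
select 0 as [number], cast('' as nvarchar(35)) as [name]
into #MyTempTable
from sys.databases
where 0=1;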
Please read Jeff Moden’s excellent comment below, which explains the reasoning behind this method and its potential benefits.
You may be thinking “I should omit the FROM and WHERE. After all, they’re just clutter here, right?” If you do, you aren’t creating an empty set anymore, you’re selecting those values and creating a one-record set!
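In other words, this variant creates the table and immediately inserts one row of dummy data:

select 0 as [number], cast('' as nvarchar(35)) as [name]
into #MyTempTable;

-- #MyTempTable now contains a single row: (0, '')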

It turns out that you can do it without the FROM and WHERE if you use TOP 0. But as above, it’s kind of unclear in my opinion.
select TOP 0 0 as [number], cast('' as nvarchar(35)) as [name] into #MyTempTable
Yes, I Have a Preference
There are three (and a half?) ways to create a temp table presented here (and there are probably more that I’m not aware of – dynamic SQL doesn’t count, though). If I’m playing Kiss, Marry, Kill with them, I’m marrying the first one (CREATE TABLE), kissing the second one (SELECT INTO #TempTable), and throwing that last one into the volcano unless there’s a good reason for having it (again, please read Jeff’s comment below).
I’ve recently uncovered a problem with the quick way. It doesn’t always include all columns.
@Dan White
Do you have a code example of where that problem is demonstrated?
There’s one more:
use tempdb
go
create table MyTable
(
[number] int not NULL,
[name] nvarchar(35) NULL
);
I use this for testing. And later I can just change the ‘use’.
Will this automatically destroy the table when the scope it was created in terminates?
First, thanks for taking the time to write an article. Anyone that shares knowledge is ok in my book.
On the subject of the “Wait…what? Why!?” method, there actually is a reason for doing such a thing even if people don’t want to change a datatype in the creation of a Temp Table.
The first reason is that some people still believe in the old wives’ tale that SELECT/INTO blocks TempDB for its entire duration. That hasn’t been true since a hot-fix in the RTM of SQL Server 6.5, and it became a permanent fix with SP1 of 6.5.
Still, SELECT/INTO does put some locks on a system table or two, and while it doesn’t block the use of TempDB, it does stop some programs (like the Explorer Window in SSMS) that look at certain system tables in their tracks.
Since it’s still real handy to have the Temp Table be created from data rather than having to explicitly define it, people seriously cut down on the time the table creation takes with SELECT/INTO by having it do nothing, using the WHERE 1=0 trick.
The problem with that trick is the lock escalation that occurs when you follow it with INSERT/SELECT. Then, if needed, people add a Clustered Index after they’ve populated the table, and we know that doubles the memory requirement and the disk space used (if there’s a spill) AND makes it more likely that there actually will be a spill.
Since TempDB is ALWAYS in the SIMPLE Recovery Model, a lot of that can be sidestepped and performance can be increased as well by adding two simple little words to the INSERT of INSERT/SELECT, and those are WITH (TABLOCK). That enables minimal logging, which can take place even with a Clustered Index in place (especially if the INSERT is in the same order as the key(s) on the Clustered Index).
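Here’s a minimal sketch of the pattern Jeff describes, reusing the article’s table (the index name and key choice are my own illustration):

-- Create the empty temp table via the WHERE 0=1 trick
select 0 as [number], cast('' as nvarchar(35)) as [name]
into #MyTempTable
from sys.databases
where 0=1;

-- Index the table while it's still empty, instead of after population
create clustered index IX_MyTempTable on #MyTempTable ([number]);

-- WITH (TABLOCK) enables minimal logging in the SIMPLE Recovery Model;
-- ordering by the clustered key keeps the insert in index order
insert into #MyTempTable with (tablock) ([number],[name])
select [number], [name]
from master.dbo.spt_values
order by [number];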
Of course, since all of that blows people’s minds because they don’t know any of these tricks, the easiest and best thing to do is to… add a bloody comment!
There’s another trick you can do with WITH (TABLOCK) if the table has just been created or truncated (deletes won’t do it here), and it doesn’t matter which Recovery Model you’re in… Contrary to popular belief, if you design the Clustered Index with a Fill Factor lower than 100, the INSERT WILL FILL THE PAGES ONLY UP TO THE FILL FACTOR. It only works on that first insert, and it only works if you use WITH (TABLOCK). One possible reason for doing such a thing might be when you’re importing data and you know you’re going to have to do “ExpAnsive” updates to some of the columns… having that lower Fill Factor can help you avoid page splits, which will make the process faster no matter which Recovery Model you’re in.
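A hedged sketch of that first-insert behavior, again built on the article’s table (the 80 Fill Factor and index name are illustrative, not from the comment):

create table #MyTempTable (
    [number] int not NULL,
    [name] nvarchar(35) NULL
);

-- A Fill Factor below 100 is normally ignored by INSERTs...
create clustered index IX_MyTempTable
    on #MyTempTable ([number]) with (fillfactor = 80);

-- ...but per Jeff's comment, the very first insert into a freshly
-- created (or truncated) table honors it when WITH (TABLOCK) is used,
-- leaving ~20% free space per page for later "ExpAnsive" updates
insert into #MyTempTable with (tablock) ([number],[name])
select [number], [name]
from master.dbo.spt_values
order by [number];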
For more info on that special first insert, please see the following article, which also has repeatable/demonstrable test code.
https://www.sqlservercentral.com/articles/some-t-sql-inserts-do-follow-the-fill-factor-sql-oolie
Thank you for the extensive and informative reply, Jeff! It’s going to take me a little while to digest all of this but I think I see some experimentation in my future!
You bet, Andy. Thank you for the feedback.