A User Database As A TempDB

  • I have a situation where a user database is being used as a pseudo-tempdb. The database can be dropped and recreated without issue; it is used during data imports in an ETL process.

    Two questions:

    1. Similar to the actual tempdb, would I benefit from adding additional data files to this database? What can I do to maximize I/O for this user database?

    2. Is it possible to migrate this type of database into memory? What would be the appropriate steps? Would it be a showstopper if I couldn't change the existing table structure of the database?

    Thanks!

  • Without seeing the setup, size, and nature of the data, my answers are going to be pretty generic.

    Yes, more files on more disks can generally mean better I/O. Just make sure when you're trying to tune for I/O that you're really adding disks. I see a lot of people on a SAN adding "disks" when it's just more allocations of the same underlying storage; the performance improvements then are pretty trivial.
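    As a rough sketch, assuming a staging database called UserTempDB and drive letters that actually map to separate physical storage (the names, paths, and sizes here are all placeholders):

    -- Placeholder example: add data files on separate physical disks.
    ALTER DATABASE UserTempDB
    ADD FILE (
        NAME = UserTempDB_Data2,
        FILENAME = 'E:\SQLData\UserTempDB_Data2.ndf',
        SIZE = 4GB,
        FILEGROWTH = 512MB
    );

    ALTER DATABASE UserTempDB
    ADD FILE (
        NAME = UserTempDB_Data3,
        FILENAME = 'F:\SQLData\UserTempDB_Data3.ndf',
        SIZE = 4GB,
        FILEGROWTH = 512MB
    );

    -- SQL Server 2016+: grow all files in the filegroup together,
    -- the same even-growth behavior tempdb gets by default.
    ALTER DATABASE UserTempDB
    MODIFY FILEGROUP [PRIMARY] AUTOGROW_ALL_FILES;

    Size the files identically so proportional fill spreads activity evenly across them.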

    In-memory tables can load a ton faster than disk tables and are a great way to do a staging database (which is what you have). Just make sure you have enough memory to support the load. There are no allowances made to write extra data to disk for temporary storage when dealing with in-memory tables: if it's out of memory, you can't put more data in, period. I had someone once bitterly complain that although they had only allocated 16 GB of memory, SQL Server let them try to load 32 GB of data and then failed. It won't stop you from attempting something like that; it'll just fail.
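    A minimal sketch of the setup (database name, container path, table definition, and bucket count are all placeholders, not a prescription):

    -- A MEMORY_OPTIMIZED_DATA filegroup is required, even for
    -- non-durable tables; its FILENAME is a folder, not a file.
    ALTER DATABASE UserTempDB
    ADD FILEGROUP UserTempDB_InMem CONTAINS MEMORY_OPTIMIZED_DATA;

    ALTER DATABASE UserTempDB
    ADD FILE (NAME = UserTempDB_InMem_Container,
              FILENAME = 'E:\SQLData\UserTempDB_InMem')
    TO FILEGROUP UserTempDB_InMem;

    -- SCHEMA_ONLY: the structure survives a restart, the data doesn't,
    -- which is exactly what you want from a disposable staging table.
    CREATE TABLE dbo.ImportStaging (
        ImportId  INT IDENTITY(1,1) NOT NULL
            PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
        SourceKey VARCHAR(50) NOT NULL,
        Payload   NVARCHAR(4000) NULL
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

    -- Keep an eye on how much memory the tables are actually using:
    SELECT object_id, memory_used_by_table_kb
    FROM sys.dm_db_xtp_table_memory_stats;

    Pick the bucket count close to the expected row count; set it too low and the hash chains get long and slow.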

    Hope that helps.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Thanks for the response - I appreciate your feedback. 

    To go a little deeper:

    The user temp database is set to the SIMPLE recovery model, and a stored procedure currently places both the log and data files on the same disk as the primary database. We're already updating the database create scripts to put the data and log files on separate disks when the database is recreated. After that, we run a few other stored procedures to populate a simple schema and some fixed values, and then the UserTempDB is ready for use with our import mechanism.
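    In rough outline (drive letters, file names, and sizes here are simplified placeholders, not our real values), the recreate step will look something like:

    -- Placeholder recreate script: data on one disk, log on another.
    IF DB_ID('UserTempDB') IS NOT NULL
        DROP DATABASE UserTempDB;

    CREATE DATABASE UserTempDB
    ON PRIMARY (
        NAME = UserTempDB_Data,
        FILENAME = 'E:\SQLData\UserTempDB.mdf',
        SIZE = 8GB
    )
    LOG ON (
        NAME = UserTempDB_Log,
        FILENAME = 'F:\SQLLogs\UserTempDB.ldf',
        SIZE = 2GB
    );

    -- Minimal logging for a disposable staging database.
    ALTER DATABASE UserTempDB SET RECOVERY SIMPLE;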

    I understand your point about disks vs. SAN storage - I've learned the hard way not to trust my SAN operator, but sadly I don't have any other options in terms of storage arrays. My primary question was: can I add additional .ndf files to this UserTempDB and expect to see behavior similar to that of the system tempdb (where I currently have 1 MDF and 7 NDF files)?

    The one thing I do have at my disposal is RAM, so I am excited by the prospect of taking this UserTempDB off disk and into memory. The issue is, I don't know where to begin. Is there some hack I can use to just "trick" the UserTempDB into memory (I read about this somewhere and now I can't find the link), or will I have to break down the full schema and rebuild it all with SQL Server 2016 SP1 Hekaton considerations in mind?

    Ian

  • You definitely sound like a good candidate for the in-memory tables. To get started, go here. Pay the most attention to the restrictions. It's a huge performance boost and sounds well suited to what you're doing.
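    To give a feel for the kind of change involved (the table here is invented purely for illustration), converting a disk-based staging table mostly means redeclaring the index and adding the WITH clause; memory-optimized tables require at least one index and don't support clustered indexes:

    -- Illustrative only. The disk-based original might have been:
    --   CREATE TABLE dbo.StageRows (
    --       RowId INT IDENTITY(1,1) PRIMARY KEY CLUSTERED,
    --       Col1  VARCHAR(100) NOT NULL);
    -- The memory-optimized replacement:
    CREATE TABLE dbo.StageRows (
        RowId INT IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED,
        Col1  VARCHAR(100) NOT NULL
    ) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);

    If your existing tables lean on features from the restricted list, that's where the real schema rework comes in, so check those restrictions before anything else.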

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning
