Keeping It All In Memory
Posted Saturday, January 30, 2010 4:30 PM


SSC-Dedicated


Group: Administrators
Last Login: 2 days ago @ 3:11 PM
Points: 31,368, Visits: 15,837
Comments posted to this topic are about the item Keeping It All In Memory






Follow me on Twitter: @way0utwest

Forum Etiquette: How to post data/code on a forum to get the best help
Post #856727
Posted Sunday, January 31, 2010 12:05 PM
Forum Newbie


Group: General Forum Members
Last Login: Wednesday, May 7, 2014 9:48 AM
Points: 3, Visits: 49
An in-memory tempdb would be nice, except when you don't have enough memory.

I'm working with a good-sized database (~300+ GB with indexing). Our server is a bit aged, but it has 16 GB available for the reporting instance (32 GB in the server, with 16 GB reserved for cluster failover).

Even if I managed to convince management to buy a single server with 128 GB just for that system, I don't believe there would be enough room to cache the data and hold tempdb in memory with all the reporting/extracts running at the same time (10 extracts, up to 20 concurrent reports). With some rather large tables in the system, I just don't see how to squeeze all of that into the memory space you are advocating.

I do see a HUGE benefit to putting tempdb on an SSD. I'm not sure about the longevity of an SSD with tempdb beating on it all the time, but you would get big IOPS numbers while it lasts.
Post #856845
Posted Sunday, January 31, 2010 1:32 PM
SSC Rookie


Group: General Forum Members
Last Login: Tuesday, August 27, 2013 4:57 AM
Points: 28, Visits: 124
I can't comment on what the SQL team are thinking now, but I can say the idea of an in-memory DB is not new and is something Microsoft has invested a lot of time exploring.
In the past they found the benefits were marginal for most situations. In fact, once you tried to cater for transactional demands and the need to keep working when you run out of RAM, the design of the in-memory DB converged with the existing SQL Server product. So they chose to take some of that work and continue to optimise SQL Server so that it performs well as an IMDB in those niche situations that would benefit from one.
If you think about how SQL Server's buffer cache works, the tables (or pages) that you use often remain in memory. If you do have more RAM than data, you will find SQL Server behaves like an IMDB, with similar perf gains.
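To see that effect for yourself, one way to check how much of each database the buffer cache is currently holding is a quick DMV query along these lines (sys.dm_os_buffer_descriptors exists from SQL 2005 onward; data pages are 8 KB):

```sql
-- Rough buffer pool usage per database, in MB (8 KB pages)
SELECT DB_NAME(database_id) AS database_name,
       COUNT(*) * 8 / 1024  AS cached_mb
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY cached_mb DESC;
```

If your hot tables fit in RAM, you'll see them dominating this list, which is the "free" IMDB behaviour David describes.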
The issue, as you point out, is the need for log writes and persisting changes to disk. Even tempdb, while it may get reset on startup, needs to log its changes if it is to be able to roll back any individual transaction to a savepoint.

If you have an application suited to the niche gains of an IMDB, i.e. (i) more RAM than data and (ii) essentially read-only, consider setting the DB option to read-only; this reduces the locking overhead and frees the RAM used by locks. Using SQL 2008's data compression and table partitioning features may also help. Putting your data on read-only partitions may help you shrink your maintenance window while batch-refreshing the non-read-only bits of your database. It is also possible to spread your DB over two databases, where one is read-only; the usual issue of keeping the backups in sync is a non-issue, as the RO database doesn't change.
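For anyone wanting to try those two suggestions, a minimal sketch (the database and table names here are made up for illustration; compress before flipping to read-only, since DDL fails on a read-only database, and page compression is an Enterprise Edition feature in SQL 2008):

```sql
-- Apply page compression to a large table while the DB is still writable
ALTER TABLE dbo.BigFactTable REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Then switch the reporting database to read-only,
-- rolling back any open sessions first
ALTER DATABASE ReportingDB SET READ_ONLY WITH ROLLBACK IMMEDIATE;
```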

Clearly, hardware technologies like SSDs can also give big gains. NB: be careful here; SSDs are not "just RAM", and many have a nasty drop in performance when they become nearly full. As write-few, read-many devices they are brilliant, and they also handle streaming large block writes well. But small random writes can cause perf to degrade below that of a high-speed disk. This perf drop varies from vendor to vendor and needs to be researched well.
Post #856859
Posted Sunday, January 31, 2010 1:38 PM
SSC Rookie


Group: General Forum Members
Last Login: Tuesday, August 27, 2013 4:57 AM
Points: 28, Visits: 124
Part 2:
The other common use for an IMDB is Complex Event Processing: the desire to process data first and only log some, none, or all of the changes asynchronously, i.e. to have minimal latency when passing messages or similar types of transactions.
For this use, read up on the StreamInsight feature of the upcoming SQL 2008 R2 release.
Dave
Post #856861
Posted Sunday, January 31, 2010 2:52 PM
Old Hand


Group: General Forum Members
Last Login: Today @ 7:37 AM
Points: 351, Visits: 407
Interesting article, but I thought the last couple of lines were the most interesting:

The idea of getting better performance, just from slipping your database into RAM is intriguing, but I'm not sure that's the best way to improve performance. I'd argue training, time spent on resources like SQLServerCentral, and better code being written is the way to make your system run better.


The focus should be on writing better code, but with the improvements in technology, this art is becoming something of a lost cause. The flip side: should our focus be on writing good code, or on solutions that solve real-world problems?

Just a thought on a rainy Sunday afternoon. I truly believe the focus should be both...




Post #856868
Posted Sunday, January 31, 2010 4:00 PM
Grasshopper


Group: General Forum Members
Last Login: Monday, June 2, 2014 3:30 PM
Points: 11, Visits: 80
I'm with David. If you have enough RAM, then the important (i.e. popular) data stays in memory anyway. If you don't have enough RAM, then an in-memory DB will not help.

I have trouble seeing where the advantages are.
Post #856874
Posted Sunday, January 31, 2010 6:16 PM


SSCoach


Group: General Forum Members
Last Login: Today @ 2:39 PM
Points: 18,064, Visits: 16,108
I think it is an intriguing idea. I was just speaking with some EMC people about this sort of thing. In benchmarking their SSD/flash drives, they were able to clock the disks at up to 50,000 IOPS (though they rate them at 2,500 IOPS). Take this next part with a grain of salt: they said the performance gains stopped at 30,000 IOPS because the bottleneck became the controller cache.
If I could get 30,000 IOPS out of the drives, I wouldn't bother with an in-memory database. With disks becoming so much faster, would there be any point in pouring research into this type of database? I think I would rather see other things developed first.




Jason AKA CirqueDeSQLeil
I have given a name to my pain...
MCM SQL Server, MVP


SQL RNNR

Posting Performance Based Questions - Gail Shaw
Post #856894
Posted Sunday, January 31, 2010 11:07 PM
SSC Rookie


Group: General Forum Members
Last Login: Tuesday, August 27, 2013 4:57 AM
Points: 28, Visits: 124
Jason,
As hinted above, be very careful with SSD manufacturers' quoted perf numbers.
SSD drives all perform brilliantly when first installed, but over time you start to notice a massive perf drop. This is due to their read/write characteristics: while they can read in small chunks (e.g. 4 KB), they need to erase/write in much larger blocks, so even changing one bit could require copying 512 KB, erasing it all, and then writing.
Sure, physical disks also fragment when they are nearly full, but we solve that by allocating a few huge files to SQL Server while the drive is fairly empty.
In short: do your benchmarking carefully and keep ~50% of the SSD free, or your random-write perf can really suck.
Post #856944
Posted Sunday, January 31, 2010 11:25 PM
Forum Newbie


Group: General Forum Members
Last Login: Wednesday, May 7, 2014 9:48 AM
Points: 3, Visits: 49
Regarding SSD performance tailing off:

this has been improved with the implementation of the TRIM command. I know Windows 7 uses it, but I don't recall whether any server OS has implemented it yet.

As for the 512 KB rewrite, the new SandForce controller does some interesting things in that regard. We'll have to see how it performs in the real world.
Post #856946
Posted Monday, February 1, 2010 7:21 AM
Old Hand


Group: General Forum Members
Last Login: Friday, December 19, 2014 8:18 AM
Points: 318, Visits: 1,128
Love the topic and interesting points.

Yes, I agree with others here that on an older server an in-memory database would not help. However, if you added SSD drives for just tempdb, you could see a performance increase of at least 200%.

Back in the days of SQL 7.0 I was playing with a third-party tool that created a RAM drive in memory on Windows NT 4.0 (yes, this is old stuff), and I moved tempdb onto that RAM drive. The increase in performance was outstanding, averaging 200-400%. Now jump to today: we can easily get servers with over 100 GB of RAM and could allocate a slice for a RAM drive, then move tempdb to that RAM drive every time SQL Server starts. The only problem: there is no RAM-disk software for Windows 2008.
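For what it's worth, relocating tempdb doesn't need a script on every start: its file paths are stored in the catalog, and the files are recreated at those paths on each restart. A sketch, assuming the default logical file names (tempdev/templog) and a hypothetical RAM drive R: that must exist before the service starts:

```sql
-- Point tempdb's files at the RAM drive; takes effect at the next
-- SQL Server restart, when tempdb is recreated at the new paths
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, FILENAME = 'R:\tempdb.mdf');
ALTER DATABASE tempdb MODIFY FILE (NAME = templog, FILENAME = 'R:\templog.ldf');
```

If the drive is missing at startup, the instance won't come up cleanly, which is the operational risk of any RAM-drive or removable-media placement.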

Simple yet effective way to get a large performance boost.

Yes, I know, I too have old SQL servers here, and this technology will not help them. But any new (64-bit) servers would be able to take advantage of this setup.

Just my thoughts,

Rudy



Post #857166