
Are the posted questions getting worse?
Posted Friday, January 24, 2014 2:20 PM


SSCertifiable


Group: General Forum Members
Last Login: Yesterday @ 1:46 PM
Points: 5,384, Visits: 7,458
Lynn Pettis (1/24/2014)
Thanks to all who answered my question so far. I was just curious as I overheard one of our SAs make the comment that it was a best practice to have sufficient memory to hold the entire database in memory. I just wonder where they come up with these ideas.


Probably after reading a SQL CAT paper which had something along these lines:
"We made sure that the database was small enough to entirely fit in memory with plenty of room for the expected cache for our plans, memory usage, and tempdb usage, to attempt to reduce the impact of I/O in determining the impact of these modifications..."

which ends up as:
"Put everything in memory and we don't care about disks!!1! B3ST PR4CT!C3!!1!"



- Craig Farrell

Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

For better assistance in answering your questions | Forum Netiquette
For index/tuning help, follow these directions. | Tally Tables

Twitter: @AnyWayDBA
Post #1534648
Posted Friday, January 24, 2014 2:26 PM
Mr or Mrs. 500


Group: General Forum Members
Last Login: Yesterday @ 1:15 PM
Points: 590, Visits: 6,754
Actually, I thought I faintly remembered reading something similar at some point; after doing some digging, it turns out it was (incorrectly) attributed to Brent Ozar's site in my head:

Blog Entry

There's a snippet in there saying that, if you can fit the database into memory, the I/O need overall should be reduced (paraphrasing). Somehow or another, that got twisted into the "it's best to fit the database into RAM" idea in my head. Not too big of a logical leap to make; perhaps others have made similarly misconstrued assumptions.
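
If anyone wants to check how close their own databases come to fitting in the buffer pool, a rough sketch along these lines (just the standard sys.dm_os_buffer_descriptors DMV, nothing from that blog post; it needs VIEW SERVER STATE permission) shows how much of each database is currently cached:

    -- Approximate buffer pool usage per database (8 KB pages converted to MB).
    -- Compare against the data file sizes to see how much of each DB is actually in RAM.
    SELECT DB_NAME(bd.database_id) AS database_name,
           COUNT(*) * 8 / 1024    AS cached_mb
    FROM sys.dm_os_buffer_descriptors AS bd
    GROUP BY bd.database_id
    ORDER BY cached_mb DESC;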




-
Post #1534653
Posted Friday, January 24, 2014 6:03 PM


Hall of Fame


Group: General Forum Members
Last Login: Yesterday @ 11:32 PM
Points: 3,420, Visits: 5,351
HowardW (1/24/2014)
Lynn Pettis (1/24/2014)
Has anyone else ever heard that it is a best practice to have enough server memory to hold your entire database in memory?


No, but I've heard plenty of people try to argue that you can't possibly make use of more RAM than the total size of your databases.


You can never have too much memory.

Wait... what was I saying?



My mantra: No loops! No CURSORs! No RBAR! Hoo-uh!

My thought question: Have you ever been told that your query runs too fast?

My advice:
INDEXing a poor-performing query is like putting sugar on cat food. Yeah, it probably tastes better but are you sure you want to eat it?
The path of least resistance can be a slippery slope. Take care that fixing your fixes of fixes doesn't snowball and end up costing you more than fixing the root cause would have in the first place.


Need to UNPIVOT? Why not CROSS APPLY VALUES instead?
Since random numbers are too important to be left to chance, let's generate some!
Learn to understand recursive CTEs by example.
Splitting strings based on patterns can be fast!
Post #1534687
Posted Saturday, January 25, 2014 1:34 PM


SSCertifiable


Group: General Forum Members
Last Login: Yesterday @ 4:57 PM
Points: 7,745, Visits: 9,493
Evil Kraig F (1/24/2014)
Lynn Pettis (1/24/2014)
Thanks to all who answered my question so far. I was just curious as I overheard one of our SAs make the comment that it was a best practice to have sufficient memory to hold the entire database in memory. I just wonder where they come up with these ideas.


Probably after reading a SQL CAT paper which had something along these lines:
"We made sure that the database was small enough to entirely fit in memory with plenty of room for the expected cache for our plans, memory usage, and tempdb usage, to attempt to reduce the impact of I/O in determining the impact of these modifications..."

which ends up as:
"Put everything in memory and we don't care about disks!!1! B3ST PR4CT!C3!!1!"

Or maybe something that dates from a lot earlier. Between 1986 and 1995 I was involved in three projects aiming at in-store (i.e. in-memory) databases, all parallel systems using interesting networks to handle the RAM. There was a lot of interest in that sort of thing as part of the general push into High Performance Computing and Networking in Europe during that decade, and a lot of people were looking into it; some systems were even sold (not very many, I think). National governments had been subsidising HPCN research in response to the perceived economic threat from the Japanese computer industry from 1984 or thereabouts, the CEC ran with it from a bit later, and various things were published by various people. I don't think anyone ever claimed that it was always best practise to have the whole DB in RAM, though - but for databases up to about a terabyte (including log files as well as data files) it was eminently feasible for suitable applications using a network of, say, 256 pretty pedestrian cheap computers (given the right network, of course, and a lot of conditions that probably applied for OLTP but didn't cover complex queries involving more than a small percentage of the data). Picking up some of the results from that work from published reports might well have given some people some pretty crazy ideas, perhaps ending up with an "all databases" best practise claim.


Tom
Post #1534745
Posted Saturday, January 25, 2014 4:57 PM


SSC-Dedicated


Group: General Forum Members
Last Login: Yesterday @ 11:18 PM
Points: 35,267, Visits: 31,759
Shifting gears a bit, I've seen a trend starting to develop that will define the next wave of performance and memory problems. Even some people that I think have a good handle on T-SQL are starting to say things like "It doesn't matter because our database fits in memory" or "It doesn't matter because we have SSDs" with "it" being good code.

The bread'n'butter database that we have at work all fits in memory, and yet we have some serious performance problems with some of the legacy code because of logical reads. Some recent fixes were to relatively simple code that did (and this is not a typo) 10 to 16 TRILLION reads per 8-hour period. And it wasn't just one piece of code, either. Some of the code was high-hit front-end code and some of it was batch code. I have to tell you that even memory has a problem with multiple pieces of code that consume 10 to 16 trillion reads each, even when it is spread out over an 8-hour period.

Adding more memory isn't going to fix that kind of code for performance or throughput.
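
For anyone who wants to hunt for that kind of code on their own server, a rough sketch along these lines (just the standard sys.dm_exec_query_stats and sys.dm_exec_sql_text DMVs; the TOP (20) is an arbitrary cutoff, and the numbers only cover plans still in cache) will surface the biggest logical-read consumers:

    -- Statements ranked by total logical reads accumulated since their plans were cached.
    SELECT TOP (20)
           qs.total_logical_reads,
           qs.execution_count,
           qs.total_logical_reads / qs.execution_count AS avg_logical_reads,
           SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
                     ((CASE qs.statement_end_offset
                            WHEN -1 THEN DATALENGTH(st.text)
                            ELSE qs.statement_end_offset
                       END - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_logical_reads DESC;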


--Jeff Moden
"RBAR is pronounced "ree-bar" and is a "Modenism" for "Row-By-Agonizing-Row".

First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column."

(play on words) "Just because you CAN do something in T-SQL, doesn't mean you SHOULDN'T." --22 Aug 2013

Helpful Links:
How to post code problems
How to post performance problems
Post #1534758
Posted Sunday, January 26, 2014 7:31 AM


Say Hey Kid


Group: General Forum Members
Last Login: Friday, September 26, 2014 1:39 PM
Points: 670, Visits: 6,720
Jeff Moden (1/25/2014)
Shifting gears a bit, I've seen a trend starting to develop that will define the next wave of performance and memory problems. Even some people that I think have a good handle on T-SQL are starting to say things like "It doesn't matter because our database fits in memory" or "It doesn't matter because we have SSDs" with "it" being good code.

The bread'n'butter database that we have at work all fits in memory, and yet we have some serious performance problems with some of the legacy code because of logical reads. Some recent fixes were to relatively simple code that did (and this is not a typo) 10 to 16 TRILLION reads per 8-hour period. And it wasn't just one piece of code, either. Some of the code was high-hit front-end code and some of it was batch code. I have to tell you that even memory has a problem with multiple pieces of code that consume 10 to 16 trillion reads each, even when it is spread out over an 8-hour period.

Adding more memory isn't going to fix that kind of code for performance or throughput.


It would be interesting to see the $ value associated with your code fixes.
Adding more memory is often much more costly than people expect.
After several rounds of the server guys having to toss out chips just to add more, I became part of the process.
The configuration was not always the cheapest way to start, but we had options for making it to the end of the lease that were cheaper overall.

A real eye-opener for those on a tight budget running smaller machines (2-socket, 32 GB RAM): price out 4- or 8-socket machines with over 64 GB of RAM.
We had a consolidation project where those in charge didn't have a good handle on this.
The projected savings took a hit.
Post #1534787
Posted Sunday, January 26, 2014 10:01 AM


SSCarpal Tunnel


Group: General Forum Members
Last Login: Yesterday @ 10:48 AM
Points: 4,126, Visits: 3,427
Greg Edwards-268690 (1/26/2014)
Jeff Moden (1/25/2014)
Shifting gears a bit, I've seen a trend starting to develop that will define the next wave of performance and memory problems. Even some people that I think have a good handle on T-SQL are starting to say things like "It doesn't matter because our database fits in memory" or "It doesn't matter because we have SSDs" with "it" being good code.

The bread'n'butter database that we have at work all fits in memory, and yet we have some serious performance problems with some of the legacy code because of logical reads. Some recent fixes were to relatively simple code that did (and this is not a typo) 10 to 16 TRILLION reads per 8-hour period. And it wasn't just one piece of code, either. Some of the code was high-hit front-end code and some of it was batch code. I have to tell you that even memory has a problem with multiple pieces of code that consume 10 to 16 trillion reads each, even when it is spread out over an 8-hour period.

Adding more memory isn't going to fix that kind of code for performance or throughput.


It would be interesting to see the $ value associated with your code fixes.
Adding more memory is often much more costly than people expect.
After several rounds of the server guys having to toss out chips just to add more, I became part of the process.
The configuration was not always the cheapest way to start, but we had options for making it to the end of the lease that were cheaper overall.

A real eye-opener for those on a tight budget running smaller machines (2-socket, 32 GB RAM): price out 4- or 8-socket machines with over 64 GB of RAM.
We had a consolidation project where those in charge didn't have a good handle on this.
The projected savings took a hit.

In my/our case the decision went the other way: to meet the performance criteria, we got a 64-core, 384 GB server. We could implement predictive caching, and we may still have to, but the new server was in place and running in three weeks, while designing and implementing caching will take months.
Post #1534794
Posted Sunday, January 26, 2014 11:56 PM


SSChampion


Group: General Forum Members
Last Login: Yesterday @ 11:32 PM
Points: 13,017, Visits: 10,801
hisakimatama (1/24/2014)
Actually, I thought I faintly remembered reading something similar at some point; after doing some digging, it turns out it was (incorrectly) attributed to Brent Ozar's site in my head:

Blog Entry

There's a snippet in there saying that, if you can fit the database into memory, the I/O need overall should be reduced (paraphrasing). Somehow or another, that got twisted into the "it's best to fit the database into RAM" idea in my head. Not too big of a logical leap to make; perhaps others have made similarly misconstrued assumptions.


I remember seeing a session by Brent Ozar on Channel 9 (I think it was TechEd North America) where he does say, for OLTP databases, to just load the whole thing into memory. He then adds that this of course doesn't work for data warehouses.




How to post forum questions.
Need an answer? No, you need a question.
What’s the deal with Excel & SSIS?

Member of LinkedIn. My blog at LessThanDot.

MCSA SQL Server 2012 - MCSE Business Intelligence
Post #1534855
Posted Monday, January 27, 2014 2:10 AM
SSCommitted


Group: General Forum Members
Last Login: Yesterday @ 7:18 AM
Points: 1,594, Visits: 5,633
Jeff Moden (1/25/2014)
Some recent fixes were to relatively simple code that did (and this is not a typo) 10 to 16 TRILLION reads per 8-hour period.


Wow...by my estimates that's half a billion reads per *second*! I dread to think what sort of monster machine you'd need to provide that sort of throughput even for logical reads.
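
(Back-of-the-envelope check: 16 × 10^12 reads over 8 × 3,600 = 28,800 seconds works out to roughly 5.6 × 10^8 reads per second at the top end, and about 3.5 × 10^8 at the 10 trillion end.)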

I recommend tasking whoever wrote that with producing a spreadsheet program on a ZX Spectrum 48K -- hopefully that'll teach them the advantages of optimisation.
Post #1534894
Posted Monday, January 27, 2014 5:55 AM


Right there with Babe


Group: General Forum Members
Last Login: Yesterday @ 10:23 AM
Points: 728, Visits: 5,231
Jeff Moden (1/25/2014)
Shifting gears a bit, I've seen a trend starting to develop that will define the next wave of performance and memory problems. Even some people that I think have a good handle on T-SQL are starting to say things like "It doesn't matter because our database fits in memory" or "It doesn't matter because we have SSDs" with "it" being good code.


/Me slamming shoe on desk
GOOD CODE IS ALWAYS REQUIRED!
Post #1534953