SQLServerCentral
Fitting Into RAM


Steve Jones
SSC Guru
Group: Administrators
Points: 602354 Visits: 21101
Comments posted to this topic are about the item Fitting Into RAM

Follow me on Twitter: @way0utwest
Forum Etiquette: How to post data/code on a forum to get the best help
My Blog: www.voiceofthedba.com
Richard
Ten Centuries

Group: General Forum Members
Points: 1189 Visits: 1407
Ha, I dream of 64GB! I currently have 6 servers with 4GB, 7 servers with 8GB and the rest (23) ranging from 12GB to 50GB.

Our SAP and BW servers are in the cloud now, but even those only had 32GB!
Graeme100
SSCertifiable

Group: General Forum Members
Points: 6755 Visits: 824
Hi,
A good discussion topic, I think.
I have been having these conversations with my superiors over the last 10 years.
I have discovered that no matter how much benefit this would give, it will always come second to budget.
We have recently moved to the Amazon cloud; lots of pros and definitely lots of cons, but now we have resource-based costs.
All this means is that we work on the principle of "what is the least amount of resources we can work with?"
In response to the topic: we have many small to medium-sized databases, and our systems would be dramatically improved by loading them completely into RAM, but it just won't happen.
The alternative is to find other technologies that suit certain elements of our product; for example, we are using Amazon Elasticsearch to store certain data, thus reducing the number of calls to the main databases.
Rather than design to the capabilities of SQL Server and provide the resources (particularly RAM) to make it run well, I see in our products in particular a move to find cheaper "bolt-on" solutions that solve immediate problems.
I'm not convinced.
Give me more RAM!

Graeme



hammackk
SSC-Enthusiastic

Group: General Forum Members
Points: 172 Visits: 44
Wow 64GB of RAM is like a dream. Right now, we use VMs, which we could get more RAM for, but there's pretty much no business case for more RAM unless the server is getting overloaded on a regular basis. Our biggest database is currently running on 24GB, while the database size is ~300GB. So yes, we could put the whole thing in 1TB RAM, but that's not even a remote possibility.
Summer90
One Orange Chip

Group: General Forum Members
Points: 28577 Visits: 4221
I wish...
On the topic of memory AND storage: I keep hearing it is cheap... it is only cheap until you ask for more.
For most of our SQL Servers I try to ask for a reasonable amount of memory based on the system. If I always asked for over 20 GB, the server team would seriously question it.
I wish I could get 64 GB of memory for nearly all production SQL Servers... that just isn't going to happen.
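When making the case for more memory to a server team, actual usage numbers help. As a sketch (assuming access to the standard performance-counter DMV), this diagnostic compares how much memory the instance currently holds against its target:

```sql
-- Hypothetical diagnostic: current vs. target memory for this instance.
-- counter_name values are space-padded nchar, hence LIKE instead of =.
SELECT RTRIM(counter_name) AS counter_name,
       cntr_value / 1024   AS memory_mb
FROM sys.dm_os_performance_counters
WHERE counter_name LIKE 'Total Server Memory%'
   OR counter_name LIKE 'Target Server Memory%';
```

If Total consistently sits at Target, the instance is already using everything it has been allowed, which is at least some evidence behind a request for more.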
Jeff Moden
SSC Guru

Group: General Forum Members
Points: 870324 Visits: 47424

We have a couple of 2.2 TB databases, and they're that large because of a very common problem in the industry... people use RDBMSs as a "keep it forever" file system. The other problem is that, much like dust gathering on available horizontal surfaces, people abhor a vacuum. If you have the space, why not abuse it?

It's just like CPU power. People used to do some incredible things with computers that we'd now consider to be a "toy" but people have gotten lazy with the power of today's machines. They've also been pressured by schedules to produce code as fast as they can instead of doing it right by planning on increased usage and scalability.

Would I turn down extra CPU power or RAM or disk space? Not on your life! Will it make me a lazy programmer? Again, not on your life, because I know that stuff grows with increasing numbers of customers and capabilities and that, someday, we'll need the headroom. The key is to not waste that headroom.

I'll also remind folks that while hardware is certainly important when it comes to fast code, writing code that runs fast and efficiently (and, of course, correctly, which is a whole 'nuther story) is much more important. We went through this recently (about 3 years ago). We went from 16 CPUs and 128 GB to a "monster" machine with 32 CPUs and 256 GB of RAM, with full-up SSDs for a new SAN. Everyone (except me) was looking forward to the incredible performance gains that would be realized. The net result? Some of the hours-long batch jobs ran about twice as fast (most didn't) and none of the GUI code ran faster. Compared to the code changes we've made, which have averaged a 60X performance gain and up to a 1,000,000X performance gain (NOT a misprint; much GUI code has been reduced from 22 seconds to microseconds), the hardware changes were almost a joke.

Even most MPP appliance vendors will advertise only a 30X performance gain... and it comes at the cost of not only some hairy prices but you still DO have to change your code to be able to use them. Ironically, a lot of such code changes would benefit even a monolithic box.

And, no... except for the occasional "trick" of using a Tally Table or a nearly equivalent Tally function, the only real "trick" we used to make our code run so fast was to simply write better code that "LETS" SQL Server do what it already knows how to do. It's not difficult, either. You just have to know what SQL Server and (especially) T-SQL are capable of, and then you won't need so many CPUs and so much RAM.
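For readers who haven't met the technique, here is a minimal sketch of the kind of inline Tally (numbers) construct Jeff is referring to, generated without any loops:

```sql
-- A minimal inline Tally: 10 base rows cross-joined to make up to 1,000 rows.
WITH E1(n) AS (
    SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) v(n)
),
Tally(n) AS (
    SELECT TOP (1000) ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
    FROM E1 a CROSS JOIN E1 b CROSS JOIN E1 c
)
SELECT n FROM Tally;  -- the numbers 1 through 1000, no WHILE loop required
```

The same CTE can then be joined against data to split strings, expand date ranges, or replace row-by-row loops with a single set-based statement.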

And that all brings us back to people. The only way we were able to achieve such things is for everyone to take the bloody chip off their shoulder and work together and to embrace experimentation. In other words, a culture change (the latest buzzword term for this is DevOps, which has already been bastardized into meaning something that it is not) and that culture change needs to be made from the CEO right down to the person that swamps out the toilets.

Hardware is good, but code is where the performance is. To write good code, the culture must change, or you're going to need a shedload of hardware, a ton of expensive software (which frequently makes things worse), and hundreds of totally unnecessary and unsupportable tricks.

For you good folks stuck with 4GB or 8GB of RAM, my heart goes out to you because, due to poor coding by the people that built it, the operating system itself usually requires at least that much.



--Jeff Moden

RBAR is pronounced ree-bar and is a Modenism for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code:
Stop thinking about what you want to do to a row... think, instead, of what you want to do to a column.
If you think it's expensive to hire a professional to do the job, wait until you hire an amateur. -- Red Adair

Helpful Links:
How to post code problems
How to post performance problems
Forum FAQs
Summer90
One Orange Chip

Group: General Forum Members
Points: 28577 Visits: 4221
Jeff Moden - Friday, September 21, 2018 8:05 AM

Reminds me of a previous boss of mine in the 1990s when talking about upgrading our Mainframe.... Throwing hardware at a problem simply masks the problem... bad code.
Jeff Moden
SSC Guru

Group: General Forum Members
Points: 870324 Visits: 47424
Summer90 - Friday, September 21, 2018 8:12 AM


Reminds me of a previous boss of mine in the 1990s when talking about upgrading our Mainframe.... Throwing hardware at a problem simply masks the problem... bad code.


Agreed, except it frequently doesn't even mask the problem, because it frequently doesn't actually help. I've also seen it make the problem worse when unexpected parallelism comes into play.

--Jeff Moden

Eric M Russell
SSC Guru

Group: General Forum Members
Points: 108453 Visits: 14551
The primary database I work with could best be described as an operational data store, containing time-versioned copies of tables from multiple OLTP data sources. There are high-volume call center applications and mobile apps that use this database between 9am and 6pm on weekdays, plus nightly ETL and sporadic historical reporting. The database is 4+ TB with 200 GB max memory reserved for SQL Server, so only about 5% of the database can fit in memory.

However, I don't see stacking more memory as necessarily worthwhile. Only the data pages needed to cover queries really belong in the buffer cache, and that's only a small fraction of the database. The vast majority of the data is historical, referenced only for historical reporting, so I keep the daily usage queries under 100 ms by tuning the SQL to prevent full or partial table scans, hash match joins, and sorting. There are some queries joining multiple tables where the average latency is 10 ms or less, and that's with dozens of requests per second, because they have a very high cache hit ratio.
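A rough way to see which databases are actually occupying the buffer cache is the standard buffer-descriptor DMV; a sketch (note this view can be slow to scan on instances with a lot of memory):

```sql
-- Approximate buffer pool usage per database; each page is 8 KB.
SELECT d.name              AS database_name,
       COUNT(*) * 8 / 1024 AS cached_mb
FROM sys.dm_os_buffer_descriptors b
JOIN sys.databases d ON d.database_id = b.database_id
GROUP BY d.name
ORDER BY cached_mb DESC;
```

On a workload like the one described, the hot fraction showing up here would be far smaller than the 4+ TB on disk.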


"The universe is complicated and for the most part beyond your control, but your life is only as complicated as you choose it to be."
djackson 22568
SSCrazy Eights

Group: General Forum Members
Points: 9734 Visits: 1298
My largest SQL Server database is 360GB. Most of them are under 100GB, with an average of maybe 10GB.

My largest database is approximately 2TB. Yes, terabytes. I manage four databases this size, plus another 17 databases that are around 175GB. These are all running another product for the back end database, and use RHEL for the OS.

I believe our VM hosts have as much as 192GB of RAM, so we aren't going to load these into RAM. I do have lots of smaller production databases that could fit, but none of them need that. Our SQL Server Windows servers have anywhere from 4GB of RAM up to 16GB for the most part. If they have at least 8GB, we limit SQL Server to 4GB due to how badly Windows manages memory. If we have at least 12GB, we leave 5GB for Windows and give the rest to SQL Server. The 360GB database has 16GB of RAM and runs absolutely fine.
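The cap Dave describes is typically applied with sp_configure. As a sketch, using 4096 MB to match his 4GB example:

```sql
-- Cap SQL Server's memory so Windows keeps some headroom (example value: 4 GB).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 4096;
RECONFIGURE;
```

Setting 'max server memory' is generally preferable to letting the buffer pool grow until the OS starts paging.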

My desktop has 32GB of RAM, and I frequently max that out. My previous machine had only 16GB and a very nice processor, but it froze frequently from having too many things running. When you add in Windows requirements and antivirus, 8GB doesn't leave any RAM for applications!

What is interesting is that my home machine has 48GB, and a SMOKING processor, plus a very nice graphics card. I have always had better equipment for my personal use than at work. I build my own machines, and I spend about half as much as work does, for about 10 times the performance that our stock machines have.

Dave
