We have a couple of 2.2 TB databases and they're that large because of a very common problem in the industry... people use RDBMSs as a "keep it forever" file system. The other problem is that, much like dust gathering on available horizontal surfaces, people abhor a vacuum. If you have the space, why not abuse it? It's just like CPU power. People used to do some incredible things with computers that we'd now consider "toys", but people have gotten lazy with the power of today's machines. They've also been pressured by schedules to produce code as fast as they can instead of doing it right by planning for increased usage and scalability.

Would I turn down extra CPU power or RAM or disk space? Not on your life! Will it make me a lazy programmer? Again, not on your life, because I know that stuff grows with increasing numbers of customers and capabilities and that, someday, we'll need the headroom. The key is to not waste that headroom.

I'll also remind folks that while hardware is certainly important when it comes to being able to run fast code, writing code that runs fast and efficiently (and, of course, correctly, which is a whole 'nuther story) is much more important. We've been through this recently (about 3 years ago). We went from 16 CPUs and 128 GB of RAM to a "monster" machine with 32 CPUs and 256 GB of RAM, with a new, all-SSD SAN. Everyone (except me) was looking forward to the incredible performance gains that would be realized. The net result? Some of the hours-long batch jobs ran about twice as fast (most didn't) and none of the GUI code ran faster. Compared to the code changes we've made, which have averaged a 60X performance gain and have reached up to a 1,000,000X performance gain (NOT a misprint; much GUI code has been reduced from 22 seconds to microseconds), the hardware changes were almost a joke.

Even most MPP appliance vendors will advertise only a 30X performance gain...
and it comes not only at the cost of some hairy prices, but you still DO have to change your code to be able to use them. Ironically, a lot of such code changes would benefit even a monolithic box.

And, no... except for the occasional "trick" of using a Tally Table or a nearly equivalent Tally function, the only real "trick" we used to make our code run so fast was to simply write better code that "LETS" SQL Server do what it already knows how to do. It's not difficult, either. You just have to know what SQL Server and (especially) T-SQL are capable of, and then you won't need so many CPUs and so much RAM.

And that all brings us back to people. The only way we were able to achieve such things was for everyone to take the bloody chip off their shoulder, work together, and embrace experimentation. In other words, a culture change (the latest buzzword for this is DevOps, which has already been bastardized into meaning something that it is not), and that culture change needs to be made from the CEO right down to the person who swamps out the toilets.

Hardware is good, but code is where the performance is. To write good code, the culture must change, or you're going to need a shedload of hardware, a ton of expensive software (which frequently makes things worse), and hundreds of totally unnecessary and unsupportable tricks.

For you good folks stuck with 4 GB or 8 GB of RAM, my heart goes out to you because, due to poor coding by the people who built it, the operating system itself usually requires at least that much.
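As an aside, for folks who haven't run across the "Tally" trick mentioned above: the post doesn't show the author's actual Tally function, but a common way to build an inline numbers generator in T-SQL is the cascading-CTE pattern below (a sketch; the CTE names and the row count of 1,000 are just illustrative). It produces a sequence of integers with no loop, no cursor, and no reads against a physical table, which is exactly the kind of set-based code SQL Server already knows how to run fast.

```sql
-- Cascading CTEs: E1 has 10 rows, E2 cross-joins it to 100, E4 to 10,000.
WITH E1(N) AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),
                                     (1),(1),(1),(1),(1)) AS t(N)),
     E2(N) AS (SELECT 1 FROM E1 a CROSS JOIN E1 b),  -- 100 rows
     E4(N) AS (SELECT 1 FROM E2 a CROSS JOIN E2 b)   -- 10,000 rows
SELECT TOP (1000)
       N = ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
FROM E4;
```

Typical uses are splitting delimited strings, generating date ranges, and replacing WHILE loops with a single set-based query, which is where the kind of "seconds to microseconds" gains described above tend to come from.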