There are also differences between transactional systems, data warehouses, and machines running SSAS cubes. Designing for the load peaks is critical. I was fortunate to have a couple of server admins who worked through the bumps with me, and when corporate wanted to consolidate, they funded an engagement with Microsoft to outline an architecture. The result was rewarding: everyone else expected the SSAS and data warehouse pieces to get little attention, but a great deal of the report was about how different these workloads were, something I had been pointing out for a long time. When I looked at the diagrams, they were so similar to what I had worked toward over several years and upgrades that it almost seemed like I had paid them for some influence.
It was a great learning experience for both sides, as we had to work together even more closely. I had no access to the host machines, which added a whole new layer to troubleshoot when things were slow. One time we traced what looked like our file server having issues to the fact that its host was being used for backups of all the VMs, which had users complaining at 6 a.m. Another time, our SSRS reports (data-driven subscriptions, emailed to multiple people) would hang in the middle of a run, sending out only part of the batch. That turned out to be a configuration issue on their end. We had been spreading out the schedule, which helped, but it got worse over time. Once the server configuration was fixed, we could send a hundred reports out in a minute with no failures.
So to a great degree, VMs are like trying to run SQL Server, a data warehouse, and a cube on the same machine. Each piece places different loads and pressures on the hardware, and you need to be able to balance them. Sometimes a degree of isolation (or resource allotment) is needed to handle the peaks. Just throwing hardware at something rarely works in the long run.
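On the SQL Server side, one way to carve out that kind of resource allotment is Resource Governor, which caps what a given workload can take from the box. This is a minimal sketch, not the setup described above; the pool, group, and login names are hypothetical:

```sql
-- Sketch: cap a reporting workload so it can't starve the transactional side.
-- Pool, group, and login names below are made up for illustration.
CREATE RESOURCE POOL ReportingPool
    WITH (MAX_CPU_PERCENT = 40, MAX_MEMORY_PERCENT = 30);

CREATE WORKLOAD GROUP ReportingGroup
    USING ReportingPool;
GO

-- Classifier function routes incoming sessions to a workload group.
CREATE FUNCTION dbo.rgClassifier() RETURNS sysname
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'ssrs_service'   -- hypothetical reporting login
        RETURN N'ReportingGroup';
    RETURN N'default';
END;
GO

ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rgClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
```

The idea is the same balancing act as with the VMs: the reporting peaks still happen, but they hit a bounded slice of CPU and memory instead of the whole machine.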