True, but then the developer has to go back and fix it when it fails instead of building it in as they go. I'd absolutely have performance tests, but I would still require that they run against a fully populated database. Every time we've done this in my past roles, we've had zero scalability issues and ended up with better architecture as a result.
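As a minimal sketch of what a performance test against a fully populated database might look like (the table, row count, and time budget here are all illustrative assumptions, not from the original), using SQLite so it stays self-contained:

```python
import sqlite3
import time

# Hypothetical numbers: a production-scale row count and a latency
# budget chosen for illustration. The point is that the test fails
# at realistic volume, not against a ten-row fixture.
ROWS = 100_000
TIME_BUDGET_S = 0.5

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    ((i % 1000, i * 0.01) for i in range(ROWS)),
)
conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
conn.commit()

# Time a query the application would actually run.
start = time.perf_counter()
rows = conn.execute(
    "SELECT customer_id, SUM(total) FROM orders "
    "WHERE customer_id = ? GROUP BY customer_id",
    (42,),
).fetchall()
elapsed = time.perf_counter() - start

assert len(rows) == 1, "expected one aggregate row for the customer"
assert elapsed < TIME_BUDGET_S, f"query took {elapsed:.3f}s, over budget"
```

A test like this catches the missing index or the O(n) query during development, which is exactly the point being made: the problem surfaces while the developer is still in the code, not after a failure in production.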
In the end, the only person who can make a quality product is the developer; everything else (specs, unit tests, SIT, SAT, UAT, etc.) only serves to verify that the quality is there. The farther a defect gets from the developer, the more expensive the feedback loop to get it fixed. The key to efficient development is understanding this and driving toward it. This is why unit tests integrated into the build environment are a very good thing.
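Mechanically, "unit tests integrated into the build" just means the build step runs the suite and a nonzero exit code fails the build before anything moves downstream. A minimal sketch (the `apply_discount` function and test are hypothetical, invented for illustration):

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# A tiny module-plus-test written to a temp file, standing in for a
# real codebase. The build step then runs the tests exactly as a CI
# job would: execute the suite, check the exit code.
test_module = textwrap.dedent("""
    import unittest

    def apply_discount(total, pct):
        return round(total * (1 - pct / 100), 2)

    class DiscountTest(unittest.TestCase):
        def test_ten_percent(self):
            self.assertEqual(apply_discount(200.0, 10), 180.0)

    if __name__ == "__main__":
        unittest.main()
""")

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "test_discount.py")
    with open(path, "w") as f:
        f.write(test_module)
    result = subprocess.run(
        [sys.executable, path], capture_output=True, text=True
    )

# Nonzero exit code -> the build stops here, feedback reaches the
# developer in minutes rather than after QA or UAT.
build_ok = result.returncode == 0
```

The same gate works with any runner (`pytest`, `mvn test`, `go test`): the build tool treats test failure as build failure, which keeps the feedback loop as close to the developer as possible.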
But all too often, regardless of process, attitude is the major driving factor. It's critical to make sure developers understand this and don't adopt a 'chuck it over the wall' mentality. As a development manager, I once received an e-mail from a dev team lead, multiple paragraphs long, blasting the QA group for repeatedly rejecting builds that were failing an acceptance test. The last line of the e-mail was 'it didn't work, there's a bug we're fixing now'. Classic.