Performance tuning is a challenge for many data professionals. It's also a task we struggle to find time for in many environments. Developers have new features to build and DBAs have plenty of other work. As a result, performance testing is usually done in a crisis.
Netflix is always looking at performance, since their customers and audience are very sensitive to delays. They wrote a post examining the performance of a potential technology change, which was a good look at the different ways they test the effects of something new. In this case, they discuss changes they were considering to their networking stack. There is some technical discussion of why, but the interesting part of the piece, for me, was the A/B testing section.
They planned an experiment, conducted it, and then measured the results. This wasn't just a test on a developer workstation, which is what I've seen most people do. This was a test with half a million users. Netflix has over 150 million users, so this isn't a large share of them, but it's also not a tiny number. It's enough to look for potential issues, though I'd hope they'd expand this to 2-5% of users to verify their results before they deploy to everyone.
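The heart of an A/B test like this is simple: measure the same metric for a control group and a treatment group and compare. As a minimal sketch, assuming a latency metric and entirely made-up numbers (the Netflix post doesn't publish its raw data), using only the Python standard library:

```python
import random
import statistics

random.seed(42)  # fixed seed so the experiment is repeatable

# Hypothetical per-request latencies (ms) for a control group and a
# treatment group that received the change; both distributions are invented.
control = [random.gauss(120, 15) for _ in range(500_000)]
treatment = [random.gauss(117, 15) for _ in range(500_000)]

mean_c = statistics.fmean(control)
mean_t = statistics.fmean(treatment)
delta_pct = (mean_t - mean_c) / mean_c * 100

print(f"control mean:   {mean_c:.1f} ms")
print(f"treatment mean: {mean_t:.1f} ms")
print(f"difference:     {delta_pct:+.2f}%")
```

With half a million samples per group, even a small shift in the mean stands out from the noise, which is exactly why a sample of that size is useful before a wider rollout.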
Too often I see developers assume that a test on their workstation with a very small set of data determines whether their approach makes sense. That's a good place to start, but before you get too far along deploying your changes, some test at significant scale ought to occur, with repeatable measurements. If you want to ensure your auditing trigger, update code, or anything else that might impact lots of users won't cause issues, test on a larger set of data. Then repeat the test and verify that your results make sense.
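"Repeatable measurements" means timing the same workload several times and looking at the spread, not trusting a single run. A minimal sketch, where `run_workload` is a hypothetical stand-in for whatever code you're testing:

```python
import statistics
import time

def run_workload(rows):
    # Stand-in for the code under test (an update, a trigger, a query);
    # here it just sums a range so the example is self-contained.
    return sum(range(rows))

# Repeat the measurement several times and report the median,
# so one noisy run doesn't drive the conclusion.
timings = []
for _ in range(5):
    start = time.perf_counter()
    run_workload(1_000_000)
    timings.append(time.perf_counter() - start)

print(f"median: {statistics.median(timings) * 1000:.1f} ms "
      f"(min {min(timings) * 1000:.1f}, max {max(timings) * 1000:.1f})")
```

If the min and max are far apart, something in the environment is interfering and the numbers aren't telling you much yet.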
Testing is a skill. It's a bit of an art, but very much a science. Once you think your code works, make sure you test it against large data sets, just to be sure it really does. If you want ideas on how, Jeff Moden and Dwain Camps have a few articles that might help.
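Building a large test set doesn't have to be hard. As a sketch, assuming invented column names and a fixed seed so two runs produce identical data (which is what makes the repeated test comparable):

```python
import csv
import random

random.seed(1)  # same seed -> same file, so repeated tests see identical data

# Generate a 100,000-row CSV of fake users; scale the count up as needed.
with open("test_users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["user_id", "region", "signup_day"])
    for user_id in range(100_000):
        writer.writerow([
            user_id,
            random.choice(["NA", "EU", "APAC"]),
            random.randrange(0, 365),
        ])
```

Point your trigger, update, or query at a set like this instead of the ten rows on your workstation, and the measurements start to mean something.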