The first key to performance tuning: Don't panic.
The second key, as mentioned in the article, is to have enough knowledge to do very fast, yet still accurate, root cause determination.
Personally, I usually go in more or less this order (varying based on domain knowledge/gut instinct/tacit knowledge), but rarely use all the steps:
If you're panicking, take a deep breath, hold for a count of five, release. Repeat until you are no longer panicking.
Check with actual users to verify the actual symptoms... and how they differ from before!
Check overall server stats
    Resource Monitor (Windows Server 2008 R2 or later)
    Disk latency and throughput, CPU, memory, network throughput
    SSMS Activity Monitor
    Adam Machanic's Who Is Active
    Perfmon (you can save an entire setup, and I love report mode instead of graphs)
    Your company's performance reporting software/queries
    Disk and network error counters!
Recent configuration changes (MaxDOP, cost threshold for parallelism, etc.)
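As a rough sketch of what the "server stats" and "recent configuration changes" checks can look like in T-SQL (the idea of starting with per-file latency is my own habit, not gospel; column math is against cumulative-since-restart counters, so compare two samples for current behavior):

```sql
-- Per-file average I/O latency from cumulative counters (ms).
SELECT DB_NAME(vfs.database_id) AS database_name,
       vfs.file_id,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
ORDER BY avg_read_latency_ms DESC;

-- Did someone touch MaxDOP or cost threshold? Current values:
SELECT name, value_in_use
FROM sys.configurations
WHERE name IN (N'max degree of parallelism', N'cost threshold for parallelism');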
Check with hardware/networking/SAN/etc. teams about global problems or recent changes
Check with software owners/developers about whatever area the users show symptoms in
Open up Profiler and take a look, both globally (briefly) and narrowly
    SQL:BatchCompleted and RPC:Completed
    You need to have a good feel (baseline) for what's out of place.
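On newer versions, the Extended Events equivalent of watching SQL:BatchCompleted and RPC:Completed looks roughly like this (session name, file name, and the 5-second duration filter are illustrative choices of mine, and note XE measures duration in microseconds, not Profiler's milliseconds):

```sql
CREATE EVENT SESSION [slow_batches] ON SERVER
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.sql_text, sqlserver.client_app_name)
    WHERE duration > 5000000),   -- 5 seconds, in microseconds
ADD EVENT sqlserver.rpc_completed (
    ACTION (sqlserver.sql_text, sqlserver.client_app_name)
    WHERE duration > 5000000)
ADD TARGET package0.event_file (SET filename = N'slow_batches.xel');

ALTER EVENT SESSION [slow_batches] ON SERVER STATE = START;
```

Drop the duration predicate for the brief global look, but keep it for anything you leave running; unfiltered tracing on a busy box is its own performance problem.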
Check your server's management software for issues (disk failure, RAM failure, overheating, etc.)
Specialized: Check on your Hypervisor
Specialized: Open up Wireshark and watch the network traffic
Specialized: Check on your SAN administration screens
Specialized: Check on your network traffic, IDS/IPS appliances, throughput rate benchmarking, DNS, etc.
When you're finished, wash your towel.
Note that the first few steps relate to requirements gathering/problem identification. Permanent fixes are better in the long run than bandaids every N days/weeks, and permanent fixes require that root causes be accurately determined.