I've seen this problem more often on SQL 2005 than on SQL 2000 instances, but I must agree servers tend to get more cores every day. On some instances used mainly for OLTP, I set the maximum number of processors per query to one, effectively disabling parallel queries. This sometimes improves both query performance and throughput, but it is not an option if you want to perform serious reporting on the server.
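For reference, a sketch of both ways to cap parallelism: server-wide via sp_configure, or per query with a MAXDOP hint (the table and column names are only placeholders):

```sql
-- Server-wide: limit every query to a single scheduler.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;

-- Or per query, leaving the server setting alone
-- (dbo.Orders is a placeholder table):
SELECT CustomerID, SUM(Amount) AS Total
FROM dbo.Orders
GROUP BY CustomerID
OPTION (MAXDOP 1);
```

The per-query hint is the safer first step on a mixed workload, since the reporting queries can keep their parallel plans.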
But there is no general solution. I/O seems to be the bottleneck, but in some cases I could greatly reduce query execution time (and CXPACKET waits) by using OPTION (LOOP JOIN). No two cases are the same, and if a SAN is involved, things get more complicated, because most SAN experts are not used to the demands of SQL Server regarding caching, write ordering, and many other aspects of disk I/O. Getting SQL Server up to speed on a SAN is far from simple and requires a specialist with knowledge and experience in both areas.
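For completeness, this is the hint I mean. It forces nested loop joins for every join in the query, which steers the optimizer away from the parallel hash/merge plans that were racking up CXPACKET waits (table names here are just placeholders):

```sql
-- OPTION (LOOP JOIN) applies to all joins in the statement;
-- dbo.Orders and dbo.Customers are placeholder tables.
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
JOIN dbo.Customers AS c
    ON c.CustomerID = o.CustomerID
OPTION (LOOP JOIN);
```

Test it case by case; forcing loop joins on a genuinely large join can make things much worse.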
As mentioned by others, up-to-date statistics won't ever harm your queries. More memory does not always improve performance; a very large table might still not fit in memory, and a table scan on it will still require massive disk I/O. In that case, profiling can offer some insight into missing indexes that could improve performance, but as always, it depends ...
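On SQL 2005 you can also ask the missing-index DMVs instead of (or alongside) a Profiler trace; a rough sketch, with a placeholder table name, and keep in mind these DMVs reset when the instance restarts:

```sql
-- Refresh statistics on a suspect table first (placeholder name):
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;

-- Then ask the optimizer which indexes it wished it had:
SELECT d.statement AS table_name,
       d.equality_columns,
       d.inequality_columns,
       d.included_columns,
       s.user_seeks,
       s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
    ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
    ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_user_impact DESC;
```

Treat the output as suggestions, not gospel; each proposed index still costs writes and disk space.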