Thanks!! I did dump a trace file into a trace table, updated just the database name (the database IDs were the same on prod and dev), and replayed from the table.
What I've been asked to do is test SQL 2012 compression on some of our busiest tables to see if it improves IO. I'm still thinking it might be easier to find some of our most intensive SELECT statements, drop the data buffers and plan cache, and record the runtime of each SELECT before and after compression, instead of trying to accomplish the same thing via Profiler trace replays.
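The before/after timing idea could be sketched roughly like this. This is an assumption about how I'd structure the test, not something from the replay work above; `dbo.BusyTable` is a placeholder name, and the cache-clearing DBCCs are server-wide, so this only belongs on a dev/QA instance:

```sql
-- Optional: estimate savings first without touching the table.
EXEC sp_estimate_data_compression_savings 'dbo', 'BusyTable', NULL, NULL, 'PAGE';

-- 1. Baseline run: cold caches, IO and time statistics on.
CHECKPOINT;               -- flush dirty pages so DROPCLEANBUFFERS really empties the pool
DBCC DROPCLEANBUFFERS;    -- cold buffer pool
DBCC FREEPROCCACHE;       -- cold plan cache (server-wide -- dev/QA only)

SET STATISTICS IO ON;
SET STATISTICS TIME ON;
-- one of the intensive SELECTs captured from the trace goes here
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;

-- 2. Compress the table (PAGE generally reduces IO more than ROW).
ALTER TABLE dbo.BusyTable REBUILD WITH (DATA_COMPRESSION = PAGE);

-- 3. Repeat step 1 and compare logical reads and elapsed time.
```

Comparing the logical-reads figure from `SET STATISTICS IO` is probably a steadier signal than wall-clock time, since it doesn't depend on what else the box is doing.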
Based on previous experience using the Profiler GUI on a production stock-market-related database, my manager thinks Profiler doesn't "always hose production."
I'm sticking with server-side tracing because we've seen in the past that unless it's dialed down to a very few events with filters, and run only for the duration of one problematic query, Profiler does hose production. So far the server-side trace, using the TSQL_Replay template, has had no noticeable effect on production. I'm dumping 100 MB trace files to a spot on our NetApp filer in production, and it creates a new 100 MB file about every 5 seconds.
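For anyone following along, the server-side version of that trace looks roughly like the sketch below. The parameter values and file path here are placeholders; the real definition comes from scripting the TSQL_Replay trace out of Profiler (File > Export > Script Trace Definition), which emits the full list of `sp_trace_setevent` and `sp_trace_setfilter` calls:

```sql
DECLARE @TraceID int;
DECLARE @maxfilesize bigint = 100;  -- MB; matches the 100 MB files mentioned above

EXEC sp_trace_create
     @TraceID OUTPUT,
     @options     = 2,                          -- TRACE_FILE_ROLLOVER: new file at max size
     @tracefile   = N'\\filer\traces\replay',   -- placeholder path; .trc is appended
     @maxfilesize = @maxfilesize;

-- ... sp_trace_setevent / sp_trace_setfilter calls from the scripted template ...

EXEC sp_trace_setstatus @TraceID, 1;            -- start the trace
```

Stopping and closing it later is `sp_trace_setstatus @TraceID, 0` followed by `sp_trace_setstatus @TraceID, 2`.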
Given that, I don't think we're going to be able to trace an entire business day's traffic and replay it on a much less powerful QA box. At 100 MB every 5 seconds that's roughly 20 MB/s, or about 70 GB per hour, so an 8-hour day would run well over half a terabyte of trace files.