For any data you had to delete because it contained personally identifiable information, there are a couple of tools that can generate replacement data for you to use in testing and development.
I've not used this site personally but have it bookmarked in case I ever need to.
I'm actually pretty good at easily generating tons of "Random Constrained Data" for all sorts of testing. I'm working on updating a presentation on "Crosstabs and Pivots - Reporting on Steroids". Machines nowadays, along with the code, run so bloody fast that I needed to generate 100 million rows that are 529 bytes wide (not including the 2 bytes per row in the slot array).
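For anyone curious what "Random Constrained Data" generation looks like, here's a minimal sketch of the general pattern. The table and column names are hypothetical, not from the presentation itself; the idea is to use a cross join as a cheap row source and constrain each random value to a realistic range:

```sql
-- Hypothetical example of "Random Constrained Data" generation.
-- The cross join acts as a "pseudo-cursor" row source; TOP controls the row count.
SELECT TOP (1000000)
        TransDT   = DATEADD(dd, ABS(CHECKSUM(NEWID())) % 3653, '2015')   -- random date in a ~10-year range
       ,ProductID = CHAR(ABS(CHECKSUM(NEWID())) % 26 + 65)               -- random letter A-Z
       ,Amount    = CAST(RAND(CHECKSUM(NEWID())) * 100 AS DECIMAL(9,2))  -- random 0.00 to 99.99
  INTO dbo.TestTable
  FROM      sys.all_columns ac1
 CROSS JOIN sys.all_columns ac2
;
```

The `NEWID()`/`CHECKSUM()` combination produces a different pseudo-random value for every row, unlike a bare `RAND()`, which would repeat the same value down the whole column.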
And, with the help of "Minimal Logging", the inserts into an empty table that has a Clustered Index on a Datetime and ProductID column complete in only 2 minutes and 41 seconds. The final table is 52.3 GB and the log file comes out at only 600 MB.
Without "Minimal Logging", it takes 3 seconds short of a whopping 11 minutes and the log file explodes to 146.7 GB!
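For reference, the minimally logged insert pattern looks roughly like the following sketch. The names are hypothetical; the key ingredients are a database in the SIMPLE or BULK_LOGGED recovery model, an empty target table, and the TABLOCK hint (SQL Server 2016+ minimally logs inserts into an empty clustered index this way without needing a trace flag):

```sql
-- Hypothetical sketch of a minimally logged insert into an empty
-- clustered-index table. Requires SIMPLE or BULK_LOGGED recovery.
INSERT INTO dbo.BigTable WITH (TABLOCK)   -- TABLOCK is required for minimal logging
        (TransDT, ProductID, Amount)
SELECT  TransDT, ProductID, Amount
  FROM  dbo.SourceData
 ORDER  BY TransDT, ProductID             -- presorted in clustered-index key order
;
```

Presorting the data in clustered-index key order keeps the insert a single ordered stream, which is part of what keeps the log so small.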
As a bit of a promotion for an event, the Ohio North Data Training group (formerly the Ohio North PASS chapter) has fired up an SQL Saturday (formerly SQL Saturday Cleveland) in Akron, Ohio, and the event occurs on May the 20th. I'm doing the presentation that I'm revamping for that event. It also covers a technique called "Pre-Aggregation" (I credit Peter Larsson as the originator of that term) where I create a "Crosstab" report from 20 million of the 100 million rows in about 930 milliseconds using "conventional methods". Then, I demonstrate how the proper use of an "Indexed View" can return the same report in 1 millisecond, negating the need for a "Data Warehouse" and ALWAYS being up to date without even breathing the abbreviation of "ETL"! And, yep... I introduce how to create such data and "Minimal Logging", as well.
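To give a rough idea of the "Indexed View" approach (this is my own minimal sketch with hypothetical names, not the code from the presentation): once the view is materialized with a unique clustered index, SQL Server maintains the aggregates automatically as rows are inserted, which is why the "report" is always up to date with no ETL step:

```sql
-- Hypothetical pre-aggregation indexed view. SCHEMABINDING and
-- COUNT_BIG(*) are required for an aggregated indexed view.
CREATE VIEW dbo.MonthlyProductTotals
WITH SCHEMABINDING AS
SELECT  Yr        = YEAR(TransDT)
       ,Mo        = MONTH(TransDT)
       ,ProductID
       ,TotalAmt  = SUM(Amount)
       ,RowCnt    = COUNT_BIG(*)   -- mandatory when the view uses GROUP BY
  FROM  dbo.BigTable
 GROUP  BY YEAR(TransDT), MONTH(TransDT), ProductID
;
GO
-- This index materializes the view; queries can then read the tiny
-- aggregate instead of scanning the 100-million-row base table.
CREATE UNIQUE CLUSTERED INDEX IXCU_MonthlyProductTotals
    ON dbo.MonthlyProductTotals (Yr, Mo, ProductID)
;
```

A crosstab over the view then touches only a handful of pre-aggregated rows per month/product instead of millions of detail rows, which is where sub-millisecond response times come from.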
Here's the info link for that event...