Builds, Deployments, Test Data, Oh My.....

    OK, so let me start by saying I've been around the block a bit, but I still don't have what I would consider a perfect solution for meeting the needs of data architects AND QA folks with regards to builds and deploying to test environments.

    The process I typically use to deploy a given build to environment X is to FIRST replace all the databases in "X" with baselines that mimic current production.

    AFTER that has been done, THEN run the deployment scripts in the correct order and BAM, the deployment is done (rough sketch of both steps at the end of this post).

    The problem is that QA SCREAMS: when I blow away their DB with my baseline, they lose all their test data.

    We've gone round and round and round and still don't really have a great solution.

    Anyone have a magic bullet?

    Greg J

    Gregory A Jackson MBA, CSM
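
    For anyone curious, the "replace with baselines" step is basically a scripted restore followed by running the change scripts in order. A rough sketch (database name, logical file names, and paths are all placeholders, not our real ones):

        -- Kick everyone out of the test copy and lay the production-like baseline over it
        ALTER DATABASE AppDB_Test SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

        RESTORE DATABASE AppDB_Test
            FROM DISK = N'\\buildshare\baselines\AppDB_Baseline.bak'
            WITH REPLACE,
                 MOVE N'AppDB_Data' TO N'D:\Data\AppDB_Test.mdf',
                 MOVE N'AppDB_Log'  TO N'E:\Logs\AppDB_Test.ldf';

        ALTER DATABASE AppDB_Test SET MULTI_USER;

        -- Then the deployment scripts run in order, e.g. from a command prompt:
        --   sqlcmd -S TESTSQL01 -d AppDB_Test -i 001_SchemaChanges.sql
        --   sqlcmd -S TESTSQL01 -d AppDB_Test -i 002_DataMigrations.sql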

    Red Gate has some tools that can help with that, such as their SQL Data Generator and SQL Data Compare.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

    We used to have "refreshes" of the test environments scheduled by the PM (and the test team) well in advance, so that they were aware of the change that was coming.

    IT policy was also that each release had to have a minimum number of refreshes in the test environment (depending on what was being changed, etc.).

    Your other option... is to create the DB with the basic structure (as per production) and then push in a load of test data (scripted using Red Gate SQL Data Compare :-D). The problem with this is that they aren't testing against PROD data all the time, which can lead to issues as well.

    Exactly.

    I'm leaning towards pushing back on them to take ownership of their test data and of their process.

    I'll schedule baselines ahead of time and they'll have to prepare accordingly.....

    Not too many other options other than me creating test data for them, and I'm certain that won't adequately meet their needs anyway.

    G~

    Gregory A Jackson MBA, CSM

    Depending on what unique keys you use (we used to use identities), there is the option of reserving a range of key values that would never occur in production... like negative numbers.

    Then script the test data and put it back in as part of the "refresh" process (rough sketch at the end of this post).

    It's best that they understand why you're doing the refreshes, and they need to be on board with it. Once we became strict with our release processes in the DEV and TEST environments, our PROD releases became much simpler, with nowhere near the number of issues from devs having to "fix" things on the night, etc.
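
    A minimal sketch of the negative-key idea, assuming an IDENTITY primary key (table and column names are made up):

        -- QA's scripted test rows live in a key range production will never generate
        SET IDENTITY_INSERT dbo.Customer ON;

        INSERT INTO dbo.Customer (CustomerID, CustomerName, Region)
        VALUES (-1, N'QA Test Customer 1', N'North'),
               (-2, N'QA Test Customer 2', N'South');

        SET IDENTITY_INSERT dbo.Customer OFF;

    Re-running that script as the last step of each refresh puts QA's data back without ever colliding with the baseline's keys.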

  • Good idea Graham....

    I currently have a utility they can run prior to the refresh or baselining....

    The script dumps every table out to a txt file; then, after the baseline, they can use the same utility to pump the data back in (see the sketch at the end of this post).

    It's too cumbersome for them to use..... they just want to have their cake, eat it too, AND not gain any weight.

    Thanks for the input.

    G~

    Gregory A Jackson MBA, CSM
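
    A dump-every-table-to-txt utility can be as simple as generating a bcp command per user table. A rough sketch (server name and dump path are placeholders):

        -- Generate a character-format bcp "out" command for every user table;
        -- run the resulting commands from a command prompt before the baseline
        SELECT 'bcp "' + QUOTENAME(DB_NAME()) + '.' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
               + '" out "C:\TestDataDump\' + s.name + '.' + t.name + '.txt" -c -T -S TESTSQL01'
        FROM sys.tables AS t
        JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

        -- After the baseline, swap "out" for "in" (and add -E to keep identity values)
        -- to pump the data back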

    Let us know how you get on and what solution you choose... I never got to implement the negative-numbers option; I was never given the time to do it.

  • I'll let you know what we come up with......

    I'll probably just write an article on it as it seems to be a common problem....

    G~

    Gregory A Jackson MBA, CSM
