• Eric M Russell (2/5/2016)


    roger.plowman (2/5/2016)


    Eric M Russell (2/3/2016)


    As a DBA, if I get a 2am call that a critical application is down, because a select in a stored procedure is missing a join operator, then I'm making that change myself and then notifying the development team.

    "As a DBA, if I get a 2am call that a critical application is down, because a select in a stored procedure is missing a join operator, then I'm making that change myself and then notifying the development team."

    Ok, THAT is your problem right there. How in the world did the missing join operator get through testing?

    Something that breaks the system shouldn't be able to make it into production in the first place. If it did, then your development process is severely broken. Either you're trying to shave pennies by having developers be your QA team (a horrifically stupid idea) or your development team doesn't have a representative dataset to test with (also a horrifically stupid idea).

    I've heard the arguments about "protecting" real data from the prying eyes of developers, and they're completely bogus. If you can't trust your developers to see the real data, you can't trust your developers, period, and should never have hired them in the first place.

    If it's a matter of the development server not being secure enough, then *SECURE IT*. Best practices are best practices.

    Sorry, pet peeve there...

    A select with a missing join operator would be a scenario where TableA should join to TableB on columns A, B, and C, but it was inadvertently coded to join on only columns A and B. So in DEV and QA the procedure compiles, runs without error, and returns a result set with some duplicated rows. Of course it's a failure in the development and QA process, which is a broader issue, and that can be addressed over the following days.
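
    Roughly sketched in T-SQL (the table and column names here are made up purely for illustration), the difference is a single dropped predicate:

    -- Intended join: match on all three columns
    SELECT a.A, a.B, a.C, b.SomeValue
    FROM dbo.TableA AS a
    INNER JOIN dbo.TableB AS b
        ON  b.A = a.A
        AND b.B = a.B
        AND b.C = a.C;  -- the predicate that got dropped

    -- As deployed: compiles and runs clean, but fans out duplicate
    -- (A, B, C) rows whenever TableB has several C values per (A, B)
    SELECT a.A, a.B, a.C, b.SomeValue
    FROM dbo.TableA AS a
    INNER JOIN dbo.TableB AS b
        ON  b.A = a.A
        AND b.B = a.B;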

    But if the bad code causes a failure in production (say, a 2 AM ETL process that loads a staging table with a unique index on A, B, C), then the DBA is the first responder. You've got to roll back to the prior release, or, if the fix is obvious, just get it done, send an email to the development team, and let them address it the following morning.
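
    Continuing the same made-up example, the staging table might look something like this, and the duplicate rows from the bad join are what kill the load:

    -- Hypothetical staging table for the 2 AM ETL load
    CREATE TABLE dbo.StagingTable
    (
        A         int         NOT NULL,
        B         int         NOT NULL,
        C         int         NOT NULL,
        SomeValue varchar(50) NULL
    );

    CREATE UNIQUE INDEX UX_StagingTable_ABC
        ON dbo.StagingTable (A, B, C);

    -- This is the statement that pages the DBA: the duplicate (A, B, C)
    -- rows from the mis-coded join violate the unique index, and the
    -- insert fails with a duplicate key error.
    INSERT INTO dbo.StagingTable (A, B, C, SomeValue)
    SELECT a.A, a.B, a.C, b.SomeValue
    FROM dbo.TableA AS a
    INNER JOIN dbo.TableB AS b
        ON  b.A = a.A
        AND b.B = a.B;  -- still missing the C predicate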

    A more routine example of a botched stored procedure deployment would be when the developer forgets to add the GRANT EXEC ON <OBJ> TO <ROLE> statement, the DEV and QA environments are not set up with the same least-privilege authorization as production, and the DBA doesn't own the DEV and QA environments. Again, it's a failure in the development and release process, but it's an obvious fix, and the DBA can and should remedy it as soon as it surfaces as a bug in production.
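
    That fix is usually a one-liner along these lines (the procedure and role names below are placeholders):

    -- The grant that never made it into the release script
    GRANT EXECUTE ON OBJECT::dbo.usp_LoadStaging TO AppRole;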

    You also have to understand that, in many (well, most) organizations, the DBA isn't the IT sheriff; he's more like the security guard at the bank. He stands at the door to the vault blocking access, but he doesn't make the rules, and he can't force folks to stop breaking the rules.

    True, but even then you don't just leave the bank vault open because it's more convenient that way, do you?

    I guess I can't figure out from this conversation whether the data is really that important or not. It's not important enough to safeguard (it's apparently okay to leave code unencrypted and editable in production, change the code on the fly between cycles, maybe keep a somewhat accurate record of what was done when, etc.), but somehow all life is supposed to stop at the first sign of any issue. That's kind of a binary option for me: which one is it?

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part...unless you're my manager...or a director and above...or a really loud-spoken end-user... All right, what was my emergency again?