Three Rules for Database Development

  • I may just be old school, but I can't STAND not having an integrated development environment amongst all the developers.

    Kraig - I completely agree. In an ideal scenario, you test and develop locally and integrate to a central server. That server can have real data for performance and real world testing.

  • Steve Jones - SSC Editor (9/17/2012)


    Jeff Moden (9/16/2012)


    I've found that such "committee decisions" can be horribly wrong, especially when people who might not know as much about databases outnumber those that do (think mob rule and lemming mentality combined). Everyone working the same way has some very nasty ramifications if they're all doing the same wrong thing. 😉

    Rule 1 for database development should be, "Find someone that actually knows how to design databases correctly and then follow their rules."

    This isn't about technical design decisions. It's more about process. We agree with naming conventions, we agree that there is a QA process. We agree developers will turn over unit tests or edge cases. We agree that new tables get a review by the group, or by a person.

    We decide to work in a way that complements, rather than each person deciding how they like to work.

    The problem I've seen emerging lately is that the database is no longer part of the equation when Entity Framework or a similar framework is used. In those cases it is even harder to design the database optimally. The last issue I found was triple the expected number of calls, most of which should have been cached at application load.

    Currently my db dev team of three shares a pretty large db, and that works just fine as long as they do what I [we] decide and work to the professional standard that I require. Nothing more, nothing less. The devs are also aware of what is expected of them...

  • Phil Factor (9/17/2012)


    The idea of saying that one should never do shared database development is ridiculous, especially nowadays when you can have the best of both worlds and use database source control whilst doing either shared or single-user database development, or both. The reason for doing your development work on your own server is merely to allow you to work on different versions of the database 'at once', which means that you can re-create a version from source control on your own private server and fix a bug or add a feature without affecting your colleagues or damaging other versions. I've always advocated doing both types of database development, depending on the task. The private databases are fine for prototyping, trying things out and doing sandbox development, whilst the shared model gives us extremely rapid integration, since we have schemas that prevent developers from banging heads too badly whilst sharing, source control that allows us to roll back from a development mistake, and virtualization that allows us to work on as many versions as we can tolerate.

    I don't understand why there should be objection to the mixed model of database development. I understand the build and integration problems that come from database application developments that choose to ignore the best practice of having defined interfaces between application and database, but to try to anathemise or proscribe the perfectly reasonable practice of shared database development as a consequence is like insisting that everyone should use crutches merely because you've shot yourself in the foot.

    What he said.
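Phil's workflow of re-creating a version from source control on a private server can be sketched like this. It's a minimal illustration using SQLite and a folder of numbered scripts (the folder layout and script names are assumptions for the example), not the SQL Server tooling itself:

```python
import sqlite3
from pathlib import Path

def build_database(script_dir: str, db_path: str = ":memory:") -> sqlite3.Connection:
    """Re-create a database from versioned SQL scripts applied in order."""
    conn = sqlite3.connect(db_path)
    # Apply scripts in lexical order, e.g. 001_create_tables.sql, 002_seed_data.sql.
    # Each developer can rebuild any version locally without touching the shared server.
    for script in sorted(Path(script_dir).glob("*.sql")):
        conn.executescript(script.read_text())
    return conn
```

In a SQL Server shop the mechanics would typically be a sqlcmd loop or an SSDT publish instead, but the principle is the same: the database state is derivable from whatever is in source control.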

  • djackson 22568 (9/17/2012)


    Testing sometimes finds bugs. Occasionally. Once in a while. Maybe only one or two...

    If testing doesn't find a bug, the chances are the testing hasn't been thorough enough.

  • Not to look down on tests like unit tests, but even well-written tests hardly find more than 40% of the bugs anyway. There's a fine line beyond which tests just become overly complicated. You'll need extensive user testing to get above 80%.

  • Kristian Ask (9/27/2012)


    Not to look down on tests like unit tests, but even well-written tests hardly find more than 40% of the bugs anyway. There's a fine line beyond which tests just become overly complicated. You'll need extensive user testing to get above 80%.

    I don't know that I can agree with that. We've been struggling with these kinds of tests for some time and in most cases when the tests became "overly complicated" to implement, it usually came down to code written to "swallow the whole cow in one shot". It may come down to making the code more modular. Test the pieces first THEN the whole.

    If you break the code down to bite-sized sections, the testing can become very very good indeed. You still need to set up some higher-level end-to-end tests, but at least if you know that the chunks do what you expect, then you should be able to hone in on where the issues are with the multi-tiered testing approach.

    Still - it's up to each org to figure out how much test coverage is adequate, or how costly those bugs might be to catch later.

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?
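The "test the pieces first, THEN the whole" approach described above can be sketched as follows. This is a tiny Python illustration with invented function names, just to show the shape of multi-tiered testing:

```python
def parse_amount(raw: str) -> float:
    """Small piece: turn a formatted string like '1,234.50' into a number."""
    return float(raw.replace(",", ""))

def apply_discount(amount: float, pct: float) -> float:
    """Small piece: apply a percentage discount."""
    return round(amount * (1 - pct / 100), 2)

def invoice_total(raw: str, discount_pct: float) -> float:
    """The whole: composed from the already-tested pieces."""
    return apply_discount(parse_amount(raw), discount_pct)

# Test the pieces first...
assert parse_amount("1,234.50") == 1234.50
assert apply_discount(100.0, 10) == 90.0
# ...then the whole. If this fails, the piece tests tell you which chunk to look at.
assert invoice_total("1,234.50", 10) == 1111.05
```

Because each bite-sized chunk is verified on its own, a failing end-to-end test immediately narrows down to the composition rather than to any individual piece.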

  • Heh... whatever happened to the idea that you can write bug-free database code? Am I the only one that still thinks that should be the rule rather than the exception?

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Matt Miller (#4) (9/27/2012)


    Kristian Ask (9/27/2012)


    Not to look down on tests like unit tests, but even well-written tests hardly find more than 40% of the bugs anyway. There's a fine line beyond which tests just become overly complicated. You'll need extensive user testing to get above 80%.

    I don't know that I can agree with that. We've been struggling with these kinds of tests for some time and in most cases when the tests became "overly complicated" to implement, it usually came down to code written to "swallow the whole cow in one shot". It may come down to making the code more modular. Test the pieces first THEN the whole.

    If you break the code down to bite-sized sections, the testing can become very very good indeed. You still need to set up some higher-level end-to-end tests, but at least if you know that the chunks do what you expect, then you should be able to hone in on where the issues are with the multi-tiered testing approach.

    Still - it's up to each org to figure out how much test coverage is adequate, or how costly those bugs might be to catch later.

    Don't get me wrong, unit testing is a force for good, but its boundaries are narrow regardless of how well the tests are written. It's a machine we're talking about, not an erratic human user....

    I don't think the cost will be very high if there is no test for it, as long as there is a regular release cycle. With proper user and system testing it will be found the first time the feature is tried anyway. Some people say it's very expensive, but I wouldn't know...

  • Jeff Moden (9/27/2012)


    Heh... whatever happened to the idea that you can write bug-free database code? Am I the only one that still thinks that should be the rule rather than the exception?

    No, you're not.... Most of the time it's not really rocket science so what can go wrong?

  • Jeff Moden (9/27/2012)


    Heh... whatever happened to the idea that you can write bug-free database code? Am I the only one that still thinks that should be the rule rather than the exception?

    Well - the dev department here has 40 devs who all insist they write 100% bug free code...:)

    The hard part I think really comes down to "what's bug free". If you have fluid or squishy requirements, and don't document what it SHOULD do, DOES do, and the background that led you to choose for it to do those things, it becomes a turkey shoot when something bad happens.

    ----------------------------------------------------------------------------------
    Your lack of planning does not constitute an emergency on my part... unless you're my manager... or a director and above... or a really loud-spoken end-user... All right - what was my emergency again?

  • Steve Jones - SSC Editor (9/17/2012)


    Jeff Moden (9/16/2012)


    I've found that such "committee decisions" can be horribly wrong, especially when people who might not know as much about databases outnumber those that do (think mob rule and lemming mentality combined). Everyone working the same way has some very nasty ramifications if they're all doing the same wrong thing. 😉

    Rule 1 for database development should be, "Find someone that actually knows how to design databases correctly and then follow their rules."

    This isn't about technical design decisions. It's more about process. We agree with naming conventions, we agree that there is a QA process. We agree developers will turn over unit tests or edge cases. We agree that new tables get a review by the group, or by a person.

    We decide to work in a way that complements, rather than each person deciding how they like to work.

    For me, Jeff's comment is valid for process decisions as well as for technical design decisions. I've seen catastrophic results of group decisions on process, and I don't like them. It is far better to let the development, testing, validation, QA, and release processes be determined by someone who has a clue, because he/she has been in charge of a major development covering everything from determining and finalising the requirements through limited releases to general release, maintenance, and upgrades. That beats counting the votes of those who believe they know it all despite not having that experience: people who learned it in school, who think that testing is not the responsibility of developers, who believe the waterfall process is viable in an environment where requirements are unstable, and who suffer from many of the other delusions of the clueless.

    Tom

  • Phil Factor (9/17/2012)


    The idea of saying that one should never do shared database development is ridiculous, ... ... ... to try to anathemise or prescribe the perfectly reasonable practice of shared database development as a consequence, is like insisting that everyone should use crutches merely because you've shot yourself in the foot.

    Well said - you have my 100% agreement.

    Tom

  • Evil Kraig F (9/21/2012)


    I'm also not a particular fan of TFS, SourceSafe, or the rest in the database world. I'm VERY frickin' biased against this, particularly because the ideas of Continuous Integration and databases do not meld well. Oh, sure, store yourself a version history so you can see what was and what is, no problem. It's also rather handy for lockouts so multiple developers don't step on the same object simultaneously. We used to use Excel spreadsheets for that.

    I don't see the difference in principle between using a spreadsheet, using good source control, configuration management, and version control software, or using an old fashioned T-card rack.

    So I strongly disagree with the idea that software shouldn't be used to help with source control etcetera.

    Personally I have worked with databases where the requirements change rapidly, new features required by customers need schema changes, and my team was expected to release new features rather frequently as minor upgrades rather than full releases. So in that environment continuous integration and continuous release were the norm, and we were pretty successful. I learnt continuous integration/continuous release in a non-database environment, and it was very successful there too; and I've also worked in situations where continuous integration would have been a mistake. It's no good saying continuous integration and databases don't meld well, any more than it is good to say that database development always requires continuous integration; the only sane viewpoint is that continuous integration should be used in environments where it is appropriate and avoided in environments where it isn't appropriate, and claiming that everything involving a database falls into one of those heads and not into the other is just plain stupid.

    Tom

  • L' Eomot Inversé (9/28/2012)


    I don't see the difference in principle between using a spreadsheet, using good source control, configuration management, and version control software, or using an old fashioned T-card rack.

    I'm not against using TFS (or others) for lockout and version histories. The problem is it leads to other assumptions about how things should be written.

    So I strongly disagree with the idea that software shouldn't be used to help with source control etcetera.

    Fair enough, I know I'm biased due to how it's been used in my experience instead of best practices.

    It's no good saying continuous integration and databases don't meld well, any more than it is good to say that database development always requires continuous integration; the only sane viewpoint is that continuous integration should be used in environments where it is appropriate and avoided in environments where it isn't appropriate, and claiming that everything involving a database falls into one of those heads and not into the other is just plain stupid.

    If you're talking procs and functions, I agree that continuous integration can work, but most of those changes require schema modifications. The amount of code that goes into supporting schema-level continuous integration gets to be inane, though. Think of having to hit all the system tables to confirm that the columns for an index exist for the expected version and in the right order. I'm doing that now because some of our projects enforce CI in both the app and database tiers, and it's a nightmare.

    On top of that, data adjustments are usually one-shot deals but are required as you move between versions. How do you know which data to pull back out if it was based on a previous value? How can you roll back to previous data without either table copies or massive insert lists, which might not support the rest of the data that's now also in play in other areas of the hierarchy? You're going to end up going to backup at that point.

    CI works very well for front-end coding because most of its structure is not persistent; it's only important what's there at runtime. I'm sorry you feel my opinion on continuous integration in a database is "stupid", but I'm afraid I stand by it. The only place CI really works from my experience in a database is at the function and proc level, and then only if they don't require underlying schema/data changes, which usually means you're in the warehouse/reporting environment.


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA
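The system-table check Craig describes, confirming that an index's columns exist in the expected order, looks roughly like this. It is sketched against SQLite's PRAGMA index_info purely so it is runnable anywhere; on SQL Server the equivalent would be a join over sys.indexes and sys.index_columns ordered by key_ordinal:

```python
import sqlite3

def index_columns_match(conn: sqlite3.Connection, index_name: str,
                        expected: list) -> bool:
    """Return True if the index exists with exactly these columns, in this order."""
    # PRAGMA index_info returns (seqno, cid, column_name) rows; seqno is the
    # column's position within the index key. (Sketch only: index_name is
    # interpolated directly, so don't pass untrusted input.)
    rows = conn.execute(f"PRAGMA index_info('{index_name}')").fetchall()
    actual = [name for _seqno, _cid, name in sorted(rows)]
    return actual == expected

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer INTEGER, placed TEXT)")
conn.execute("CREATE INDEX ix_orders_cust ON orders (customer, placed)")
assert index_columns_match(conn, "ix_orders_cust", ["customer", "placed"])
assert not index_columns_match(conn, "ix_orders_cust", ["placed", "customer"])
```

Multiply this by every index, column, and constraint a schema-level CI pipeline has to verify per version, and the maintenance burden Craig objects to becomes clear.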

  • Jeff Moden (9/27/2012)


    Heh... whatever happened to the idea that you can write bug-free database code? Am I the only one that still thinks that should be the rule rather than the exception?

    Nope.

    While I don't write as much pure code as I once did, I still rely on good habits developed during that time. I frequently have bugs when I do something new. So I have learned to reuse code as often as possible. I have some processes that receive incoming files and send them downstream to a medical record app. From the start I knew that the finger would be pointed at me when things were missing. So I log everything I touch, and can prove when it wasn't sent. When there is a failure in my process, it is not only logged, but it pages and emails me to let me know. Pretty simple stuff.

    The first process I wrote is not nearly as good as the last one. The last one is simply a copy of the previous one, with some registry entries changed to point to the correct key, and some small changes unique to that process. 90% of it is reusable code, so the number of errors approached zero; in fact, so far I know of no issues at all.

    I strive for perfection in accuracy, I design for performance, and I produce code extremely quickly. All because of reuse. There are other ways to ensure bug free code, and while I do not claim to succeed at that every time I code, I do find bugs are usually due to unreasonable schedules and not my methodology.

    Dave
