Bronze Age Development

  • Comments posted to this topic are about the item Bronze Age Development

  • I think the term "bronze age development" really does sum up writing T-SQL. There are some tools which are starting to emerge and get better, but the tooling is a long way behind that of C# or other languages.

    A big issue for me is debugging. With SQL Server you can step into a stored procedure, but you can't see the contents of a table. Want to check the results of a CTE? Add an additional SELECT and run the CTE twice in your procedure!
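
    A minimal sketch of that workaround (the procedure, table and CTE names here, dbo.GetRecentOrders, dbo.Orders and RecentOrders, are made up for illustration and are not from the thread): the commented-out copy of the CTE is temporarily enabled while debugging just to look at its contents, which means the CTE body effectively runs twice.

    ```sql
    -- Debug-only duplication of a CTE so its contents can be inspected.
    -- All object names are hypothetical.
    CREATE PROCEDURE dbo.GetRecentOrders
        @CutOff date
    AS
    BEGIN
        -- Debug only: uncomment to dump the CTE's contents before the real query runs.
        -- ;WITH RecentOrders AS
        -- (
        --     SELECT OrderID, CustomerID, OrderDate
        --     FROM dbo.Orders
        --     WHERE OrderDate >= @CutOff
        -- )
        -- SELECT * FROM RecentOrders;

        ;WITH RecentOrders AS
        (
            SELECT OrderID, CustomerID, OrderDate
            FROM dbo.Orders
            WHERE OrderDate >= @CutOff
        )
        SELECT CustomerID, COUNT(*) AS OrderCount
        FROM RecentOrders
        GROUP BY CustomerID;
    END;
    ```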

  • "Never test! What's the point, the client will tell us what they really want and then we will look great as a company when we fix it and deliver it."

    Paraphrased from various vainglorious managers/directors of software houses that I have worked for over the years. Trying to implement ITIL or any good practice in those environments is an uphill struggle, to say the least.

  • I have been an advocate of developer unit testing for what seems like forever, as I had my eyes opened to its value at college, both through empirical studies and experience writing trivial applications. When automated unit testing and TDD came along in the Windows development world (or were the early tools a bit before?), I eagerly embraced and encouraged their use. It makes development take a bit longer, but you remain focussed on developing only what you need. The amount of time in integration, component and acceptance testing seems to be reduced. Unfortunately, I have not been in a position to measure the results, but I have not seen the same sort of problems that have continually delayed releases of most projects that didn't do the testing.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • I will suggest that the ratio of testing time/effort over development time/effort should be getting larger over time.

    I'll assert that analysis and requirements definition is a process of understanding what is really needed and stating that in a clear and understandable manner. Similarly, testing is a process of making sure that what you have is fit for purpose. Both of these are likely to take a significant time for any system that is non-trivial. The size of these is controlled by the problem-space of the software. Development is different. The use of Open Source code, reuse of components and modern development tools should mean that the development needed to solve a particular problem should be reducing over time.

    On that basis we could suggest that, over time, the ratio of test time to development time could be used as a performance indicator of the maturity of the development approach. The bigger the better.

  • david.howard (8/21/2014)


    I will suggest that the ratio of testing time/effort over development time/effort should be getting larger over time.

    I'll assert that analysis and requirements definition is a process of understanding what is really needed and stating that in a clear and understandable manner. Similarly, testing is a process of making sure that what you have is fit for purpose. Both of these are likely to take a significant time for any system that is non-trivial. The size of these is controlled by the problem-space of the software. Development is different. The use of Open Source code, reuse of components and modern development tools should mean that the development needed to solve a particular problem should be reducing over time.

    On that basis we could suggest that, over time, the ratio of test time to development time could be used as a performance indicator of the maturity of the development approach. The bigger the better.

    I agree with almost all that you said, except that, from what I have seen, most improvements in development tools and Open Source libraries have been targeting new technical complexities, not ones that have existed long term. There are many examples of JavaScript libraries that exist to simplify development of a modern UX, which just didn't exist before. There are some BI JavaScript libraries too, to further the examples, that provide facilities for which there just wasn't a demand ten years ago. Additionally, I don't see a vast improvement in the tooling that would speed up development, except Continuous Integration servers and unit testing tools. Better support for x64 (which didn't exist before) and multi-threaded debugging are advances, but they mainly cover what no one, or only a minority, were doing more than a few years ago. Also, I am not seeing companies better able to leverage existing systems or components than occurred 20 years ago.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • A simple counter-example would be that, using Open Source software, I could put together a Customer Relationship Management solution with little to no development. The time taken to understand the actual need and to test that the solution works correctly and answers that need completely would still be fairly significant. There are similar examples in many areas.

    I'd agree that much of the development tool advancement has gone into doing things that wouldn't have been done before. There are, however, cases of reuse and tool improvements that improve productivity for delivering systems that could have been produced previously as well.

  • Let's just say that the developer may have said no bugs, but his users didn't. (Unless they "tested" using production data.) The other thing this tells me is that the developer made some forms that look just like the paper he was replacing.

    Good software does more than the typical user can grasp. It does it more efficiently than they would come up with on their own. Most UIs put way too much on the screen. And users typically want more than is there. And a developer can't move them to the next level without mistakes, because the developer can't completely understand the problem.

    Quite simply, software is, has always been, and will always be an iterative process. Now, if you have people who will start using the software bits (in production, where they really understand it), then you can discover and tweak during testing. But the user can't replicate the process without working in production. Nor can a few users actually predict what that one user (everyone has one of those users) will somehow do, in spite of your best efforts to make them do what you want. That user will even want to do that very thing, but somehow that user ALWAYS finds a way to muck it up. And that IS a bug.

    One thing that would help is an awareness of what is "good enough," combined with getting others to understand what is good enough. I was lucky to work on a project for a manager who wanted good enough, and he (and the users) were very happy with the results (knowing about the bugs). We quickly resolved the worst of them and slowly took out most of the rest.

  • I have been a developer for a long time and have always spent much more time testing than writing code. I think double would be pretty close. It got me in trouble with one employer who said that testing was for programmers who didn’t know what they were doing to begin with. The only lesson I learned from that was that testing was for those who DO know what they are doing. That shop spent more time fixing their untested code than moving on to new projects.

    Long before agile was a buzzword I used some of its principles. I like to use the term “iterative development”. Iterations involve the customer and other stakeholders. An iteration lasts about two weeks and involves testing. Each iteration results in cleaner code until the user says we’re finished. I don’t know if the iterative process shortens the development lifecycle. What I do know is that it results in happy customers because they get what they want and they know it works properly. There is very little fixing to do afterwards. Another thing that’s good about the iterative process is that it keeps me on track. I tend to procrastinate and having short term deadlines helps keep me focused.

    Tom

  • david.howard (8/21/2014)


    A simple counter-example would be that, using Open Source software, I could put together a Customer Relationship Management solution with little to no development. The time taken to understand the actual need and to test that the solution works correctly and answers that need completely would still be fairly significant. There are similar examples in many areas.

    That is a very good example. You are perfectly correct. Particularly if you look at some systems more generically.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • OCTom (8/21/2014)


    I have been a developer for a long time and have always spent much more time testing than writing code. I think double would be pretty close. It got me in trouble with one employer who said that testing was for programmers who didn’t know what they were doing to begin with. The only lesson I learned from that was that testing was for those who DO know what they are doing. That shop spent more time fixing their untested code than moving on to new projects.

    Long before agile was a buzzword I used some of its principles. I like to use the term “iterative development”. Iterations involve the customer and other stakeholders. An iteration lasts about two weeks and involves testing. Each iteration results in cleaner code until the user says we’re finished. I don’t know if the iterative process shortens the development lifecycle. What I do know is that it results in happy customers because they get what they want and they know it works properly. There is very little fixing to do afterwards. Another thing that’s good about the iterative process is that it keeps me on track. I tend to procrastinate and having short term deadlines helps keep me focused.

    Tom

    Those who created and signed up to the Agile Manifesto said that, in the main, there was little new in terms of processes and tasks; it was more to do with attitudes, stances and the selection of which processes to use.

    There were new suggestions and offerings too, but mainly based on what had gone on before.

    ...and as for procrastination, I would respond to that but I am sure that I just need to check my email first 😉

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • I believe another key is to get testing as close to the development process as possible. For example, through multiple experiences we learned that when you develop a database app, the first thing to do is to build a database that is larger than expected and have the developers work against that database, not versions on their machines with only a few rows in each table. By doing this, the developers have to live with any performance issues they might be introducing as they write their code, and will naturally address them out of self-interest.
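
    A rough sketch of that idea, assuming a hypothetical dbo.Orders table: seed the development copy with far more rows than production is ever expected to hold, so that slow plans show up on the developer's machine rather than after release.

    ```sql
    -- Bulk-load a development table with ~10 million synthetic rows.
    -- Table and column names are hypothetical.
    ;WITH N AS
    (
        SELECT TOP (10000000)
               ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
        FROM sys.all_objects AS a
        CROSS JOIN sys.all_objects AS b
        CROSS JOIN sys.all_objects AS c
    )
    INSERT INTO dbo.Orders (CustomerID, OrderDate, Amount)
    SELECT n % 50000 + 1,                                        -- spread rows across 50,000 customers
           DATEADD(DAY, -(n % 3650), CAST(GETDATE() AS date)),   -- order dates over roughly ten years
           (n % 1000) / 10.0                                     -- arbitrary amounts
    FROM N;
    ```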

  • Speaking of bronze-age development, this is one of the very few threads I can post to. Attempting to reply to other threads (which look kinda "flat") raises an error. In true end-user fashion I've forgotten what the error message says...

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • wayne.jared (8/21/2014)


    I believe another key is to get testing as close to the development process as possible. For example, through multiple experiences we learned that when you develop a database app, the first thing to do is to build a database that is larger than expected and have the developers work against that database, not versions on their machines with only a few rows in each table. By doing this, the developers have to live with any performance issues they might be introducing as they write their code, and will naturally address them out of self-interest.

    Another way is to write performance integration tests and have them fail on durations that exceed preset, agreed thresholds. That way the developers are still focussed on performance but not hindered by it. (A rough sketch of that idea follows this post.)

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!
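
    One possible shape for the duration check described above, sketched in plain T-SQL; the procedure under test (dbo.GetRecentOrders) and the 2000 ms budget are placeholders, and a real suite would run this from a test harness or CI job rather than as an ad-hoc script.

    ```sql
    -- Duration-based integration test: fail loudly when the agreed budget is exceeded.
    DECLARE @Start datetime2(7) = SYSDATETIME();

    EXEC dbo.GetRecentOrders @CutOff = '2014-01-01';   -- hypothetical procedure under test

    DECLARE @ElapsedMs int = DATEDIFF(MILLISECOND, @Start, SYSDATETIME());

    IF @ElapsedMs > 2000    -- agreed performance budget in milliseconds
    BEGIN
        THROW 50001, 'Performance budget exceeded: dbo.GetRecentOrders ran longer than 2000 ms.', 1;
    END;
    ```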

  • This is all "perfect world" stuff, but we don't live in a perfect world. Project managers and stakeholders simply won't accept or sanction delays of a few weeks, let alone months, for extra testing, even if they do understand the benefits. The software is delivered, the project wraps up, and the bug-riddled software starts the long process of being fixed issue by issue (or not, in many cases). The next project starts up and so the cycle continues.
