Where's the Unit Testing?

  • Kim Crosser (7/25/2016)


    “Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often. What you eat before you step onto the scale determines how much you will weigh, and the software-development techniques you use determine how many errors testing will find.”

- Steve McConnell, Code Complete

    More Unit Testing isn't the answer, although it may catch some of the more egregious errors. I blame development managers that have adopted Agile and used that as an excuse to drop design/code/test reviews.

    Anyone who remembers SEI (Software Engineering Institute) will remember that the incorporation of reviews in Level 3 is where you got real gains in productivity and quality. With Design Reviews and Code Reviews (and Test Reviews), you not only catch obvious bugs, but you often uncover misunderstandings that can lead to more deep-seated problems - ones that don't show up until you are in production.

    Active reviews, where the developer actively presents their designs and code to one or more reviewers, foster good techniques throughout the organization as well as catching problems early in the development cycle. New developers may not realize that there are excellent existing algorithms or other techniques that are simpler and more robust than their design approaches. Plus, just the act of "presenting" the design/code approaches often results in the developer realizing they missed something, or had intended to handle some situation, but somehow forgot/overlooked it.

    Agile doesn't forbid the use of reviews, but too many proponents seem to think that reviews aren't necessary because of the short "sprint" methodology. And thus, I get handed systems that "passed testing", but fail or perform inefficiently in the real world, where if any experienced developer had looked at the coding, they would have recommended simpler and better approaches.

    I'd agree and disagree with Mr. McConnell to a point. The unit tests grow, mostly because you enhance the code or requirements change slightly, which could mean some tests go away, but more arrive. I think you also add tests as you encounter bugs or issues in how your developers write code.

    However, just passing tests isn't enough. Most of the methodologies (CMMI, Agile, SEI, etc.) are based on the principles of Six Sigma, ISO, etc., where you react and improve. You get better over time. That's what code reviews do. They share information and help ensure each of your staff gets better and grows.

  • Chris Harshman (7/25/2016)


    Where I work, the developers do a lot of test driven development for their C# work, but a limiting factor for them doing more on the database side is the tools available. They evaluated some methods, and tried tSQLt for a while, but had more problems with it than problems that it solved.

    I'd be curious what the failings were. There are holes and issues, but it's helpful. Part of the trick is learning how to write tests that work in your situation.

    There's also the MS unit testing framework, which allows for SQL Server tests.
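
    For anyone who hasn't tried tSQLt, a minimal test looks something like the sketch below. The table and procedure names (dbo.Customers, dbo.GetCustomerCount) are hypothetical, but the tSQLt calls are the framework's real API:

        -- Create a test class (a schema that groups related tests)
        EXEC tSQLt.NewTestClass 'testCustomers';
        GO
        CREATE PROCEDURE testCustomers.[test GetCustomerCount returns zero for empty table]
        AS
        BEGIN
            -- Swap the real table for an empty, constraint-free copy so the
            -- test is isolated from whatever data the database currently holds
            EXEC tSQLt.FakeTable @TableName = 'dbo.Customers';

            DECLARE @Actual INT;
            EXEC dbo.GetCustomerCount @Count = @Actual OUTPUT;

            EXEC tSQLt.AssertEquals @Expected = 0, @Actual = @Actual;
        END;
        GO
        -- Run every test in the class with: EXEC tSQLt.Run 'testCustomers';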

  • I am trying to get unit tests working on an application that depends on various frameworks.

    The code as written is convoluted, poorly structured and obscure. I am not sure if the test failures are genuine or whether the framework itself has been misconfigured.

    If you want to have a useful set of tests then you have to design your software to be testable. The requirement for testable software will lead to certain design decisions being taken. My experience is that those design decisions are to write tightly focused, small units of code. Or as those of us in the older age bracket will call it, "doing it properly".

    Tacking tests onto an existing behemoth can be a nightmare. You may find yourself uttering the heresy "is this worth the trouble?" If so, that is an indication that the software is in need of refactoring.

    I was initially sceptical about the value of automated testing but have been utterly converted. I found that the test-code-test-code inch-pebble approach actually led to faster development, because I discovered bugs as they happened rather than taking 10 steps forward and then spending ages backtracking to find the source of the problem.

    I think the difficulty faced by database developers is in identifying what it is you are trying to test. You don't want to find yourself testing whether the SQL commands do what they do but you do want to test that the application to which you have put them works.

  • David.Poole (7/25/2016)


    I think the difficulty faced by database developers is in identifying what it is you are trying to test. You don't want to find yourself testing whether the SQL commands do what they do but you do want to test that the application to which you have put them works.

    Yes. I think lots of what people want is a perfect set of tests they can write, or a step-by-step tutorial. You have to practice and learn, but also understand that you're tackling this like moving a pile of rocks. A bit at a time.

    I also advocate not testing things you do well. Insert into a table? I would never test the case where I insert all fields, or even those that don't allow NULLs. It's too trivial. But if a column allows NULLs and I have a calculated column or a trigger, I might test: what happens?
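
    To make that NULL-plus-calculated-column case concrete, here's a hedged sketch in tSQLt; dbo.OrderLines and its LineTotal column are invented. tSQLt.FakeTable normally strips computed columns, but @ComputedColumns = 1 keeps them (a trigger would need tSQLt.ApplyTrigger after faking):

        EXEC tSQLt.NewTestClass 'testOrderLines';
        GO
        CREATE PROCEDURE testOrderLines.[test LineTotal is NULL when Price is NULL]
        AS
        BEGIN
            -- Fake the table but preserve its computed column
            EXEC tSQLt.FakeTable @TableName = 'dbo.OrderLines', @ComputedColumns = 1;

            INSERT INTO dbo.OrderLines (Qty, Price) VALUES (3, NULL);

            DECLARE @Actual SQL_VARIANT = (SELECT LineTotal FROM dbo.OrderLines);
            -- Document the expected behavior: a NULL price yields a NULL total
            EXEC tSQLt.AssertEquals @Expected = NULL, @Actual = @Actual;
        END;
        GO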

  • Steve Jones - SSC Editor (7/25/2016)


    Eric M Russell (7/25/2016)


    When I'm writing a new SQL query, I'll mock up 10x the amount of data currently in production for the subset of tables that are relevant to the query, so I can confirm that, in addition to producing an accurate result, it will also scale over time. Also, when I refactor an existing production stored procedure, something like a non-functional performance optimization, I will confirm that the same input parameters produce an identical output result set. I don't want my performance "fix" to break anything functionally. These are forms of unit testing.

    Sometimes I'll even deploy the new stored procedure to production under a different name, so I can test both the original and new version side by side in identical environmental conditions, measuring not only duration but also CPU and I/O. I know what you're thinking, but yes, I sometimes do test in production, but only after it's been unit tested in development and QA, and you must know what you're doing to keep it isolated. When attempting to resolve a high profile issue, I don't want to mark a ticket as resolved only to discover post-production that it doesn't work the same in production.

    Winners make their own luck. 😉

    Those are great ideas. Unit testing should help here by ensuring that your tests are captured and repeatable (one way to put numbers on the side-by-side comparison is sketched after this post).

    I like the 10x thing, but that's not possible in some cases. Ever have a 100GB table? Moving to a 1TB table isn't necessarily easy. However, I admire this. My vote is that at some point, at least in a load test, you always go to 1.2-2x production data sets.

    From what I've seen, if you're dealing with 100 GB+ sized tables in production (I have a few that are multi-TB), then it's either a mature OLTP system that has already accumulated several years of data, or it's a data warehouse with several years of historical data loaded retroactively and will then grow at a slower periodic rate going forward. I only use 10x unit testing on newer databases where the production data volume could potentially grow 10x within the next year or two. So, if a new database system goes live with 100,000 existing customers, then I'll stage a testing environment with 1 million customers.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • Steve Jones - SSC Editor (7/25/2016)


    tabinsc (7/25/2016)


    Since most of my SQL coding is set-based (1 query operation per procedure), unit testing seems pointless since there is only one unit. I do use SQLTest for some things, such as data integrity and data comparisons, but most of my code can't be broken up into units.

    Parameters? Then you have edge cases. NULLs, 0s, strange dates, etc. can occur in a single query, so having a few tests to ensure your code works will help when you need to modify, or tune, the query (a sketch follows this post).

    Also one case = one case that can go wrong.

    It is great that tabinsc has made the effort to avoid unnecessary complexity; however, if a stored procedure gets changed, how will everyone know that it is still working as expected?

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!
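
    As a concrete (and entirely hypothetical) illustration of those parameter edge cases: even a single set-based procedure can get one test per edge case. Here, dbo.SalesByRegion and its assumed treat-NULL-as-Unknown behavior are invented, but the pattern of capturing a result set and comparing it is standard tSQLt:

        EXEC tSQLt.NewTestClass 'testSales';
        GO
        CREATE PROCEDURE testSales.[test SalesByRegion treats NULL region as Unknown]
        AS
        BEGIN
            EXEC tSQLt.FakeTable @TableName = 'dbo.Sales';
            INSERT INTO dbo.Sales (Region, Amount) VALUES (NULL, 100.00), ('West', 50.00);

            CREATE TABLE #Expected (Region VARCHAR(20), Total MONEY);
            INSERT INTO #Expected VALUES ('Unknown', 100.00);

            -- Capture the procedure's result set for comparison
            CREATE TABLE #Actual (Region VARCHAR(20), Total MONEY);
            INSERT INTO #Actual EXEC dbo.SalesByRegion @Region = NULL;

            EXEC tSQLt.AssertEqualsTable @Expected = '#Expected', @Actual = '#Actual';
        END;
        GO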

  • Steve Jones - SSC Editor (7/25/2016)


    Kim Crosser (7/25/2016)


    “Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often. What you eat before you step onto the scale determines how much you will weigh, and the software-development techniques you use determine how many errors testing will find.”

- Steve McConnell, Code Complete

    More Unit Testing isn't the answer, although it may catch some of the more egregious errors. I blame development managers that have adopted Agile and used that as an excuse to drop design/code/test reviews.

    Anyone who remembers SEI (Software Engineering Institute) will remember that the incorporation of reviews in Level 3 is where you got real gains in productivity and quality. With Design Reviews and Code Reviews (and Test Reviews), you not only catch obvious bugs, but you often uncover misunderstandings that can lead to more deep-seated problems - ones that don't show up until you are in production.

    Active reviews, where the developer actively presents their designs and code to one or more reviewers, foster good techniques throughout the organization as well as catching problems early in the development cycle. New developers may not realize that there are excellent existing algorithms or other techniques that are simpler and more robust than their design approaches. Plus, just the act of "presenting" the design/code approaches often results in the developer realizing they missed something, or had intended to handle some situation, but somehow forgot/overlooked it.

    Agile doesn't forbid the use of reviews, but too many proponents seem to think that reviews aren't necessary because of the short "sprint" methodology. And thus, I get handed systems that "passed testing", but fail or perform inefficiently in the real world, where if any experienced developer had looked at the coding, they would have recommended simpler and better approaches.

    I'd agree and disagree with Mr. McConnell to a point. The unit tests grow, mostly because you enhance the code or requirements change slightly, which could mean some tests go away, but more arrive. I think you also add tests as you encounter bugs or issues in how your developers write code.

    However, just passing tests isn't enough. Most of the methodologies (CMMI, Agile, SEI, etc.) are based on the principles of Six Sigma, ISO, etc., where you react and improve. You get better over time. That's what code reviews do. They share information and help ensure each of your staff gets better and grows.

    I really think Steve McConnell is being misquoted here. He didn't say “Trying to improve software quality by testing is like trying to lose weight by weighing yourself more often"; he said “Trying to improve software quality by increasing the amount of testing is like trying to lose weight by weighing yourself more often".

    Basically, he didn't say don't test. Not at all. I believe that he was making the point that tests, like many other things, shouldn't be used as a security blanket.

    Tests don't fix issues but they can highlight them. Tests, like all other tools at our disposal, can be misused as an inappropriate crutch.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • Chris Harshman (7/25/2016)


    Where I work, the developers do a lot of test driven development for their C# work, but a limiting factor for them doing more on the database side is the tools available. They evaluated some methods, and tried tSQLt for a while, but had more problems with it than problems that it solved.

    For me, this is the issue.

    The same could be said of UIs too. It has always been simplest to test code without a complex system to interact with (humans, databases, hardware, etc.) and that is why we stub, mock and simulate these complexities.

    Much work has been done to improve UI testing in recent years (particularly in the JavaScript/HTML world), however, for me SQL Server is lacking vendor support for testing. There needs to be a way to inject something between stored procedures and the data, for example.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!
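
    For what it's worth, tSQLt does offer a couple of injection points of the kind Gary describes: FakeTable sits between a procedure and its data, and SpyProcedure stubs out a dependent procedure and records its calls. A sketch of the latter, to be run inside a tSQLt test; dbo.AuditChange and dbo.UpdateCustomer are hypothetical names:

        -- Replace the dependency with a spy that logs its invocations
        EXEC tSQLt.SpyProcedure @ProcedureName = 'dbo.AuditChange';

        EXEC dbo.UpdateCustomer @CustomerId = 42, @Name = 'New Name';

        -- The spy writes each call to a <proc>_SpyProcedureLog table
        DECLARE @CallCount INT = (SELECT COUNT(*) FROM dbo.AuditChange_SpyProcedureLog);
        EXEC tSQLt.AssertEquals @Expected = 1, @Actual = @CallCount;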

  • Steve Jones - SSC Editor (7/25/2016)


    Tom Fischer (7/25/2016)


    Maybe we need to ask the question, 'What justifies unit tests in practice?' And I’d recommend avoiding answers of the 'because you should' ilk because they clearly haven’t changed the status quo.

    Avoiding regressions; formalizing, very lightly, the work you should already do as a developer to ensure your code works before commit; and ensuring the tests are a) automated and easily re-run, and b) there to help others avoid breaking your code when it is modified (a sketch of an automated run follows this post).

    IMHO, this is like OOP development. A bit more work upfront, but as you get going, many changes become easier, and more reliable because you have documentation through tests on how code needs to perform.

    I change the status quo every day. Work from developers I review doesn't get passed without comment unless there is suitable unit test coverage. Sometimes it doesn't pass at all.

    I have to be pragmatic, but even what I do need to let pass does so with advisory comments.

    I recently had one of the guys reporting to me thank me for my attention to detail in reviewing his code. Initially he thought I was being unnecessarily pedantic and there was some friction. I pointed out that I wanted us, as a team, to produce high-quality software that was easy to maintain; that meant putting the effort in for the next person, and sometimes we ourselves are that next person. I basically gave him no choice and said to do it my way. He was sceptical at best. He now really enjoys the ease with which he can work on the codebase.

    I could have taken the easy path but I was prepared to fight the good fight and where there was one under my banner there are now two. He will move on with his better working practices and exchange ideas with other professionals. Hopefully it will make a bigger difference than one person. It is the old saying that each journey starts with a single step...

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!
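
    On the automated, easily re-run point: assuming tSQLt is the framework in use, a build server can kick off the whole suite with two statements. XmlResultFormatter emits the last run's results as XML that most CI tools can parse:

        EXEC tSQLt.RunAll;              -- run every test class in the database
        EXEC tSQLt.XmlResultFormatter;  -- emit the results of the last run as XML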

  • Andrew..Peterson (7/25/2016)


    From my observations, this is a management issue. And as you stated:

    "...perhaps a few bugs aren't a problem. Maybe the impact is low enough that training developers to write tests and making the investment isn't valuable enough."

    As with continuous integration, DevOps, etc., a few want a solid product; the many will wait to fix the bugs after the fact. I guess it makes it easy to prioritize which bugs to fix?

    I am not convinced. Yes, management is not always the positive influence it could be, but an individual can start by writing unit tests that are only run locally, aren't automated, and that the rest of the team ignores. You only get some of the benefits of unit testing immediately, but you get some. Once this starts translating into results, the weight of evidence should sway the rest (or at least some) of the development team, and management will support initiatives that make them look better. Reduced issues and quicker delivery of changes achieve that... and unit testing can aid both.

    Good quality initiatives tend to gather momentum.

    Gaz

    -- Stop your grinnin' and drop your linen...they're everywhere!!!

  • I get the point about 10x data loads but that is stress testing rather than unit testing.

    Stress testing is incredibly important. The reason for separating out unit, integration, NFR testing, etc. is down to the speed at which the tests can be run.

    The point of unit testing is that you are testing a wide range of functions at high speed. If you don't pass this stage then you'll never get to the integration stage. Unit testing gives fast feedback to the developer. If they have to sit around for 5 minutes or longer waiting for a build to compile and run all the tests then pretty soon they'll start reducing the scope of the tests.

  • David.Poole (7/26/2016)


    I get the point about 10x data loads but that is stress testing rather than unit testing.

    .....If they have to sit around for 5 minutes or longer waiting for a build to compile and run all the tests then pretty soon they'll start reducing the scope of the tests.

    Or they could learn to set up tests to run over lunchtime, in the background or even overnight.

    Can't think of the last time I had the luxury of waiting for one task to complete before starting the next wave...

  • I think tooling is a large part of the problem but I also think there are some fundamental differences between database and middle tier development that make unit testing much harder for database developers.

    E.g., I want to test that an order behaves a certain way. If I'm developing my middle-tier Order object, I simply create an Order mock, including appropriate mocks of any entities it depends on, feed that into my test, and I can reliably run it and expect the same result every time. If I'm a database developer, I must create an Order record, along with any dependent entity records, and run my test against that. 1. The overhead of creating and tearing down that test data is much higher (at some point I'll be tempted to pre-create a whole dataset to run all my tests against; it will be brittle as heck and constantly out of sync with the actual tests as new requirements force changes to the dataset). 2. Unless I run all my tests in serial, which is very inefficient, I cannot guarantee that my test will run in isolation from other tests: some other test that happens to count the number of orders at the same time as my test ran will now fail. (A sketch of the closest database-side equivalent to mocking follows this post.)

    Another issue is that database code tends to benefit from being a bit monolithic. E.g., databases respond a lot better to big set-based queries with several WHERE filters than they do to big loops with lots of individual IF statements. But unit testing works better against small bits of logic: I want to be able to test each of those IFs separately, while my DBA, quite rightly, wants a single, well-formed query that he can optimise.

    I'm a firm believer in all things Agile and I tend to proselytise about it but I do recognise that it's a much harder problem to crack at the database level than it is for the middle tier (which is where I spend most of my time these days).
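
    For the record, tSQLt's answer to the mock-object gap is FakeTable plus a transaction per test: constraints and foreign keys are stripped, so only the columns a test cares about need populating, and each test is rolled back afterwards so tests don't pollute one another (though the suite does still run serially). A sketch, with all names hypothetical:

        CREATE PROCEDURE testOrders.[test OrderTotal sums only the requested order]
        AS
        BEGIN
            -- FakeTable drops FKs and constraints: no dependent entities needed
            EXEC tSQLt.FakeTable @TableName = 'dbo.OrderLines';
            INSERT INTO dbo.OrderLines (OrderId, LineAmount)
            VALUES (1, 10.00), (1, 15.00), (2, 99.00);

            DECLARE @Expected MONEY = 25.00;
            DECLARE @Actual   MONEY = dbo.OrderTotal(1);  -- hypothetical scalar function

            EXEC tSQLt.AssertEquals @Expected = @Expected, @Actual = @Actual;
            -- tSQLt wraps the test in a transaction and rolls it back afterwards
        END;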

  • Gary Varga (7/25/2016)


    Chris Harshman (7/25/2016)


    Where I work, the developers do a lot of test driven development for their C# work, but a limiting factor for them doing more on the database side is the tools available. They evaluated some methods, and tried tSQLt for a while, but had more problems with it than problems that it solved.

    For me, this is the issue.

    The same could be said of UIs too. It has always been simplest to test code without a complex system to interact with (humans, databases, hardware, etc.) and that is why we stub, mock and simulate these complexities.

    Much work has been done to improve UI testing in recent years (particularly in the JavaScript/HTML world), however, for me SQL Server is lacking vendor support for testing. There needs to be a way to inject something between stored procedures and the data, for example.

    I think the problem my company had on the database side was that the data is highly context-sensitive, so testing the same data or the same operations at different points in time would produce different expected results. In the UIs, I've seen a number of times where the developers changed the name of a control or moved something on a form and it confused the smoke test the QA department ran. Usually they remember to inform QA so they can update the smoke test setup, but there isn't an easy way to automate detecting those changes.
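
    One hedged workaround for that time-sensitivity, if the team can change the procedures: pass "now" in as a defaulted parameter so tests can pin it to a fixed date while production callers omit it. The names below are invented:

        CREATE PROCEDURE dbo.GetOverdueInvoices
            @AsOf DATETIME2 = NULL  -- production callers leave this out
        AS
        BEGIN
            -- Default to the real clock only when no test date was supplied
            SET @AsOf = COALESCE(@AsOf, SYSUTCDATETIME());

            SELECT InvoiceId
            FROM dbo.Invoices
            WHERE DueDate < @AsOf AND PaidDate IS NULL;
        END;
        -- A test can then call it with a fixed date and get stable results:
        -- EXEC dbo.GetOverdueInvoices @AsOf = '2016-07-25';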

  • ZZartin (7/25/2016)


    I tend to find that if something can't be unit tested easily and effectively it's because of some fundamental design problem. Usually I see this when a developer tries to make one thing that does everything and then ends up wondering why the thing they spent months building and just ran for the first time breaks.

    Actually, the fundamental design problem in that case is that the developer hasn't a clue how to go about software development. If he started his design with testability, error management, security, and how his bit has to interact with the rest of the system in mind, and thought about how to break his bit down into sensible self-contained subunits, he wouldn't have much trouble designing unit tests, dummy interfaces and placeholders (I think those are now called "mocks", but I stick to the terminology I know). He could organise his placeholders so that they can be told to pass erroneous junk to the code under test, letting him see his error detection (and maybe the rest of his error management) working, or discover that it isn't (a sketch of this idea follows this post). And he wouldn't have a horrible, over-complex heap without internal functional division that breaks in a mysterious and incomprehensible manner as soon as it hits the machine.

    But then a lot of modern developers haven't heard of error management; their training has been about how to throw together spaghetti in C++ with no attention to engineering principles or to computer science.

    Tom
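
    Tom's point about feeding the code under test erroneous junk and watching the error detection work maps neatly onto tSQLt's ExpectException; the procedure, table and message below are invented:

        CREATE PROCEDURE testImport.[test ImportBatch rejects negative quantities]
        AS
        BEGIN
            -- Stage deliberately bad input
            EXEC tSQLt.FakeTable @TableName = 'dbo.StagingRows';
            INSERT INTO dbo.StagingRows (Sku, Qty) VALUES ('ABC', -5);

            -- Declare the failure we expect error management to raise;
            -- the test fails if no matching error occurs
            EXEC tSQLt.ExpectException @ExpectedMessagePattern = '%negative quantity%';

            EXEC dbo.ImportBatch;
        END;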
