No Time for Testing

  • Comments posted to this topic are about the item No Time for Testing

  • The title had me intrigued. If there's no time for testing, then there's no time to code. We all encounter emergency situations where people want things done right now. However, what good is it if it's wrong? So you got it done on time... but you didn't get it done right. The first rule is to make it work, and how can you know it works if you don't check it?

    I'm also not a fan of test-driven development. I find that some people become more concerned with the test than with getting the task done, making it correct, or making it fast. That said, checking for NULLs, negative numbers, or whatever else might trip up your particular piece of code has to be handled if you want to call it right.
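    Those edge-case checks don't need a framework. A minimal sketch, using a hypothetical discount routine (the names and rules here are illustrative, not from the editorial), shows how little it takes to cover the NULL and negative cases:

```python
# Quick edge-case checks for a hypothetical discount routine.
# None stands in for SQL NULL; names and rules are illustrative only.

def safe_discount(price, pct):
    """Return price reduced by pct percent, guarding the usual traps."""
    if price is None or pct is None:        # NULL inputs propagate, SQL-style
        return None
    if price < 0 or not (0 <= pct <= 100):  # negative / out-of-range values
        raise ValueError("price must be >= 0 and pct in [0, 100]")
    return round(price * (1 - pct / 100), 2)

# The edge cases that "trip up" code, checked in seconds:
assert safe_discount(None, 10) is None
assert safe_discount(100.0, 0) == 100.0
assert safe_discount(100.0, 25) == 75.0
try:
    safe_discount(-5.0, 10)
    raise AssertionError("negative price should have been rejected")
except ValueError:
    pass
```

    A handful of assertions like these takes a minute to write and re-runs for free every time the routine changes.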

    I do take the time to check and test before release to production because I'd rather publish nothing than publish something that's wrong.

  • From the article:


    What I don't get is why we don't just mock up a quick test that we can run in an automated fashion.

    First, I loved your observations in the article. It takes so little time to "test-as-you-go" that I don't understand why more people don't do it. To wit, I do things pretty much as you've described. I read the requirements, write a section of code, test it, make adjustments and maybe fix some obviously missing requirements (requires a discussion with the owner of the requirements) and once that section is done and documented, then I can move on to the next section without having to look back at code already completed.
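    A "quick test run in an automated fashion" can be as small as a single stdlib unittest file. In this sketch, `parse_flag` is a hypothetical stand-in for whatever section of code was just written; the point is the shape of the workflow, not the function itself:

```python
import unittest

def parse_flag(value):
    """Hypothetical stand-in for a just-written section of code:
    map common truthy/falsy strings to booleans, None for anything else."""
    if value is None:
        return None
    v = value.strip().lower()
    if v in ("y", "yes", "true", "1"):
        return True
    if v in ("n", "no", "false", "0"):
        return False
    return None

class TestParseFlag(unittest.TestCase):
    """Re-run after every adjustment, so finished sections stay finished."""
    def test_truthy(self):
        self.assertTrue(parse_flag("Yes"))
    def test_falsy(self):
        self.assertFalse(parse_flag(" no "))
    def test_unknown_and_null(self):
        self.assertIsNone(parse_flag("maybe"))
        self.assertIsNone(parse_flag(None))
```

    Running `python -m unittest` on the file after each change is the whole "automated fashion": test a section, adjust, move on, and never look back at completed code.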

    I've seen what happens when people "just-write-code" and the results are terrible. In a previous company, we had change controls where the code being promoted wouldn't even pass a basic "blue-checkmark-syntax-check". I just don't get that.

    Shifting gears to the "apparent" problems of writing code, there are three: the "Sweat-shop-syndrome", the "Get-it-off-my-plate-syndrome", and the "It's-just-a-job-syndrome". All three are incredibly similar in result, producing poor-quality code that suffers frequent QA/production failures and rework, but they are quite different as to cause.

    The "Sweat-shop-syndrome" occurs where companies measure an individual's performance, and the resulting bonus or pay, only by the number of tickets/tasks they can close or forward to someone else. Even an excellent programmer with the right attitude is pressured into delivering lines of code instead of something that is known to work, never mind perform well. Of course, rework is a bitch, because a lot of people also take the shortcut of not including any embedded documentation.

    The "Get-it-off-my-plate-syndrome" differs from the "Sweat-shop-syndrome" only in cause; the end result is the same. The cause is either a bad attitude (arrogance or an inflated perception of self-worth) or simple overwork (it can be a truly dedicated person who no longer wants to do 60-80+ hour weeks but still wants to be productive).

    Then there's the "It's-just-a-job-syndrome". These are 9-to-5'ers who don't want to put any effort into what they're doing.

    I say these are "apparent" problems because the real problem in all three situations is ignorance of one form or another. Whether the fault lies with management, the individual, or both, there are two forms of ignorance at play, and it's common for both to be present in each situation.

    The first form of ignorance is an incorrect perception of what is valuable. Managers and programmers who think the number of lines of code or closed tickets is a measure of success need a serious "Come to Jesus" meeting with someone who can explain the realities and costs of rework, and the long-term distrust customers can develop when even little things go wrong at critical times (with a possible loss in sales due to word of mouth amongst similar customer companies).

    The second form of ignorance, IMHO, has a basis in stupidity and/or laziness. There are a whole lot of people out there who wanted to "get into IT" because there's some pretty good money to be had, but who aren't willing to put in the effort to actually be good at what they do. These are the types who warp the "Agile Manifesto" to mean that code doesn't need to be documented, or that there's always time to repair things later if you just "get it done", and the same people who use Knuth's famous quote about premature optimization being the root of all evil as an excuse to write crap code. It also includes those who think they only have to learn on the job, or that the company they work for owes them an education, instead of increasing their skills on their own time.

    All of that brings us back to the subject of the article, "No Time for Testing". There's always time for testing, but you have to know how (and become both good and quick at it) and be motivated to do so. And show some intellectual curiosity while you're at it. Yes, it's true that lives might not depend on your code, but someone's livelihood might... and it might be yours. I know that the "DBAs" I've interviewed in the last several years who don't know how to do a native restore, or even how to get the current date and time in T-SQL, have recently been made keenly aware of that. 😉

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden (9/27/2015)


    I know that the "DBAs" that I've interviewed in the last several years that don't know how to do a native restore nor even how to get the current date and time in T-SQL have recently been made keenly aware of that. 😉

    😀 Yes, absolutely, these types of working methodologies exist and persist.

    :w00t: Meanwhile, your response to the article could be swapped with the actual editorial.

    Very informative, too!

    I can understand what the author means in the article, especially in the BI realm on my end. Sometimes there is just not enough time due to last-minute requests.

    However, the idea that we cannot test data work is a fabrication. You can very well test anything you do, especially with the Microsoft stack and the various other reporting front-ends that are trending now.

    I think the biggest issue I have, from a BI end, with testing is verifying the results with the end user when the data source is not controlled by the business. While we can verify the code is working as intended to the best of what we can test, we cannot verify the results are accurate if we don't know whether the source is right. This is where, as a BI person, you go in circles: if the data is wrong, then your code can be mistaken as wrong, and you are wrong.

    What makes it 10x worse is that while you can verify everything is good with your end result, someone can take that end result (i.e.: data) and use it in a way that makes it wrong (i.e.: putting it in Excel with their own formulas).

    I think the biggest issue I have, from a BI end, with testing is verifying the results with the end user when the data source is not controlled by the business. While we can verify the code is working as intended to the best of what we can test, we cannot verify the results are accurate if we don't know whether the source is right. This is where, as a BI person, you go in circles: if the data is wrong, then your code can be mistaken as wrong, and you are wrong.

    This is my biggest issue. I often don't have access to the original source data and the client isn't skilled on generating any report to help verify the work. And then you have volume issues. What do you do when you have hundreds of columns and millions of records? Most examples in the SQL Server community deal with a handful of columns. I'm having to develop tools from scratch that sample and validate very wide and large source data before it gets into a database. Whelmed and overwhelmed.
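    Sampling and profiling a very wide, very large file doesn't have to wait for a database load. A sketch of the idea, assuming delimited input (the column names and tiny demo data are made up), is a single streaming pass that reservoir-samples rows and tallies per-column null rates:

```python
import csv
import io
import random

def sample_and_profile(lines, k=1000, delimiter=","):
    """Reservoir-sample k rows from a delimited stream and count empty
    values per column, so wide/large source files can be spot-checked
    before load. One pass, constant memory apart from the sample."""
    reader = csv.reader(lines, delimiter=delimiter)
    header = next(reader)
    sample, nulls, total = [], [0] * len(header), 0
    for row in reader:
        total += 1
        for i, cell in enumerate(row):
            if cell == "":
                nulls[i] += 1
        # Classic reservoir sampling: each row kept with probability k/total.
        if len(sample) < k:
            sample.append(row)
        elif random.random() < k / total:
            sample[random.randrange(k)] = row
    null_rate = {h: (nulls[i] / total if total else 0.0)
                 for i, h in enumerate(header)}
    return sample, null_rate

# Tiny demonstration on an in-memory file (illustrative data only):
data = io.StringIO("id,name,amount\n1,a,10\n2,,20\n3,c,\n")
sample, rates = sample_and_profile(data, k=2)
```

    The same pass can carry any other per-column tallies (min/max lengths, non-numeric counts, distinct-value caps), which is usually enough to argue with a source system before the data ever reaches a table.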

    As for tests, I generate them when I can. Being the only true programmer in a department that is short-handed, with non-technical management all the way up, makes for a real challenge. Don't ask how well we would do as an organization on the Joel Test.

  • xsevensinzx (9/28/2015)


    What makes it 10x worse is that while you can verify everything is good with your end result, someone can take that end result (i.e.: data) and use it in a way that makes it wrong (i.e.: putting it in Excel with their own formulas).

    This makes baby jesus cry....

    You bought a 6+ figure BI solution and all you use it for is to dump data into excel and then run reports that are wrong......

  • Test driven development is just another way of saying that requirements need to be known upfront. The real value of test driven development is when acceptance tests are known - how will the client validate the system? If these tests are specified at the beginning, the system can be designed from the start to conform.

    TDD is one of those subjects I'm torn on. On the one hand, it is unequivocally a good idea; of COURSE you want to test everything!

    On the other hand, as my company's sole IT person it's nearly impossible--even when you write the code in small testable chunks.

    On the VB.NET side, the front end has an array of automated tests to make sure all the t's are crossed and i's dotted, simply because I don't have anyone to QA the program! So automated, repeatable tests are pretty much the only testing there can be, short of end-user production complaints. (And oh, how Einstein-like users are in discovering new and interesting ways to trash a system...)

    But it mainly tests to make sure all the properties of objects are properly set. Unit tests and integration tests and all the rest are possible because it's all generic, all OOP. You're testing that blanks have been filled out more than you're testing new code.

    In a way that's desirable, of course. The less code you need to write the less that needs testing.

    ------------

    And then there's the T/SQL side. (facepalm)

    To the good, each function or SP is pretty much single-purpose by necessity. In theory that makes everything easier to test.

    But to the bad, T/SQL is PATHOLOGICALLY MURDEROUS to the concept of generic code. Everything needs to be custom written in every SP or performance just falls off a cliff. Generic code is testable code, folks! The less code the less testing needed!

    Not to mention T/SQL is aggressively crude. By which I mean it offers almost no reusability, unless you want to go the dynamic SQL route, which is unwise from both a performance and a security standpoint.

    Yes, there are workarounds, but T/SQL takes an almost gleeful delight in making the programmer do drudge work, copy and paste becomes *desirable* as a technique. (shudder).

    In fact, writing programs to create SPs is almost a necessity for any database of any scale. And let's not even mention T/SQL's sheer verbosity. I'm no fan of C-syntax brevity-to-the-point-of-obscurity, but my God! T/SQL is way too close to the COBOL end of the scale for my taste.

    In short, TDD in a database is kind of difficult, and the database fights you. It wants a static definition of code structure (constraints, etc) rather than a dynamic one.

    Also, TDD works, but as the editorial points out, it requires knowing what you want in the first place. Something users aren't really fans of... 🙂

    Thus my conflicted feelings on the subject.

    Agree with Jeff Moden, especially about DBAs and others who just don't really know what they are doing. It is really kind of scary.

    The more you are prepared, the less you need it.

  • ZZartin (9/28/2015)


    xsevensinzx (9/28/2015)


    What makes it 10x worse is that while you can verify everything is good with your end result, someone can take that end result (i.e.: data) and use it in a way that makes it wrong (i.e.: putting it in Excel with their own formulas).

    This makes baby jesus cry....

    You bought a 6+ figure BI solution and all you use it for is to dump data into excel and then run reports that are wrong......

    To some extent. The problem is not being able to keep up with the users and when they want to iterate on existing data.

  • I'm not a fan of Test Driven Development, as John is. Usually this is because I'm not always 100% sure of the results I want or have been given. I've often been given a request to do x and as I get involved, I find that the requirements might be incomplete, or even wrong, and they'll change. As a result, I like to write a little code, get some idea of what I want to return or change, and then write a test that verifies what I've done is correct.

    Maybe I don't understand the context here. Certainly there are occasions where the requirements will state something like: "Insert 3 square pegs into each square hole.", and then you discover all of the pegs are round or all of the holes are round.

    It's like the Simpsons episode where an angry and delusional Mr. Burns holds his assistant Smithers at gunpoint and demands that Smithers get inside the airplane... the model airplane sitting on Burns's desk.

    But if the requirements provided by the business are wrong, or the business has some wrong assumptions about the data, then unless the requirements are revised, the code will ultimately fail to pass QA, User Acceptance Testing, or Production.

    "Do not seek to follow in the footsteps of the wise. Instead, seek what they sought." - Matsuo Basho

  • xsevensinzx (9/28/2015)


    ZZartin (9/28/2015)


    xsevensinzx (9/28/2015)


    What makes it 10x worse is that while you can verify everything is good with your end result, someone can take that end result (i.e.: data) and use it in a way that makes it wrong (i.e.: putting it in Excel with their own formulas).

    This makes baby jesus cry....

    You bought a 6+ figure BI solution and all you use it for is to dump data into excel and then run reports that are wrong......

    To some extent. The problem is not being able to keep up with the users and when they want to iterate on existing data.

    Excel is a great tool, but like any powerful tool, users can use it incorrectly. Like back in high school, when the football coach decided to use his lawn mower as a hedge trimmer (true story) - His fingers were a bit damaged.

    Anyway, IT's job is to ensure the environment is functional, and that the tools, based on the budget, let the business manage and use the data. That's not to say database staff don't help, and the ones I know do more than their fair share. But business managers need to take responsibility for the data and how they use it, or they will blame the database people when things go wrong and take credit when things go well.

    The more you are prepared, the less you need it.

  • Andrew..Peterson (9/28/2015)


    xsevensinzx (9/28/2015)


    ZZartin (9/28/2015)


    xsevensinzx (9/28/2015)


    What makes it 10x worse is that while you can verify everything is good with your end result, someone can take that end result (i.e.: data) and use it in a way that makes it wrong (i.e.: putting it in Excel with their own formulas).

    This makes baby jesus cry....

    You bought a 6+ figure BI solution and all you use it for is to dump data into excel and then run reports that are wrong......

    To some extent. The problem is not being able to keep up with the users and when they want to iterate on existing data.

    Excel is a great tool, but like any powerful tool, users can use it incorrectly. Like back in high school, when the football coach decided to use his lawn mower as a hedge trimmer (true story) - His fingers were a bit damaged.

    Anyway, IT's job is to ensure the environment is functional, and that the tools, based on the budget, let the business manage and use the data. That's not to say database staff don't help, and the ones I know do more than their fair share. But business managers need to take responsibility for the data and how they use it, or they will blame the database people when things go wrong and take credit when things go well.

    Unfortunately, that's easier said than done, even if it sounds like a no-brainer.

    When you transition reporting to a centralized solution in a major company, you cannot realistically enforce methodologies like that easily across all users. It's a process, an education, that takes time.

    Even then, I don't see that changing. When people need something now, especially for a client in a service industry with data reporting, they have little patience for following the methodologies and processes that are in place to prevent fires.

  • Never enough time to do it right, but always enough time to have emergency fixes and enhancements.

    Testing helps avoid this, and sharing results derived from testing with real data can lead to a more productive shop. Developers never having enough time to test, and users never taking the time to define requirements so developers and users share the same vision, leads to longer timelines and more cost to improve the business.

    Measuring the number of tasks can be flawed. One large task, which leads to smaller inventory levels that better meet customer demand, with no fixes needed, can easily be overlooked if you look only at task counts. Sometimes other factors are needed in the data to come to valid conclusions, along with an understanding that not all tasks are positive. I am sure many of you have seen shops that seem to be in constant crisis-fix mode, as well as those that seem to be plodding along. Sometimes those that plod along get far more done, where changes go in almost seamlessly, and changes are something new, not another emergency.

    Testing does not have to be a huge, involved process. Even in a shop with no formal QA environment, stub testing can be done, although quality and thoroughness can vary quite a bit depending on the experience of the developer doing it. That is the risk the business has to weigh in the quest for speed.
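    Stub testing in a shop with no QA environment can be as simple as swapping the real dependency for a canned one. This sketch stubs out a hypothetical order lookup (all names here are illustrative) so the surrounding logic can be exercised without a database:

```python
def overdue_notice(order_id, fetch_order):
    """Business logic under test. fetch_order is injected as a
    parameter, so a stub can replace the real database call."""
    order = fetch_order(order_id)
    if order is None:
        return "unknown order"
    return "overdue" if order["days_late"] > 30 else "on time"

def fake_fetch(order_id):
    """Stub standing in for the production lookup: canned rows only."""
    canned = {1: {"days_late": 45}, 2: {"days_late": 3}}
    return canned.get(order_id)

# The whole "QA environment" for this function, runnable anywhere:
assert overdue_notice(1, fake_fetch) == "overdue"
assert overdue_notice(2, fake_fetch) == "on time"
assert overdue_notice(99, fake_fetch) == "unknown order"
```

    Passing the dependency in, rather than reaching out to production from inside the function, is the one design choice that makes this kind of cheap stubbing possible.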

Viewing 15 posts - 1 through 15 (of 31 total)
