Test Driven Development

  • I'm trying to incorporate TDD principles into my daily work, but I'm winging it on my own, so I'm looking for advice. I'm working on a stored procedure that will process data in a table and return the results to SSRS for display. I came up with a list of tests it needs to pass in order to be considered successful:

    1. captures all calls assigned to Tier 1 buckets at any point in time, for states in which [vendor] covers Tier 1
    2. correctly identifies "returned" vs "worked" calls
    3. calculates TAT correctly, even for calls that were copy/pasted in the notes
    4. correctly counts receipts
    5. correctly counts inventory
    6. correctly counts processed

    So, how do I write only the code needed to pass the tests, if I'm not writing the stored procedure in such a way that Test 1 --> Test 2 --> Test 3? Do you (the royal "you", those who follow TDD) first identify the tests, then order them, write just enough of the stored procedure to pass test 1, and not continue the procedure until that test passes? Then move on to test 2?

    Or does the activity of identifying the tests inform your process enough to get Draft 1 of the procedure complete, after which you run the tests and tweak until they all pass?

    I can see how TDD would work really well in OOP, where you can break functionality down a little more atomically, but in my head I said, "I'm going to save time by only writing the code that makes it work. Wait, how do I do that?"
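    To make the question concrete, here is a minimal sketch of one red-green TDD cycle, using Python with an in-memory SQLite database as a stand-in for the real SQL Server table. The table and column names (`calls`, `bucket`, `state`, `covered_states`) are hypothetical, invented only to illustrate test #1 from the list above:

    ```python
    # One red-green cycle: write a failing test for requirement #1,
    # then write only enough query logic to make it pass.
    import sqlite3

    def setup_db():
        # Curated fixture data with known, hand-checked values.
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE calls (id INTEGER, bucket TEXT, state TEXT);
            CREATE TABLE covered_states (state TEXT);
            INSERT INTO calls VALUES (1, 'Tier 1', 'OH'), (2, 'Tier 2', 'OH'),
                                     (3, 'Tier 1', 'CA');
            INSERT INTO covered_states VALUES ('OH');
        """)
        return conn

    def tier1_calls(conn):
        # Just enough logic to satisfy test #1 -- nothing yet about
        # returned/worked status, TAT, receipts, inventory, or processed.
        return conn.execute("""
            SELECT c.id FROM calls c
            JOIN covered_states s ON s.state = c.state
            WHERE c.bucket = 'Tier 1'
        """).fetchall()

    def test_captures_tier1_in_covered_states():
        conn = setup_db()
        # Only call 1 is Tier 1 in a covered state (OH); call 3 is Tier 1
        # but in CA, which the fixture deliberately leaves uncovered.
        assert [row[0] for row in tier1_calls(conn)] == [1]

    test_captures_tier1_in_covered_states()
    ```

    The same shape carries over to T-SQL: a fixture script that loads known rows, a call to the procedure, and a comparison of the resultset against hand-computed expected values.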

    Please follow Best Practices For Posting On Forums to receive quicker and higher quality responses

  • The best way is to have curated data that you use to test the procedure. These data sets would have a known set of values that you can use to validate whether or not the procedure works.
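    A minimal sketch of that idea, again using Python/sqlite3 as a stand-in: because the fixture rows are curated, the correct answer is known in advance and the query's output can be checked against it exactly. The `receipts` table and its columns are hypothetical:

    ```python
    # Curated fixture rows with hand-computed expected values.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE receipts (id INTEGER, received_date TEXT);
        INSERT INTO receipts VALUES (1, '2023-01-02'), (2, '2023-01-02'),
                                    (3, '2023-01-03');
    """)

    # Known in advance, because we wrote the fixture by hand.
    expected = {'2023-01-02': 2, '2023-01-03': 1}

    actual = dict(conn.execute(
        "SELECT received_date, COUNT(*) FROM receipts GROUP BY received_date"))
    assert actual == expected
    ```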

  • Right, but if I'm trying to code minimally in order to pass tests, even with test data, do you, for instance, just start with a shell for your SP that satisfies only the first condition and nothing else? Then, and only when it works, do you add the code for the second test?

    https://en.wikipedia.org/wiki/Test-driven_development "The programmer must not write code that is beyond the functionality that the test checks."

    Or do you write a first pass of the whole procedure and then see if it passes, trusting that the act of having identified the tests pushes you toward a "directionally correct" process that's good enough?
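    The "shell first" progression described above can be sketched like this, with Python/sqlite3 standing in for the procedure and hypothetical table/column names: version 1 does only enough to pass test 1, and the returned/worked classification is added only once test 2 exists and is failing.

    ```python
    # Incremental TDD: v1 passes only test 1; v2 is written after test 2 fails.
    import sqlite3

    def make_conn():
        conn = sqlite3.connect(":memory:")
        conn.executescript("""
            CREATE TABLE calls (id INTEGER, bucket TEXT, status TEXT);
            INSERT INTO calls VALUES (1, 'Tier 1', 'worked'),
                                     (2, 'Tier 1', 'returned');
        """)
        return conn

    def report_v1(conn):
        # Version 1: captures Tier 1 calls, but deliberately leaves the
        # disposition column unimplemented (NULL).
        return conn.execute(
            "SELECT id, NULL AS disposition FROM calls WHERE bucket = 'Tier 1'"
        ).fetchall()

    def report_v2(conn):
        # Version 2: extended only after test 2 (returned vs worked) existed.
        return conn.execute(
            "SELECT id, status AS disposition FROM calls WHERE bucket = 'Tier 1'"
        ).fetchall()

    conn = make_conn()
    assert len(report_v1(conn)) == 2                  # test 1: passes on v1
    assert report_v1(conn)[0][1] is None              # test 2: would fail on v1
    assert dict(report_v2(conn)) == {1: 'worked', 2: 'returned'}  # v2 passes both
    ```

    In practice either cadence works; the strict TDD answer is the incremental one, because each failing test tells you exactly what the next piece of the procedure must do.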


  • No, you shouldn't write code with just the goal of not failing test cases. For your personal tests, you should test enough to verify that the procedure meets the requirements; that also means testing that it fails in an expected way when something doesn't work properly.

    True, but those are on my list of default tests I didn't include here: checking for existence, graceful failure on invalid data parameters, resultset within the bounds of an expected baseline, etc. Those still seem like tests to be passed/failed, although they get evaluated after the business-requirement tests are satisfied.
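    One of those default tests, graceful failure on an invalid parameter, can be sketched as a negative test. The Python function below is a hypothetical stand-in for the procedure; in T-SQL the equivalent would be validating the parameters and raising an error with `THROW` or `RAISERROR`:

    ```python
    # Negative test: the procedure stand-in must reject an inverted date range
    # rather than silently returning an empty or wrong resultset.
    from datetime import date

    def run_report(start: date, end: date):
        # Hypothetical parameter validation; a real T-SQL procedure
        # would use THROW / RAISERROR here.
        if end < start:
            raise ValueError("end date precedes start date")
        return []  # real query elided

    def test_rejects_inverted_range():
        try:
            run_report(date(2023, 2, 1), date(2023, 1, 1))
        except ValueError:
            return True   # failed in the expected way
        return False      # accepted bad input: test fails

    assert test_rejects_inverted_range()
    ```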


