• Jeff Moden (10/8/2013)


    Cadavre (10/7/2013)


    simon.crick (10/7/2013)


    I will probably get shot down again for saying this, but I believe people should stick to core technology and basic techniques unless there is a very good reason for using exotic technology and advanced techniques.

    Why? Because no one person (even those of you who think you are expert experts) can ever know more than a tiny fraction of all the available technology and techniques, and if we all go off on our own paths following our own preferences for exotic technology and advanced techniques, then our systems will end up being a completely unmaintainable mish-mash of technologies and techniques.

    I am not dumb. I have consistently achieved very high grades and won awards for outstanding achievement, etc., but the more I learn, the more I realize it is impossible to know everything, and therefore, out of consideration for our colleagues, we really ought to be sticking to core technology and basic techniques wherever possible.

    Simon

    I disagree completely. You go with the "best" technique (measurable by performance tests) for the job, regardless of complexity. Then make sure that you document how the logic works.

    I could certainly be wrong, but I didn't take Simon's comment as advocating avoidance of the "best" technique. What I took it as is that a lot of people will fall back on what they know instead of learning what they need. For example, at a previous company, someone decided to parse some rather complicated files using Perl, with DTS (it was a while back) to control things. This also required a split path in DTS that would later merge, so some special code had to be written to re-merge the paths. Since Perl couldn't actually do it all, they also wrote some ActiveX and some VBS to complete the task of just getting a file ready for import. It took 45 minutes for each file, and God forbid they made a change, because you needed to be an expert at Perl, DTS, ActiveX, VBS, AND T-SQL to make one. As Simon stated, it was "a completely unmaintainable mish-mash of technologies and techniques".

    Enter SQL Server 2005 and the ability to write SQLCLR, which only served to add yet another technology to the mish-mash. Some (and I don't use the word often) idiot actually wanted me to implement a CLR to do "Modulus" because he didn't take the time to learn that SQL Server has a modulus operator.
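
    Just to illustrate the point (the numbers below are made up purely as an example), the built-in operator is a one-liner and needs no CLR at all:

    [code="sql"]
    -- T-SQL's built-in modulus operator (%); no CLR required.
    -- Values are made up purely for illustration.
    SELECT 17 % 5   AS Remainder17DivBy5,   -- returns 2
           123 % 10 AS LastDigitOf123;      -- returns 3
    [/code]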

    That's the kind of mess I think Simon is talking about.

    That's exactly the kind of mess I had in mind! 🙂

    As so frequently happens, it turned out that some strong yet simple "core" knowledge of T-SQL solved 99% of the problem and a well-formed splitter (you can probably guess which one I used) covered the other 1%. What used to take 45 minutes to just get a file ready for import now took only 2 minutes to stage, import, validate, and process 8 files to completion.
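
    To give a rough idea of the shape of the staging step (the table and column names below are invented for illustration, and I'm assuming a DelimitedSplit8K-style inline table-valued function that returns ItemNumber and Item for each delimited element of a string), it boils down to something like this:

    [code="sql"]
    -- Sketch only: dbo.RawImport, dbo.ImportStaging, and the column names are
    -- invented for illustration. Assumes the raw file has already been bulk
    -- loaded into dbo.RawImport with one row per line of the file, and that
    -- dbo.DelimitedSplit8K(@String, @Delimiter) is an inline table-valued
    -- function returning ItemNumber and Item for each delimited element.
    INSERT INTO dbo.ImportStaging (RowID, ColumnNumber, ColumnValue)
    SELECT  raw.RowID,
            split.ItemNumber,
            split.Item
    FROM    dbo.RawImport AS raw
    CROSS APPLY dbo.DelimitedSplit8K(raw.RawLine, ',') AS split;
    [/code]

    From there, the validation and processing are just plain set-based T-SQL against the staging table.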

    And that is what I think Simon is talking about when it comes to "core" and "simple" technologies. I don't believe he's suggesting avoidance of the "best" solution, which might involve some "complex" T-SQL. I think he's suggesting what I added to my signature line a couple of weeks ago. It's a play on a horribly overused phrase...

    [font="Arial Black"]"Just because you CAN do something in SQL, doesn't mean you SHOULDN'T!" [/font]:-P

    Like you say, core technology normally covers 99%+ of what we need to do, and nearly always does a very good job when used correctly. In the 1% of cases where there is no immediately obvious way to solve our problem using core technology, we should be very cautious about introducing exotic technology, as "core + X" can quickly degenerate into "core + X + Y + Z + P + Q + R", and before we know it, we've got another unmaintainable system. Most of the time, I think we are better off sacrificing a little bit of performance in order to avoid introducing exotic technology, as the overheads of maintaining the exotic technology will normally outweigh the benefits in the long run.

    Simon