• I’ve never been involved in a project that went that far south, but I have seen some real $hit $hows. Usually it happens because senior-level managers have large portions of their compensation based on completion bonuses. It doesn’t have to work well; it just has to work well enough to put the little check in the little box that says “complete”. This is especially true of outside contractors who know they’ll never have to maintain the steaming pile they created on a long-term basis... Or worse... they intentionally leave flaws in so that they can generate lots of hours on the subsequent support contracts.

Another thing I’ve seen a lot of lately is contractors bringing in unqualified people from overseas. I’m not saying that to bag on people from any specific country; I’ve worked with some VERY talented individuals from India and Russia, and aside from language barriers, they are usually a pleasure to work with. It just seems, from my own experience (I know, anecdotal evidence isn’t evidence), that the truly horrific SQL code, the stuff that actually makes your blood pressure go through the roof, is being supplied by devs from one particular part of the world. They also tend to have backgrounds in Oracle rather than SQL Server (not sure if there’s a causal relationship there or not).

For example, a few months back, I was asked to find out why a production database was grinding to a halt for 20 minutes every 2 hours. It turned out there was a view, written by a contractor, that was intended to capture certain data changes and create an extract file that would be consumed by an application they had been contracted to develop. The way the view was written, it had no filtering (not even inner joins) in the view itself, and every output column was an expression.
So, every two hours, they would attempt to capture the last two hours of changes, and in the process they would scan more than two terabytes of data, completely decimate the data cache and plan cache, and then, in an outer query, filter it down to less than a single meg of actual data... all because they had no idea that the filters on the outer query wouldn’t be pushed down to the base tables. Since every output column was wrapped in an expression, the outer predicates were non-sargable, so the optimizer couldn’t apply them at the table level.

When I was asked for a possible solution, I told them that a quick and dirty fix could be done by simply switching to a parameterized stored procedure that could apply the necessary filters at the table level... From there it would be a fairly simple matter to dump the SP results into a holding table (preferably in a separate database kept in SIMPLE recovery) and then join to that in the outer query... Less than a single day of dev work... Easy peasy...

My simple idea was rejected because the change would make it necessary to put that part of the application through QA again, and QA wouldn’t be available until after the “bonus deadline”. I remember thinking, “This must be what it’s like working for the government”...
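Just to make the shape of that quick-and-dirty fix concrete, here’s a rough T-SQL sketch. Every table, column, and proc name here is made up (I obviously can’t share the real schema), but the pattern is the point: filter on the raw column inside the proc, stage the tiny result set, and let the extract query join to that instead of the view.

```sql
-- Hypothetical names throughout; illustrates the pattern, not the real system.
CREATE PROCEDURE dbo.usp_CaptureRecentChanges
    @StartTime datetime2,
    @EndTime   datetime2
AS
BEGIN
    SET NOCOUNT ON;

    -- Predicate goes against the bare column at the base-table level,
    -- so the optimizer can seek an index on ChangeDate instead of
    -- scanning terabytes of history through the view.
    INSERT INTO Staging.dbo.RecentChanges (KeyCol, ChangeDate, SomeValue)
    SELECT t.KeyCol,
           t.ChangeDate,
           UPPER(t.SomeValue)   -- expressions applied AFTER filtering, on a tiny row set
    FROM   dbo.BigHistoryTable AS t
    WHERE  t.ChangeDate >= @StartTime
      AND  t.ChangeDate <  @EndTime;
END;
```

The extract query then joins to Staging.dbo.RecentChanges rather than the original view, and keeping that staging database in SIMPLE recovery keeps the log churn from the two-hourly loads out of the production backup chain.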