Luis Cazares - Monday, December 18, 2017 11:01 AM
Understood. The whole-table update was meant to demonstrate that table width does matter. As for there being "less than 5ms" of difference for a singleton, that's fine, but only if you're not updating thousands of rows per hour. The other problem is that the table might not be used for only that one "thing".
While "It Depends" is certainly true, I've found that (most of the time) super-wide tables cause more trouble than they're worth. Another example of problems I've seen (and had to repair) is where someone thought they were doing the right thing by writing a CLR trigger for "field"-level audits that could be copied to any table. It would automatically figure things out based on the content of the INSERTED and DELETED logical tables. It worked absolutely great... until they applied it to an insane 137-column-wide table. The people who installed it didn't understand that the two logical trigger tables had to be fully materialized before the CLR code could use them, because they were out of scope for the CLR. It was taking more than 4 minutes to update just 4 columns on just 10,000 rows.
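To make the cost concrete, here's a sketch (column and table names are made up for illustration) of what a generic trigger like that effectively runs against the INSERTED/DELETED tables, versus what a purpose-built narrow trigger would touch:

```sql
-- What the generic "works on any table" audit code effectively does.
-- Even though only 4 columns changed, INSERTED and DELETED each carry
-- every column of the base table for every affected row, so on a
-- 137-column table the engine materializes 137 columns x 10,000 rows
-- from each virtual table before the audit logic can even start.
SELECT * FROM inserted;
SELECT * FROM deleted;

-- A purpose-built trigger for the same audit only has to reference the
-- key plus the columns actually being audited (hypothetical names):
SELECT SomeKeyCol, AuditedCol1, AuditedCol2
  FROM inserted;
```

The wider the table, the bigger the gap between those two approaches, which is why the same generic trigger ran fine on narrow tables and fell over on the 137-column one.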
I'm also not trying to convince anyone to totally avoid super-wide tables (we fixed the trigger problem rather than "sister" the table because of other requirements). Rather, I'm trying to make folks understand that they can be quite the problem and that they need to test "in situ" to make sure the problems that can be present aren't. For example, a lot of folks never test for the inevitable flush to disk. Although some would consider it to be some form of "death by pre-optimization", this type of thing is a really important consideration at design time because it's either a huge pain or an impossibility to change further down the road.
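For folks who haven't tested that way before, one common way to include the flush-to-disk cost in your timings is to force a cold cache before each run (this is a sketch of the general technique, and it should obviously only be done on a DEV box, never production):

```sql
-- Flush dirty pages and empty the buffer pool so the test run pays
-- the real disk I/O cost instead of reading warm cache:
CHECKPOINT;
DBCC DROPCLEANBUFFERS;

-- Then measure the statement under test:
SET STATISTICS TIME, IO ON;
-- ... the UPDATE (or whatever) you're evaluating goes here ...
SET STATISTICS TIME, IO OFF;
```

Comparing warm-cache and cold-cache timings on the actual table width you plan to ship is exactly the kind of "in situ" test I'm talking about.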
--Jeff Moden
Change is inevitable... Change for the better is not.