SQLServerCentral is supported by Red Gate Software Ltd.
 

Checking Up on Developers
Posted Friday, May 8, 2009 2:36 AM


Mr or Mrs. 500

Group: General Forum Members
Last Login: Thursday, October 4, 2012 9:20 AM
Points: 583, Visits: 1,060
Michael Valentine Jones (5/7/2009)
The most common and worst mistake is failing to develop a normalized data model that fits the real world data.

Most developers seem to just throw tables together with little thought about how it actually models the real world data. Once a bad table "design" is in place, the application is doomed to an endless series of hacks to work around the design problems.

You can fix bad indexes and poorly written stored procedures, but a bad table design is with you forever.



That's exactly the issue I face daily: hastily built databases acting as data stores for websites, with no real design behind them.

Extended functionality leads to extra tables being added without any review of the impact on the database.

Then LINQ gets thrown in and it all goes downhill, with the devs having no idea how to navigate through the data.

But hey, they employ me to bail them out, so it's all good in a way.


Hiding under a desk from SSIS Implementation Work
Post #712716
Posted Friday, May 8, 2009 2:39 AM


SSC-Forever

Group: General Forum Members
Last Login: Yesterday @ 3:46 PM
Points: 42,462, Visits: 35,525
Phil Factor (5/8/2009)
Denormalisation (you'll live to regret it when the application expands)


Agreed.

All too often I find that when people say they 'denormalised for performance', they actually mean 'didn't feel like normalising completely in the first place'.

Denormalising for performance means that you've normalised fully, have then done tests on large volumes of data and have found that the fully normalised design is not capable of meeting the performance requirements. It does not mean "Oh, I think there are a few too many tables, let me denormalise a bit"

Oh, and I don't mean normalise to Domain-Key normal form. 3rd or BCNF is usually enough, maybe 4th if you've got odd tables that show that particular data anomaly.
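To make the distinction concrete, here's a hypothetical sketch (table and column names invented purely for illustration) of the kind of design that was never normalised in the first place, next to a 3NF equivalent:

```sql
-- "Denormalised" original: customer details repeated on every order,
-- plus a repeating group of phone columns
CREATE TABLE Orders_Bad (
    OrderID        INT PRIMARY KEY,
    CustomerName   VARCHAR(100),
    CustomerPhone1 VARCHAR(20),
    CustomerPhone2 VARCHAR(20),   -- repeating group
    OrderDate      DATETIME
);

-- 3NF equivalent: each fact lives in exactly one place
CREATE TABLE Customer (
    CustomerID   INT PRIMARY KEY,
    CustomerName VARCHAR(100) NOT NULL
);

CREATE TABLE CustomerPhone (
    CustomerID  INT NOT NULL REFERENCES Customer(CustomerID),
    PhoneNumber VARCHAR(20) NOT NULL,
    PRIMARY KEY (CustomerID, PhoneNumber)
);

CREATE TABLE Orders (
    OrderID    INT PRIMARY KEY,
    CustomerID INT NOT NULL REFERENCES Customer(CustomerID),
    OrderDate  DATETIME
);
```

In the first design, a customer changing their phone number means updating every order row; in the second, it's one row in one table.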



Gail Shaw
Microsoft Certified Master: SQL Server 2008, MVP
SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

We walk in the dark places no others will enter
We stand on the bridge and no one may pass

Post #712718
Posted Friday, May 8, 2009 2:57 AM
SSC Rookie

Group: General Forum Members
Last Login: Friday, June 14, 2013 3:32 AM
Points: 28, Visits: 55
Phil Factor (5/8/2009)

Denormalisation (you'll live to regret it when the application expands)


Don't agree fully. Denormalization is very useful sometimes. For example, I have ~100 tables representing various entities. A few entity types are primary, and there are 5 complex searches with ~50 criteria. We decided to create a denormalized form of the data to support searching. Note that we kept the old structure as primary but repopulate the denormalized data periodically (the denormalization takes ~4-5 minutes per day). Search is working very fast...

Any idea how to avoid this without using 20-30 joins (5 of them highly conditional: CASE WHEN, subqueries, etc.)?

I think it was a pretty good decision.
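For what it's worth, a periodic refresh like the one described might look roughly like this (all names are invented; the real schema obviously differs):

```sql
-- Sketch of a scheduled refresh into a flattened search table,
-- run from a SQL Agent job (e.g. nightly)
TRUNCATE TABLE SearchEntity;

INSERT INTO SearchEntity (EntityID, EntityType, Name, City, Status)
SELECT e.EntityID,
       e.EntityType,
       e.Name,
       a.City,
       s.Status
FROM   Entity  e
JOIN   Address a ON a.EntityID = e.EntityID
JOIN   Status  s ON s.EntityID = e.EntityID;
-- ...the remaining joins and CASE logic are paid once per refresh,
-- not on every user search
```

The trade-off is staleness: searches only see data as of the last refresh.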
Post #712726
Posted Friday, May 8, 2009 3:09 AM


Mr or Mrs. 500

Group: General Forum Members
Last Login: Friday, July 25, 2014 3:19 AM
Points: 577, Visits: 2,503
Phil Factor (5/8/2009)
Denormalisation (you'll live to regret it when the application expands)


sasha.khomich
Don't agree fully. Denormalization is very useful sometimes. For example, I have ~100 tables representing various entities. A few entity types are primary, and there are 5 complex searches with ~50 criteria. We decided to create a denormalized form of the data to support searching. Note that we kept the old structure as primary but repopulate the denormalized data periodically (the denormalization takes ~4-5 minutes per day). Search is working very fast...

Ah. Actually, there are several techniques to get around this sort of problem with searches. I'm not sure that you are really denormalising as I understand the term; rather, you are using a type of inversion 'index' table. This is a classic approach (Gary Kildall developed it for searching CDs!). It is usually called the 'Inverted' or 'Inversion' index technique (see http://en.wikipedia.org/wiki/Search_engine_indexing#Inverted_indices ). I've had an article I wrote a while back sitting in a corner, awaiting a bit of a tidy-up. Maybe I'll dust it off....
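A minimal sketch of the inversion-index idea (hypothetical names; real implementations, including SQL Server's own full-text indexes, are far more elaborate):

```sql
-- One narrow row per (term, document) pair, instead of scanning
-- wide rows for every search
CREATE TABLE InvertedIndex (
    Term  VARCHAR(100) NOT NULL,
    DocID INT          NOT NULL,
    PRIMARY KEY (Term, DocID)  -- clustered: all DocIDs for a term sit together
);

-- Finding documents containing both terms becomes two index seeks
-- plus an intersection, rather than a scan
SELECT a.DocID
FROM   InvertedIndex a
JOIN   InvertedIndex b ON b.DocID = a.DocID
WHERE  a.Term = 'denormalisation'
AND    b.Term = 'performance';
```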



Best wishes,

Phil Factor
Simple Talk
Post #712734
Posted Friday, May 8, 2009 3:34 AM
SSC-Addicted

Group: General Forum Members
Last Login: Wednesday, July 23, 2014 7:02 AM
Points: 484, Visits: 803
I know it's not really specific to SQL Server, but in my experience developers often forget that someone has to use their end product. They put too much effort into the technical solution: throwing in some random indexes for fun (because they clearly didn't consider what would be useful to the actual users), plus a bit of denormalisation because they think it will help, in spite of the bodge they had to do to make this module over here work; and don't worry, the users will never use that feature anyway.

Being a bit more serious, the biggest developer issue I hit is applications that let end users change key information, breaking the relationships between tables. Why oh why? I spend my life at the moment fixing the end result, and it really bugs me.
Post #712753
Posted Friday, May 8, 2009 3:54 AM
SSC Journeyman

Group: General Forum Members
Last Login: Thursday, October 10, 2013 8:54 AM
Points: 87, Visits: 253
Developer failings from my experience (SQL/BI developer):

1. Excessive use of cursors. Some developers seem to prefer to work with one row of data at a time, then wonder why the application runs slowly.
2. Indexes - too many or not enough. In my last couple of contracts I've spent some time showing developers how to view execution plans and use the Index Tuning Wizard and Database Engine Tuning Advisor. Some of them didn't know the tools existed!
3. Use of the "sa" login everywhere.
4. Inappropriate data types, e.g. CHAR(255) for surname fields, which are obviously variable length, or not allowing enough characters for a field based on the sample data they created.
5. Documentation. What's that?
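Point 1 is worth illustrating. A sketch (table and column names invented) of the row-by-row pattern next to its set-based replacement:

```sql
-- Row-at-a-time: one UPDATE per row, one round through the engine each time
DECLARE @ID INT;
DECLARE c CURSOR FOR
    SELECT OrderID FROM Orders WHERE Status = 'Pending';
OPEN c;
FETCH NEXT FROM c INTO @ID;
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE Orders SET Status = 'Processed' WHERE OrderID = @ID;
    FETCH NEXT FROM c INTO @ID;
END;
CLOSE c;
DEALLOCATE c;

-- Set-based: the same work in a single statement
UPDATE Orders SET Status = 'Processed' WHERE Status = 'Pending';
```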
Post #712761
Posted Friday, May 8, 2009 4:03 AM
SSC Rookie

Group: General Forum Members
Last Login: Friday, June 14, 2013 3:32 AM
Points: 28, Visits: 55
As for me, the most evil thing happened when developers had to agree their plans with customers and amend them according to the customers' caprices (they wanted triggers, but I knew it would be madness...)
Post #712764
Posted Friday, May 8, 2009 4:18 AM
SSC Veteran

Group: General Forum Members
Last Login: Thursday, June 26, 2014 10:04 AM
Points: 207, Visits: 955
A few examples I've seen personally, (some of which have already been mentioned);

1) Thinking that cursors are a perfectly reasonable thing. (Overheard someone say "I didn't realise cursors were bad".)
2) Not allowing auto-generated primary keys - thus you have to generate each key value one at a time, and can't use a set-based "INSERT INTO ... SELECT".
3) Having about 20 columns of the form "{name}1", ..., "{name}20", so I had to write a lot of code to convert that into the proper set-based layout.
4) Inappropriate data types. Far too many use VARCHAR for holding single-character keys.
5) Too much use of dynamic SQL, even for common functionality, so there's far too much copy-and-paste.
6) No documentation of DB objects.
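On point 2, letting the database generate the keys is what makes set-based inserts possible again. A sketch with hypothetical tables:

```sql
-- With an IDENTITY column there is no per-row key-generation step
CREATE TABLE Customer (
    CustomerID INT IDENTITY(1,1) PRIMARY KEY,
    Name       VARCHAR(100) NOT NULL
);

-- ...so a whole staging table can be loaded in one statement
INSERT INTO Customer (Name)
SELECT Name
FROM   StagingCustomer;
```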


Paul

Post #712777
Posted Friday, May 8, 2009 5:16 AM
SSC-Enthusiastic

Group: General Forum Members
Last Login: Friday, December 6, 2013 8:07 AM
Points: 109, Visits: 489
Perhaps my experience is unusual, but I've recently found that clustered indexes do not perform nearly as well as non-clustered indexes. Perhaps this is because the tables I was working with were very wide, but covering non-clustered indexes significantly outperformed the clustered index (this is on SQL Server 2005). I now keep an eye out for cases where there is a clustered PK, and consider creating a non-clustered version that includes any non-key columns the queries typically need.
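For anyone wanting to try the same comparison, the covering-index pattern described above uses the INCLUDE clause (available from SQL Server 2005; table and column names here are hypothetical):

```sql
-- On a wide table, the clustered index carries every column, so reads
-- are expensive. A narrow covering index can satisfy a query by itself:
CREATE NONCLUSTERED INDEX IX_Orders_Customer_Covering
ON Orders (CustomerID, OrderDate)
INCLUDE (OrderTotal, Status);  -- non-key columns stored at the leaf level

-- This query can now be answered entirely from the narrow index,
-- with no lookup back to the wide clustered index:
SELECT OrderDate, OrderTotal, Status
FROM   Orders
WHERE  CustomerID = 42;
```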
Post #712792
Posted Friday, May 8, 2009 5:17 AM
Forum Newbie

Group: General Forum Members
Last Login: Monday, April 29, 2013 5:23 PM
Points: 3, Visits: 67
Oh dear.

I'm a developer rather more than a DBA, but I'm far from ignorant about databases.

Normalisation - yep, I know how to do that (and about denormalisation, though I might suggest that in testing it's becoming less and less relevant in real-world environments).
SQL injection - I've been carefully protecting all my code against it for years.
Indexes - well, I know the difference between clustered and non-clustered indexes and their different applications, and that a PK is backed by a clustered index by default.
Rows that fit into pages? Certainly know about that one.
(Rare) appropriate use of cursors? Yes, know that.
Appropriate use of UDFs to avoid RBAR queries? Yes.
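The injection protection mentioned above comes down to parameterisation: user input is passed as typed data, never spliced into the SQL text. A minimal sketch using sp_executesql (names invented):

```sql
DECLARE @Input VARCHAR(100);
SET @Input = 'O''Brien';  -- hostile input here cannot change the statement

-- Vulnerable pattern, for contrast (never do this):
-- EXEC ('SELECT * FROM Users WHERE Name = ''' + @Input + '''');

-- Safe: the statement text is fixed; @Input travels as a parameter
EXEC sp_executesql
     N'SELECT * FROM Users WHERE Name = @Name',
     N'@Name VARCHAR(100)',
     @Name = @Input;
```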

Am I perfect? No, but then I'm still learning. I've only been doing this full time for about 10 years so far. I've no doubt I'm still making mistakes that I'll cringe at in a few years but that's life in any discipline.

I've worked with people (or inherited their code) who clearly couldn't design a database schema or write code for toffee. Their non-DB code, in whatever language we were working in on that project, tended to suck too. Sure, there are lazy, incompetent buffoons who shouldn't be let near a coding environment of any description - they exist in any paradigm. But those who are good at one discipline, when they've got to work at least slightly cross-discipline, are usually good at the other after a little acclimatisation too.
Post #712793