SQLServerCentral Article

Time Bomb Design - A Longer Fuse


When I wrote "Time bomb Coding" my aim was to give concrete examples based on databases I have experienced over a number of years.

I've found that a demonstrable example is far more convincing than any amount of theory, no matter how authoritative the source. I doubt whether there are any developers of any stripe who have not faced the situation where a performance problem in the production environment failed to manifest itself in the test environment.

It is harder than you might think to produce a system that behaves exactly as it does in the production environment.

  • Is the test environment configured in the same way as the production environment?
  • Are the volumes of data similar to those that will exist in the production environment?
  • Is the distribution of data representative of the distribution in the production environment?
  • Can you produce a realistic simulation of the production user load in the test environment?
  • Do you run your standard backup, purge, reindex and other production jobs while you stress test your application?

The cost, both in resource and time, of solving all of the above can be prohibitive.

Time bomb coding has both a cause and an effect. The causes can be ignorance, apathy, badly thought out business processes, unrealistic deadlines and any number of other influencing factors.

This article is mainly focused on coding activity but also aims to cover some aspects of business processes that cause time bomb coding.

Data types and sizes

One of the hardest things to convince an inexperienced developer of is that database data types and sizes are still really important. I started programming in an era when RAM was measured in KB, whereas even the cheapest PC today measures it in GB.

My first job involved sticking 20MB Western Digital file cards into Amstrad PC1640s. The HP3000 mini-computer I worked on had a 300MB hard disk the size of an industrial washing machine. The idea that a PC would have disk space measured in GB, much less TB was quite literally science fiction.

So how can we prove that, in the TB world, a couple of bytes still makes all the difference?

The following script demonstrates the point quite nicely. Try it on any installation of SQL Server from SQL 7.0 through to SQL 2008.

CREATE TABLE dbo.LessThanHalfPage(
    SampleField CHAR(4038) NOT NULL
)
GO
CREATE TABLE dbo.MoreThanHalfPage(
    SampleField CHAR(4040) NOT NULL
)
GO
INSERT INTO dbo.LessThanHalfPage
SELECT TOP 500 name FROM syscolumns
GO
INSERT INTO dbo.MoreThanHalfPage
SELECT TOP 500 name FROM syscolumns
GO
EXEC sp_spaceused 'dbo.LessThanHalfPage', 'true'
EXEC sp_spaceused 'dbo.MoreThanHalfPage', 'true'

The output should show figures similar to the ones below.

name              rows  reserved  data     index_size  unused
LessThanHalfPage  500   2056 KB   2000 KB  8 KB        48 KB
MoreThanHalfPage  500   4040 KB   4000 KB  8 KB        32 KB

A 2 byte increase in the size of a single field in a 500 record table sounds like it should make only 1,000 bytes of difference, and yet we can see that it actually makes 1.9MB of difference!

This translates into much higher read activity, as running a SELECT against each table with SET STATISTICS IO ON reveals.

(500 row(s) affected)
Table 'MoreThanHalfPage'. Scan count 1, logical reads 500, physical reads 0, read-ahead reads 498, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
(500 row(s) affected)
Table 'LessThanHalfPage'. Scan count 1, logical reads 250, physical reads 0, read-ahead reads 242, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
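The arithmetic behind those page counts can be sketched outside SQL Server. The figures below are assumptions based on the commonly documented page layout: roughly 8,096 bytes of usable row space per 8 KB page, about 7 bytes of row overhead for a single fixed-length column, and a 2-byte slot-array entry per row. On those assumptions a CHAR(4038) row fits twice per page while a CHAR(4040) row fits only once:

```python
# Approximate fixed-row costs on a SQL Server data page (assumed figures):
PAGE_USABLE = 8096   # bytes available for rows on an 8 KB page
ROW_OVERHEAD = 7     # row header + null bitmap for a single fixed-length column
SLOT_ENTRY = 2       # slot-array entry per row

def pages_needed(char_size: int, rows: int) -> int:
    """Pages required to hold `rows` rows of a single CHAR(char_size) column."""
    bytes_per_row = char_size + ROW_OVERHEAD + SLOT_ENTRY
    rows_per_page = PAGE_USABLE // bytes_per_row
    return -(-rows // rows_per_page)  # ceiling division

print(pages_needed(4038, 500))  # 2 rows per page -> 250 pages
print(pages_needed(4040, 500))  # 1 row per page  -> 500 pages
```

At 8 KB per page, 250 and 500 pages are exactly the 2,000 KB and 4,000 KB data figures that sp_spaceused reported, and the 250 vs 500 logical reads in the statistics output.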


Changing data types

A heated debate I often have concerns when the right time is to specify the correct data type and size. As a DBA I always say: from day one.

You do get changing business requirements, but there are many things where a bit of common sense applied early saves a lot of hassle later on. For example:

  • For most applications an interest rate is going to be a maximum of 3 digits to the left of the decimal point and not too many to the right.
  • No-one is going to live beyond the bounds of a TINYINT field.
  • The most children born to a single person was 69.
  • ...etc

Consider the example below.

CREATE TABLE dbo.FinancialStatsLarge(
FinancialStatsID int NOT NULL IDENTITY(1,1)
CONSTRAINT PK_FinancialStatsLarge PRIMARY KEY CLUSTERED,
AnnualInterestRate DECIMAL(18,2) NOT NULL
CONSTRAINT DEF_FinancialStatsLarge_AnnualInterestRate DEFAULT(0.00),
MonthlyInterestRate DECIMAL(18,2) NOT NULL
CONSTRAINT DEF_FinancialStatsLarge_MonthlyInterestRate DEFAULT(0.00),
AnnualFee DECIMAL(18,2) NOT NULL
CONSTRAINT DEF_FinancialStatsLarge_AnnualFee DEFAULT(0.00),
MonthlyFee DECIMAL(18,2) NOT NULL
CONSTRAINT DEF_FinancialStatsLarge_MonthlyFee DEFAULT(0.00),
AnnualCost DECIMAL(18,2) NOT NULL
CONSTRAINT DEF_FinancialStatsLarge_AnnualCost DEFAULT(0.00),
MonthlyCost DECIMAL(18,2) NOT NULL
CONSTRAINT DEF_FinancialStatsLarge_MonthlyCost DEFAULT(0.00)
)
GO
CREATE TABLE dbo.FinancialStatsSmall(
FinancialStatsID int NOT NULL IDENTITY(1,1)
CONSTRAINT PK_FinancialStatsSmall PRIMARY KEY CLUSTERED,
AnnualInterestRate DECIMAL(5,2) NOT NULL
CONSTRAINT DEF_FinancialStatsSmall_AnnualInterestRate DEFAULT(0.00),
MonthlyInterestRate DECIMAL(5,2) NOT NULL
CONSTRAINT DEF_FinancialStatsSmall_MonthlyInterestRate DEFAULT(0.00),
AnnualFee DECIMAL(9,2) NOT NULL
CONSTRAINT DEF_FinancialStatsSmall_AnnualFee DEFAULT(0.00),
MonthlyFee DECIMAL(9,2) NOT NULL
CONSTRAINT DEF_FinancialStatsSmall_MonthlyFee DEFAULT(0.00),
AnnualCost DECIMAL(9,2) NOT NULL
CONSTRAINT DEF_FinancialStatsSmall_AnnualCost DEFAULT(0.00),
MonthlyCost DECIMAL(9,2) NOT NULL
CONSTRAINT DEF_FinancialStatsSmall_MonthlyCost DEFAULT(0.00)
)
GO
INSERT INTO dbo.FinancialStatsSmall(MonthlyCost)
SELECT TOP 500 xtype FROM syscolumns
INSERT INTO dbo.FinancialStatsLarge(MonthlyCost)
SELECT TOP 500 xtype FROM syscolumns
exec sp_spaceused 'dbo.FinancialStatsSmall'
exec sp_spaceused 'dbo.FinancialStatsLarge'
SELECT SUM(length) FROM syscolumns WHERE id=OBJECT_ID('dbo.FinancialStatsSmall')
SELECT SUM(length) FROM syscolumns WHERE id=OBJECT_ID('dbo.FinancialStatsLarge')
name                 rows  reserved  data   index_size  unused
FinancialStatsSmall  500   48 KB     32 KB  16 KB       0 KB
FinancialStatsLarge  500   64 KB     48 KB  16 KB       0 KB

FinancialStatsSmall = 34

FinancialStatsLarge = 58

Again, running a select against each table with SET STATISTICS IO ON reveals the following.

Table 'FinancialStatsSmall'. Scan count 1, logical reads 6, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'FinancialStatsLarge'. Scan count 1, logical reads 8, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
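Those SUM(length) figures follow directly from SQL Server's documented DECIMAL storage sizes (5 bytes for precisions 1 to 9, 9 bytes for 10 to 19), plus 4 bytes for the INT identity column. A quick sketch of the per-row byte counts:

```python
def decimal_bytes(precision: int) -> int:
    """Documented storage for SQL Server DECIMAL, by precision band."""
    if precision <= 9:
        return 5
    if precision <= 19:
        return 9
    if precision <= 28:
        return 13
    return 17

INT_BYTES = 4  # the FinancialStatsID identity column

# Small: two DECIMAL(5,2) rates plus four DECIMAL(9,2) money columns
small_row = INT_BYTES + 2 * decimal_bytes(5) + 4 * decimal_bytes(9)
# Large: six DECIMAL(18,2) columns
large_row = INT_BYTES + 6 * decimal_bytes(18)

print(small_row, large_row)  # 34 58
```

The DECIMAL(18,2) columns buy nothing for interest rates and fees, yet cost almost twice the row size.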

OK, so we find out that the data type sizes are inappropriate to the values being stored in the fields. Let us see what happens when we execute ALTER TABLE ... ALTER COLUMN statements to shrink the columns in dbo.FinancialStatsLarge down to the same sizes as those in dbo.FinancialStatsSmall.

-- Shrink each column of dbo.FinancialStatsLarge to match dbo.FinancialStatsSmall,
-- e.g. (the bound DEFAULT constraint must be dropped first, then recreated):
-- ALTER TABLE dbo.FinancialStatsLarge ALTER COLUMN AnnualInterestRate DECIMAL(5,2) NOT NULL
exec sp_spaceused 'dbo.FinancialStatsSmall'
exec sp_spaceused 'dbo.FinancialStatsLarge'
SELECT SUM(length) FROM syscolumns WHERE id=OBJECT_ID('dbo.FinancialStatsSmall')
SELECT SUM(length) FROM syscolumns WHERE id=OBJECT_ID('dbo.FinancialStatsLarge')
name                 rows  reserved  data   index_size  unused
FinancialStatsSmall  500   48 KB     32 KB  16 KB       0 KB
FinancialStatsLarge  500   64 KB     48 KB  16 KB       0 KB

FinancialStatsSmall = 34

FinancialStatsLarge = 58

Although the data types have been shrunk the change at the table level is only at the metadata level and not at the physical level.


Perhaps if we rebuild the clustered index we should see some improvements?

ALTER INDEX PK_FinancialStatsSmall ON dbo.FinancialStatsSmall REBUILD 
ALTER INDEX PK_FinancialStatsLarge ON dbo.FinancialStatsLarge REBUILD 
SQL Version           Effect
SQL 2005 and earlier  No change. Fixed-length data type sizes are immutable; changes are only at the metadata level.
SQL 2008              Space is reclaimed.


So if we have upgraded to SQL Server 2008 we don't have to worry about data type sizes?

Any experienced DBA will be very wary of a design task where "we will just change that in the next development iteration".

  • Other tasks may have a higher priority in the next development iteration, meaning that the desired change gets descoped.
  • The amount and importance of data flooding into the table means that restructuring it requires a level of downtime that is unacceptable to the business. Perpetual descoping of tasks seen as DBA "nice to have but unimportant" compounds this scenario.


Domain and referential integrity

By domain integrity I mean the size, type, nullability and permissible values that are allowed in a field.

Whenever there is poor domain or referential integrity a data quality problem will be the result. It is just a matter of time.

The only thing that continues to surprise me is just how quickly a data quality problem will present itself.

We have already seen that data type size affects the storage requirements and read performance, so we should always choose the data type and size most appropriate for our needs.

We can take this further by using CHECK constraints. Remember, the primary role of the database should be to protect the quality and integrity of the data it stores.

Some people feel that CHECK constraints are straying into the grey area where they are trying to enforce business logic in the database layer. This is always something to be wary of.

The mitigating factors for having CHECK constraints are as follows:

  • Data quality.
  • The query optimiser does make use of foreign key and check constraints when deciding the optimum execution plan
  • Some value bounds are self evident.
    • The age of voters has a legal lower limit.
    • A person cannot have a negative weight
    • An email address must have an @ sign in it.
  • A CHECK constraint can be dropped and recreated easily enough and it can be added in such a way so as not to force validation on existing data.
  • For a small, tightly defined set of values a CHECK constraint may be more appropriate and efficient than a foreign key relationship. For example, ISO5218 specifies 4 values for gender:
    • 0 = Unknown
    • 1 = Male
    • 2 = Female
    • 9 = Not applicable


An AdventureWorks example

If I were to recreate the Sales.Currency table in AdventureWorks I would create it thus:

CREATE TABLE Sales.Currency2(
CurrencyCode CHAR(3) NOT NULL
CONSTRAINT PK_Currency2_CurrencyCode PRIMARY KEY CLUSTERED,
CurrencyName VARCHAR(50) NOT NULL
CONSTRAINT CK_Currency2_CurrencyName CHECK(LEN(RTRIM(CurrencyName))>0),
CONSTRAINT unq_Currency2_CurrencyName UNIQUE (CurrencyName),
CONSTRAINT CK_Currency2_CurrencyCode CHECK(CurrencyCode LIKE '[A-Z][A-Z][A-Z]')
)
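For illustration only, the same rules can be mirrored in application-side validation; the database constraints remain the authoritative check. The regular expression below is my stand-in for the LIKE '[A-Z][A-Z][A-Z]' pattern (note that LIKE's case sensitivity depends on the column's collation, so this regex is stricter than a case-insensitive collation would be):

```python
import re

# Mirrors CHECK(CurrencyCode LIKE '[A-Z][A-Z][A-Z]') under a case-sensitive collation
CODE_PATTERN = re.compile(r"^[A-Z]{3}$")

def would_pass_constraints(code: str, name: str) -> bool:
    """True only if the row would satisfy both CHECK constraints on Sales.Currency2."""
    code_ok = CODE_PATTERN.match(code) is not None
    name_ok = len(name.rstrip()) > 0   # mirrors CHECK(LEN(RTRIM(CurrencyName)) > 0)
    return code_ok and name_ok

print(would_pass_constraints("GBP", "Pound Sterling"))  # True
print(would_pass_constraints("gb1", "   "))             # False
```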

I will be touching on naming conventions later in this article, but for now note that all the constraints created have been explicitly named. I have not allowed the system to generate the names for me.

PK_Currency2_CurrencyCode   Enforces the CurrencyCode field as the primary key.
CK_Currency2_CurrencyName   In addition to disallowing NULL entries, a currency name cannot be a zero-length string.
unq_Currency2_CurrencyName  The currency name must be unique.
CK_Currency2_CurrencyCode   The currency code must conform to the ISO 4217 pattern. This reinforces the CHAR(3) data type and size.


Performance notes on domain and referential integrity

Back in 2002 I wrote "The pitfalls of foreign keys", where I noted that referential integrity does have a slight detrimental effect on performance. On high throughput tables that effect can be unacceptably high, particularly when data is deleted.

If you have data that is populated mechanically from selections that can only be made from reference tables, or from a user interface with a tightly defined user journey, then you may take the decision that the probability of poor referential integrity is so low that foreign keys can be dispensed with.

I strongly suggest that foreign keys are only dispensed with in the above scenario, on tables where their detrimental effect is high enough to warrant it. Do not adopt a blanket abandonment of referential integrity.

Another important consideration is where data is maintained versus where it is presented. For example, data for a web site may be maintained in a separate database from the database that serves the web site, which in turn will have a separate database for BI work.

This allows separation of the schema design between the databases, such that the back office database is optimised for maintenance and data integrity whereas the web facing database is optimised for presentation performance.

Different data purposes allow different data design.

There would be little point in having heavyweight constraint checking in a database where the data integrity was determined elsewhere.


Domain and referential integrity constraints, the developer's friend

Early on in my career I was told that it was common practice in development houses to install SQL Server as case sensitive. The theory was that if a database worked in case sensitive mode then it would also work in case insensitive mode, but the reverse would not necessarily be true.

I would opine that any software house using this approach must have had a vastly higher standard of work than 99.9999% of those in the market place today.

A similar approach can be taken with domain and referential integrity constraints. During development, put in place all domain and referential integrity constraints and use the exceptions thrown by the application to flush out weaknesses in the error handling and trapping.

Even if performance needs require that some constraints be dropped in production, there is at least the mitigation that a rigorous test in development should have thrown an exception in the event of a problem.


Domain integrity and the enterprise data dictionary

Data tends not to be something that stays in one place. In fact, data is like blood in the veins of the enterprise and can suffer from the business equivalent of the various diseases of the vascular system.

Let us suppose that you have designed a table to collect email addresses. You cheerfully read through RFC5321 and decide to define your email address field as VARCHAR(254), which is dictated by the maximum length of a path element of an SMTP transaction.

That is fine when your application works in isolation, but what happens if you are collecting that email address to export to a system that expects email addresses to be 100 characters or less?
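A data dictionary makes this kind of mismatch visible before it bites. As a toy illustration (the system names and sizes below are invented), each system's definition of a field can be recorded and lossy hand-offs flagged:

```python
# Hypothetical enterprise data dictionary: field -> {system: declared maximum length}
DATA_DICTIONARY = {
    "EmailAddress": {"WebCapture": 254, "LegacyCRM": 100},
}

def may_truncate(field: str, source: str, target: str) -> bool:
    """True if values captured in `source` may not fit the same field in `target`."""
    sizes = DATA_DICTIONARY[field]
    return sizes[source] > sizes[target]

print(may_truncate("EmailAddress", "WebCapture", "LegacyCRM"))  # True: 254 > 100
```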

Not everyone researches the various data standards before leaping into database development.

This tells us that two things are very useful for the organisation:

  • A data dictionary
  • A map detailing the flow of information throughout the organisation.
    • Where is data captured?
    • What systems does it flow through?
    • Where does it end up?

The data dictionary can also be useful when considering how well a 3rd party software product will integrate into your organisation.


Cinderella data users

My experience has been that there is always a department that needs to use data but is more or less invisible and starved of resource, while another great bloated ugly sister of a department consumes great slices of the resource pie.

You need to find this department and play Prince Charming, because ultimately the Cinderella department will become a problem for both you and the organisation. The best case scenario is that the department becomes a bottleneck and eventually gets the investment and resource it needs. Such saccharine Disneyesque endings are rare.

A more Brothers Grimm scenario is that Cinderella goes feral and does what she needs to do by hook or by crook, with MS Office as her pimp. In which case, expect to see at least some of the following:

  • The covert installation and development of SQL Express / MS Access based systems.
  • Mission critical applications run by individuals on office PCs
  • Master data originating and being maintained in a rat's nest of Excel spreadsheets.
  • Large-scale data extractions being performed in Excel for opaque processes.

You may not have sufficient authority within the organisation to make the changes necessary to support Cinderella but in such a case you should bring it to the attention of your manager.


ORM tools, dynamic SQL and the proc cache

If your development uses an ORM tool or does not use stored procedures, it is vital that you take a good look at what is being thrown at the database.

When a query is parsed, SQL Server checks to see if a cached execution plan exists for it. If one doesn't, an execution plan is compiled and stored in the proc cache.

Some ORM tools use sp_executesql and parameterised queries which should, in theory, make use of cached execution plans, but there is a gotcha.

In NHibernate, if you don't specify the length of the parameters it makes its best guess based on the values being passed.

If you search for people's first names then searching for "Jim" will define an NVARCHAR(3) parameter, "Dave" will define an NVARCHAR(4) parameter and "Steve" will generate an NVARCHAR(5) parameter. That is 3 calls and 3 separate execution plans.
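The effect is easy to simulate: each distinct parameter declaration is a distinct statement as far as the plan cache is concerned. The sketch below mimics the behaviour (the statement text is illustrative, not NHibernate's actual output):

```python
def guessed_statement(value: str) -> str:
    """Mimic an ORM sizing an NVARCHAR parameter to fit the value being passed."""
    return f"(@p0 NVARCHAR({len(value)})) SELECT * FROM Person WHERE FirstName = @p0"

calls = ["Jim", "Dave", "Steve", "Jim"]

# Guessed lengths: one cached plan per distinct parameter size
guessed_plans = {guessed_statement(name) for name in calls}
print(len(guessed_plans))  # 3

# Declaring the parameter at the column's defined size collapses them to one plan
fixed_plans = {"(@p0 NVARCHAR(50)) SELECT * FROM Person WHERE FirstName = @p0" for _ in calls}
print(len(fixed_plans))  # 1
```

Specifying parameter lengths explicitly in the ORM mapping avoids the whole problem.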

On 32 bit systems the proc cache is limited to the bottom 2GB but on 64bit systems the proc cache is limited only by the memory available to SQL Server. It is quite possible that the proc cache will grow to the point where the number of pages stolen from the buffer cache (where the data is held) is detrimental to performance. The less memory SQL Server has available for data the more it must make use of the disk and the more performance is impacted.

You may think that having up to 10 execution plans for a search on first name is no big deal; however, a query may have a number of parameters. Let us suppose that we are calling a query to store contact details: Title, FirstName, SurName, AddressLine1, AddressLine2, Town, County etc.

If FirstName contains names ranging from Jim to Nathaniel then the parameter length varies from 3 to 9 characters: 7 possible sizes.

If SurName contains names ranging from Jones to Templeton-Adams then the length varies from 5 to 15 characters: 11 possible sizes.

Theoretically we could end up with every combination of FirstName and SurName lengths, meaning that there are 7 x 11 = 77 execution plans to cater for names alone. We then need to multiply by the variability in each of the other columns, and before you know it you have thousands of execution plans.

Even on a small system it takes a while for this to manifest as a performance problem, but slowly and surely performance will degrade.


Data standards

We live in a crazy world where people are not prepared to celebrate their 99.999% commonality but will fight to the death over the 0.001% difference.

Amongst such madness, data standards have been defined:

  • International (ISO)
  • National
  • Industry, such as IATA (International Air Transport Association)

When such small nuggets of sanity exist it seems foolish to ignore them. Where possible I always try to adopt the appropriate data standards when designing a database. In fact the time bomb is in NOT adopting a published standard!

Some standards that are available are as follows

  • ISO3166 Codes for the representation of names of countries and their subdivisions
  • ISO4217 Codes for the representation of currencies and funds
  • ISO5218 Codes for the representation of human sexes

Adherence to published standards has a number of advantages

  • As they are published, they are by definition widely accepted.

    Often the standard is so widely known that the coding scheme has become human readable. Hell, LAX even crops up in pop songs!

  • Standards make it easier to exchange information between organisations. As an organisation grows the more likely it is that this will happen.
  • Standards do change but the upgrade path is always specified.
  • Some standards reinforce other standards. For example, TOPAS (Tour Operator Product Availability Standard) will contain IATA airport and airline codes in the information stream.

If you choose your own data encoding standard over a published one, then to exchange information with an external source you are going to have to take on the additional development task of providing (and maintaining) a translation facility.


Designing systems to grow

Jeff Moden made the comment that people write time bomb code "because there's some sort of guarantee that there will never be more than some small number of rows in the table".

A senior colleague opined that designing a system that isn't expected to grow is effectively designing a system that the business already regards as a failure before the first line of code is written!

The thought behind this statement is that all businesses want to grow, and a growing business will require systems that can increase in capacity. It doesn't matter whether that system is an accounts package, a theft and damage tracking system or an e-Commerce web site; it has to grow with the business.

I've worked on a system that was a quick bodge that was "only ever going to handle one supplier and 200,000 products per day". Over the years the system was tweaked and nursemaided so that it was handling many suppliers and significantly more than 200,000 products, but ultimately it reached its capacity before the business appetite for growth was sated.

Had the brief been "we are going to start off with 200,000 products and one supplier but expect it to grow", the developers would almost certainly have produced a radically different design.


Support your friendly development community

The agile movement is going from strength to strength, and one of the features of agile development is that, while there are specialists, everyone is expected to be capable of doing a little bit of everything. That means that although you are a "database specialist", other non-specialists will be performing development tasks on the database.

If there has been any historic separation between the role of the DBA and the developer community then there will probably be some antagonism between the two camps. As a DBA, your best move is to take a proactive role in building bridges. If you sit back and expect the developers to come to you then you are in for a nasty shock.

In my organisation I took a planned approach, as follows:

  • Defined a skills assessment test for potential database developers
  • Used the skills assessment to justify and organise formal training for an initial tranche of developers.
  • Published DBA standards, initially to codify best practices for the company DBAs but later adjusted to a more developer friendly format
  • Ran presentations and workshops for the developer community to highlight the DBA standards and why they existed.

I found that, with very few exceptions, this approach was successful. Simply giving the developer community the opportunity to ask questions, and making myself available to answer those questions, was time well spent.

A professional developer is always looking to improve his or her skill set, so taking the time to work out how best to help them improve is in your best interest.

That said, there are always THOSE developers: the small, highly visible minority that give the developer community a bad name. There is no easy answer to the stress and havoc that these people cause; however, the professional majority will help out.


Naming conventions

Expressing an opinion on naming conventions is always a sure-fire way of provoking a heated debate.

Not having an established naming convention within your organisation is a time bomb, because as your organisation grows, communicating clearly and effectively across the enterprise becomes increasingly difficult. An established naming convention acts as a means of clear communication.

In data warehousing terms, one of the key design steps is to "conform" the facts and dimensions. A naming convention is simply conforming the names of database objects.

Fortunately there is an international standard for metadata naming conventions which provides advice relevant to database element naming. ISO-11179's advice is as follows:

  • Each named item should have a unique purpose. For example CustomerID in one context should have exactly the same meaning in another context.
  • The name should not be a plural
  • The name should be descriptive and unambiguous of the concept that it is expressing.
  • Only commonly understood abbreviations should be used
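Rules like these are mechanical enough to check automatically. A deliberately naive sketch follows; the ambiguous-name list and the plural heuristic are my own simplifications for illustration, not part of ISO-11179:

```python
# Context-free names that become ambiguous the moment tables are joined
AMBIGUOUS_NAMES = {"id", "name", "description"}

def naming_violations(column_name: str) -> list:
    """Return naive naming-rule violations for a single column name."""
    violations = []
    lowered = column_name.lower()
    if lowered in AMBIGUOUS_NAMES:
        violations.append("ambiguous outside its table context")
    if lowered.endswith("s") and not lowered.endswith("ss"):
        violations.append("looks plural")  # crude heuristic; flags e.g. 'Orders'
    return violations

print(naming_violations("ID"))          # one violation
print(naming_violations("CustomerID"))  # []
```

A check of this sort slots naturally into the kind of automated test suite described in the next section.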

The part that states that names should be descriptive and unambiguous of the concept they are expressing is especially important.

In the object orientated world, having an object with properties of ID, name and description is no bad thing, because you are always viewing those properties within the context of the object.

In the database world, fields called ID, name and description are pure poison. The instant you start writing queries that join tables together, ID, name and description are taken outside of their context and are therefore ambiguous and not descriptive of their purpose.

As the data model becomes more complex these ambiguous fields make the model harder and harder to read, and it becomes easier to introduce bugs into complex queries.

Debugging a multiple table query with an ID field is the equivalent of "Paris in the the spring".


Earlier in this article I drew attention to the fact that I had given defaults, primary keys and check constraints explicit names rather than allowing SQL Server to generate them itself. This is mainly for reasons of maintainability.

I want my constraints to have identical names across environments. I know default, primary key and foreign key constraint names can be derived by querying system tables, but check constraints aren't so straightforward.


For reasons of maintainability I would also avoid prefixing tables with tbl, views with vw or similar.

One project I worked on involved shifting tables into a common enterprise database and, to avoid application code changes, presenting the data in the original database as a view with its original name. In short, we now had a tbl_Area object in the database that was in fact a view.

Similarly, a partial rewrite project decided to refactor data from a legacy system by wrapping up the legacy tables in views to present the data exactly as the new application needed it. Had these views been prefixed with vw then phase II, which involved migrating the legacy data into a physical version of the view structure, would have meant that things named vw were in fact tables and not views.


Test and Auditing

If non-DB specialists are going to be performing database development tasks then it is wise to have a suite of automated tests, so that any database code smells can be identified early and corrected.

If the developers are practising test driven development and continuous integration then including a few basic database tests should be welcomed. The idea is to educate the developer community and help them to write quality code.

Some of the tests I put in place are as follows

  • Check for system-generated object names rather than explicitly named objects
  • Missing primary keys
  • Missing clustered keys
  • Objects on incorrect filegroups
  • Non-partitioned indexes on partitioned tables.
  • Spaces or invalid characters in object names
  • Where partition switching is used for purging, a test to make sure that the schema of the main table matches that of its partitioned brethren.

The tests themselves are not particularly sophisticated; they are simply to give some basic health checks.

For example, the following stored proc lists any user tables that are missing clustered indexes; if any are found it throws an error which will break the build on the continuous integration server.

CREATE PROC dbo.TestForAbsentClusteredIndex
    @ReturnValue INT OUTPUT
AS
SELECT T.name AS TableName
FROM sysobjects AS T
    LEFT JOIN sysindexes AS CI
    ON T.id = CI.id
    AND CI.indid = 1 -- Indicates a clustered index.
WHERE T.xtype = 'U' -- Indicates a user table.
    AND OBJECTPROPERTY(T.id, 'IsMSShipped') = 0 -- Filters out dtproperties and MS replication tables.
    AND CI.id IS NULL -- No clustered index exists.
SET @ReturnValue = @@ROWCOUNT
IF @ReturnValue > 0
    RAISERROR('*** FAILED ***: Clustered keys are missing in %i cases', 16, 1, @ReturnValue) WITH NOWAIT

I am indebted to Lutz and Jeff Moden for pointing out that a list of SQL reserved words can be found in Books Online and elsewhere.

I used these lists to build a table of reserved words in the model database, together with the stored procedure shown below. This meant that whenever a new database was created, or the continuous integration environment dropped and recreated the development database, both the data and the stored proc were present in the user database.

The stored proc and some other simple tests are in the attached script.

CREATE PROC dbo.TestForReservedWords
    @ReturnValue INT OUTPUT
AS
SELECT O.name AS ObjectName,
    CASE O.xtype -- representative xtype translations
        WHEN 'U' THEN 'Table'
        WHEN 'V' THEN 'View'
        WHEN 'P' THEN 'Procedure'
        WHEN 'FN' THEN 'Function'
        ELSE O.xtype
    END AS ObjectType,
    CASE O.parent_obj
        WHEN 0 THEN ''
        ELSE QUOTENAME(OBJECT_NAME(O.parent_obj))
    END AS ParentObject
FROM sysobjects AS O
    INNER JOIN dbo.ReservedWords AS RW
    ON O.name = RW.ReservedWord
SET @ReturnValue = @@ROWCOUNT
IF @ReturnValue > 0
    RAISERROR('*** FAILED ***: Objects are named with reserved words in %i cases', 16, 1, @ReturnValue) WITH NOWAIT

SELECT O.name AS ObjectName,
    CASE O.xtype
        WHEN 'U' THEN 'Table'
        WHEN 'V' THEN 'View'
        ELSE O.xtype
    END AS ObjectType,
    C.name AS ColumnName
FROM syscolumns AS C
    INNER JOIN sysobjects AS O
    ON C.id = O.id
    INNER JOIN dbo.ReservedWords AS RW
    ON C.name = RW.ReservedWord
SET @ReturnValue = @@ROWCOUNT
IF @ReturnValue > 0
    RAISERROR('*** FAILED ***: Fields are named with reserved words in %i cases', 16, 1, @ReturnValue) WITH NOWAIT



In this article I have attempted to go beyond mere code time bombs and highlight that applying thought to the design of processes can also help prevent the time bombs that afflict DBAs.

A good DBA quickly learns to think ahead and anticipate what is around the corner. Good judgement is the product of experience, and experience is the product of bad judgement.

I am increasingly of the opinion that helping others (and being seen to help) avoid the turds and bear traps that all DBAs tread in over their careers is the most effective way of avoiding time bomb design and coding.


