Keep It Simple

  • Comments posted to this topic are about the item Keep It Simple

  • It's interesting how people's interpretation of keeping it simple varies from organisation to organisation. I've worked in places where a developer's concept of keeping it simple meant the simplest code to write (hard-coding, code written for a specific server, or code that does one particular thing in one particular circumstance). That, for me, is a scenario I do everything I can to change.

    For me the notion of keeping it simple means simple to manage. To achieve this often requires the most difficult code and considerably more effort.

    I'd be inclined to agree with your point about it not being necessary to do the same work on every server just for the sake of it; however, what I would want is exactly the same logic applied on all my servers. In a corporation where developers and operational staff/servers are kept in isolation, it is essential that even if the data is different, the logic stays the same. That way your developers know what to expect when it's release time.

    Kindest Regards,

    Frank Bazan

  • Overthinking can kill you - too complex, and you could miss a key ingredient. And thinking tends to get me in trouble, so I avoid it as much as possible!

    Some databases - like the back ends for RS or WSS - can generally get by with a simple backup once a day. The risk to the business and how difficult it would be to recreate what I might have lost are weighed.

    Transactional DBs need the constant "just in case of disaster, how do I recover?" approach.

    So we have never used a one-size-fits-all method of management; it varies by database.

    Working in a data warehouse gives me a different perspective than some. Our cube build pegs our server for several hours each night. A network admin looks at averages and would say our server is underutilized. I look at the peaks during the night and during the day, and manage to the peaks.

    I think our 'green' management is more influenced by working towards Hyper-V and virtualization. But then again, green is only a part of it. A bigger driver is managing upgrades - they will be driven more by application release cycles than by hardware lease cycles.

    Greg E

  • That's an interesting angle - possibly sacrifice some performance for electricity savings.

    I'd still probably err on the performance side, but I hadn't really thought about it that way.

    I'd be curious to see what the power consumption of database servers is versus the entire data center - if it is a small percentage (depends on how many database servers you have, obviously) it wouldn't make too much sense to conserve.

    With all of the backups to tape (via tape libraries), etc. going on during the night for all of the servers we have, I doubt that (in my company) the hit for indexing and full backups is too high.

    Thought provoking...

  • Good comments, and good way of thinking about things. It certainly got me thinking...

    My takeaway is that it's important not to become complacent about our priorities and the methods we use to evaluate them. Often there are multiple factors, both internal and external, that affect our priorities and the way we work.

    That being said, in order to truly evaluate what's important, we need to truly understand the cost of our actions. For example, the decision not to back up or reindex every night in order to save energy must be balanced against the cost of not doing so (e.g., in terms of performance). Considering this specific example, it would be interesting to calculate the actual savings, in terms of energy consumption, from altering the backup/reindexing strategy. It would also be interesting to see the benefit, in terms of a better end-user experience, of doing these things, and at what point (if any) that experience begins to degrade under the altered strategy.
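
    As a rough back-of-the-envelope illustration (every figure here is an assumption, not a measurement): if a server draws an extra 200 W under a reindexing load and the job runs for two hours, skipping it saves about 0.2 kW x 2 h = 0.4 kWh per night, or roughly 146 kWh per year; at $0.10 per kWh, that's on the order of $15 per server per year before cooling overhead. Numbers like these suggest the savings only become interesting across a large number of servers, or once cooling and capacity effects are folded in.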

    Best regards,
    -- John.

    John Hoegy
    Technical Specialist

  • KISS also stands for Keep It Sql Server

    🙂



    Alvin Ramard
    Memphis PASS Chapter

    All my SSC forum answers come with a money back guarantee. If you didn't like the answer then I'll gladly refund what you paid for it.

    For best practices on asking questions, please read the following article: Forum Etiquette: How to post data/code on a forum to get the best help

  • I try to make my maintenance plans as lazy as possible. I set them up to do the minimum necessary to make sure everything works well. That means index rebuilds when they are needed, not every night, for example.

    I generally do full backups every night for all production databases. I've yet to manage a database where a greater amount of data loss would be acceptable, and I've found that long stretches of diff backups make for a higher probability of data loss. I could see, in some cases, an argument for a weekly full, a nightly diff, and an every-four-hour tran log backup (for OLTP). Disregard the tran log for databases that aren't transactional, of course (ones loaded nightly with ETL processes, for example). But the nightly full backup is more of a habit than anything else, and it works pretty well.
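
    For illustration, that weekly full / nightly diff / four-hourly log rotation could be implemented as three Agent job steps along these lines (the database name and backup paths are placeholders, and a real rotation would use dated file names rather than overwriting with INIT):

    -- Sunday 01:00 - weekly full backup
    BACKUP DATABASE SalesDB -- placeholder database name
        TO DISK = N'X:\Backups\SalesDB_full.bak'
        WITH INIT, CHECKSUM

    -- Monday through Saturday 01:00 - nightly differential
    BACKUP DATABASE SalesDB
        TO DISK = N'X:\Backups\SalesDB_diff.bak'
        WITH DIFFERENTIAL, INIT, CHECKSUM

    -- Every four hours - transaction log backup (FULL recovery model only)
    BACKUP LOG SalesDB
        TO DISK = N'X:\Backups\SalesDB_log.trn'
        WITH INIT, CHECKSUM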

    On the lazy maintenance, part of the goal is to free up as much processing time as possible, because I'm used to servers that have to do a lot of work overnight loading reporting tables and such, but also have to do their maintenance overnight.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Glad the editorial got you to think.

    I wish we had more insight into power usage. I suspect that it's coming over time, but it's hard to get at right now.

    I found that moving to weekly fulls and daily diffs saved a lot of space. I'm sure it saved power as well, since there was less to back up, less full load on the server at night, etc. We saved close to $10k in tapes at JDE back in 2001/2002 with this. Add in the savings from backup compression at that time, and we paid for our licenses in a year.

  • Everything works better when you're KIS'ing. As a developer, I can tell you that keeping it simple doesn't mean writing simple code. You can build something complex and work hard at it - that's fine; the simple part should be understanding it and modifying it when the next developer steps into your work.

    Same thing for database and table structures... you should knock your head against the wall making them as clean as possible, so they can be understood more quickly by the next kid who has to work in them.

  • Alvin Ramard (3/12/2009)


    KISS also stands for Keep It Sql Server

    🙂

    ROFL!

  • Steve,

    Should we reindex all tables every night because we can?

    I'm surprised you asked this one.

    I've got a script that checks fragmentation using DBCC SHOWCONTIG and only runs a reindex on tables that have a scan density of less than some percentage - 80% seems to work well.

    It came from Kimberly Tripp and I've been running it since at least 2004.

    JohnS

  • JStiney,

    Can you post this script?

    Thanks

  • I run this stored procedure on all my databases on all my servers every night and have done so since 2004. The vast majority of the tables do not need to be reindexed on any given night.

    JohnS

    USE [master]
    GO

    /****** Object: StoredProcedure [dbo].[sp_dba_Reindex] Script Date: 03/12/2009 14:12:41 ******/
    SET ANSI_NULLS ON
    GO
    SET QUOTED_IDENTIFIER ON
    GO

    CREATE PROCEDURE [dbo].[sp_dba_Reindex]
    (
        @ScanDensity tinyint = 80,
        @Maxtime     int     = 100000,
        @TableName   sysname = NULL
    )
    AS
    -- Written by Kimberly L. Tripp of SYSolutions, Inc.
    -- For more code samples go to http://www.sqlskills.com
    --
    -- Modified by John Stiney - MEAG Power 12/01/2004
    -- 1. Correct case sensitivity of object names.
    -- 2. Add ability to accept 101 as the percentage, to get ALL indexes rebuilt. (Special case)
    -- 3. Delete the called stored procedure that recreated clustered indexes (not really
    --    needed, per Kimberly; the regular reindex command can be used for clustered
    --    indexes, too).
    -- 4. Format long-running reindexes (ms > @Maxtime) to list on MEAG Central Reports.
    -- 5. Move the TableName parameter from first to third.
    --
    -- This procedure gathers the fragmentation information for all tables and
    -- indexes within the database. It then walks the list, rebuilding every
    -- index that has a scan density less than the value passed in - by default,
    -- anything less than 80% contiguous.
    --
    -- NOTE - This gathers density information for all tables and all indexes,
    -- which might be time-consuming on large databases.
    --
    SET NOCOUNT ON

    SELECT 'Server:', SUBSTRING(@@SERVERNAME, 1, 10),
           'Database:', SUBSTRING(DB_NAME(), 1, 15), 'Reindex ', getdate()

    IF @ScanDensity IS NULL
        SET @ScanDensity = 80
    IF @ScanDensity NOT BETWEEN 1 AND 101
    BEGIN
        RAISERROR('Value supplied: %i is not valid. @ScanDensity is a percentage. Please supply a value for Scan Density between 1 and 101.', 16, 1, @ScanDensity)
        RETURN
    END

    IF @TableName IS NOT NULL
    BEGIN
        IF OBJECTPROPERTY(OBJECT_ID(@TableName), 'IsUserTable') = 0
        BEGIN
            RAISERROR('Object: %s exists but is NOT a user-defined table. This procedure only accepts valid table names to process for index rebuilds.', 16, 1, @TableName)
            RETURN
        END
        ELSE
        BEGIN
            IF OBJECTPROPERTY(OBJECT_ID(@TableName), 'IsTable') IS NULL
            BEGIN
                RAISERROR('Object: %s does not exist within this database. Please check the table name and location (which database?). This procedure only accepts existing table names to process for index rebuilds.', 16, 1, @TableName)
                RETURN
            END
        END
    END

    -- Otherwise the object exists and is a table, so we continue from here.
    -- First, create a temp location for the data returned from DBCC SHOWCONTIG.
    CREATE TABLE #ShowContigOutput
    (
        ObjectName           sysname,
        ObjectId             int,
        IndexName            sysname,
        IndexId              tinyint,
        [Level]              tinyint,
        Pages                int,
        [Rows]               bigint,
        MinimumRecordSize    smallint,
        MaximumRecordSize    smallint,
        AverageRecordSize    smallint,
        ForwardedRecords     bigint,
        Extents              int,
        ExtentSwitches       numeric(10,2),
        AverageFreeBytes     numeric(10,2),
        AveragePageDensity   numeric(10,2),
        ScanDensity          numeric(10,2),
        BestCount            int,
        ActualCount          int,
        LogicalFragmentation numeric(10,2),
        ExtentFragmentation  numeric(10,2)
    )

    IF @TableName IS NOT NULL -- then we only need the SHOWCONTIG output for that table
        INSERT #ShowContigOutput
            EXEC('DBCC SHOWCONTIG (' + @TableName + ') WITH FAST, ALL_INDEXES, TABLERESULTS')
    ELSE -- all tables and all indexes will be processed
        INSERT #ShowContigOutput
            EXEC('DBCC SHOWCONTIG WITH FAST, ALL_INDEXES, TABLERESULTS')

    PRINT N' '
    SELECT 'Fragmentation information has been gathered for all indexes: ', getdate()

    -- Quick test to see if everything is getting here correctly:
    -- SELECT * FROM #ShowContigOutput

    -- Walk the SHOWCONTIG output, skipping all replication tables as well as all tables
    -- needed by the UI. This is also where you can list large tables that you don't want
    -- rebuilt all at one time.
    -- NOTE: If you exclude a large table from rebuilding, this script will still have
    -- gathered its density information, so that expense in time is incurred regardless.
    -- Use a separate procedure to rebuild a large table specifically; even when you pass
    -- in its name, it will be skipped here if you have MANUALLY added it to the list.
    -- Test, test, test!

    DECLARE @ObjectName          sysname,
            @IndexName           sysname,
            @QObjectName         nvarchar(258),
            @QIndexName          nvarchar(258),
            @IndexId             tinyint,
            @ActualScanDensity   numeric(10,2),
            @InformationalOutput nvarchar(4000),
            @StartTime           datetime,
            @EndTime             datetime

    DECLARE TableIndexList CURSOR FAST_FORWARD FOR
        SELECT ObjectName, IndexName, IndexId, ScanDensity
        FROM #ShowContigOutput AS sc
            JOIN sysobjects AS so ON sc.ObjectId = so.id
        WHERE sc.ScanDensity < @ScanDensity
            AND (OBJECTPROPERTY(sc.ObjectId, 'IsUserTable') = 1
                OR OBJECTPROPERTY(sc.ObjectId, 'IsView') = 1)
            AND so.status > 0
            AND sc.IndexId BETWEEN 1 AND 250
            AND sc.ObjectName NOT IN ('dtproperties')
            -- Here you can list large tables you do NOT want rebuilt.
        ORDER BY sc.ObjectName, sc.IndexId

    OPEN TableIndexList

    FETCH NEXT FROM TableIndexList
        INTO @ObjectName, @IndexName, @IndexId, @ActualScanDensity

    WHILE (@@fetch_status <> -1)
    BEGIN
        IF (@@fetch_status <> -2)
        BEGIN
            SELECT @QObjectName = QUOTENAME(@ObjectName, ']')
            SELECT @QIndexName  = QUOTENAME(@IndexName, ']')

            SELECT @InformationalOutput = N'Processing Table: ' + RTRIM(UPPER(@QObjectName))
                + N' Rebuilding Index: ' + RTRIM(UPPER(@QIndexName))
            PRINT @InformationalOutput

            IF @IndexId = 1 -- clustered index
            BEGIN
                SELECT @StartTime = getdate()
                EXEC('DBCC DBREINDEX(' + @QObjectName + ', ' + @QIndexName + ') WITH NO_INFOMSGS')
                SELECT @EndTime = getdate()
                SELECT @InformationalOutput = N'Total Time to process (clustered ix) = '
                    + CONVERT(nvarchar, DATEDIFF(ms, @StartTime, @EndTime)) + N' ms'
                PRINT N'DBCC SHOWCONTIG % = ' + CONVERT(nchar, @ActualScanDensity)
                IF DATEDIFF(ms, @StartTime, @EndTime) > @Maxtime
                BEGIN
                    PRINT ' *Reindex > ' + CONVERT(nvarchar, @Maxtime) + ' ms: Table ' + @QObjectName + ', Index ' + @QIndexName
                    PRINT ' *Reindex ' + @InformationalOutput
                    PRINT ' *Reindex Start-' + CONVERT(nvarchar, @StartTime) + ', End-' + CONVERT(nvarchar, @EndTime)
                        + ', Minutes:' + CONVERT(nvarchar, DATEDIFF(mi, @StartTime, @EndTime))
                    PRINT ' *Reindex '
                END
                ELSE
                    PRINT @InformationalOutput
            END
            ELSE -- nonclustered index
            BEGIN
                SELECT @StartTime = getdate()
                EXEC('DBCC DBREINDEX(' + @QObjectName + ', ' + @QIndexName + ') WITH NO_INFOMSGS')
                SELECT @EndTime = getdate()
                SELECT @InformationalOutput = N'Total Time to process (non-clust ix) = '
                    + CONVERT(nvarchar, DATEDIFF(ms, @StartTime, @EndTime)) + N' ms'
                PRINT N'DBCC SHOWCONTIG % = ' + CONVERT(nchar, @ActualScanDensity)
                IF DATEDIFF(ms, @StartTime, @EndTime) > @Maxtime
                BEGIN
                    PRINT ' *Reindex > ' + CONVERT(nvarchar, @Maxtime) + ' ms: Table ' + @QObjectName + ', Index ' + @QIndexName
                    PRINT ' *Reindex ' + @InformationalOutput
                    PRINT ' *Reindex Start-' + CONVERT(nvarchar, @StartTime) + ', End-' + CONVERT(nvarchar, @EndTime)
                        + ', Minutes:' + CONVERT(nvarchar, DATEDIFF(mi, @StartTime, @EndTime))
                    PRINT ' *Reindex '
                END
                ELSE
                    PRINT @InformationalOutput
            END

            PRINT N' '

            FETCH NEXT FROM TableIndexList
                INTO @ObjectName, @IndexName, @IndexId, @ActualScanDensity
        END
    END

    PRINT N' '
    SELECT @InformationalOutput = N'***** All indexes have been rebuilt. ***** '
    PRINT @InformationalOutput

    CLOSE TableIndexList
    DEALLOCATE TableIndexList
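
    One usage note from me rather than from the script itself: depending on your version, an sp_-prefixed procedure in master may need to be marked as a system object for its catalog references to resolve against the calling database; many people simply create it in each user database instead. A nightly job step might then look like this (the database name is hypothetical):

    USE SalesDB -- hypothetical database name
    GO
    -- Rebuild only indexes below 80% scan density; flag anything running over 100,000 ms
    EXEC sp_dba_Reindex @ScanDensity = 80, @Maxtime = 100000
    GO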

  • jStiney,

    Thanks. While many experienced DBAs have something similar (I had one of these in 2000-2001), most people just run the maintenance plan and reindex often.
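
    For anyone on SQL Server 2005 or later, the same selective idea can be expressed against sys.dm_db_index_physical_stats instead of DBCC SHOWCONTIG. Here's a minimal sketch, using the commonly cited 10%/30% reorganize-versus-rebuild thresholds (the thresholds and filters are illustrative, so test them against your own workload):

    DECLARE @sql nvarchar(max)
    SET @sql = N''

    -- Build one ALTER INDEX statement per fragmented index
    SELECT @sql = @sql + N'ALTER INDEX ' + QUOTENAME(i.name)
        + N' ON ' + QUOTENAME(SCHEMA_NAME(o.schema_id)) + N'.' + QUOTENAME(o.name)
        + CASE WHEN ps.avg_fragmentation_in_percent > 30
               THEN N' REBUILD;'    -- heavily fragmented: full rebuild
               ELSE N' REORGANIZE;' -- lightly fragmented: cheaper reorganize
          END + CHAR(13)
    FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
        JOIN sys.indexes AS i ON i.object_id = ps.object_id AND i.index_id = ps.index_id
        JOIN sys.objects AS o ON o.object_id = ps.object_id
    WHERE ps.avg_fragmentation_in_percent > 10 -- skip already-healthy indexes
        AND ps.index_id > 0                    -- skip heaps
        AND ps.page_count > 100                -- skip tiny indexes
        AND o.is_ms_shipped = 0

    EXEC sp_executesql @sql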

  • Thanks
