Running Out of Space

  • I've run into this so many times, especially the Maint Plan failing its integrity checks because a "user" was in the database and then refusing to delete the old backups. I ended up putting the script below in a scheduled job that runs every 6 hours. Since then I haven't had a big problem with it.

    declare @file_name varchar(300)    -- physical_device_name can be up to 260 characters
    
    declare FileList cursor for
        select  physical_device_name
        from msdb.dbo.BackupMediaFamily bmf 
        left join msdb.dbo.BackupSet bs on bs.media_set_id = bmf.media_set_id
        WHERE  datediff(hh,bs.backup_start_date,getdate()) > 48    -- older than 2 days
        and datediff(hh,bs.backup_start_date,getdate()) < 168      -- but newer than 7 days
        and physical_device_name not like '%BEXVDI%'
        ORDER BY bs.backup_start_date desc
    
    OPEN FileList
    FETCH NEXT FROM FileList INTO @file_name
     WHILE (@@FETCH_STATUS <> -1) 
      BEGIN
        --print @file_name
        SELECT @file_name = 'DEL ' + @file_name    -- build and run the delete command
        EXEC XP_CMDSHELL @file_name
        FETCH NEXT FROM FileList INTO @file_name
      END
    CLOSE FileList
    DEALLOCATE FileList

    Solved a lot of my major headaches.



    ----------------
    Jim P.

    A little bit of this and a little byte of that can cause bloatware.

  • Here is MY running-out-of-space scenario:

    Over the 4th of July holiday weekend, my SAN ran out of space for backups. Because of this, the transaction logs began to grow. Eventually, they filled up the hard drives. Once this happened, DTS packages began to fail, due to the log files being full. When the DTS packages that load the databases failed, the file server that receives files from an FTP server filled up as well.

    As you can see, we MUST monitor the backup device to ensure it has plenty of space! I have talked with the Vice President of our IT group about this situation, but he is deaf to my claims. I tell him databases only grow; they do not shrink. We must plan ahead for growth!

  • > Say you've got a 100g drive with a 40g database and for purposes of discussion the backup is also 40g, leaving you a cool 20g free. Add 10% to the db driving the size up to 44g, you now only have 12g free after backup - and that's not counting your log backups.

    Just a thought... one might consider using a different example: if the drive itself fails and is lost, what of the backup?

  • I have jobs that tell me what is going on. I have one that runs every hour that checks disk space and will both page and e-mail me when free space goes below a threshold that I set. I have another one that runs every 2 hours that checks database and tlog files for file growth and will both page and e-mail me when growth happens. (A rough sketch of the disk-space check is at the end of this post.)

    I have left my defaults on and have carefully set the size of my databases to where I am not expecting them to grow (at least not much). If I get notified of growth, then I need to ask the business owners what is going on to make sure that I have planned for their growth needs.
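
    The check itself can be as simple as something along these lines. This is only a minimal sketch: the 2 GB threshold is made up, and the actual alerting is left to whatever your shop uses (sp_send_dbmail on SQL 2005+, xp_sendmail or net send on older versions); xp_fixeddrives reports free megabytes per local drive.

    CREATE TABLE #drives (drive char(1), mb_free int)

    INSERT #drives (drive, mb_free)
    EXEC master.dbo.xp_fixeddrives

    DECLARE @drive char(1), @mb_free int, @msg varchar(200)

    SELECT TOP 1 @drive = drive, @mb_free = mb_free
    FROM #drives
    WHERE mb_free < 2048          -- threshold: 2 GB free (adjust to taste)
    ORDER BY mb_free

    IF @drive IS NOT NULL
    BEGIN
        SET @msg = 'Low disk space: drive ' + @drive + ' has only '
                 + CAST(@mb_free AS varchar(10)) + ' MB free'
        -- raise a logged error so a SQL Agent alert/operator can page or e-mail
        RAISERROR (@msg, 16, 1) WITH LOG
    END

    DROP TABLE #drives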

  • Several things from my experiences:

    We had a database that measured 100G+ and that grew between 1-3G a week. I eventually configured it to autogrow 10G at a time, which made it a bit easier to watch the volume fill up.

    We bought database backup compression software (SQL LiteSpeed, of course). With that, it suddenly became possible to back up that 100G turkey and store a backup or two on disk.

    I built a utility that once a day tallies the size of the data and log files, as well as the amount of space used by data, indexes, and blobs within a database. It does this for every database on the server and logs it in a table. After a month we had a notion of what was going on; after a quarter we could establish solid growth trends and patterns.
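
    The file-size half of that can be sketched roughly like this (the history table name and columns are made up; master.dbo.sysaltfiles lists every file on the server with its size in 8 KB pages, and the per-object data/index/blob breakdown would come from sp_spaceused or sysindexes on top of this):

    CREATE TABLE dbo.db_size_history (
        sample_date datetime      NOT NULL DEFAULT (GETDATE()),
        db_name     sysname       NOT NULL,
        file_name   sysname       NOT NULL,
        size_mb     decimal(12,2) NOT NULL
    )

    -- run daily from a SQL Agent job
    INSERT dbo.db_size_history (db_name, file_name, size_mb)
    SELECT DB_NAME(dbid), RTRIM(name), size / 128.0   -- 128 pages per MB
    FROM master.dbo.sysaltfiles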

    Most important lesson learned from Mr. Turkey: archive your data! Once data stored in a database is no longer relevant, get it out of there [purge, or archive and then purge]. Do this or your databases may grow like the Blob and conquer the world. When management says "we can't delete that, we need it," tell them how much it will cost in hard drives x RAID x [disk arrays or SANs] to house it all. Sadly, they won't listen, and there really is no joy in saying "I told you so".

       Philip

     

  • Andy,

    This is great advice about placeholders.

    I would also like to mention tempdb, which can grow fast as well, even with read-only reporting, as I found out once.
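
    A quick way to keep an eye on it is to check tempdb's file sizes from time to time, for example (just a sketch; sysfiles reports size in 8 KB pages, so dividing by 128 gives MB):

    SELECT name, size / 128.0 AS size_mb
    FROM tempdb..sysfiles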

    Yelena

    Regards, Yelena Varsha

  • Hello Andy,

    I am a fan of your articles. However, I found "Running Out of Space" a touch troubling. A disk or disk array runs best at 0 to 65% utilization. From 65% to 85% utilization we can expect a fall-off in performance, and at about 85% utilization we may expect a perceptible degradation. So overall your article neglects the subject of disk performance. You may underestimate your influence if you think many D.B.A.s don't pay attention to what you write. Further, the suggestion to "keep several 1g files as placeholders on each drive that might run out of space." is questionable at best. Even placeholder files take up space and affect disk performance. Given a choice between a CYA strategy that can affect performance and proactive maintenance, I will choose the proactive approach rather than slow my customers and my system down.

    Best wishes,

    Barry O'Connell

  • In reference to the article "Running Out of Space": I believe that, as with other databases and other systems, the space for the data files (data and log) must be forecast and never left unlimited. That is to say, if I have a database of, for example, 20 GB and it just keeps growing, that is a big problem. The application analyst, the person responsible, or the salesperson must know the size and its growth as a certain percentage.

    I believe the policy must go in two directions:

     

    1. Limit the maximum file size.
    2. Based on the size and the growth rate, choose a growth increment in megabytes or as a percentage.

     

    In both cases the same applies to the data and the log files; a rough example of both settings is below.
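
    Something along these lines (the database and logical file names are hypothetical, and 20480 MB is simply 20 GB; pick either a fixed MB increment or a percentage for FILEGROWTH):

    ALTER DATABASE Sales MODIFY FILE (NAME = Sales_Data, FILEGROWTH = 100MB)
    ALTER DATABASE Sales MODIFY FILE (NAME = Sales_Data, MAXSIZE = 20480MB)

    ALTER DATABASE Sales MODIFY FILE (NAME = Sales_Log, FILEGROWTH = 10%)
    ALTER DATABASE Sales MODIFY FILE (NAME = Sales_Log, MAXSIZE = 5120MB)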

     

    This is my opinion; it is not the opinion of my company, it is only my opinion.

     

    Sincerely:

    Ignacio-J. Santos Forner

    Madrid (Spain)

  • I've got a data mart that's growing by about 3 gig a day and it drives me nuts. I've only got about 400g to work with (without paying my IT dept a huge amount for a bigger space... but that's another story).

    Anyway, the entire system has been redesigned to save some space and increase speed. Now, this is only for people looking to save space on massive data sets.

    We have inventory data coming in each day, keyed by date-store-product, plus 6 other items: cost, price, model_stock, on_hand, in_transit, sales_qty.

    So, rather than keep a column with the date, we made a table for each day.  When running a query, we simply build the result set in a temp table as we go.  (35,000,000 recs a day)

    That saved 35,000,000 date fields each day. The next step was to rename the columns per day and append the days together, so we ended up with store-product-mon_cost, tue_cost, wed_cost, etc. That saved us from repeating the storeID and productID 36,000,000*6*2 times. Believe it or not, it makes running a query on a full week of data VERY fast. Oddly enough, the only cost has been difficulty, not speed. Now that we've got compiled search procedures, it's a snap.
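
    If it helps to picture the layout, here is a rough sketch (all table and column names are made up, and only a couple of the weekly columns are shown):

    -- one narrow table per day, no date column needed
    CREATE TABLE inv_20060703 (
        store_id    int   NOT NULL,
        product_id  int   NOT NULL,
        cost        money NULL,
        price       money NULL,
        model_stock int   NULL,
        on_hand     int   NULL,
        in_transit  int   NULL,
        sales_qty   int   NULL,
        CONSTRAINT pk_inv_20060703 PRIMARY KEY (store_id, product_id)
    )

    -- weekly table: store/product stored once, measures pivoted into per-day columns
    CREATE TABLE inv_week_27 (
        store_id   int   NOT NULL,
        product_id int   NOT NULL,
        mon_cost   money NULL,
        tue_cost   money NULL,
        -- ...one column per measure per day of the week
        CONSTRAINT pk_inv_week_27 PRIMARY KEY (store_id, product_id)
    )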

    In the end, even if we had had more space, we ended up with a much more effective system, even being able to export entire weeks at a time to backup systems.

    So, if you're running into the wall, remember that you can redesign some or all of the DB in an unconventional format, keeping the active portion in a standardized layout. 
