SQLServerCentral is supported by Red Gate Software Ltd.
 

Running Out of Space
Posted Thursday, July 07, 2005 12:00 PM
SSCertifiable


Group: Moderators
Last Login: Yesterday @ 9:01 AM
Points: 6,705, Visits: 1,680
Comments posted to this topic are about the content posted at http://www.sqlservercentral.com/columnists/awarren/runningoutofspace.asp

Andy
SQLAndy - My Blog!
Connect with me on LinkedIn
Follow me on Twitter
Post #198268
Posted Thursday, July 14, 2005 6:36 AM


SSChasing Mays


Group: General Forum Members
Last Login: Today @ 8:54 AM
Points: 601, Visits: 1,739
I've run into this so many times, especially the Maintenance Plan failing because it couldn't do integrity checks while a "user" was in the database; it then won't delete the old backups. I ended up putting the script below into a scheduled job that runs every 6 hours, and since then I haven't had a big problem with it.

-- Deletes disk backup files between 48 and 168 hours old,
-- skipping virtual devices. Requires xp_cmdshell to be enabled.
declare @file_name varchar(260)   -- 260 covers the maximum Windows path

declare FileList cursor for
select bmf.physical_device_name
from msdb.dbo.BackupMediaFamily bmf
join msdb.dbo.BackupSet bs on bs.media_set_id = bmf.media_set_id
WHERE datediff(hh, bs.backup_start_date, getdate()) > 48   -- older than 2 days
and datediff(hh, bs.backup_start_date, getdate()) < 168    -- newer than 7 days
and bmf.physical_device_name not like '%BEXVDI%'           -- skip virtual devices
ORDER BY bs.backup_start_date desc

OPEN FileList
FETCH NEXT FROM FileList INTO @file_name
WHILE (@@fetch_status <> -1)
BEGIN
--print @file_name
SELECT @file_name = 'DEL "' + @file_name + '"'   -- quote paths containing spaces
EXEC XP_CMDSHELL @file_name
FETCH NEXT FROM FileList INTO @file_name
END
CLOSE FileList
DEALLOCATE FileList


Solved a lot of my major headaches.
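The same retention window (older than 48 hours, younger than 168) can be applied outside SQL Server as well. A minimal Python sketch, assuming the backups sit in a single folder; the folder path and the commented-out delete are illustrative, not from the post:

```python
import os
import time

def stale_backups(folder, min_hours=48, max_hours=168):
    """Return files whose age falls inside the deletion window
    (older than min_hours, younger than max_hours), mirroring the
    T-SQL cursor's datediff filter."""
    now = time.time()
    hits = []
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        if not os.path.isfile(path):
            continue
        age_hours = (now - os.path.getmtime(path)) / 3600.0
        if min_hours < age_hours < max_hours:
            hits.append(path)
    return hits

# for path in stale_backups(r"D:\Backups"):
#     os.remove(path)
```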




----------------
Jim P.

A little bit of this and a little byte of that can cause bloatware.
Post #200989
Posted Thursday, July 14, 2005 7:12 AM
Valued Member


Group: General Forum Members
Last Login: Wednesday, April 09, 2014 3:43 PM
Points: 50, Visits: 168

Here is MY running-out-of-space scenario:

Over the 4th of July holiday weekend, my SAN ran out of space for backups. Because of this, the transaction logs began to grow and eventually filled up the hard drives. Once that happened, the DTS packages that load the databases began to fail because the log files were full, and then the file server that receives files from an FTP server filled up as well.

As you can see, we MUST monitor the backup device to ensure it has plenty of space! I have talked with the Vice President of our IT group about this situation, but he is deaf to my claims. I tell him databases only grow; they do not shrink. We must plan ahead for growth!




Post #201018
Posted Thursday, July 14, 2005 8:00 AM
Grasshopper


Group: General Forum Members
Last Login: Sunday, April 06, 2008 5:44 PM
Points: 17, Visits: 3

> Say you've got a 100g drive with a 40g database and for purposes of discussion the backup is also 40g, leaving you a cool 20g free. Add 10% to the db driving the size up to 44g, you now only have 12g free after backup - and that's not counting your log backups.

Just a thought... one might consider using a different example: if the drive itself fails and is lost, what of the backup?
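The arithmetic in the quoted example is worth spelling out. A quick sketch, with the figures taken from the article's scenario:

```python
drive_gb = 100
db_gb = 44                 # 40 GB database plus 10% growth
backup_gb = db_gb          # full backup assumed equal in size to the database
free_gb = drive_gb - db_gb - backup_gb
print(free_gb)             # 12 GB left, before any log backups
```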




Post #201057
Posted Thursday, July 14, 2005 8:51 AM
SSC Rookie


Group: General Forum Members
Last Login: Monday, April 22, 2013 3:11 PM
Points: 33, Visits: 459

I have jobs that tell me what is going on. One runs every hour, checks disk space, and both pages and e-mails me when free space goes below a threshold that I set. Another runs every 2 hours, checks database and tlog files for file growth, and both pages and e-mails me when growth happens.

I have left my defaults on and have carefully set the size of my databases to where I am not expecting them to grow (at least not much). If I get notified of growth, then I need to ask the business owners what is going on to make sure that I have planned for their growth needs.
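A free-space check of this kind is straightforward to script. A minimal sketch, assuming a fixed threshold (the 10 GB default is made up, not from the post); a scheduler such as a SQL Agent job would call it hourly and page/e-mail when it returns a message:

```python
import shutil

def free_space_alert(path, threshold_gb=10.0):
    """Return an alert message if free space on the drive holding
    `path` is below threshold_gb, else None."""
    usage = shutil.disk_usage(path)
    free_gb = usage.free / (1024 ** 3)
    if free_gb < threshold_gb:
        return f"Low disk space: {free_gb:.1f} GB free on {path}"
    return None
```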

Post #201107
Posted Thursday, July 14, 2005 1:57 PM


SSChasing Mays


Group: General Forum Members
Last Login: Tuesday, October 29, 2013 12:35 PM
Points: 649, Visits: 205

Several things from my experience:

We had a database that measured 100 GB+ and grew between 1 and 3 GB a week. I eventually configured it to autogrow 10 GB at a time, which made it a bit easier to watch the volume fill up.

We bought database backup compression software (SQL LiteSpeed, of course). With that, it suddenly became possible to back up that 100 GB turkey and store a backup or two on disk.

I built a utility that once a day tallies the size of the data and log files, as well as the amount of space used by data, indexes, and blobs within each database. It does this for every database on the server and logs it in a table. After a month we had a notion of what was going on; after a quarter we could establish solid growth trends and patterns.
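Once sizes are logged daily, projecting when a volume fills becomes simple arithmetic. A sketch, assuming one total-size sample per day from such a logging table (the helper and its numbers are illustrative):

```python
def days_until_full(samples, capacity_gb):
    """Given daily total-size samples (GB), compute the average daily
    growth rate and project how many days remain until the volume
    fills. Returns None if there is no growth to extrapolate."""
    if len(samples) < 2:
        return None
    growth_per_day = (samples[-1] - samples[0]) / (len(samples) - 1)
    if growth_per_day <= 0:
        return None
    return (capacity_gb - samples[-1]) / growth_per_day

# A week of samples growing ~2 GB/day toward a 400 GB volume:
# days_until_full([100, 102, 104, 106, 108, 110, 112], 400) -> 144.0
```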

Most important lesson learned from Mr. Turkey: archive your data! Once data stored in a database is no longer relevant, get it out of there (purge, or archive and then purge). Do this or your databases may grow like The Blob and conquer the world. When management says "we can't delete that, we need it", tell them how much it will cost in hard drives, RAID, and the disk arrays or SANs to house it all. Sadly, they won't listen, and there really is no joy in saying "I told you so".

   Philip

 




Post #201262
Posted Friday, July 15, 2005 9:44 AM
Hall of Fame


Group: General Forum Members
Last Login: Thursday, December 26, 2013 9:45 AM
Points: 3,475, Visits: 577

Andy,

This is great advice about placeholders.

I would also like to mention tempdb, which can grow fast as well, even with read-only reporting, as I found out once.

Yelena




Regards,
Yelena Varshal

Post #201657
Posted Tuesday, July 19, 2005 10:04 PM
SSC Journeyman


Group: General Forum Members
Last Login: Tuesday, July 15, 2008 5:20 AM
Points: 91, Visits: 17
Hello Andy,

I am a fan of your articles. However, I found "Running Out of Space" a touch troubling. A disk or disk array runs best at 0 to 65% utilization; from 65% to 85% utilization we can expect a fall-off in performance, and at about 85% utilization we may expect a perceptible degradation. Overall, your article neglects the subject of disk performance. You may underestimate your influence if you think many DBAs don't pay attention to what you write. Further, the suggestion to "keep several 1g files as placeholders on each drive that might run out of space" is questionable at best: even placeholder files take up space and affect disk performance. Given a choice between a CYA strategy that can affect performance and proactive maintenance, I will choose the proactive approach rather than slow my customers and my system down.
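For reference, the placeholder technique under debate amounts to pre-writing a file of zeros that can be deleted in an emergency to free space instantly. A minimal sketch (the function and sizes are illustrative; truncate() alone may produce a sparse file on some filesystems, hence the explicit writes):

```python
def make_placeholder(path, size_bytes=1024 ** 3, chunk_bytes=8 * 1024 ** 2):
    """Reserve disk space by writing a file of zeros in chunks.
    Deleting the file later frees the space when a drive fills."""
    chunk = b"\0" * chunk_bytes
    remaining = size_bytes
    with open(path, "wb") as f:
        while remaining > 0:
            n = min(remaining, chunk_bytes)
            f.write(chunk[:n])
            remaining -= n

# make_placeholder(r"E:\placeholder1.dat")  # one 1 GB placeholder
```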

Best wishes,

Barry O'Connell




Post #202641
Posted Wednesday, July 20, 2005 2:14 AM
Forum Newbie


Group: General Forum Members
Last Login: Thursday, February 23, 2012 5:06 AM
Points: 1, Visits: 5

In reference to the article "Running Out of Space": I believe that, as with other databases and other systems, the space for the data files (data and log) must be forecast and never left unlimited. That is to say, if I have a database of, for example, 20 GB and it keeps growing without limit, that is a big problem. The application analyst, the person responsible, or the salesperson must know the size and its growth as a percentage.

I believe the policy should go in two directions:

  1. Limit the maximum size.
  2. According to the size and its growth, choose growth in megabytes or by percent.

In both cases the same applies to the data and the log.

This is my opinion; it is not the opinion of my company, it is only my opinion.

 

Sincerely:

 

 

Ignacio-J. Santos Forner

Madrid (Spain)

Post #202673
Posted Friday, July 14, 2006 9:17 AM
Grasshopper


Group: General Forum Members
Last Login: Friday, July 06, 2012 1:03 PM
Points: 12, Visits: 26

I've got a data mart that's growing by about 3 GB a day, and it drives me nuts. I've only got about 400 GB to work with (without paying my IT department a huge amount for more space, but that's another story).

Anyway, the entire system had to be redesigned to save some space and increase speed. Now, this is only for people looking to save space on massive data sets.

We have inventory data coming in each day, keyed by date-store-product, with 6 other items: cost, price, model_stock, on_hand, in_transit, sales_qty.

So, rather than keep a column with the date, we made a table for each day. When running a query, we simply build the result set in a temp table as we go. (35,000,000 records a day.)

That saved 35,000,000 date fields each day. The next step was to rename the columns and append them for each day, so we ended up with store-product-mon_cost, tue_cost, wed_cost, etc. That saved us from repeating the storeID and productID 35,000,000 × 6 × 2 times. Believe it or not, it makes running a query on a full week of data VERY fast. Oddly enough, the only cost has been difficulty, not speed. Now that we've got compiled search procedures, it's a snap.

In the end, even if we had had more space, we ended up with a much more effective system, one even able to export entire weeks at a time to backup systems.

So, if you're running into the wall, remember that you can redesign some or all of the DB in an unconventional format, keeping the active portion in a standardized layout.

Post #294507