Overall Guidance and Suggestions for My Environment

  • First off, thanks for your time; any suggestions or ideas are much appreciated.

    The environment I have is totally different from any I have ever been a part of, and I am hoping I can get some assistance. It goes against pretty much everything theory-based; we are pretty much shooting from the hip. Okay, here we go.

    I have about 15 servers and growing fast. The data is big data and we are filling up drives quickly; most servers have 4 drives of about 2 TB each, with less than 25% free. The servers have plenty of RAM and CPU, so that's good. Backups are running, but space is a concern, so I push them off to a network drive, where we are waiting for another 50 TB so that we can keep backups for over a week (I currently keep 3 days of compressed backup files, which run 50+ GB for some individual databases).

    How can I control the growth of the log files? Currently every database is in SIMPLE recovery mode, and SIMPLE-mode databases will still grow the log when you have a 150-325 GB load, which happens practically nightly. Sometimes the load completely replaces a database, and sometimes it just overwrites everything in it (I have no control over this aspect and very little say in how it is done). Currently I just shrink the log files with checkpoints, TRUNCATE_ONLY, and so on, and there are NO log backups, as point-in-time recovery is not needed. I know shrinking log files is bad, but when you have limited space and know the databases will load again the next night or within a couple of days, what can you do? I have a job right now that just shrinks all the log files, and it has been working, but I am wondering whether there are other ideas I can go with; a rough sketch of what that job does follows.
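
    A minimal version of that shrink job might look like this, assuming a placeholder database LoadDB with a logical log file name LoadDB_log (the job would repeat this per database):

        -- Placeholder names: LoadDB / LoadDB_log.
        USE LoadDB;
        GO
        CHECKPOINT;                          -- in SIMPLE recovery, lets committed log records be cleared
        GO
        DBCC SHRINKFILE (LoadDB_log, 1024);  -- try to shrink the log back down to roughly 1 GB
        GO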

    Thanks for your time. That is a bit of an oversimplification, but hopefully you get the idea.

    DHeath

  • To really deal with this, you need to get some input and insight into the load process. I know you say you have zero control over it, but that's where the problem lies. SQL Server simply works as SQL Server works. You can't, for example, literally turn off logging entirely to avoid it during data loads. Instead, you look at how the process is working and ensure you're either using minimally logged operations (here's a bunch of documentation) or that the operations are broken up into smaller chunks to allow for less log use (a rough sketch of the chunking idea follows). There's not much more I can say beyond that without quite a bit more in the way of details.
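
    To make the chunking idea concrete, here is a minimal sketch; the staging table, target table, and columns (dbo.StagingTable, dbo.TargetTable, Id, Payload) are all placeholders. Moving rows in batches keeps any single transaction, and therefore the active log, small:

        -- Illustrative only; table and column names are placeholders.
        -- Each batch commits on its own, so in SIMPLE recovery the log space
        -- can be reused at checkpoints instead of one huge transaction holding it all.
        DECLARE @rows int = 1;
        WHILE @rows > 0
        BEGIN
            DELETE TOP (50000) s
            OUTPUT deleted.Id, deleted.Payload INTO dbo.TargetTable (Id, Payload)
            FROM dbo.StagingTable AS s;

            SET @rows = @@ROWCOUNT;

            CHECKPOINT;   -- encourage log truncation between batches
        END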

    You can look into other mechanisms such as partitioning to break up how data is stored and managed. That can help with logging, since those operations are minimally logged as well. SIDE NOTE: Partitioning is for data management, not for query tuning. Be very wary of any advice you see online about it helping performance. It frequently does the opposite.
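
    For illustration only, a bare-bones partitioning sketch (the table, function, scheme names, and boundary dates are all made up); the point is that whole slices can be emptied or switched out cheaply rather than deleted row by row:

        -- Hypothetical monthly partitioning; names and dates are placeholders.
        CREATE PARTITION FUNCTION pf_LoadDate (date)
            AS RANGE RIGHT FOR VALUES ('2024-01-01', '2024-02-01', '2024-03-01');

        CREATE PARTITION SCHEME ps_LoadDate
            AS PARTITION pf_LoadDate ALL TO ([PRIMARY]);

        CREATE TABLE dbo.BigFactTable
        (
            LoadDate date         NOT NULL,
            Payload  varchar(100) NULL
        ) ON ps_LoadDate (LoadDate);

        -- Emptying one slice (SQL Server 2016+) is effectively a metadata operation:
        TRUNCATE TABLE dbo.BigFactTable WITH (PARTITIONS (2));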

    Regarding backups: if you reload the data nightly, why take a backup? Chances are, especially for larger databases, that the restore process will take just as long, or nearly as long, as reloading the data.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Grant,

    Thanks for the input, and you are right: it's probably the loading process that needs to be looked at quite a bit more closely. I was thinking the problem was my thought process, but it's in the way I was handling it. I am trying to get them a "loading drive" on the server, so that is where the data is dropped and then migrated from. That seems kind of redundant because it's most likely going to the same server, BUT it might be a way to avoid interrupting production if the data drive or load process dies because a drive is full. The data is reloaded nightly, but there are times when it's smaller data, and the bigger databases are static, though they are rarely static for more than a couple of days (hence only 3-4 days of backups).

    I thought about the bulk-logged recovery idea, but that would mean taking my databases out of SIMPLE, and I place them all in SIMPLE to minimize the logging even more.

    As I said at the beginning, it is probably very accurate that I need to talk to the individuals who do the loading and get a good understanding of how this is handled so that it's more "controlled". Your input is appreciated.

    DHeath

  • If the data is loaded and the log files keep growing to 100 GB in SIMPLE mode, there's not much you can do. Shrinking them makes no sense because they'll just grow again. In SIMPLE mode, the log records aren't kept; once transactions commit, the log records can be overwritten. If you just leave the log alone, the next night's load ought to reuse the same space.

    If it grows somewhat, you've had a bigger load. In SIMPLE mode, your log's maximum size is the size you need; the space gets reused.
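
    A couple of quick, standard checks that show whether a log really is being reused, and what (if anything) is blocking reuse:

        -- Current log size and percent used, per database:
        DBCC SQLPERF (LOGSPACE);

        -- What, if anything, is preventing log space from being reused:
        SELECT name, recovery_model_desc, log_reuse_wait_desc
        FROM sys.databases;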

  • What utility is used to load the data? If it is SSIS, then you can control the batch/commit sizes on the OLE DB Destination so the transactions don't grow the log file. If it is some other utility, look at whether it has the same type of option; a rough T-SQL analogue of the idea is sketched below.
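
    Something like this shows the batch-commit idea outside of SSIS (the file path, table name, and batch size are placeholders); committing every 100,000 rows means no single transaction has to hold the whole load in the log:

        -- Placeholder path and table.
        BULK INSERT dbo.TargetTable
        FROM 'D:\Loads\nightly_extract.csv'
        WITH (
            FIELDTERMINATOR = ',',
            ROWTERMINATOR   = '\n',
            BATCHSIZE       = 100000,   -- commit every 100,000 rows
            TABLOCK                     -- can also help the load qualify for minimal logging
        );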

    You can also look at compressing the tables, either row or page, depending on which one works best. This should be tested for performance, but it should result in quite a bit of space savings and may even improve performance.
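
    For example (the table name here is a placeholder), you can estimate the savings before rebuilding anything:

        -- Estimate page-compression savings first; dbo.BigFactTable is a placeholder.
        EXEC sp_estimate_data_compression_savings
             @schema_name      = 'dbo',
             @object_name      = 'BigFactTable',
             @index_id         = NULL,
             @partition_number = NULL,
             @data_compression = 'PAGE';

        -- If the estimate (and testing) looks good:
        ALTER TABLE dbo.BigFactTable REBUILD WITH (DATA_COMPRESSION = PAGE);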

    As for backups: if you can ensure you have a full week of load files, you can switch to weekly full backups. A restore would require restoring the weekly full backup and reloading all files processed since that backup was taken. You would have to validate the storage requirements for the load files (which you should be keeping anyway), at the expense of a slightly longer recovery time. A sketch of the weekly full backup follows.
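
    A weekly full backup along those lines might look like this (the database name and UNC path are placeholders):

        BACKUP DATABASE LoadDB
        TO DISK = '\\backupserver\sqlbackups\LoadDB_full.bak'
        WITH COMPRESSION, CHECKSUM, INIT;   -- compressed, with page checksums, overwriting the previous file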

    Jeffrey Williams
    “We are all faced with a series of great opportunities brilliantly disguised as impossible situations.”

    ― Charles R. Swindoll

    How to post questions to get better answers faster
    Managing Transaction Logs

  • Steve,

    Thanks for the response. The problem is that you may be 100% correct, but if the log file is sitting at 100 GB or whatever it is, and the drive fills, then they can't load whatever else is coming, and there might be 2-3 more loads in a particular night. I know this goes beyond all rational thinking, and that's why I came here. Please know I am NOT trying to be a devil's advocate, but this does happen at least once a week, if not more, and when the drive fills, production on that server is down momentarily, and the only solution is to find log files to shrink, which in turn frees up space. They used to shrink the data files; I got that stopped... hahaha, thank goodness.
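
    For what it's worth, a standard way to watch file sizes against free space on each volume (nothing custom, just system views) is something like:

        -- Size of every data/log file plus free space on the volume it lives on.
        SELECT DB_NAME(mf.database_id)          AS database_name,
               mf.name                          AS logical_name,
               mf.type_desc,
               mf.size * 8 / 1024               AS size_mb,
               vs.volume_mount_point,
               vs.available_bytes / 1048576     AS free_mb
        FROM sys.master_files AS mf
        CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;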

    Jeffrey,

    Thank you as well for replying. I do believe it is SSIS that is used for the loads, and I am not aware of the "batch/commit size" settings you are mentioning, so I will research them and pass this along to the dev team (THANK YOU); maybe they have more awareness about this than I do. As for the backups, keeping a week is pushing the limit, so I keep 3-4 days; I tried to go to 5 days and ran out of space. LOVE the idea, though.

    All comments are greatly valued and truly appreciated. Thanks again.

    DHeath
