Can anyone help me?

  • Sorry, my English is bad.

    I have a problem like this:

    My log file has grown very big, and I want to ask how I can reduce its size.

    I have tried deleting it, but I got an error.

    Can anyone help me, please?

    I need it for my application.

    I'm a new member, so please support me in learning SQL.

    Thanks to all.

  • Hi there,

    As per your PM, I've sent you the script that I use to do this. Let me know how you get on.

    Mike

  • I'm still confused, Mike!

    Can you give me a simple method, please?

  • Since you do not seem to worry about keeping the log, you can use this:

    USE MASTER

    BACKUP LOG [DBName] WITH NO_LOG

    GO

    Then immediately after that, run the following

    DBCC SHRINKDATABASE ([DBName])

    GO

    Please note, however, that this will truncate the log back to its default size (1 MB) and will therefore discard all your logged transactions.

    - Roelof


  • What if my log file is already very big and I want to reduce it?

    Any ideas?

  • That's exactly what the script I gave you in the PM does.

  • Change the database's recovery model from "Full" to "Bulk-logged" or "Simple", and review your batch process and backups.
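
    For reference, a minimal sketch of how that change might look, assuming a database named [DBName] (the name is a placeholder):

    -- Switch to simple recovery so the log is truncated automatically at checkpoints.
    -- (Use SET RECOVERY BULK_LOGGED for the bulk-logged model instead.)
    -- Note: with simple recovery you lose point-in-time restore.
    ALTER DATABASE [DBName] SET RECOVERY SIMPLE
    GO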

  • I wouldn't go down this route on a production server. "Bulk-logged" can actually produce a larger transaction log depending on what's being done, and "simple" recovery mode means that you can only restore to your last full backup, losing everything that has been done since.

    That said, if this is a test server then feel free to experiment; just make sure you read Books Online first and know what each option you're setting does.

  • The commands

    BACKUP LOG [DBName] WITH NO_LOG

    and

    DBCC SHRINKDATABASE ([DBName])

    should work perfectly. You can also put these two commands in a job and schedule it to run at, say, 1am every morning. You can even check the log file size, or how full the transaction log is, and only run the script when your pre-defined conditions are met.
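
    A sketch of what such a conditional job step might look like on SQL Server 2000 (the database name and the one-gigabyte threshold are placeholders; sysfiles reports file sizes in 8 KB pages, and log files have groupid = 0):

    USE [DBName]
    GO

    -- Only truncate and shrink when the log file has grown past roughly 1 GB (131072 pages).
    IF EXISTS (SELECT 1 FROM sysfiles WHERE groupid = 0 AND size > 131072)
    BEGIN
        BACKUP LOG [DBName] WITH NO_LOG
        DBCC SHRINKDATABASE ([DBName])
    END
    GO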

     

  • Thanks to all.

    I'm sorry my English is so bad.

    I hope you can understand my words.

  • Now I have a problem like this:

    I inserted 1,000,000 records into one of the tables in my database. When I deleted them and ran your suggestion, my data file didn't shrink; only the log file was reduced.

    Can anybody help me again?

  • Hi There,

    The SP I sent you via PM is only good for shrinking the log file; in order to shrink the data file you'd need to do something else. I would ask why you need to shrink the data file, though. If you're not running out of disk space, you'd be better off leaving it at the size it has reached; after all, it's already filled up once, which implies it will do so again.

    Shrinking the data file and then allowing it to auto-grow again just causes file fragmentation problems. It's better to grow the file to a manageable size (enough free space for six months is a good starting point) and keep track of it than to let it grow by itself.

    In the long run you'll have fewer problems doing it this way, I promise.
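
    If you do decide to size the file up front instead of letting it auto-grow, a sketch of how that might look (the database name and logical file name below are placeholders; list the real logical names with sp_helpfile):

    -- Grow the data file once to a size with plenty of headroom, rather than relying on auto-grow.
    ALTER DATABASE [DBName]
    MODIFY FILE (NAME = 'DBName_Data', SIZE = 2048MB)
    GO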

  • So do you know how to reduce it, Mike?

    Please help me, Mike.

  • He told you not to reduce it because that's pointless in the long run. How much free space do you have in that file (%)?
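
    For reference, one way to check is sp_spaceused, which reports the total database size and the unallocated (free) space; the database name below is a placeholder:

    USE [DBName]
    GO
    -- Compare database_size with "unallocated space" to see roughly how much of the file is free.
    EXEC sp_spaceused
    GO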

  • DBCC SHRINKDATABASE (<your db name>, 10)

    This would shrink the database, leaving you 10% free space when the job has finished. I still say that you shouldn't bother wasting time shrinking it, but if you feel you have to, then the above will do it for you.

    If you have a copy of Books Online (often referred to here as BOL), you can read up on this sort of thing for yourself.
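
    If you only want to shrink the data file rather than every file in the database, DBCC SHRINKFILE works on a single file. A sketch, with a placeholder logical file name and target size (check the real name with sp_helpfile):

    USE [DBName]
    GO
    -- Shrink just the data file down to roughly 500 MB (the target size is in MB).
    DBCC SHRINKFILE ('DBName_Data', 500)
    GO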
