I want to avoid transactions

  • Hi All,

    In my SP, I am using some permanent "temporary" tables which hold the data for intermediate calculations. I delete and insert that data frequently, and I have to wrap those operations in transactions so that if something goes wrong I can roll the data back. The downside is that the transaction log grows unnecessarily. I need advice on how to handle this problem.

    First question: are these transactions implicit? If yes, how can I change them to explicit?

    Second: how can I reduce the transaction log? I thought the log would shrink when I commit, but sometimes it does not. Does the transaction log get reduced when the commit occurs?

    Waiting for a reply.

    Kind Regards,

    Affan

  • Thanks for the reply,

    I have read the thread, but my problem is that I need SQL Server to stop writing log records for some specific tables. From the thread I have realized that I don't have that kind of control.

    But if you can give some good tips about this, that would be great.

    Kind Regards,

    Affan

  • You may want to experiment with the SET IMPLICIT_TRANSACTIONS option. I would be interested to hear if this helps.

    On the other hand, you could influence the level of locking by using SET TRANSACTION ISOLATION LEVEL. A level of READ UNCOMMITTED is the least restrictive and is equivalent to a "select .... with NOLOCK". Is this what you are after?
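    To make the two suggestions concrete, here is a minimal sketch; the table names are illustrative, not from the original post:

    ```sql
    -- Autocommit vs. implicit transactions: with IMPLICIT_TRANSACTIONS OFF
    -- (the default), each statement commits on its own unless you start a
    -- transaction explicitly.
    SET IMPLICIT_TRANSACTIONS OFF

    -- Explicit control over the unit of work:
    BEGIN TRANSACTION
        DELETE FROM CalcWork
        INSERT INTO CalcWork (id, val)
        SELECT id, val FROM SourceData
    COMMIT TRANSACTION

    -- Least-restrictive reads, equivalent to SELECT ... WITH (NOLOCK):
    SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
    SELECT id, val FROM CalcWork
    ```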

    Kay

  • Thanks Kay,

    these are good points and will help me somewhere else.

    I just wanted to avoid writing to transaction log while implementing transactions.

    Kind Regards,

    Affan

  • I remember our developer group trying the same thing about 5 years ago. They just wanted to avoid the overhead of transaction logging where it's not really necessary. After a lot of experiments, the best thing they came up with was to set 'truncate log on checkpoint' and issue explicit CHECKPOINTs every once in a while. There was no way to switch off logging completely. I'm not sure if this is still the case today, but it wouldn't surprise me if it is.
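    A sketch of that approach, assuming a database named MyDb; the first form is the SQL Server 7.0 option, the second is the SQL Server 2000 equivalent (the SIMPLE recovery model):

    ```sql
    -- SQL Server 7.0: let checkpoints truncate the inactive log
    EXEC sp_dboption 'MyDb', 'trunc. log on chkpt.', 'true'

    -- SQL Server 2000 equivalent:
    ALTER DATABASE MyDb SET RECOVERY SIMPLE

    -- Force a checkpoint so the inactive portion of the log can be reused:
    CHECKPOINT
    ```

    Note that with either setting you give up point-in-time recovery from log backups for that database.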

  • Thanks, I have the same point of view about this.

    Kind Regards,

    Affan

  • Just to mention, it is my understanding that writing to the transaction log is a key, integral part of SQL Server, in that it is absolutely essential for maintaining database integrity. It goes something like:

    - SQL Server notes the change to be made in the log file

    - SQL Server makes the change in the data file

    - SQL Server notes that the change has been completed in the log file

    (There are, of course, much more technical terms describing this, such as "two-phase write and commit" or the like, but I've never been good at memorizing them.)

    If at any point prior to that final "all clear" anything fails, then there is sufficient information to roll back the changes, restoring your original data untouched and uncorrupted. This provides the absolute guarantee that your transaction was either performed 100% accurately or had no corrupting effect whatsoever.
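    The all-or-nothing guarantee described above can be seen directly; a small sketch with a hypothetical table:

    ```sql
    -- Hypothetical table, not from the original post
    CREATE TABLE Accounts (id int PRIMARY KEY, balance money)
    INSERT INTO Accounts (id, balance) VALUES (1, 100)

    BEGIN TRANSACTION
        UPDATE Accounts SET balance = balance - 40 WHERE id = 1
        -- suppose something goes wrong before the second half of the work...
    ROLLBACK TRANSACTION   -- the log records are used to undo the change

    SELECT balance FROM Accounts WHERE id = 1   -- balance is back to 100
    ```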

    It seems like a lot of fuss and bother, but then that's why it's an Enterprise-level product, and not just a desktop utility.

    Philip

  • One workaround might be to copy the data from the calculation tables to a true temp table and do all your updates during the SP on the temp tables. Then, at the end of the SP, update the calculation tables with the data from the temp tables. That way you won't use so much space in your transaction logs.
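    A sketch of that pattern; the table and column names are illustrative:

    ```sql
    -- Snapshot the calculation table into tempdb
    SELECT * INTO #work FROM CalcTable

    -- Do the intermediate calculations on the temp copy
    UPDATE #work SET val = val * 2
    -- ... more steps ...

    -- One short transaction at the end to publish the results
    BEGIN TRANSACTION
        DELETE FROM CalcTable
        INSERT INTO CalcTable SELECT * FROM #work
    COMMIT TRANSACTION

    DROP TABLE #work
    ```

    The intermediate churn is logged in tempdb rather than your main database's log, and the transaction you actually care about stays small.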

    Gary Johnson
    Microsoft Natural Language Group
    DBA, Sr. DB Engineer

    This posting is provided "AS IS" with no warranties, and confers no rights. The opinions expressed in this post are my own and may not reflect that of my employer.

  • But using temp tables will disable the caching and put a heavy load on tempdb. Am I correct?

    Kind Regards,

    Affan

  • If you are using SQL 2K, you could also use a table variable, depending on the size of the calculation tables.
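    A minimal sketch of the table-variable variant, with illustrative names:

    ```sql
    -- SQL Server 2000: a table variable for the intermediate work
    DECLARE @work TABLE (id int PRIMARY KEY, val money)

    INSERT INTO @work (id, val)
    SELECT id, val FROM CalcTable

    UPDATE @work SET val = val * 2

    -- Table variables generate minimal logging and, notably, are not
    -- undone by ROLLBACK, so they suit scratch data that needs no recovery.
    ```

    For large intermediate sets a #temp table is usually the better choice, since table variables have no statistics.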

    Gary Johnson
    Microsoft Natural Language Group
    DBA, Sr. DB Engineer

    This posting is provided "AS IS" with no warranties, and confers no rights. The opinions expressed in this post are my own and may not reflect that of my employer.

  • I would think about how and why you are using the permanent temp table (insert "Military Intelligence" or other oxymoron joke here). I have found that this is most often done because the person writing the query doesn't know, or doesn't have time to figure out, how to "do it right" (whatever THAT means).

    There are many options:

    - Can you rewrite the query to do the selects and calculations in one step?

    - Can you use a derived table (where the table in the "FROM" clause is defined by a SELECT statement as a subquery)?

    - Can you use variables in the SP?

    - Are you REALLY sure the answers to the questions above are "NO"?
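    The derived-table option above, sketched with hypothetical tables, often removes the staging table entirely:

    ```sql
    -- Instead of staging aggregates in a permanent 'temp' table,
    -- compute them inline as a derived table in the FROM clause.
    SELECT d.customer_id,
           d.total * r.rate AS adjusted_total
    FROM (SELECT customer_id, SUM(amount) AS total
          FROM Orders
          GROUP BY customer_id) AS d      -- the derived table
    JOIN Rates AS r
      ON r.customer_id = d.customer_id
    ```

    Nothing is written to any user table, so there is nothing to wrap in a transaction and nothing extra in the log.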

  • Kay:

    'truncate log on checkpoint'? Jeez, I guess you don't ever care about restoring the database from differentials, since you don't have them after you truncate the log!

    Again, I think a better solution is to see if there is a better way to do the query.

  • There is no way to avoid the use of the log. Your 'not so temp' tables may be better off in a small auxiliary database with 'trunc. log on chkpt.' enabled, maybe even on a different disk.

    This would keep your main DB safe and sound.

    Tip: the tables in your new database can be referenced by views in your main DB under their original names, so you won't need to change your existing code.

    Example:

    previous state: MainDB with 'temp' table AAA.

    new state: new AuxDB with 'temp' table AAA.

    Drop the old table MainDB..AAA and create a view named AAA in MainDB that selects from AuxDB..AAA.

    Your existing code will still run, and the new database will isolate its log use.
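    One subtlety worth spelling out: CREATE VIEW cannot take a database-qualified view name, so the view has to be created from inside MainDB. A sketch using the names from the example:

    ```sql
    USE AuxDB
    GO
    -- Move the data into the auxiliary database
    SELECT * INTO AAA FROM MainDB..AAA
    GO
    USE MainDB
    GO
    DROP TABLE AAA
    GO
    -- The view carries the old name, so existing code keeps working
    CREATE VIEW AAA AS SELECT * FROM AuxDB..AAA
    GO
    ```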

  • I am fairly certain that you cannot turn off logging for a database.

    However, there are some things you can do.

    The preferred way is to use a solid-state disk for the database log. That way you are only limited by the throughput of the bus you are on. All logging will still take place, so roll-forward and rollback of transactions are still protected if something goes wrong.

    I have never tried this, but you could also look into a real transaction monitor; that would add a new learning curve as well as expense.

    Another option, which I have never tried either, is to use a software RAM disk; however, if you have a failure, there would be no log for recovery to use. This would involve quite a bit of work: you would have to reproduce quite a bit of the functionality of tempdb, as well as write your own rollback procedures. I don't recommend this, though; you would probably lose all benefit from using a memory disk.
