separate tables to improve performance?

  • I have a server running MSSQL 2000 that contains a database X with 200 tables, using about 70 GB of space. My question is: how can I improve the performance of this database? Is it possible to split it into separate files (.ndf) on different disks, or is there a better way to move specific tables onto different disks? I can't modify the clients that access this database, which is why I think I need to separate tables into different .ndf files.

    Could you help me, please.

    Thanks a lot

  • Well, you have several options. You could archive old data off to a different database, if that is an option for you. You can separate the data files onto separate disks. Ensure the tables are indexed properly. You can even create all your indexes in a separate file on a different disk. Hope this helps.
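    As a concrete sketch of that last suggestion, here is what adding a filegroup on another disk and rebuilding an index onto it looks like in SQL Server 2000 T-SQL. All names here (database X, drive E:, table dbo.Orders, the index) are hypothetical placeholders; substitute your own, and size the file for your environment.

```sql
-- Add a new filegroup and an .ndf file on a separate physical disk.
ALTER DATABASE X ADD FILEGROUP FG_Indexes;

ALTER DATABASE X
ADD FILE (
    NAME = X_Indexes1,
    FILENAME = 'E:\MSSQL\Data\X_Indexes1.ndf',
    SIZE = 2048MB,
    FILEGROWTH = 256MB
) TO FILEGROUP FG_Indexes;

-- Rebuild an existing nonclustered index onto the new filegroup,
-- moving its pages off the primary data file's disk.
CREATE INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID)
    WITH DROP_EXISTING
    ON FG_Indexes;
```

    Because the clients only reference tables and indexes by name, moving index pages to another filegroup this way is transparent to them.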

  • And then, examine the code... most performance problems have nothing to do with the hardware and sometimes not even with indexes.  Got cursors?  Got WHILE loops?  Got views with aggregate functions?  Got views on views?  Got non-set-based triggers?  Got user-defined functions (not all are bad, but a lot of folks just don't know how to write good ones)?  Processing anything one row at a time?  Got mixed datatypes?  Got tables without primary keys?  Got clustered primary keys on big transactional tables?  Got dates stored as VARCHAR?  Got temp DB set to autogrow at 10%?  Etc., etc... Those are ALL different things to look at that can cause performance problems!  200 tables and 70 GB ISN'T a monster database!
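    To make the cursor/WHILE-loop point concrete, here is a hypothetical before-and-after (table dbo.Products and its columns are invented for illustration): the same price update done row-by-agonizing-row versus as a single set-based statement.

```sql
-- RBAR version: a cursor touching one row at a time.
DECLARE @id INT
DECLARE c CURSOR FOR SELECT ProductID FROM dbo.Products
OPEN c
FETCH NEXT FROM c INTO @id
WHILE @@FETCH_STATUS = 0
BEGIN
    UPDATE dbo.Products SET Price = Price * 1.10 WHERE ProductID = @id
    FETCH NEXT FROM c INTO @id
END
CLOSE c
DEALLOCATE c

-- Set-based version: one statement, one pass, no per-row overhead.
UPDATE dbo.Products SET Price = Price * 1.10
```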

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Thanks a lot to all,

    I'm not a programmer; that part is done by another company. But it is my responsibility to make sure the database works well from a technical systems point of view — items like "Got temp DB set to autogrow at 10%? Etc., etc." are exactly what I need. Do you know the best practices in this case?

    Thanks

    --Daniel

  • Yep... no database should ever be set to autogrow by 10%, including TempDB... Estimate the maximum size you'll need for the next six months (or a year), add 20%, and set the startup size to that.  Don't be shy about allocating disk space to TempDB (~5 to 10 GB is not uncommon).  Growth should be set to a fixed amount (~250 MB).  Both are IMO from previous experience... your findings may be different.
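    Applied to TempDB, that advice looks something like the following SQL Server 2000 T-SQL. The sizes are hypothetical examples of the approach, not recommendations; the logical file names (tempdev, templog) are the SQL Server defaults.

```sql
-- Pre-size TempDB and switch from percentage to fixed growth.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 10240MB, FILEGROWTH = 250MB);

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, SIZE = 1024MB, FILEGROWTH = 250MB);
```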

    FYI... If you start with the default of 1MB at 10% growth, you'll end up with 73 fragments due to growth just to get to 1 Gig.
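    That fragment count is easy to verify: each 10% autogrow event multiplies the file size by 1.1 and leaves one more fragment on disk, so counting the multiplications needed to go from 1 MB to 1 GB gives the answer. A quick sketch (plain Python, just for the arithmetic):

```python
# Count how many 10% autogrow events a 1 MB file needs to reach 1 GB.
# Each event multiplies the size by 1.1 and adds one fragment on disk.
start_mb = 1.0
target_mb = 1024.0  # 1 GB
growth_factor = 1.10

events = 0
size = start_mb
while size < target_mb:
    size *= growth_factor
    events += 1

print(events)  # prints 73
```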

    --Jeff Moden



  • What you have is not that big.

    Actually, we are running SAP systems where the biggest has 30,000 tables and is over 1.6 TB within one database.

    If you have a lot of physical disks, then I would recommend separating tempdb, the system databases, and the production DB from each other.

    Then you should identify the most frequently accessed tables and put them on a separate disk.

    There are plenty of other optimization possibilities, but you have to know your workload, your applications, and your system.



    Bye
    Gabor

  • Didn't know SAP made a version for SQL Server... that's cool.  Gotta look that one up.

    --Jeff Moden



  • I've been running it for 5 years already, and the average transaction is still below 0.5 sec with 800-1000 concurrent users and 50 GB of daily growth.

    We have some issues with reindexing (SQL2K), since we are running 24x7.

    We also had some performance problems due to the automatic updating of statistics, but since we started checking and updating them manually, it has been working fine.
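    For anyone wanting to do the same on SQL 2000, the manual approach sketched below uses sp_autostats, UPDATE STATISTICS, and DBCC INDEXDEFRAG (which runs online, useful for 24x7 systems). The table and index names are hypothetical placeholders.

```sql
-- Turn off automatic statistics updates for a volatile table...
EXEC sp_autostats 'dbo.BigTable', 'OFF';

-- ...and refresh its statistics manually during a quiet window.
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN;

-- DBCC INDEXDEFRAG defragments an index online in SQL 2000
-- (arguments: database, table, index).
DBCC INDEXDEFRAG (X, 'dbo.BigTable', 'IX_BigTable_SomeCol');
```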



    Bye
    Gabor

  • SQL 7 and 2000 had some great benchmarks on SAP. They used to be some of Microsoft's biggest testing customers.

    Daniel

    For your DB, I think you want to look at what "bad performance" actually means. Are you having lots of slow queries or just some? Are reports taking too long? Where do you want to improve things? You can definitely move to more files/filegroups, which gives you some improvement since they will use more I/O threads. But you end up with more administrative headaches, especially if you have to restore in a DR situation.

    I'd look for problem spots, check on memory, CPU, etc. and then look to move forward.

  • 30,000 tables??? WOW! How on earth does one know what each is for?

    One would need a metadata-database just to begin to suspect where to look for anything...

  • I'd like to see the create script for this DB. Permissions included, lol.

  • That's why you have module experts.

    For example, a CO or FI module expert knows his few dozen tables, a PM module expert knows the other tables, and so on.

    An empty SAP database comes with 25,000 or so tables and 20-25 GB of data. And at that point there is not a single piece of customer data inside the DB.

    One of the biggest issues where MS should improve is the maintenance of the system tables (for example, sysobjects has 93,898 rows and sysindexes 66,090). And there is no reindexing possibility (actually, for those two system tables the scan density is around 12%, where it should be around 100%).
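    For readers wondering where scan-density figures like that 12% come from: on SQL 2000 they are reported by DBCC SHOWCONTIG, e.g.:

```sql
-- Report fragmentation (including Scan Density) for a table's indexes.
DBCC SHOWCONTIG ('sysobjects') WITH TABLERESULTS;
```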



    Bye
    Gabor

  • 25,000 tables... 20-25 gig of default data... how big did they make TempDB in total (I realize you have it partitioned)?

    --Jeff Moden



  • Tempdb is not that big. It is just 9.5 GB, but of course it is located on a different physical drive.



    Bye
    Gabor

  • I always knew that there was a reason that I would not want to work with SAP - 25,000+ tables seems like a really good one.

    Jeff, I really like your "Got" list early on in the post!

    Regards,
    Rudy Komacsar
    Senior Database Administrator
    "Ave Caesar! - Morituri te salutamus."
