Huge datafile vs. performance

  • Hi everybody

    I have a big database on SQL Server SP4, with only one datafile of 170 GB.

    I am noticing performance problems with I/O, and I suspect that this big file size may be the problem.

    It is a historical database that people use to run statistics and various kinds of reports.

    What can I do to improve performance? Should I create more datafiles?

    Thanks in advance.

  • Size by itself doesn't cause that many issues. You might need to reconsider your current indexing structure, and you might need to do general maintenance of the system such as sp_updatestats and DBCC UPDATEUSAGE, as well as other minor things. It could also be general I/O issues with the drive configuration you have it on. I suggest looking at the queries being executed first and making sure you have indexes that support those queries; you can use SQL Profiler to capture them for several hours.
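
    For example, a minimal maintenance sketch ([YourDatabase] is a placeholder; run this in a maintenance window):

    -- Refresh optimizer statistics and correct space-usage metadata.
    USE [YourDatabase]
    GO
    EXEC sp_updatestats
    GO
    -- 0 = the current database.
    DBCC UPDATEUSAGE (0)
    GO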

  • You might want to consider placing your indexes in a separate filegroup whose file sits on a separate physical disk.
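
    A rough sketch of what that can look like; the database, filegroup, drive letter, table, and index names below are all placeholders:

    -- Add a filegroup backed by a file on a separate physical disk.
    ALTER DATABASE [YourDatabase] ADD FILEGROUP [INDEXES_FG]
    ALTER DATABASE [YourDatabase] ADD FILE
        (NAME = 'YourDatabase_Indexes',
         FILENAME = 'I:\Data\YourDatabase_Indexes.ndf',
         SIZE = 10240MB)
    TO FILEGROUP [INDEXES_FG]
    GO

    -- Rebuild an existing nonclustered index onto the new filegroup.
    CREATE NONCLUSTERED INDEX IX_SalesHistory_SaleDate
        ON dbo.SalesHistory (SaleDate)
        WITH DROP_EXISTING
        ON [INDEXES_FG]
    GO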

  • Beware of allocating multiple files to the same filegroup if they will be on the same disks, especially if you are already having I/O problems. Wintel servers often have limited I/O capacity compared to *nix and mainframe boxes, and using extra files to allow SQL Server to increase the number of I/O threads can often harm performance.

    The other advice already given about checking index fragmentation and having current statistics is definitely an important first step. So too is the advice to move indexes to a filegroup on separate disks from your current data file.
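
    For the fragmentation check, a sketch using the SQL Server 2000-era commands (dbo.SalesHistory is a hypothetical table name; substitute your own large tables):

    -- Report fragmentation for every index on the table.
    DBCC SHOWCONTIG ('dbo.SalesHistory') WITH TABLERESULTS, ALL_INDEXES

    -- If Logical Scan Fragmentation is high, rebuild offline...
    DBCC DBREINDEX ('dbo.SalesHistory')
    -- ...or defragment online (0 = current database, 1 = index ID).
    DBCC INDEXDEFRAG (0, 'dbo.SalesHistory', 1)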

    Original author: https://github.com/SQL-FineBuild/Common/wiki/ 1-click install and best practice configuration of SQL Server 2019, 2017, 2016, 2014, 2012, 2008 R2, 2008 and 2005.

    When I give food to the poor they call me a saint. When I ask why they are poor they call me a communist - Archbishop Hélder Câmara

  • Thanks everybody

    I think my problems occur when I do backups, reindexes, and things like that.

    The error log shows messages like:

    SQL Server has encountered 1 occurrence(s) of IO requests taking longer than 15 seconds to complete on file [H:\Data\MXZEDHIST_Data.MDF] in database [MXZEDHIST] (11).  The OS file handle is 0x000004DC.  The offset of the latest long IO is: 0x0000013f1b0000

    BobMgr::GetBuf: Bob write not complete after 60 seconds.

    Time out occurred while waiting for buffer latch type 2,bp 0x1870800, page 1:53664), stat 0x60d, object ID 2:1924917929:0, EC 0x709C5520 : 0, waittime 300. Not continuing to wait.


  • This is a disk issue. You need to upgrade the disk system.

  • What do you mean by upgrading the disks? ...

    And how can I demonstrate to the system people that we have disk problems?


    Thanks in advance.

  • You undoubtedly have a disk I/O bottleneck. The errors you posted, plus Perfmon and/or Profiler output, should be good evidence to present to the sysadmins; the query sketch at the end of this post is another way to put numbers behind it.

    As for what to do about it, there are two approaches:

    A. reduce the workload

    B. add resources

    Reducing the workload does not have to mean restricting usage; it can mean looking for inefficient queries, adding indexes, etc. Presumably you can perform these sorts of activities without needing to justify anything.

    Even after reducing the workload, you still may benefit from additional hardware. Usually, adding additional disk drives and putting the data files, log files, and tempdb all on different drives is a good first step.

    A good resource is the book SQL Server 2000 Performance Tuning by Ed Whalen et al.
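
    To put numbers behind those errors, you can also read the I/O stall counters straight out of SQL Server; a sketch, using the database name from your error log (fn_virtualfilestats is the SQL Server 2000 interface, sys.dm_io_virtual_file_stats the 2005+ equivalent):

    -- Cumulative I/O figures per file since the instance started.
    DECLARE @DbId int
    SET @DbId = DB_ID('MXZEDHIST')

    SELECT  DbId,
            FileId,
            NumberReads,
            NumberWrites,
            IoStallMS       -- total milliseconds sessions waited on I/O for this file
    FROM    ::fn_virtualfilestats(@DbId, 1)   -- file id 1 = the primary .MDF

    -- SQL Server 2005 and later:
    -- SELECT * FROM sys.dm_io_virtual_file_stats(DB_ID('MXZEDHIST'), NULL)

    From the Windows side, the Perfmon counters PhysicalDisk: Avg. Disk sec/Read and Avg. Disk sec/Write for the H: drive should tell the same story.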

  • Could you perhaps post the specs of the server itself? I'm also running into problems with historical data being retrieved and statistics being run, although we are running another data acquisition application on top of SQL Server.
