Multiple files in Primary Filegroup

  • I recently came across a large database (1 TB) that had multiple data files spread out across several drives. I understand why it was done, but what I don't understand is why all the files are in the PRIMARY filegroup. Normally when I break out data files I put them in different filegroups (one for indexes, one for read-only tables, one for heavy read/write tables, and so on).

    I have never seen 10 data files all in the PRIMARY filegroup.

    This same database also has 4 log files. I have always believed that a database can only use one log file at a time, so multiple log files don't help as far as performance is concerned.

    This is SQL Server 2012 Enterprise Edition on Windows Server 2012. The system is a VM.

    Is there an advantage to this file setup?

    Thanks!

    Jim

  • It is recommended to create multiple data files of the same size for tempdb, to take advantage of the proportional fill algorithm and reduce allocation contention. Unless you have separate filegroups, you cannot control which file a particular table is written to. And you are right about the log files: they are written sequentially, and only one file is used at a time. How are autogrowth and max size configured?
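
    Something along these lines (a rough sketch; run it in the database itself) should show the current size, growth, and max size settings per file:

    -- Sketch: list size, growth and max size for every file in the current database
    SELECT name,
           type_desc,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + ' percent'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS growth_setting,
           CASE WHEN max_size = -1
                THEN 'unlimited'
                ELSE CAST(max_size * 8 / 1024 AS varchar(20)) + ' MB'
           END AS max_size_setting
    FROM sys.database_files;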

    ----------------------------------------------------------------------------------------------------------------------------------------------------
    Roshan Joe

    Jeff Moden - Forum Etiquette: How to post data/code on a forum to get the best help

  • The files are pre-sized correctly, with autogrow on. I understand the advantages of multiple data files, but I have always seen the files assigned to different filegroups. I just found it kind of strange that all 10 data files are in the PRIMARY filegroup.

    You're right: if I create a new table or index, I don't know which file on which drive will contain it, since everything is in PRIMARY.
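
    When I do split things out, the pattern is roughly this (names made up, just to illustrate pointing objects somewhere other than PRIMARY):

    -- Hypothetical names; illustrates directing a table to a named filegroup
    ALTER DATABASE MyBigDB ADD FILEGROUP ArchiveData;

    ALTER DATABASE MyBigDB
    ADD FILE (NAME = ArchiveData1,
              FILENAME = 'E:\SQLData\MyBigDB_ArchiveData1.ndf',
              SIZE = 10GB,
              FILEGROWTH = 1GB)
    TO FILEGROUP ArchiveData;

    -- Objects created ON that filegroup can only use its files
    CREATE TABLE dbo.OrderArchive
    (
        OrderID int NOT NULL,
        OrderDate datetime NOT NULL
    ) ON ArchiveData;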

    Jim

  • joeroshan (11/30/2016)


    It is recommended to create multiple data files with same size to optimize tempdb to take advantage of proportional fill algorithm and reduce allocation contention.

    That is specifically for TempDB, not for user databases (as they don't get the same load on allocation structures), and the OP was asking about a user DB.

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • Jim Youmans-439383 (11/30/2016)


    I recently came across a large database (1 TB) that had multiple data files spread out among several drives.

    It might be an attempt to reduce IO contention, as tables and indexes in PRIMARY will be spread across all files. Whether it's effective at that, or just someone's 'bright' idea sometime in the past is another matter.
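
    A quick way to sanity-check how evenly the data actually landed across the files (a rough sketch; run it in that database):

    -- Sketch: allocated vs used space per data file
    SELECT name,
           physical_name,
           size * 8 / 1024 AS size_mb,
           FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
    FROM sys.database_files
    WHERE type_desc = 'ROWS';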

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (11/30/2016)


    joeroshan (11/30/2016)


    It is recommended to create multiple data files with same size to optimize tempdb to take advantage of proportional fill algorithm and reduce allocation contention.

    That is specifically for TempDB, not for user databases (as they don't get the same load on allocation structures), and the OP was asking about a user DB.

    That is what I meant, Gail. Sorry if I seemed to imply otherwise; maybe it's because I mentioned tempdb, the user data files, and the log files in consecutive sentences. Thank you for clarifying.

    ----------------------------------------------------------------------------------------------------------------------------------------------------
    Roshan Joe

    Jeff Moden - Forum Etiquette: How to post data/code on a forum to get the best help

  • GilaMonster (11/30/2016)


    It might be an attempt to reduce IO contention, as tables and indexes in PRIMARY will be spread across all files. Whether it's effective at that, or just someone's 'bright' idea sometime in the past is another matter.

    You might want to find out the physical disk layout, not just the logical disk layout. If this VM's drives are sourced from a SAN, there's a chance they are all LUNs on the same physical array of disks anyway. In that case, trying to get better IO throughput by splitting things across multiple logical drives only makes it harder to manage disk space on all the separate drives, without giving any IO benefit.
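
    If you want numbers rather than guesses, the per-file IO stall figures since the last restart give a rough idea of whether the separate drives are actually delivering anything (a sketch; run in the database in question):

    -- Sketch: cumulative IO counts and stalls per file for the current database
    SELECT df.name,
           df.physical_name,
           vfs.num_of_reads,
           vfs.num_of_writes,
           vfs.io_stall_read_ms,
           vfs.io_stall_write_ms
    FROM sys.dm_io_virtual_file_stats(DB_ID(), NULL) AS vfs
    JOIN sys.database_files AS df
        ON df.file_id = vfs.file_id;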

  • All the drives that the files are on are supposed to be spread out across the SAN and are all SSDs.

    Jim

  • I have the same issue: 8 data files in PRIMARY. Any good ideas on how to get rid of the extra ones? I can move the indexes and heaps to different filegroups, but PRIMARY still has 8 data files. Besides shrinking, is there a better way to handle them? Thanks.
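
    The only route I have found so far is emptying each extra file into the rest of the filegroup and then removing it, roughly like this (placeholder names; just a sketch), so anything better would be welcome:

    -- Sketch: move DataFile8's contents into the remaining PRIMARY files, then drop it.
    -- EMPTYFILE can take a long time on large files.
    DBCC SHRINKFILE (DataFile8, EMPTYFILE);
    ALTER DATABASE MyBigDB REMOVE FILE DataFile8;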
