'PRIMARY' filegroup is full

  • Hi, I get an error message in the SQL logs:

    Could not allocate space for object 'dbo.UserData'.'UserData_Url' in database 'Network_cde' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    How can I fix it?

    Thank you

  • The error message gave details on how to fix it.

    Krasavita (9/12/2011)


    Could not allocate space for object 'dbo.UserData'.'UserData_Url' in database 'Network_cde' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.
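
    Two of those options can be sketched in T-SQL. This is only a sketch: the logical file names and the path below are placeholders, since the actual file layout of Network_cde isn't shown in the thread.

    ```sql
    -- Option 1: add another file to the full filegroup.
    -- 'Network_cde_Data2' and the path are hypothetical names, not the real layout.
    ALTER DATABASE Network_cde
    ADD FILE (NAME = 'Network_cde_Data2',              -- hypothetical logical name
              FILENAME = 'D:\Data\Network_cde_2.ndf',  -- hypothetical path
              SIZE = 1024MB, FILEGROWTH = 512MB)
    TO FILEGROUP [PRIMARY];

    -- Option 2: turn autogrowth on (or up) for an existing file in that filegroup.
    -- The logical name 'Network_cde' is an assumption; check sys.database_files.
    ALTER DATABASE Network_cde
    MODIFY FILE (NAME = 'Network_cde', FILEGROWTH = 512MB);
    ```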

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • 1. Create disk space (I have 200 GB free on the drive)

    2. Setting autogrowth: all of my files except the log are set to autogrow

    3. Dropping objects: I don't know which objects need to be dropped

    --------------------------------------------------------------------------------

  • Krasavita (9/12/2011)


    3.dropping objects,I don't know which objects needs to be dropped

    Usually it's none of them.

  • What is it?

  • What is the autogrowth set to? In other words, how many MB or how many percent?
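
    One way to answer that without the SSMS dialog is to query the file metadata directly. A sketch, run in the Network_cde database:

    ```sql
    -- Growth settings per file. is_percent_growth = 1 means 'growth' is a
    -- percentage; otherwise growth and size are in 8 KB pages (* 8 / 1024 = MB).
    SELECT name,
           physical_name,
           size * 8 / 1024 AS size_mb,
           CASE WHEN is_percent_growth = 1
                THEN CAST(growth AS varchar(10)) + ' %'
                ELSE CAST(growth * 8 / 1024 AS varchar(10)) + ' MB'
           END AS growth_setting,
           max_size        -- -1 = unlimited, otherwise in 8 KB pages
    FROM sys.database_files;
    ```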

    John

  • Ninja's_RGR'us (9/12/2011)


    Krasavita (9/12/2011)


    3.dropping objects,I don't know which objects needs to be dropped

    Usually it's none of them.

    You say you don't know which objects you can drop. I'm saying that usually NOTHING can be dropped.

    You could set up an archiving procedure, but that's a whole other story.

  • What is the autogrowth set to? In other words, how many MB or how many percent?

    I have 5 data files:

    1. FileGrowth in megabytes: 500 (enable autogrowth is checked), filegroup = PRIMARY

    2. FileGrowth in megabytes: 100 (enable autogrowth is checked), filegroup = datafile2

    3. FileGrowth in megabytes: 100 (enable autogrowth is checked), filegroup = docs

    4. FileGrowth in megabytes: 100 (enable autogrowth is checked), filegroup = indexes

    5. FileGrowth in megabytes: 100 (enable autogrowth is checked), filegroup = userdata1

    I have 1 log file:

    1. FileGrowth in megabytes: 100 (enable autogrowth is checked)

    Maximum file size is restricted, set to 2,097,152 MB

  • Why 5 filegroups for this? That's very advanced tuning. And why such a small autogrowth?

    With instant file initialization you could go to at least 1 GB

    => http://sqlskills.com/blogs/Kimberly/post/Instant-Initialization-What-Why-and-How.aspx
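
    With instant file initialization enabled, bumping the growth increment is a one-line change per file. A sketch, assuming the primary data file's logical name is 'Network_cde' (check sys.database_files for the real names):

    ```sql
    -- A larger increment means fewer, and (with instant file initialization)
    -- near-instant, grow events. Note: the log file can't use instant file
    -- initialization, so its growth still costs zero-initialization time.
    ALTER DATABASE Network_cde
    MODIFY FILE (NAME = 'Network_cde',       -- logical name is an assumption
                 FILEGROWTH = 1024MB);
    ```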

  • So I will change:

    1. FileGrowth in megabytes from 500 to 1024, filegroup = PRIMARY

    2. FileGrowth in megabytes from 100 to 500, filegroup = datafile2

    3. FileGrowth in megabytes from 100 to 500, filegroup = docs

    4. FileGrowth in megabytes from 100 to 500, filegroup = indexes

    5. FileGrowth in megabytes from 100 to 500, filegroup = userdata1

    Is this correct?

    Also, my log file is 2,001 MB, its growth is restricted to 2,097,152 MB, and file growth is set to 100 MB. Should I change it?

  • The log file is harder to guess. But with such a small DB, I'd probably go with 3x the sum of the two biggest tables.

    Why did you split into all those filegroups? This is really advanced tuning and I don't see the benefit with such a small DB!

  • This was not me; someone else did it. But the database is not small, it is 130,209 GB. How can I identify the biggest tables? Do I need to restart the agent to refresh the size of the data files? Thank you

  • You can use this to find big tables, but I'm not sure what it has to do with your current problem.

    SELECT SUBSTRING(o.name, 1, 50) AS 'Table Name',
           ddps.row_count,
           ddps.used_page_count --, *
    FROM sys.indexes AS i
    INNER JOIN sys.objects AS o
            ON i.object_id = o.object_id
    INNER JOIN sys.dm_db_partition_stats AS ddps
            ON i.object_id = ddps.object_id
           AND i.index_id = ddps.index_id
    WHERE i.index_id < 2        -- heap (0) or clustered index (1): one row per table
      AND o.is_ms_shipped = 0   -- use 1 for system databases
      AND ddps.row_count >= 1
    ORDER BY ddps.used_page_count DESC
    --ORDER BY ddps.row_count DESC
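
    For the actual error, what matters more than row counts is how close each file is to full. Something like this sketch, run in the affected database:

    ```sql
    -- Allocated vs. used space per file. A PRIMARY file at ~100% used, with
    -- autogrowth off or the growth cap reached, reproduces this exact error.
    SELECT name,
           size * 8 / 1024                                     AS allocated_mb,
           FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024          AS used_mb,
           (size - FILEPROPERTY(name, 'SpaceUsed')) * 8 / 1024 AS free_mb
    FROM sys.database_files;
    ```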

  • Could it be the case that my SQL Server is running out of available process memory? How can I check how much memory is allocated to SQL Server, and how much it is using at present? How can I check whether I am hitting the memory allocated to SQL Server?

    Thank you

  • Krasavita (9/12/2011)


    Could it be the case that my SQL Server is running out of available process memory? How can I check how much memory is allocated to SQL Server, and how much it is using at present? How can I check whether I am hitting the memory allocated to SQL Server.

    Thank you

    It's not.

    Could not allocate space for object 'dbo.UserData'.'UserData_Url' in database 'Network_cde' because the 'PRIMARY' filegroup is full. Create disk space by deleting unneeded files, dropping objects in the filegroup, adding additional files to the filegroup, or setting autogrowth on for existing files in the filegroup.

    That is a disk issue. You'd know if you were overloading your RAM by watching PerfMon for page faults, or if your swap file was exploding.

    Something tried to put data into UserData_Url that was too big for it. Does that table have a LOB? Is that LOB stored in PRIMARY, or is its TEXTIMAGE_ON filegroup set to a secondary one?

    You need more space or you need to archive off old data. There's really no way around it unless you can track down whether someone was doing something inappropriate that shouldn't have happened anyway. The other possibility is that you're having LOB space reclamation issues, but that's not an avenue you want to go down until you're SURE it's the problem. The solution is a real hassle.
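
    To check where that table's LOB data actually lives, a sketch against the catalog views:

    ```sql
    -- lob_data_space_id = 0 means the table has no LOB columns; otherwise it
    -- points at the filegroup holding the TEXT/IMAGE/MAX data.
    SELECT t.name  AS table_name,
           ds.name AS lob_filegroup
    FROM sys.tables AS t
    LEFT JOIN sys.data_spaces AS ds
           ON t.lob_data_space_id = ds.data_space_id
    WHERE t.name = 'UserData';  -- table name taken from the error message
    ```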

    Btw, was that number above correct? Is this really 125 terabytes of data?


    - Craig Farrell

    Never stop learning, even if it hurts. Ego bruises are practically mandatory as you learn unless you've never risked enough to make a mistake.

    For better assistance in answering your questions | Forum Netiquette
    For index/tuning help, follow these directions. | Tally Tables

    Twitter: @AnyWayDBA

Viewing 15 posts - 1 through 15 (of 21 total)

You must be logged in to reply to this topic. Login to reply