Internal Query Processor Error: The query processor ran out of stack space during query optimization.

  • Server: Msg 8621, Level 17, State 1, Line 1

    Internal Query Processor Error: The query processor ran out of stack space during query optimization.

    I got the above error when I tried to delete a row from a specific table (delete from test.settings where id = 10054;). That table has a huge number of foreign keys referencing it (1009 FKs), so I suspected the keys were related to the error. I dropped the primary key on the primary table, which removed all the FK relationships on the dependent tables, and after that I issued the delete statement again and it worked fine.

    My question: what is the best solution or workaround to allow the delete to run without getting this error?

    Could I maybe use triggers?

    Help me?!! 
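    For reference, the foreign keys pointing at the table can be listed with sp_fkeys; I am assuming owner 'test' and table 'settings' here, as in my delete statement:

    -- list every FK constraint that references test.settings (SQL Server 2000 catalog procedure)
    EXEC sp_fkeys @pktable_name = 'settings', @pktable_owner = 'test'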

     

  • See, if this helps:

    http://support.microsoft.com/kb/q288095/

    May I add that I have never heard of a table with that many FKs. Have you ever thought of doing a bit of normalization on your schema?

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • Thank you Frank. Unfortunately this is not helpful.

    BTW, what kind of normalization are you talking about? Can you explain, please?

  • Sorry, that link was the only one I remembered.

    Well, I was talking about this normalization (taken from BOL):

    Normalization

    The logical design of the database, including the tables and the relationships between them, is the core of an optimized relational database. A good logical database design can lay the foundation for optimal database and application performance. A poor logical database design can impair the performance of the entire system.

    Normalizing a logical database design involves using formal methods to separate the data into multiple, related tables. A greater number of narrow tables (with fewer columns) is characteristic of a normalized database. A few wide tables (with more columns) is characteristic of a non-normalized database.

    Reasonable normalization often improves performance. When useful indexes are available, the Microsoft® SQL Server™ 2000 query optimizer is efficient at selecting rapid, efficient joins between tables.

    Some of the benefits of normalization include:

    • Faster sorting and index creation.
    • A larger number of clustered indexes. For more information, see Clustered Indexes.
    • Narrower and more compact indexes.
    • Fewer indexes per table, which improves the performance of INSERT, UPDATE, and DELETE statements.
    • Fewer null values and less opportunity for inconsistency, which increase database compactness.

    As normalization increases, so do the number and complexity of joins required to retrieve data. Too many complex relational joins between too many tables can hinder performance. Reasonable normalization often includes few regularly executed queries that use joins involving more than four tables.

    Sometimes the logical database design is already fixed and total redesign is not feasible. Even then, however, it might be possible to normalize a large table selectively into several smaller tables. If the database is accessed through stored procedures, this schema change could take place without affecting applications. If not, it might be possible to create a view that hides the schema change from the applications.

    Achieving a Well-Designed Database

    In relational-database design theory, normalization rules identify certain attributes that must be present or absent in a well-designed database. A complete discussion of normalization rules goes well beyond the scope of this topic. However, there are a few rules that can help you achieve a sound database design:

    • A table should have an identifier.

      The fundamental rule of database design theory is that each table should have a unique row identifier, a column or set of columns used to distinguish any single record from every other record in the table. Each table should have an ID column, and no two records can share the same ID value. The column or columns serving as the unique row identifier for a table is the primary key of the table.

    • A table should store only data for a single type of entity.

      Attempting to store too much information in a table can prevent the efficient and reliable management of the data in the table. In the pubs database in SQL Server 2000, the titles and publishers information is stored in two separate tables. Although it is possible to have columns that contain information for both the book and the publisher in the titles table, this design leads to several problems. The publisher information must be added and stored redundantly for each book published by a publisher. This uses extra storage space in the database. If the address for the publisher changes, the change must be made for each book. And if the last book for a publisher is removed from the titles table, the information for that publisher is lost.

      In the pubs database, with the information for books and publishers stored in the titles and publishers tables, the information about the publisher has to be entered only once and then linked to each book. Therefore, if the publisher information is changed, it must be changed in only one place, and the publisher information will be there even if the publisher has no books in the database.

    • A table should avoid nullable columns.

      Tables can have columns defined to allow null values. A null value indicates that there is no value. Although it can be useful to allow null values in isolated cases, it is best to use them sparingly because they require special handling that increases the complexity of data operations. If you have a table with several nullable columns and several of the rows have null values in the columns, you should consider placing these columns in another table linked to the primary table. Storing the data in two separate tables allows the primary table to be simple in design but able to accommodate the occasional need for storing this information.

    • A table should not have repeating values or columns.

      The table for an item in the database should not contain a list of values for a specific piece of information. For example, a book in the pubs database might be coauthored. If there is a column in the titles table for the name of the author, this presents a problem. One solution is to store the name of both authors in the column, but this makes it difficult to show a list of the individual authors. Another solution is to change the structure of the table to add another column for the name of the second author, but this accommodates only two authors. Yet another column must be added if a book has three authors.

      If you find that you need to store a list of values in a single column, or if you have multiple columns for a single piece of data (au_lname1, au_lname2, and so on), you should consider placing the duplicated data in another table with a link back to the primary table. The pubs database has a table for book information and another table that stores only the ID values for the books and the IDs of the authors of the books. This design allows any number of authors for a book without modifying the definition of the table and allocates no unused storage space for books with a single author.
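    A minimal sketch of that last rule, loosely modelled on the pubs tables mentioned above (column lists simplified here, not the exact pubs definitions): instead of au_lname1, au_lname2, ... columns on the book row, a separate link table carries one row per book/author pair.

    -- simplified pubs-style schema: one row per book/author pair in the link table
    CREATE TABLE authors (au_id VARCHAR(11) PRIMARY KEY, au_lname VARCHAR(40) NOT NULL)
    CREATE TABLE titles  (title_id VARCHAR(6) PRIMARY KEY, title VARCHAR(80) NOT NULL)
    CREATE TABLE titleauthor (
        au_id    VARCHAR(11) NOT NULL REFERENCES authors(au_id),
        title_id VARCHAR(6)  NOT NULL REFERENCES titles(title_id),
        PRIMARY KEY (au_id, title_id)
    )
    -- a book with any number of co-authors is just additional rows here; no schema change needed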

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • Thanks Frank, but normalization has already been applied and is something we are aware of.

    I think the reason is the large number of FKs that reference that table.

    I found the following limits documented for SQL Server 2000:

    Foreign key table references per table: 253
    REFERENCES per table: 253

    http://whidbey.msdn.microsoft.com/library/default.asp?url=/library/en-us/architec/8_ar_ts_8dbn.asp

    What do you think? Is that right?
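    This is roughly how I would check the count against that limit (SQL Server 2000 system tables; assuming the table is test.settings):

    -- count the FK constraints that reference the primary table
    SELECT COUNT(DISTINCT constid) AS fk_references
    FROM sysforeignkeys
    WHERE rkeyid = OBJECT_ID('test.settings')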

  • Damn, I didn't think of that. I have never tried it before with that many FKs, but this could very well be the reason. What is really interesting, though, is how you managed to create this table in the first place. Can you script the table structure along with the constraints to reproduce the error?

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • Frank, I think you are confused.

    My case is: I have one primary table, and there are 1009 tables with FKs referencing that primary table.

    Is that understood?

     

  • Sir, yes, Sir! This is understood!

    I just made a small script to test this. All the tables and references are created with no error message. However, when I try to delete a row, I get the same error. The reason? As you suspected, I believe it is the number of references. Workaround? None that I can think of right now. Sorry!

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • It's ok. BTW, if you find a workaround, please let me know; if I find anything, I'll post it here as well.

  • Frank, ok

  • Here we go:

    -- repro: one primary table referenced by 1009 child tables
    create database test_me
    go

    use test_me
    go

    declare @i int
    declare @stmt nvarchar(4000)

    -- the referenced (primary) table
    create table t_master (c1 int primary key default 1)

    -- create 1009 child tables, each with a FK back to t_master
    set @i = 1

    while @i <= 1009
    begin
        set @stmt = 'create table t_child' + cast(@i as nvarchar)
                  + ' (c1 int foreign key references t_master(c1))'
        exec sp_executesql @stmt
        set @i = @i + 1
    end

    insert into t_master default values

    -- this delete raises the error
    delete from t_master

    -- clean up
    use master
    drop database test_me

    yields:

    Server: Msg 8621, Level 17, State 1, Line 1

    Internal Query Processor Error: The query processor ran out of stack space during query optimization.

    --
    Frank Kalis
    Microsoft SQL Server MVP
    Webmaster: http://www.insidesql.org/blogs
    My blog: http://www.insidesql.org/blogs/frankkalis/

  • Hello Frank, Any workaround??

  • I think you have found the answer to your own question. You have gone above the maximum capacity specifications for SQL Server 2000.

    Solution: Redesign

    HTH


    * Noel

  • Agreed, the workaround for having > 1000 FK references to a single table is to re-think, re-make, re-design the whole thing. This is not a normal number of FK's.

    /Kenneth

  • Dear, I faced the same problem, but I found the solution. The problem in my case was a datatype mismatch: I was comparing the INT datatype with DECIMAL(24,0). Once I corrected this, my problem went away. Maybe this can help you.

    Regards,
    Akash Agarwal
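    To illustrate roughly what I mean (table and column names here are made up): one side of the comparison was declared DECIMAL(24,0) while the other was INT; once both were declared with the same type, the error went away.

    -- before: child.parent_id was DECIMAL(24,0) while parent.id is INT,
    -- so every comparison forced an implicit conversion
    -- after: both columns declared with the same type
    CREATE TABLE parent (id INT PRIMARY KEY)
    CREATE TABLE child  (parent_id INT NOT NULL)   -- was DECIMAL(24,0)

    SELECT c.parent_id
    FROM child AS c
    JOIN parent AS p ON p.id = c.parent_id         -- types now match, no conversion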

