Server: Msg 8621, Level 17, State 1, Line 1
Internal Query Processor Error: The query processor ran out of stack space during query optimization.
I got the above error when I tried to delete a row from a specific table (delete from test.settings where id = 10054;). That table has a very large number of foreign keys referencing it (1009 FKs), so I suspected the keys were related to the error. I dropped the primary key on the primary table, which cascaded away all the FK constraints on the dependent tables; after that I issued the delete statement again and it worked fine.
My question: what's the best solution or workaround to allow the delete to succeed without getting this error?
Could I maybe use triggers?
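For context, a minimal sketch of the situation (the dependent table's name and columns are illustrative; in the real schema there are roughly 1009 such referencing tables):

```sql
-- Primary table; about 1009 other tables reference it via foreign keys.
CREATE TABLE test.settings (
    id INT NOT NULL PRIMARY KEY
);

-- One of the many dependent tables (name and columns are illustrative).
CREATE TABLE test.dependent_001 (
    id          INT NOT NULL PRIMARY KEY,
    settings_id INT NOT NULL
        CONSTRAINT FK_dependent_001_settings
        REFERENCES test.settings (id)
);

-- With enough such referencing tables, this DELETE fails with Msg 8621:
DELETE FROM test.settings WHERE id = 10054;
```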
See if this helps:
May I add that I have never heard of a table with that many FKs. Have you ever thought of doing a bit of normalization on your schema?
Thank you Frank; unfortunately this is not helpful.
By the way, what kind of normalization are you talking about? Can you explain, please?
Sorry, that link was the only one I remembered.
Well, I was talking about this normalization (taken from BOL):
The logical design of the database, including the tables and the relationships between them, is the core of an optimized relational database. A good logical database design can lay the foundation for optimal database and application performance. A poor logical database design can impair the performance of the entire system.
Normalizing a logical database design involves using formal methods to separate the data into multiple, related tables. A greater number of narrow tables (with fewer columns) is characteristic of a normalized database. A few wide tables (with more columns) is characteristic of a non-normalized database.
Reasonable normalization often improves performance. When useful indexes are available, the Microsoft® SQL Server™ 2000 query optimizer is efficient at selecting rapid, efficient joins between tables.
Some of the benefits of normalization include:
As normalization increases, so do the number and complexity of joins required to retrieve data. Too many complex relational joins between too many tables can hinder performance. Reasonable normalization often includes few regularly executed queries that use joins involving more than four tables.
Sometimes the logical database design is already fixed and total redesign is not feasible. Even then, however, it might be possible to normalize a large table selectively into several smaller tables. If the database is accessed through stored procedures, this schema change could take place without affecting applications. If not, it might be possible to create a view that hides the schema change from the applications.
In relational-database design theory, normalization rules identify certain attributes that must be present or absent in a well-designed database. A complete discussion of normalization rules goes well beyond the scope of this topic. However, there are a few rules that can help you achieve a sound database design:
The fundamental rule of database design theory is that each table should have a unique row identifier, a column or set of columns used to distinguish any single record from every other record in the table. Each table should have an ID column, and no two records can share the same ID value. The column or columns serving as the unique row identifier for a table is the primary key of the table.
Attempting to store too much information in a table can prevent the efficient and reliable management of the data in the table. In the pubs database in SQL Server 2000, the titles and publishers information is stored in two separate tables. Although it is possible to have columns that contain information for both the book and the publisher in the titles table, this design leads to several problems. The publisher information must be added and stored redundantly for each book published by a publisher. This uses extra storage space in the database. If the address for the publisher changes, the change must be made for each book. And if the last book for a publisher is removed from the titles table, the information for that publisher is lost.
In the pubs database, with the information for books and publishers stored in the titles and publishers tables, the information about the publisher has to be entered only once and then linked to each book. Therefore, if the publisher information is changed, it must be changed in only one place, and the publisher information will be there even if the publisher has no books in the database.
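As a concrete illustration of the titles/publishers split described above, here is pubs-style DDL (simplified; the real pubs tables have more columns):

```sql
-- Publisher information is stored once and linked from each book.
CREATE TABLE publishers (
    pub_id   CHAR(4)     NOT NULL PRIMARY KEY,
    pub_name VARCHAR(40) NULL
);

CREATE TABLE titles (
    title_id CHAR(6)     NOT NULL PRIMARY KEY,
    title    VARCHAR(80) NOT NULL,
    pub_id   CHAR(4)     NULL REFERENCES publishers (pub_id)
);
```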
Tables can have columns defined to allow null values. A null value indicates that there is no value. Although it can be useful to allow null values in isolated cases, it is best to use them sparingly because they require special handling that increases the complexity of data operations. If you have a table with several nullable columns and several of the rows have null values in the columns, you should consider placing these columns in another table linked to the primary table. Storing the data in two separate tables allows the primary table to be simple in design but able to accommodate the occasional need for storing this information.
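The "move sparse nullable columns to a linked table" idea above can be sketched like this (table and column names are made up for the example):

```sql
-- Core table stays narrow and simple.
CREATE TABLE books (
    book_id INT         NOT NULL PRIMARY KEY,
    title   VARCHAR(80) NOT NULL
);

-- Rarely-populated columns live in an optional side table,
-- with one row only for each book that actually has the extra data.
CREATE TABLE book_extras (
    book_id  INT          NOT NULL PRIMARY KEY
        REFERENCES books (book_id),
    foreword VARCHAR(255) NULL,
    errata   VARCHAR(255) NULL
);
```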
The table for an item in the database should not contain a list of values for a specific piece of information. For example, a book in the pubs database might be coauthored. If there is a column in the titles table for the name of the author, this presents a problem. One solution is to store the name of both authors in the column, but this makes it difficult to show a list of the individual authors. Another solution is to change the structure of the table to add another column for the name of the second author, but this accommodates only two authors. Yet another column must be added if a book has three authors.
If you find that you need to store a list of values in a single column, or if you have multiple columns for a single piece of data (au_lname1, au_lname2, and so on), you should consider placing the duplicated data in another table with a link back to the primary table. The pubs database has a table for book information and another table that stores only the ID values for the books and the IDs of the authors of the books. This design allows any number of authors for a book without modifying the definition of the table and allocates no unused storage space for books with a single author.
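The author/title arrangement described above is the classic many-to-many junction table; a simplified pubs-style sketch:

```sql
CREATE TABLE authors (
    au_id    VARCHAR(11) NOT NULL PRIMARY KEY,
    au_lname VARCHAR(40) NOT NULL
);

-- Minimal titles table so the sketch stands alone.
CREATE TABLE titles (
    title_id CHAR(6)     NOT NULL PRIMARY KEY,
    title    VARCHAR(80) NOT NULL
);

-- Junction table: any number of authors per title, no wasted columns.
CREATE TABLE titleauthor (
    au_id    VARCHAR(11) NOT NULL REFERENCES authors (au_id),
    title_id CHAR(6)     NOT NULL REFERENCES titles (title_id),
    PRIMARY KEY (au_id, title_id)
);
```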
Thanks Frank, but normalization has already been taken care of; we are aware of it.
I think the reason is the high number of FKs that reference that table.
I found the following limits documented for SQL Server 2000:
Foreign key table references per table: 253
REFERENCES per table: 253
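To check the actual count against that limit, a query along these lines should work on SQL Server 2000 (sysforeignkeys is the 2000-era system table; the table name comes from the original post):

```sql
-- Number of distinct foreign key constraints that reference test.settings.
-- DISTINCT is needed because a multi-column FK occupies one row per column.
SELECT COUNT(DISTINCT constid) AS referencing_fks
FROM sysforeignkeys
WHERE rkeyid = OBJECT_ID('test.settings');
```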
What do you think? Is that right?
Frank, I think you are confused.
My case is: I have one primary table, and 1009 other tables have foreign keys referencing it.
Is that clear?
Sir, yes, Sir! This is understood!
I just made a small script to test this. All the tables and references are created with no error message. However, when I try to delete a row, I get the same error. The reason? As you suspected, I believe it is the number of references. A workaround? None that I can think of right now. Sorry!
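One possible approach, offered as a sketch rather than a tested fix: temporarily disable the referencing constraints, run the DELETE, then re-enable them. Be aware that while the constraints are disabled nothing prevents orphaned child rows, and re-enabling WITH CHECK re-validates every child table, which can be slow. A generator query for SQL Server 2000:

```sql
-- Generate ALTER TABLE ... NOCHECK CONSTRAINT for every FK that
-- references test.settings. Run the generated statements, perform the
-- DELETE, then generate and run the matching
-- ALTER TABLE ... WITH CHECK CHECK CONSTRAINT statements to re-enable.
SELECT DISTINCT
       'ALTER TABLE ' + OBJECT_NAME(fkeyid)
     + ' NOCHECK CONSTRAINT ' + OBJECT_NAME(constid)
FROM sysforeignkeys
WHERE rkeyid = OBJECT_ID('test.settings');
```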