October 18, 2012 at 1:28 pm
Hi,
I know that, when using UPDATABLE VIEWS, INSERT statements are not allowed if a member table contains a column with an identity property. But how can I work around that requirement without altering my existing stored procedures?
For instance ...
Let's say I have these two tables, where c2 MUST BE IDENTITY!
CREATE TABLE dbo.MyTable_A
(
c1 int PRIMARY KEY CHECK (c1 BETWEEN 12501 AND 1000000000),
c2 int IDENTITY
)
GO
CREATE TABLE dbo.MyTable_B
(
c1 int PRIMARY KEY CHECK (c1 BETWEEN 1 AND 12500),
c2 int
)
GO
Assuming that I have a view over both, using UNION ALL, any INSERT statement will fail. But I can't remove the IDENTITY property from c2, because it is a surrogate key and I have no other way to ensure uniqueness.
Any ideas?
By the way, it may be possible via TRIGGERS, but I do not want to take that route.
Thanks in advance,
October 18, 2012 at 1:33 pm
well, you identified the fix: creating an INSTEAD OF trigger on the view that inserts the data into the "right" table.
but if you don't want to use a trigger, then there is no way to do what you are asking...you've eliminated the only solution, as far as I know.
Lowell
October 18, 2012 at 1:41 pm
Lowell (10/18/2012)
well, you identified the fix: creating an INSTEAD OF trigger on the view that inserts the data into the "right" table. but if you don't want to use a trigger, then there is no way to do what you are asking...you've eliminated the only solution, as far as I know.
Arghhhhhhhhhhhhhh! .... hahahaha...
That is bad, bad news ... I miss the Enterprise edition of MS-SQL at my new job (table partitioning), lol ...
Ok, so, will I have to recreate my existing stored procedures or add that to them? I did have an idea that a trigger might work around this, but honestly, I have not tried it before. What would the logic be ...
Let's say I use this simple code ...
CREATE TABLE dbo.MyTable_A
(
c1 int PRIMARY KEY CHECK (c1 BETWEEN 11 AND 20),
c2 int IDENTITY
)
GO
CREATE TABLE dbo.MyTable_B
(
c1 int PRIMARY KEY CHECK (c1 BETWEEN 1 AND 10),
c2 int
)
GO
CREATE VIEW dbo.MyTable
AS
SELECT
c1, c2
FROM
dbo.MyTable_A
UNION ALL
SELECT
c1, c2
FROM
dbo.MyTable_B
GO
October 18, 2012 at 2:16 pm
ok i can do an instead of trigger;
here's a simple version of your example:
note how the inserts into MyTable_A ignore whatever was passed for the c2 column
CREATE TABLE dbo.MyTable_A
(
c1 int PRIMARY KEY CHECK (c1 BETWEEN 11 AND 20),
c2 int IDENTITY
)
GO
CREATE TABLE dbo.MyTable_B
(
c1 int PRIMARY KEY CHECK (c1 BETWEEN 1 AND 10),
c2 int
)
GO
CREATE VIEW dbo.MyTable
AS
SELECT
c1, c2
FROM
dbo.MyTable_A
UNION ALL
SELECT
c1, c2
FROM
dbo.MyTable_B
GO
CREATE TRIGGER TR_InsteadOf_For_MyTable ON MyTable --well, it's really a view, but hey.
INSTEAD OF INSERT
AS
BEGIN --TRIGGER
--based on the PK, insert into either of the tables. allow built in error for duplicate PK's received.
INSERT INTO MyTable_A(c1) --has an identity
SELECT c1 FROM INSERTED WHERE c1 BETWEEN 11 AND 20
INSERT INTO MyTable_B(c1,c2) --no identity, so both columns are inserted
SELECT c1,c2 FROM INSERTED WHERE c1 BETWEEN 1 AND 10
--
END --TRIGGER
GO
INSERT INTO MyTable(c1,c2)
SELECT 1,1 UNION ALL
SELECT 2,1 UNION ALL
SELECT 11,1 UNION ALL
SELECT 12,1 UNION ALL
SELECT 13,2
SELECT * FROM MyTable
SELECT * FROM MyTable_A
SELECT * FROM MyTable_B
Lowell
October 18, 2012 at 2:44 pm
Do I have to use SET IDENTITY_INSERT in the trigger itself? When inserting, I am getting this error ...
Cannot insert explicit value for identity column in table 'MyTable_A' when IDENTITY_INSERT is set to OFF.
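For what it's worth, SET IDENTITY_INSERT can be issued inside the trigger body if the incoming key values really must be preserved, although the cleaner route (discussed below) is to let the trigger omit the identity column entirely. A rough, untested sketch, assuming c1 is the IDENTITY column of MyTable_A as in the modified schema posted further down:
ALTER TRIGGER TR_InsteadOf_For_MyTable ON dbo.MyTable
INSTEAD OF INSERT
AS
BEGIN
--hypothetical sketch only: preserve explicit key values by toggling
--IDENTITY_INSERT for the member table that owns the identity column.
--this needs ALTER permission on the table, and only one table per session
--can have IDENTITY_INSERT ON at a time.
SET IDENTITY_INSERT dbo.MyTable_A ON
INSERT INTO dbo.MyTable_A (c1, c2) --identity column must be listed explicitly
SELECT c1, c2 FROM INSERTED WHERE c1 BETWEEN 11 AND 20
SET IDENTITY_INSERT dbo.MyTable_A OFF
INSERT INTO dbo.MyTable_B (c1, c2)
SELECT c1, c2 FROM INSERTED WHERE c1 BETWEEN 1 AND 10
END
GO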
October 18, 2012 at 3:01 pm
sql-lover (10/18/2012)
Do I have to use SET IDENTITY_INSERT in the trigger itself? When inserting, I am getting this error ...
Cannot insert explicit value for identity column in table 'MyTable_A' when IDENTITY_INSERT is set to OFF.
well, you have changed the code I gave you for the example then. My code handled it correctly with the existing schema we have so far.
would the end user/application actually KNOW what the new identity is supposed to be? what if the value already existed?
identities are usually there for a reason...no one enters them manually.
i'd say your logic is incorrect.
it's an anonymous incrementing number...why would someone second-guess it and choose their own value?
Probably abstracting everything out to "Table_A" and anonymous column names has obscured the business logic.
show your code, and explain what you are trying to do.
Lowell
October 18, 2012 at 3:12 pm
That is correct!
Like I explained, the PK is a surrogate key. MS-SQL needs to generate the key itself, no manual intervention.
Here's the modified code ...
IF OBJECT_ID ('dbo.MyTable_A') IS NOT NULL
DROP TABLE dbo.MyTable_A;
GO
IF OBJECT_ID ('dbo.MyTable_B') IS NOT NULL
DROP TABLE dbo.MyTable_B;
GO
IF OBJECT_ID ('dbo.MyTable') IS NOT NULL
DROP VIEW dbo.MyTable;
GO
CREATE TABLE MyTable_A
(
c1 int IDENTITY PRIMARY KEY CHECK (c1 BETWEEN 11 AND 20) NOT NULL,
c2 int
)
GO
CREATE TABLE MyTable_B
(
c1 int PRIMARY KEY CHECK (c1 BETWEEN 1 AND 10) NOT NULL ,
c2 int
)
GO
CREATE VIEW dbo.MyTable
AS
SELECT
c1, c2
FROM
dbo.MyTable_A
UNION ALL
SELECT
c1, c2
FROM
dbo.MyTable_B
GO
CREATE TRIGGER TR_InsteadOf_For_MyTable ON MyTable INSTEAD OF INSERT
AS
BEGIN
INSERT INTO MyTable_A(c1)
SELECT c1 FROM INSERTED WHERE c1 BETWEEN 11 AND 20
INSERT INTO MyTable_B(c1)
SELECT c1 FROM INSERTED WHERE c1 BETWEEN 1 AND 10
END
Assuming MyTable_B has the historical data, all new inserts should go to A; hence, I removed the identity from c1 on B.
When doing a simple insert like this ...
INSERT INTO dbo.MyTable (c2) VALUES (1000);
It fails ...
The real tables are more complex; I just simplified them in the above snippet. But my point is, c1 needs to be the PK and auto-increment by 1. I will move 80% of the whole table, based on c1, to MyTable_B ... and 20% will stay in MyTable_A. MyTable will be the view, keeping the same object name as the original table so the software and stored procedures will not break.
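For illustration only, a hedged sketch of how the trigger could be adjusted to match this modified schema: since c1 is now the IDENTITY column in MyTable_A, the trigger has to leave c1 out of the insert list and let the IDENTITY property generate the key (and, in this toy example, the seed would also need to start inside the CHECK range, e.g. IDENTITY(11,1), or the generated value would violate the constraint).
ALTER TRIGGER TR_InsteadOf_For_MyTable ON dbo.MyTable
INSTEAD OF INSERT
AS
BEGIN
--sketch only: every new row goes to the live table, and c1 is omitted
--so the IDENTITY property generates the surrogate key
INSERT INTO dbo.MyTable_A (c2)
SELECT c2 FROM INSERTED
END
GO
--because the view column c1 is non-nullable, an insert through the view may
--still need to pass a placeholder for c1, which the trigger simply ignores:
INSERT INTO dbo.MyTable (c1, c2) VALUES (0, 1000);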
October 18, 2012 at 10:32 pm
sql-lover (10/18/2012)
That is correct! Like I explained, the PK is a surrogate key. MS-SQL needs to generate the key itself, no manual intervention.
Here's the modified code ...
IF OBJECT_ID ('dbo.MyTable_A') IS NOT NULL
DROP TABLE dbo.MyTable_A;
GO
IF OBJECT_ID ('dbo.MyTable_B') IS NOT NULL
DROP TABLE dbo.MyTable_B;
GO
IF OBJECT_ID ('dbo.MyTable') IS NOT NULL
DROP VIEW dbo.MyTable;
GO
CREATE TABLE MyTable_A
(
c1 int IDENTITY PRIMARY KEY CHECK (c1 BETWEEN 11 AND 20) NOT NULL,
c2 int
)
GO
CREATE TABLE MyTable_B
(
c1 int PRIMARY KEY CHECK (c1 BETWEEN 1 AND 10) NOT NULL ,
c2 int
)
GO
CREATE VIEW dbo.MyTable
AS
SELECT
c1, c2
FROM
dbo.MyTable_A
UNION ALL
SELECT
c1, c2
FROM
dbo.MyTable_B
GO
CREATE TRIGGER TR_InsteadOf_For_MyTable ON MyTable INSTEAD OF INSERT
AS
BEGIN
INSERT INTO MyTable_A(c1)
SELECT c1 FROM INSERTED WHERE c1 BETWEEN 11 AND 20
INSERT INTO MyTable_B(c1)
SELECT c1 FROM INSERTED WHERE c1 BETWEEN 1 AND 10
END
Assuming MyTable_B has the historical data, all new inserts should go to A; hence, I removed the identity from c1 on B.
When doing a simple insert like this ...
INSERT INTO dbo.MyTable (c2) VALUES (1000);
It fails ...
The real tables are more complex; I just simplified them in the above snippet. But my point is, c1 needs to be the PK and auto-increment by 1. I will move 80% of the whole table, based on c1, to MyTable_B ... and 20% will stay in MyTable_A. MyTable will be the view, keeping the same object name as the original table so the software and stored procedures will not break.
Why are you limiting the identity column c1 in MyTable_A to only have a value between 11 and 20? That isn't how identity columns work.
October 18, 2012 at 11:38 pm
Lynn Pettis (10/18/2012)
Why are you limiting the identity column c1 in MyTable_A to only have a value between 11 and 20? That isn't how identity columns work.
Again,
It was an example.
In real life, it won't be 20, but whatever the max value is for BIGINT or INT
October 19, 2012 at 5:14 am
sql-lover (10/18/2012)
Like I explained, the PK is a surrogate key. MS-SQL needs to generate the key itself, no manual intervention.
The real tables are more complex; I just simplified them in the above snippet. But my point is, c1 needs to be the PK and auto-increment by 1. I will move 80% of the whole table, based on c1, to MyTable_B ... and 20% will stay in MyTable_A. MyTable will be the view, keeping the same object name as the original table so the software and stored procedures will not break.
ok show us the actual table structures, and identify what would normally be inserted/updated.
with only two columns in the example, it doesn't make any sense; we need a better picture than an abstract idea to really help.
the INSTEAD OF trigger simply ignores any PK information passed for inserts, but uses it for updates and deletes...i would go with separate triggers if the logic gets complex.
Lowell
October 19, 2012 at 7:09 am
Lowell (10/19/2012)
sql-lover (10/18/2012)
Like I explained, the PK is a surrogate key. MS-SQL needs to generate the key itself, no manual intervention.
The real tables are more complex; I just simplified them in the above snippet. But my point is, c1 needs to be the PK and auto-increment by 1. I will move 80% of the whole table, based on c1, to MyTable_B ... and 20% will stay in MyTable_A. MyTable will be the view, keeping the same object name as the original table so the software and stored procedures will not break.
ok show us the actual table structures, and identify what would normally be inserted/updated.
with only two columns in the example, it doesn't make any sense; we need a better picture than an abstract idea to really help.
the INSTEAD OF trigger simply ignores any PK information passed for inserts, but uses it for updates and deletes...i would go with separate triggers if the logic gets complex.
Apologies,
Maybe I am not following, or I am not explaining myself properly, but I do not see the reason why we cannot elaborate an answer with the two column tables I provided.
The objective is simple.
One table with two columns; let's call it MyTable. Col1 has an identity and auto-increments; it is the PK and is used to identify each row. The other column holds some type of data.
MyTable is too big, so it needs to be divided in two. Col1 is INT, so the upper limit can't be greater than that data type's maximum. To that end, we create two tables and one view: MyTableA, MyTableB, and MyTable (the view, which now has the same name as the previous table).
MyTableB becomes the archiving table, which was previously MyTable (that object name is now used for the view). And MyTableA is the active table, with the most recent identity numbers at the top.
If the model works, people can do SELECTs (tested already, no issues there) and also insert or update. But the queries will run faster, as I won't have millions of records to traverse but only a few thousand; this is a proven concept, no need to elaborate. I just need to translate it into a view over two tables.
And before someone asks me this (I think I already clarified): no ... I cannot trim or move old data out of the live table .... also, I cannot use partitioning, as it is not the right MS-SQL edition.
I just need to create a view that keeps the same name as the previous table and points to two physical tables instead: archiving (MyTableB) and live (MyTableA).
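To make that end state concrete, here is one possible sequence for the swap, sketched with illustrative names and numbers (the identity seed of 12501 is just taken from the first post; the real seed would come from the current maximum c1):
--hedged sketch only; object names and the seed are illustrative
EXEC sp_rename 'dbo.MyTable', 'MyTable_B'; --the old table becomes the archive
GO
CREATE TABLE dbo.MyTable_A --the new live table
(
c1 int IDENTITY(12501, 1) NOT NULL PRIMARY KEY, --seed above the archived keys
c2 int
)
GO
CREATE VIEW dbo.MyTable --the view reclaims the old object name
AS
SELECT c1, c2 FROM dbo.MyTable_A
UNION ALL
SELECT c1, c2 FROM dbo.MyTable_B
GO
--moving the most recent ~20% of rows from MyTable_B into MyTable_A (if desired)
--would be a separate step, with the seed adjusted accordingly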
October 19, 2012 at 7:22 am
Create table statements from initial post:
CREATE TABLE dbo.MyTable_A(
c1 int PRIMARY KEY CHECK (c1 BETWEEN 12501 AND 1000000000),
c2 int IDENTITY)
GO
CREATE TABLE dbo.MyTable_B(
c1 int PRIMARY KEY CHECK (c1 BETWEEN 1 AND 12500),
c2 int)
GO
Subsequent create table statements later in the thread:
CREATE TABLE MyTable_A(
c1 int IDENTITY PRIMARY KEY CHECK (c1 BETWEEN 11 AND 20) NOT NULL,
c2 int )
GO
CREATE TABLE MyTable_B(
c1 int PRIMARY KEY CHECK (c1 BETWEEN 1 AND 10) NOT NULL ,
c2 int)
GO
Look above. Do you see any discrepancies between the code you have posted? We need to know exactly what it is you are trying to accomplish. Based on the initial create table statements, the identity column is not the PK but the second column in MyTable_A. Based on the initial create table statements, inserts into either table through the view will work IF the insert only references columns in one of the underlying tables.
What would help, besides consistency in your posts, is to know the full and actual error messages you are dealing with. Please remember, we can't see what you see. You have to provide us with the information needed to be able to assist you. We aren't mind readers, and most of us don't have working crystal balls.
October 19, 2012 at 7:29 am
Lynn Pettis (10/19/2012)
Create table statements from initial post:
CREATE TABLE dbo.MyTable_A(
c1 int PRIMARY KEY CHECK (c1 BETWEEN 12501 AND 1000000000),
c2 int IDENTITY)
GO
CREATE TABLE dbo.MyTable_B(
c1 int PRIMARY KEY CHECK (c1 BETWEEN 1 AND 12500),
c2 int)
GO
Subsequent create table statements later in the thread:
CREATE TABLE MyTable_A(
c1 int IDENTITY PRIMARY KEY CHECK (c1 BETWEEN 11 AND 20) NOT NULL,
c2 int )
GO
CREATE TABLE MyTable_B(
c1 int PRIMARY KEY CHECK (c1 BETWEEN 1 AND 10) NOT NULL ,
c2 int)
GO
Look above. Do you see any discrepancies between the code you have posted? We need to know exactly what it is you are trying to accomplish. Based on the initial create table statements, the identity column is not the PK but the second column in MyTable_A. Based on the initial create table statements, inserts into either table through the view will work IF the insert only references columns in one of the underlying tables.
What would help, besides consistency in your posts, is to know the full and actual error messages you are dealing with. Please remember, we can't see what you see. You have to provide us with the information needed to be able to assist you. We aren't mind readers, and most of us don't have working crystal balls.
My previous post about it...
Again,
It was an example.
In real life, it won't be 20, but whatever the max value is for BIGINT or INT
Objective:
Two tables, mapped to one view. One table will hold the archived data; the other will hold the live data, with c1 as the PK and auto-incrementing.
October 19, 2012 at 7:33 am
sql-lover (10/19/2012)
Apologies,
Maybe I am not following, or I am not explaining myself properly, but I do not see the reason why we cannot elaborate an answer with the two column tables I provided.
the primary key is too abstract. the examples i've seen so far seem to imply that the end user or application would be sending a value for the primary key on insert.
if it's an identity now in the old table, it should still be an identity in the new table, right?
so we should never see an INSERT that is trying to send the PK...that's what i'm trying to visualize. you would never insert into the old/history table, in my opinion, so there's no need to have a filter/test on the PK values for insert....everything goes in the new table, period.
updates or deletes, yeah, they would require the PK, which already exists.
The objective is simple.
One table with two columns; let's call it MyTable. Col1 has an identity and auto-increments; it is the PK and is used to identify each row. The other column holds some type of data.
MyTable is too big, so it needs to be divided in two. Col1 is INT, so the upper limit can't be greater than that data type's maximum. To that end, we create two tables and one view: MyTableA, MyTableB, and MyTable (the view, which now has the same name as the previous table).
you could always change the column to bigint instead of the current int. this is a lot of work and changes if the only thing you are worried about is running out of identity values. the GUI designer would handle all the FK, index and other constraint changes/rebuilds for you...but it might be slow to make the changeover; it'd be a massive table and schema lock across lots of tables...
MyTableB becomes the archiving table, which was previously MyTable (that object name is now used for the view). And MyTableA is the active table, with the most recent identity numbers at the top.
If the model works, people can do SELECTs (tested already, no issues there) and also insert or update. But the queries will run faster, as I won't have millions of records to traverse but only a few thousand; this is a proven concept, no need to elaborate. I just need to translate it into a view over two tables.
OK, a suite of INSTEAD OF triggers will work just fine if the objective is to partition/archive out the data.
the triggers have to be well written to handle multi-row inserts and updates (see the sketch after this post).
And before someone asks me this (I think I already clarified): no ... I cannot trim or move old data out of the live table .... also, I cannot use partitioning, as it is not the right MS-SQL edition.
I just need to create a view that keeps the same name as the previous table and points to two physical tables instead: archiving (MyTableB) and live (MyTableA).
sounds fine to me.
Lowell
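Picking up Lowell's point about multi-row changes, a hedged sketch of what a set-based INSTEAD OF UPDATE trigger on the view might look like (column names follow the simplified example, and it assumes the surrogate key c1 itself is never updated):
CREATE TRIGGER TR_InsteadOfUpdate_For_MyTable ON dbo.MyTable
INSTEAD OF UPDATE
AS
BEGIN
--update each row's c2 in whichever member table currently holds its key;
--joining on c1 keeps this safe for multi-row updates
UPDATE a
SET a.c2 = i.c2
FROM dbo.MyTable_A AS a
JOIN INSERTED AS i ON i.c1 = a.c1
UPDATE b
SET b.c2 = i.c2
FROM dbo.MyTable_B AS b
JOIN INSERTED AS i ON i.c1 = b.c1
END
GO
--a matching INSTEAD OF DELETE trigger would join DELETED to each member table the same way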
October 19, 2012 at 7:42 am
you would never insert into the old/history table
... that is correct. I did not imply that. The table is there for historical purposes and data retrieval, and for UPDATEs and DELETEs of existing rows (but not for inserts).
the primary key is too abstract
... that is correct. It is what it is. I did not design the software's logic and changing it now is out of question and not part of this thread.
for insert....everything goes in the new table, period.
... that is correct.
you could always change the column to bigint instead of the current int. this is a lot of work and changes if the only thing you are worried about is running out of identity values. the GUI designer would handle all the FK, index and other constraint changes/rebuilds for you...but it might be slow to make the changeover; it'd be a massive table and schema lock across lots of tables...
... This is unrelated to this thread, but I did do it. I created a copy with the right data type, BCP'd the data in, synced both sets via T-SQL, then swapped. Fixed. But the table is massive. Archiving 80% of the table will improve performance.
By the way, thanks for the reply ...