Can't ALTER COLUMN to NOT NULL and ADD PRIMARY KEY in the same SQL Batch?

  • I've searched existing posts and this seems so straightforward, I'm convinced that I'm just missing something simple.

    --First create a simple table with a NULLable column.

    --This is just for use in the example.

    --The tables I'm actually working on were long ago created without Primary Keys and populated with data:

    [font="Courier New"]CREATE TABLE test1 (col1 INT NULL)

    GO[/font]

    --Then try to "fix" the column to NOT NULL so that you can then add the PK

    [font="Courier New"]ALTER TABLE test1 ALTER COLUMN col1 INT NOT NULL

    ALTER TABLE test1 ADD CONSTRAINT PK_test1 PRIMARY KEY (col1)

    GO[/font]

I've tested this on 2008 R2, 2012, and 2014, and the ALTER batch fails compilation and won't run, even though the results would be valid if it did run. You will be met with the following errors:

    [font="Courier New"]Msg 8111, Level 16, State 1, Line 6

    Cannot define PRIMARY KEY constraint on nullable column in table 'test1'.

    Msg 1750, Level 16, State 0, Line 6

    Could not create constraint. See previous errors.[/font]

    There are a couple of obvious work arounds:

    1. Separate the ALTER COLUMN into another batch and run it first. Easy enough for a single one-off task but this is just one piece of a much larger process so there are reasons I don't want to do it that way.

2. Shove the PK create into a variable and use dynamic SQL so that it won't get evaluated until runtime. Messy, but this is probably what I will do for the time being.

Please, someone, tell me I'm stupid and show me what I'm missing, because that would be preferable to finding out it just can't be done. Surely there is some SET RUNTIME NULL ON or other setting to make SQL Server behave the way I believe it should.
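    For what it's worth, workaround 2 would look something like this (rough sketch only, untested as part of the larger process):

    ```sql
    ALTER TABLE test1 ALTER COLUMN col1 INT NOT NULL;

    -- The ADD CONSTRAINT exists only as a string at compile time, so the
    -- batch compiles; sp_executesql compiles and runs it only after the
    -- ALTER COLUMN has already executed.
    DECLARE @sql NVARCHAR(200) =
        N'ALTER TABLE test1 ADD CONSTRAINT PK_test1 PRIMARY KEY (col1);';
    EXEC sp_executesql @sql;
    GO
    ```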

  • Sorry if I misunderstand, but is there a problem with doing this?

    CREATE TABLE test1 (col1 INT NULL)

    GO

    ALTER TABLE test1 ALTER COLUMN col1 INT NOT NULL

    GO

    ALTER TABLE test1 ADD CONSTRAINT PK_test1 PRIMARY KEY (col1)

    GO


    SELECT quote FROM brain WHERE original = 1
    0 rows returned

  • The behaviour is as expected.

    SQL Server will always first compile an entire batch, then execute it. During compilation, various things are checked against the schema. But modifications made by early statements in the batch are not catered for, because the compilation does not simulate schema changes. (It also does not care about conditional execution).

    Existence of a table is the single exception to this. If a query references a table that does not exist at compile time, then that query is marked for deferred name resolution, and it will be compiled again when it is run. If at that time the table still does not exist, it will fail.

    Unfortunately, columns, nullability, etc. are not treated this way.

    I recommend using separate batches. Please do not go down the path of dynamic SQL - way too many risks and issues.
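    A quick way to see the table-existence exception (a sketch; it assumes dbo.no_such_table does not exist and dbo.test1 does):

    ```sql
    -- Batch 1 compiles even though the table is missing: the statement
    -- gets deferred name resolution and only errors when it executes.
    SELECT * FROM dbo.no_such_table;
    GO

    -- This batch fails at compile time: dbo.test1 exists, so the bogus
    -- column is checked against the schema before anything runs.
    SELECT no_such_column FROM dbo.test1;
    GO
    ```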


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • Hugo Kornelis (1/22/2016)


    The behaviour is as expected.

    SQL Server will always first compile an entire batch, then execute it. During compilation, various things are checked against the schema. But modifications made by early statements in the batch are not catered for, because the compilation does not simulate schema changes. (It also does not care about conditional execution).

    Existence of a table is the single exception to this. If a query references a table that does not exist at compile time, then that query is marked for deferred name resolution, and it will be compiled again when it is run. If at that time the table still does not exist, it will fail.

    Unfortunately, columns, nullability, etc. are not treated this way.

    I recommend using separate batches. Please do not go down the path of dynamic SQL - way too many risks and issues.

    Always love your great explanations Hugo!

    I knew you needed to have a new batch but I would have never explained it so eloquently.

    I'm just curious why a separate batch is an issue for the OP, if I read the post correctly.


    SELECT quote FROM brain WHERE original = 1
    0 rows returned

  • Hugo Kornelis (1/22/2016)


    SQL Server will always first compile an entire batch, then execute it. During compilation, various things are checked against the schema. But modifications made by early statements in the batch are not catered for, because the compilation does not simulate schema changes. (It also does not care about conditional execution).

    Existence of a table is the single exception to this. If a query references a table that does not exist at compile time, then that query is marked for deferred name resolution, and it will be compiled again when it is run. If at that time the table still does not exist, it will fail.

    Unfortunately, columns, nullability, etc. are not treated this way.

    Not trying to be argumentative, but I understand how the process works, I'm just looking for a simpler solution. I've been around long enough to remember having to create temp tables in stored procedure scripts just to compile a procedure due to lack of runtime name resolution. It just seems to me that NULLability is something that is easily ignored at compile time.

    For example, this compiles and runs fine in one batch:

    [font="Courier New"]CREATE TABLE test1 (col1 INT NOT NULL)

    ALTER TABLE test1 ALTER COLUMN col1 INT NULL

    INSERT test1 VALUES (NULL)

    GO[/font]

    Of course I realize this is DML and not DDL but the idea is the same. NULLability could be ignored until run-time and it seems just as low risk. I'm sure there is something I don't understand about the internals of why but still wish it weren't so.

    As for why not just use two batches, this one piece of logic is just a small portion of a much larger piece of conditional logic. Having to break the conditional into multiple batches means also resetting variables, retesting conditions, etc. and makes the whole script much less intuitive to someone else and more difficult to maintain.

    If the answer is "No, there is not a way to disable nullability checking at compile time for DDL", I can accept that. It's just not the answer I wanted.
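    To illustrate the cost of splitting the batch: every variable dies at the GO, so each new batch has to redeclare and re-test everything (simplified sketch):

    ```sql
    DECLARE @fix_needed bit = 1;

    IF @fix_needed = 1
        PRINT 'fixing in batch 1';
    GO

    -- Msg 137: Must declare the scalar variable "@fix_needed".
    -- The variable did not survive the GO; the condition must be rebuilt.
    IF @fix_needed = 1
        PRINT 'batch 2';
    GO
    ```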

  • hlrx (1/22/2016)


    Not trying to be argumentative, but I understand how the process works, I'm just looking for a simpler solution.

    Simpler than either adding "GO" in between or submitting it as two batches? No, it doesn't come simpler than that.

    If the answer is "No, there is not a way to disable nullability checking at compile time for DDL", I can accept that. It's just not the answer I wanted.

    No, there is not a way to disable nullability checking at compile time for DDL.

    (At least not for adding a primary key; never tried for any other)


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/

  • hlrx (1/22/2016)


    For example, this compiles and runs fine in one batch:

    [font="Courier New"]CREATE TABLE test1 (col1 INT NOT NULL)

    ALTER TABLE test1 ALTER COLUMN col1 INT NULL

    INSERT test1 VALUES (NULL)

    GO[/font]

    Yes it does, but not because the parser's ignoring the nullability. When the batch starts, the table doesn't exist. The second and third statements cannot be checked for validity, because the table doesn't exist, and so the checks are deferred until the statements execute. That's only allowed because the object doesn't exist when the batch starts.

    Same reason you can create a procedure that references tables that don't exist, or you can create a temp table and refer to it in the same batch. The object didn't exist at parse time and so the checks are deferred until just before the statement executes.
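    The procedure case can be sketched like this (hypothetical names; dbo.not_yet does not exist anywhere):

    ```sql
    -- CREATE succeeds even though dbo.not_yet does not exist:
    -- the table reference is deferred until the procedure runs.
    CREATE PROCEDURE dbo.demo_deferred
    AS
        SELECT col1 FROM dbo.not_yet;
    GO

    EXEC dbo.demo_deferred;  -- fails here, at execution time, not at CREATE
    GO
    ```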

    Gail Shaw
    Microsoft Certified Master: SQL Server, MVP, M.Sc (Comp Sci)
    SQL In The Wild: Discussions on DB performance with occasional diversions into recoverability

    We walk in the dark places no others will enter
    We stand on the bridge and no one may pass
  • GilaMonster (1/23/2016)


    hlrx (1/22/2016)


    For example, this compiles and runs fine in one batch:

    [font="Courier New"]CREATE TABLE test1 (col1 INT NOT NULL)

    ALTER TABLE test1 ALTER COLUMN col1 INT NULL

    INSERT test1 VALUES (NULL)

    GO[/font]

    Yes it does, but not because the parser's ignoring the nullability. When the batch starts, the table doesn't exist. The second and third statements cannot be checked for validity, because the table doesn't exist, and so the checks are deferred until the statements execute. That's only allowed because the object doesn't exist when the batch starts.

    Same reason you can create a procedure that references tables that don't exist, or you can create a temp table and refer to it in the same batch. The object didn't exist at parse time and so the checks are deferred until just before the statement executes.

    I apologize that my example code was flawed, but I'm a bit surprised that an "MVP" would be both misinformed and so impulsive as to answer a question incorrectly, without taking 10 seconds to check the answer.

    The code below works just as well, because the parser ignores nullability for the INSERT. Create the table whenever you want.

    [font="Courier New"]CREATE TABLE test1 (col1 INT NOT NULL)

    GO

    ALTER TABLE test1 ALTER COLUMN col1 INT NULL

    INSERT test1 VALUES (NULL)

    GO[/font]

  • hlrx (1/25/2016)


    GilaMonster (1/23/2016)


    hlrx (1/22/2016)


    For example, this compiles and runs fine in one batch:

    [font="Courier New"]CREATE TABLE test1 (col1 INT NOT NULL)

    ALTER TABLE test1 ALTER COLUMN col1 INT NULL

    INSERT test1 VALUES (NULL)

    GO[/font]

    Yes it does, but not because the parser's ignoring the nullability. When the batch starts, the table doesn't exist. The second and third statements cannot be checked for validity, because the table doesn't exist, and so the checks are deferred until the statements execute. That's only allowed because the object doesn't exist when the batch starts.

    Same reason you can create a procedure that references tables that don't exist, or you can create a temp table and refer to it in the same batch. The object didn't exist at parse time and so the checks are deferred until just before the statement executes.

    I apologize that my example code was flawed, but I'm a bit surprised that an "MVP" would be both misinformed and so impulsive as to answer a question incorrectly, without taking 10 seconds to check the answer.

    The code below works just as well, because the parser ignores nullability for the INSERT. Create the table whenever you want.

    [font="Courier New"]CREATE TABLE test1 (col1 INT NOT NULL)

    GO

    ALTER TABLE test1 ALTER COLUMN col1 INT NULL

    INSERT test1 VALUES (NULL)

    GO[/font]

    The parser didn't ignore anything. Your DDL commands are not separated by a GO statement. DDL and DML need to be in separate batches for the compiler to identify the changes.

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statement into that table to give the volunteers here representative data. with your description of the problem, we can provide a tested, verifiable solution to your question! asking the question the right way gets you a tested answer the fastest way possible!

  • Lowell (1/25/2016)


    the parser didn't ignore anything. your DDL commands not separated by a GO statement. DDL and DML need to be seperate batches for the compiler to identify the changes.

    Sorry, Lowell, but you misread the code. Gail and you are wrong in this case, and hlrx is right. The first batch creates a table with a column that is not nullable. The second batch changes it to allow nulls and inserts a null. When that batch compiles, the column is not nullable, and yet the code compiles. The same behaviour is not used for ALTER TABLE ADD PRIMARY KEY.

    Works:

    CREATE TABLE dbo.test1 (col1 INT NOT NULL);

    GO

    ALTER TABLE dbo.test1 ALTER COLUMN col1 INT NULL;

    INSERT dbo.test1 VALUES (NULL);

    GO

    DROP TABLE dbo.test1;

    GO

    Fails:

    CREATE TABLE dbo.test1 (col1 int NULL);

    GO

    ALTER TABLE dbo.test1 ALTER COLUMN col1 int NOT NULL;

    ALTER TABLE dbo.test1 ADD PRIMARY KEY (col1);

    GO

    DROP TABLE dbo.test1;

    GO

    This is definitely a difference between how ALTER TABLE is handled vs how INSERT is handled (and not a difference between checks for NULL vs checks for NOT NULL). This can be seen from the following code:

    -- Run time error

    CREATE TABLE dbo.test1 (col1 int NULL);

    GO

    ALTER TABLE dbo.test1 ALTER COLUMN col1 int NOT NULL;

    PRINT 'Executing';

    INSERT dbo.test1 VALUES (NULL);

    GO

    DROP TABLE dbo.test1;

    GO

    -- Compile time error

    CREATE TABLE test1 (col1 int NULL);

    GO

    ALTER TABLE test1 ALTER COLUMN col1 int NOT NULL;

    ALTER TABLE test1 ADD PRIMARY KEY (col1);

    GO

    DROP TABLE dbo.test1;

    GO

    -- No error

    CREATE TABLE dbo.test1 (col1 int NOT NULL);

    GO

    ALTER TABLE dbo.test1 ALTER COLUMN col1 int NULL;

    PRINT 'Executing';

    INSERT dbo.test1 VALUES (NULL);

    GO

    DROP TABLE dbo.test1;

    GO

    Now while all this is very interesting (in my own special geeky way), it is not very relevant for the original question. Like it or not, consistent or not, it is how SQL Server works and we cannot change it. If hlrx wants the code to run, (s)he will have to use a separate batch for the ALTER TABLE ADD PRIMARY KEY.


    Hugo Kornelis, SQL Server/Data Platform MVP (2006-2016)
    Visit my SQL Server blog: https://sqlserverfast.com/blog/
    SQL Server Execution Plan Reference: https://sqlserverfast.com/epr/
