Hugo Kornelis (3/30/2011)
I assume that you meant the first SET statement to read "SET @bit = 1".
The answer is that this should result in 1. The "SET @bit += 2147483646" is shorthand for "SET @bit = @bit + 2147483646". The constant 2147483646 implies the integer data type. This data type has a higher precedence than bit, so @bit is cast to integer (value: 1). Then integer arithmetic is used to calculate 1 + 2147483646 (resulting in 2147483647). And finally, this value 2147483647 is converted back to bit for the assignment, resulting in the bit value 1.
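To make the chain of conversions concrete, here is a small Python sketch of the steps just described. The helper names (to_int, to_bit) are illustrative, not SQL Server internals; the int range and the nonzero-to-1 bit conversion follow the behaviour described above.

```python
# Hypothetical model of the conversion chain for: SET @bit += 2147483646

INT_MIN, INT_MAX = -2**31, 2**31 - 1

def to_int(value):
    """Model T-SQL conversion to int: out-of-range values overflow."""
    if not (INT_MIN <= value <= INT_MAX):
        raise OverflowError("Arithmetic overflow converting expression to int.")
    return value

def to_bit(value):
    """Model T-SQL conversion to bit: any nonzero value becomes 1."""
    return 1 if value != 0 else 0

bit = 1                              # SET @bit = 1
result = to_int(bit) + 2147483646    # bit promoted to int: 1 + 2147483646 = 2147483647
bit = to_bit(result)                 # assignment converts back to bit: nonzero -> 1
print(bit)                           # -> 1
```

Note that the intermediate value 2147483647 is exactly the int maximum, so the addition itself does not overflow; only the final assignment squeezes the result back into a single bit.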
It seems pretty clear that in this respect bit is (contrary to the statement in BoL) not an integer data type, since a statement like "select @i = 17179869184" results in an arithmetic overflow if @i is of type tinyint, smallint, or int, but not if @i is of type bit (any nonzero value simply becomes 1). Also, this behaviour of the bit type clearly breaks both associativity and commutativity of addition for integers; what should one expect the results of
declare @a bit = 1, @b bit = 1, @c bit = 1, @d bit = 1, @i int = 1
select @i = @a+@b+@i+@c+@d; select @i
select @i = 1
select @i = @i+@a+@b+@c+@d; select @i
select @i = 1
select @i = @a+@b+@c+@d+@i; select @i
to be? Clearly the answer depends on the order in which the additions are carried out, and on when the conversion(s) from bit to int occur.
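To see how much hangs on those conversion rules, here is a Python sketch that evaluates the three expressions under two hypothetical models (a sketch of the question, not of SQL Server itself). Model A promotes every bit operand to int before any addition; model B truncates back to bit whenever both operands of a + are bit. The function names and the tagged-tuple representation are my own invention for illustration.

```python
def to_bit(v):
    """Model T-SQL conversion to bit: any nonzero value becomes 1."""
    return 1 if v != 0 else 0

def eval_promote(operands):
    """Model A: every bit operand is promoted to int first; plain integer addition."""
    return sum(value for _, value in operands)

def eval_truncate(operands):
    """Model B (hypothetical): bit + bit yields a bit, truncated at each step;
    once an int operand is involved, the running result stays int."""
    acc_type, acc_val = operands[0]
    for typ, val in operands[1:]:
        if acc_type == 'bit' and typ == 'bit':
            acc_val = to_bit(acc_val + val)          # stay in bit, truncate to 0/1
        else:
            acc_type, acc_val = 'int', acc_val + val  # int arithmetic from here on
    return acc_val

# declare @a bit = 1, @b bit = 1, @c bit = 1, @d bit = 1, @i int = 1
a = b = c = d = ('bit', 1)
i = ('int', 1)

print(eval_promote([a, b, i, c, d]))   # model A: -> 5 (and 5 for all three orders)
print(eval_truncate([a, b, i, c, d]))  # model B: -> 4
print(eval_truncate([i, a, b, c, d]))  # model B: -> 5
print(eval_truncate([a, b, c, d, i]))  # model B: -> 2
```

Under model A the order of the operands is irrelevant and all three selects yield 5; under model B the same five values sum to 4, 5, or 2 depending purely on where @i sits in the expression, which is exactly the associativity/commutativity breakage the question is probing.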