• Rod at work - Friday, March 3, 2017 7:57 AM

    I've seen two practices: using so-called "magic numbers" or using NULLs. When I first went to work at my previous job, I had very little experience with any SQL DBMS, and we were writing Windows applications for Windows 95/98 using controls that didn't understand the concept of a NULL for a datetime field. So I came up with the idea of using the date 1800-01-01 to represent a null datetime. That way we could check for that value in code and force the control to show a blank. But over time the programming environment got better, controls got smarter, and the whole point of using some arbitrary value like 1/1/1800 to represent a null became ridiculous. I now look back at that and think it was just a stupid idea. As time went by, and other developers came and went, we always had to go through the process of telling them that 1/1/1800 wasn't a "real" date, that it really meant a null date. It also became impossible to get rid of, because of all of the software and reports we had built up around the idea of looking for 1/1/1800 (or 1800-01-01) and replacing it with a null or blank. I look back on that as one of the most boneheaded ideas I ever came up with.
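
    The sentinel-date workaround described above amounts to a pair of mappings at the UI boundary. A minimal sketch in Python (the sentinel value and function names are my own illustration, not code from that old application):

```python
from datetime import date

# Hypothetical sentinel standing in for NULL, per the 1800-01-01 convention.
SENTINEL_DATE = date(1800, 1, 1)

def to_display(d):
    """Map the magic sentinel back to a blank for the on-screen control."""
    return "" if d == SENTINEL_DATE else d.isoformat()

def from_control(text):
    """Map a blank control value back to the sentinel for storage."""
    return SENTINEL_DATE if not text else date.fromisoformat(text)
```

    The trap is visible even in this sketch: every layer that ever touches the column has to know about `SENTINEL_DATE`, which is exactly why it became impossible to remove.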

    However, I've noticed that I wasn't the only one who came up with using magic numbers. At that old job we participated in several Federal grants, where we had to supply the Feds with data to satisfy the grant. In all cases the Feds specified that if any value was unknown, it had to be replaced with 9's. The length of the string of 9's depended entirely on the length of the field the Feds were looking for. Sometimes it was as short as just 99; other times it was, as you said, Steve, 99999. As far as I know the Feds still have this thing for 9's.
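
    That "9's sized to the field" rule is easy to sketch for a fixed-width export file. This is my own illustration of the convention, not any actual Federal file spec; the zero-padding of known values is an assumption:

```python
def unknown_marker(width):
    # Federal-style convention: an unknown value is all 9's,
    # sized to fill the field exactly.
    return "9" * width

def encode_field(value, width):
    # Hypothetical fixed-width encoder: 9-fill unknowns,
    # zero-pad known numeric values (padding rule assumed).
    if value is None:
        return unknown_marker(width)
    return str(value).rjust(width, "0")
```

    The same objection applies as with sentinel dates: 99 might be a legitimate value in a two-digit field, so the reader of the file has to share the convention out of band.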

    Well, one of the original advocates of using a default value instead of NULL (perhaps 1 Jan 1901 for dates, -32768 for smallints, and so on) was Chris Date, way back in about 1979 or 1980. After he had talked Ted Codd into having two distinct NULLs in the second version of the relational model, instead of the one NULL Codd had suggested in the mid-70s, he had this insane idea that having two NULLs meant you would need an infinite number of distinct NULLs to get a clean representation, and that it was therefore impossible to have a clean system with any NULLs, because you clearly couldn't have an infinite number of anything. At least he had the sense not to go down the multiple-magic-values track beloved of so many (CJD's argument that you need infinitely many distinct NULLs as soon as you have two would apply equally to magic values in a domain) and stuck to a single magic value (which would be the default value) for each domain. Over the years he got more strident about using default values instead of NULL, eventually joining up with Fabian Pascal to produce some extremely vicious and obnoxious rants on that topic. There's maybe a case to argue, but neither religious fervour nor a total unwillingness to consider any view that's in the least divergent from the chosen dogma is good computer science or good maths. You're lucky you didn't fall down that hole.

    Tom