Steve Jones - SSC Editor (9/12/2011)
I keep seeing the chest-pounding developers who claim if you don't know how to implement low-level structures in C (or equally low-level languages) that you're not a real developer/CS degree holder/etc.
Folks, it's been *decades* since I worried about low-level stuff. I'm too freaking busy creating applications by my lonesome to bother with sorting and memory management. That's what *frameworks* are for.
I disagree with you, for this reason: without an understanding of low-level structures, and an appreciation for how socket networking operates, how pointers work, what low-level functions like copy actually do, and so on, I think you lose perspective on how to program efficiently.
It's just my opinion, but so many of the people I've met who graduated in the post-Java era ('95 or so) don't really think about what is happening inside their frameworks, or try to be efficient in their use of the higher-level functions.
I don't believe we need to go to assembler to teach this, nor do I think people should specialize in low-level work if that isn't their field, but I do think that a semester of C will ground people in the basics of programming and development. After that, if they move to higher-level frameworks, that's fine; hopefully they will then think about the functions they use and look for ones that are efficient from the machine's perspective.
I believe, and I may be wrong, that too often people take the method that is shortest for the programmer to write, which often turns out to be the longest for the machine to execute, and becomes a bottleneck.
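For instance, here is a minimal C sketch of the sort of thing I mean (my own illustration; the function names are hypothetical). Both routines build the same string, but the one that is shortest to write does quadratic work, because strcat() rescans the whole string on every call.

#include <stdio.h>
#include <string.h>

#define N 20000

/* Shortest for the programmer: strcat() rescans the whole string on
   every call, so building N characters this way costs O(N^2) work. */
static void build_with_strcat(char *buf)
{
    buf[0] = '\0';
    for (int i = 0; i < N; i++)
        strcat(buf, "x");
}

/* A little more typing: remember where the end is, so each append is
   O(1) and the whole build is O(N). */
static void build_with_pointer(char *buf)
{
    char *end = buf;
    for (int i = 0; i < N; i++)
        *end++ = 'x';
    *end = '\0';
}

int main(void)
{
    static char buf[N + 1];
    build_with_strcat(buf);
    printf("strcat build:  %zu chars\n", strlen(buf));
    build_with_pointer(buf);
    printf("pointer build: %zu chars\n", strlen(buf));
    return 0;
}

Both produce identical output; only the cost to the machine differs, and at 20,000 characters the difference is already measurable.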
I agree that people should maintain some understanding of what's going on at a low level, and that if they don't they may lose sight of factors that have an impact on code performance.
I don't, however, think that a semester of C will give people an understanding of these things, because C is a betwixt-and-between mess: it is indeed very low level, but it has high-level pretensions. Maybe Knuth's artificial assembly language MIX (used in "The Art of Computer Programming"), or its modern replacement MMIX, would be better; or maybe Dijkstra's guarded command language (used in "A Discipline of Programming"); or perhaps an assembler for a real machine. Or a language with sound pointers (Algol 68 or any of its children).
I don't think the problem of people who have no regard for efficiency has anything to do with graduating in the post-Java era, or with Java itself; we have had people like that for ever (or at least for four decades).

I remember one guy who had a computation which involved calculating the determinants of some largish (25-row) matrices, and who tried to use a routine that did the job by recursive cofactor expansion (add the products of the top-row elements with the determinants of the corresponding submatrices one size smaller, the recursion stopping at single-element matrices whose determinant is just that element; computational complexity O(N!)). He was very upset when the sysops threw it off the machine the first time it ran, because it ran for too long. He spent ages trying to work out why it took so long, until someone who understood program efficiency took a look at it and exploded with laughter.

Nothing could be done to make him understand that the algorithm he had used was (a) hopelessly inefficient (so inefficient that on what was, in 1971, a big computer it would have taken at least a hundred billion years to evaluate just one of those determinants, let alone do the rest of the required computation) and (b) certain to maximise the effect of rounding errors, so that the results would have been useless even if it had terminated. Not even when someone else picked the job up and wrote a version which used Gaussian elimination with partial pivoting to reduce each matrix to triangular form and then took the product of the diagonal elements (complexity O(N^3), so about a sextillion [1,000,000,000,000,000,000,000] times as fast as cofactor expansion on matrices that size, and of course much more resistant to compounded rounding errors) would he believe there was anything wrong with his way of doing it, or accept that he should have looked at computational complexity before writing his program, let alone before wasting valuable machine time on it.
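To make the contrast concrete, here is a minimal C sketch of the two approaches (my own illustration, not the 1971 code; the 3x3 test matrix is just an example). The first routine is the O(N!) cofactor expansion; the second triangularises with partial pivoting and multiplies the diagonal in O(N^3). Note that det_gauss overwrites its argument.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Determinant by cofactor expansion along the top row: O(N!) work.
   Fine for a 3x3; hopeless for a 25x25. */
double det_cofactor(int n, double a[n][n])
{
    if (n == 1)
        return a[0][0];
    double det = 0.0, sign = 1.0;
    for (int col = 0; col < n; col++) {
        /* Build the (n-1)x(n-1) submatrix omitting row 0 and this column. */
        double (*sub)[n - 1] = malloc(sizeof(double[n - 1][n - 1]));
        for (int i = 1; i < n; i++)
            for (int j = 0, k = 0; j < n; j++)
                if (j != col)
                    sub[i - 1][k++] = a[i][j];
        det += sign * a[0][col] * det_cofactor(n - 1, sub);
        sign = -sign;
        free(sub);
    }
    return det;
}

/* Determinant by Gaussian elimination with partial pivoting: O(N^3).
   Reduce to upper-triangular form, then multiply the diagonal.
   Destroys its argument. */
double det_gauss(int n, double a[n][n])
{
    double det = 1.0;
    for (int k = 0; k < n; k++) {
        int piv = k; /* choosing the largest pivot limits rounding error */
        for (int i = k + 1; i < n; i++)
            if (fabs(a[i][k]) > fabs(a[piv][k]))
                piv = i;
        if (a[piv][k] == 0.0)
            return 0.0; /* singular matrix */
        if (piv != k) { /* a row swap flips the determinant's sign */
            for (int j = 0; j < n; j++) {
                double t = a[k][j]; a[k][j] = a[piv][j]; a[piv][j] = t;
            }
            det = -det;
        }
        det *= a[k][k];
        for (int i = k + 1; i < n; i++) {
            double f = a[i][k] / a[k][k];
            for (int j = k; j < n; j++)
                a[i][j] -= f * a[k][j];
        }
    }
    return det;
}

int main(void)
{
    double m1[3][3] = {{2, 1, 1}, {1, 3, 2}, {1, 0, 0}};
    double m2[3][3] = {{2, 1, 1}, {1, 3, 2}, {1, 0, 0}};
    printf("cofactor: %g\n", det_cofactor(3, m1));
    printf("gauss:    %g\n", det_gauss(3, m2)); /* clobbers m2 */
    return 0;
}

At N=3 the two are indistinguishable; at N=25 the first needs on the order of 25! multiplications while the second needs on the order of 25^3, which is the sextillion-fold gap described above.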