Yeah, the good old days…
In the past I did a lot of work on Solaris / UNIX systems to extract data and prepare it for database loading. I was dealing with cellular system ROP (Report Only Printing, i.e. logs) data that would normally go to a printer, but was instead being captured on the Solaris “management” system. Each record carried some common header data, but the rest of the format changed depending on what was being reported. The record separator was multiple blank lines, after which the first fields of the new record began. I had to treat newlines much like field separators, and the values in key locations in the record determined what the record type was.
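That style of parsing maps neatly onto AWK’s “paragraph mode”: setting RS to the empty string splits records on blank lines, and FS="\n" makes each line within a record a field. Here’s a minimal sketch of the idea — the ROP layout, keywords, and sample data are invented, not the original format:

```shell
# Hypothetical sketch: blank-line-separated records, newline as field separator.
awk '
BEGIN { RS = ""; FS = "\n" }          # paragraph mode: records split on blank lines
{
    # $1 is the common header line; a keyword there decides the record type
    if      ($1 ~ /ALARM/) print "alarm:", $2
    else if ($1 ~ /STATS/) print "stats:", $2
    else                   print "other:", $1
}' <<'EOF'
ALARM CELL-7
link down

STATS CELL-9
calls completed 42
EOF
```

Running this prints one classified line per record, dispatching on the header keyword the same way the real scripts dispatched on key record locations.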
My two primary tools were KSH and AWK, though I did use GREP, HEAD, TAIL, and (sometimes) SED. Piping ran all through the code, redirecting to separate files as well as the usual STDIN, STDOUT, and STDERR. Recently, while reading “Enterprise Integration Patterns,” I discovered that this whole approach is considered a design pattern called “pipes and filters.”
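The pattern is simply each tool acting as a small filter, with intermediate results optionally teed off to files along the way. A toy sketch with invented data (the heredoc stands in for a captured ROP file):

```shell
# Pipes and filters: grep selects, tee saves an intermediate file,
# awk projects a field, sort/uniq aggregate. All data here is made up.
grep 'ALARM' <<'EOF' | tee alarms.raw | awk '{ print $2 }' | sort | uniq -c
ALARM CELL-7
STATS CELL-9
ALARM CELL-7
EOF
```

Each stage does one job and knows nothing about the others, which is exactly why the book elevates it to a pattern.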
Of course, to really get good at AWK, you needed to get good at “regular expressions”: a complete language unto itself that technically isn’t programming, but it should have been.
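Even a one-liner shows how much that little language carries; this hypothetical pattern plucks a cell identifier out of a header line (grep’s -E and -o options are GNU/BSD extensions, though widely available):

```shell
# Extract just the matching substring; the CELL-nn format is invented.
echo "ALARM CELL-07 MAJOR" | grep -Eo 'CELL-[0-9]+'
# prints CELL-07
```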
Beer's Law: Absolutum obsoletum
"if it works it's out-of-date"