When someone says "ok so what do I do?" we're pretty vague.
That pretty much sums it up for me. No-one sets out to write bad software. There is a limitless supply of critics and wise-after-the-event merchants, but the people who identify problems, illustrate why they are problems, and demonstrate how to fix them are few and far between.
Thinking back a decade or two, one of the common problems was vulnerability to buffer overruns. It was quite hard to find out how to spot the vulnerability, and even harder to find out how you were expected to deal with it.
It seems to me that your buffer overrun example is one which, for anyone who knows a little history, rebuts the statement in the preceding paragraph.
Buffer overflow problems are pretty straightforward. They were noted rather early on, and by the late 1960s their elimination was being addressed in several ways:
(i) Operating system (or Monitor, or Executive, or whatever it was called by whoever wrote it) interface design: requiring an application which requested input to provide a buffer of known length, and never transferring more data than that into it. If the hardware supplying the input needed a bigger buffer, either refuse the input request or stage it through a big enough buffer which the OS itself allocated internally (see the first sketch after this list). But along came Unix, which didn't do that in some early versions, and since Unix was so thoroughly overhyped, that kind of buffer overflow was not wiped out.
(ii) Operating system internals and hardware: treating store which could be written and store which could be executed as separate stores, with the status of each area controlled/specified by the OS and enforced by hardware. This didn't eliminate all buffer overflows within the OS, but it did prevent overflow into executable code (whether at user level or at OS level); it was usually done with virtual addressing, but sometimes just by having a bitlist with a single bit per page to say whether it was read/execute only or not.
(iii) Programming language design: having a string type which carried a length as well as a start point (see the second sketch after this list). But then along came C, which left out the string length to buy performance at the expense of security and bug-freeness, and C was so hyped that it almost took over the world (and carried its hopeless pointers forward into C++).
(iv) Hardware-supported bounded strings, which could (a) support (iii) above and (b) assist (i) above. By 1969 both Burroughs and ICL were using languages with this property for writing operating systems which did (i), on hardware which did both (ii) and (iii); but Unix wasn't going to take up much of "that nonsense" (nor, later, was Windows), and neither was C (nor, later, C++).
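To make (i) concrete, here is a minimal sketch in C, using stdio and POSIX calls purely as familiar stand-ins for the 1960s interfaces being described (the function and buffer names are mine, not anything from those systems). The whole difference is whether the input call is told the buffer's length.

    #include <stdio.h>
    #include <unistd.h>

    void unbounded_style(void)
    {
        char buf[64];
        /* scanf("%s", ...) is told nothing about buf's size, so a long
         * enough word overruns it - the interface style that permits
         * overflows (gets() is the same idea, only worse). */
        if (scanf("%s", buf) != 1)
            return;
    }

    void bounded_style(int fd)
    {
        char buf[64];
        /* read() is handed the buffer length and will not transfer more
         * than that; any excess input simply isn't copied in, which is
         * the "refuse or stage it" discipline point (i) asks for. */
        ssize_t n = read(fd, buf, sizeof buf);
        (void)n;
    }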
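And a sketch of (iii), the length-carrying string: not any particular historical design, just the idea, again in C so the contrast with C's bare char pointers is visible. The type and function names are invented for the illustration.

    #include <string.h>
    #include <stdbool.h>

    struct counted_string {
        size_t capacity;   /* how much room there is */
        size_t length;     /* how much of it is in use */
        char   data[256];  /* fixed storage, just for the sketch */
    };

    static void cs_init(struct counted_string *s)
    {
        s->capacity = sizeof s->data;
        s->length = 0;
    }

    /* Append n bytes of src; refuse, rather than overflow, if they will
     * not fit.  Because the length travels with the string, the check is
     * always possible - that is the whole point of the design. */
    static bool cs_append(struct counted_string *s, const char *src, size_t n)
    {
        if (n > s->capacity - s->length)
            return false;
        memcpy(s->data + s->length, src, n);
        s->length += n;
        return true;
    }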
By 1969 hordes of people had pointed out what was needed to eliminate buffer overflows, and at least two significant companies were doing those things on a large scale. The ICL stuff is still going strong (now sold by Fujitsu - who regularly try to recruit people with experience of ICL's VME-B, so I guess I could find a job there if I wanted to); I don't know whether the Burroughs stuff is still going or not.
Both the problems and the solutions were very well documented in technical journals, trade press, and books. But most people in computing couldn't give a toss - they knew that they were too clever to need things like bounds checks on strings, that bounds checks cost performance (false on sensibly designed hardware), and that hardware which did bounds checks or knew what was read/execute only was expensive (true, unfortunately).
Anyway, if you don't want buffer overflows in your software, just write your software in a language in which buffers (be they strings or arrays or anything else) have limits which are enforced, and run it on a hardware-OS combination that prevents both buffer overflow at the OS API and OS-internal buffer overflow.
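What "limits which are enforced" comes down to at run time is roughly the check below. Languages with checked arrays (Ada and Java, for instance) do this for you on every subscript; in C you only get it if you remember to write it yourself, which is the whole complaint. The function here is invented for the illustration.

    #include <stdio.h>
    #include <stdlib.h>

    /* Return buf[i], but trap an out-of-range index instead of letting
     * it scribble on whatever happens to live past the end of buf. */
    static char checked_get(const char *buf, size_t len, size_t i)
    {
        if (i >= len) {
            fprintf(stderr, "index %zu out of bounds (length %zu)\n", i, len);
            abort();            /* trap, rather than overflow */
        }
        return buf[i];
    }

On hardware designed for it, the equivalent check is part of the bounded-access instruction itself, which is why the "bounds checks cost performance" excuse was false there.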