I would like to counter burke.marg's comment about peak performance being more important than readability. In my opinion, readability is the most critical piece of the puzzle. If I need to investigate a query in 5 years' time because a bug was found in an odd edge case, readability makes that bug fixing a lot easier. If I had focused entirely on the performance of the query, I might spend a day or two debugging a hard-to-read query.
That being said, at my job we have a focus on readability so that code I write can be supported by other developers and DBAs, so I may have a bias towards readability. If I use some odd trick to get the best performance out of a query, but it is a trick only I know, that means I am going to be the only one who can support it.
On top of that, I like using built-in functions. I know I can trust the results without having to reinvent the wheel. The downside to built-in functions is that you don't know how they are handled on the back end or how they may change in the future, and you may not be aware of their limitations, such as STRING_SPLIT() not guaranteeing the order of its output. If you use a function and don't hit the problem cases during testing, they may surface during live use and cause problems. But this can be mitigated by reading the documentation on the function prior to using it and understanding the risks.
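To make the STRING_SPLIT() point concrete, here is a quick sketch. This assumes SQL Server 2022 or later (or Azure SQL Database), where STRING_SPLIT accepts an optional enable_ordinal argument; on older versions there is no supported way to get the original element order back.

```sql
-- STRING_SPLIT does not guarantee the order of its output rows.
-- On SQL Server 2022+, passing 1 as the third argument adds an
-- ordinal column you can sort on explicitly.
SELECT value, ordinal
FROM STRING_SPLIT('red,green,blue', ',', 1)
ORDER BY ordinal;
```

Without the ORDER BY (or on versions without the ordinal argument), the engine is free to return the pieces in any order, which is exactly the kind of limitation you only catch by reading the docs first.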
As long as the built-in function is documented, I am more than happy to use it. Undocumented functions I avoid as much as I can because they can have behavior changes or be removed without notice.
And with all of the above being said, some things are best left to the application/presentation layer rather than the database engine. For example, if you want a running total, Excel can handle that. So set up a data source in Excel to pull data from SQL and then let Excel do the running total. The advantage is that you don't have a running total calculation eating up precious SQL resources. Going back to Burke.Marg's comment about performance, faster queries on the SQL side mean less blocking, which means a nicer end-user experience, at the cost of some end-user resources for doing the calculations. This does depend on the purpose of the data. It may make more sense to do running totals inside SQL Server if you are presenting to a tool that can't handle running totals well.
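For the cases where it does make sense to do the running total inside SQL Server, a windowed SUM is the usual approach. This is a sketch against a hypothetical Sales table with OrderDate and Amount columns (names are mine, not from the article):

```sql
-- Running total computed in the database, for consumers that
-- can't do it themselves. Sales is a hypothetical example table.
SELECT OrderDate,
       Amount,
       SUM(Amount) OVER (ORDER BY OrderDate
                         ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
           AS RunningTotal
FROM Sales;
```

Spelling out the ROWS frame matters: the default RANGE frame can give surprising results with duplicate OrderDate values and is often slower.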
But going back to the main article, I liked it. It was very well written, and I liked the linked article where you tested out different methods of doing LAG, posted the code, and posted the results. That makes it a repeatable test, which is always nice. I like using all of the tools in the T-SQL toolbelt. There are SO many, and there are also tools outside SQL that are beneficial. Sometimes doing it inside SQL Server is the wrong approach, and it is better to handle it at the application layer. This is especially true for sorting the data IF the end user is allowed to re-sort it once it is on the application side. Let the application sort it if the user can sort it. SQL MAY sort it faster, but if you have to pull the data all over again over a slow link, you will be annoyed at the time it takes. Much better to let the end user sort it and pull the data only once if possible.
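For anyone who hasn't used the LAG the article tested, here is a minimal sketch. The Sales table and its OrderDate/Amount columns are hypothetical stand-ins, not taken from the article's test code:

```sql
-- LAG pulls a value from the previous row in the window's order.
-- The third argument (0) is the default when there is no prior row.
SELECT OrderDate,
       Amount,
       Amount - LAG(Amount, 1, 0) OVER (ORDER BY OrderDate)
           AS ChangeFromPrior
FROM Sales;
```

The nice thing about testing methods like this side by side, as the linked article did, is that the window-function version reads clearly AND you have numbers showing how it performs against the alternatives.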
The above is all just my opinion on what you should do.
As with all advice you find on a random internet forum - you shouldn't blindly follow it. Always test on a test server to see if there are negative side effects before making changes to live!