Good to know there's a built-in function to split strings now, and I have various ways to use it.
But right now I use a function that can work with a multi-character delimiter (like "||" or "&&") and whose output table has 2 columns. The first one is the ID: the member's position in the string from left to right. Sometimes I had 2 or 3 string inputs in a procedure and, after converting them to tables, had to join them by their ID. So it is useful.
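For illustration, here's a minimal sketch of what such a splitter might look like (the function name, parameter names, and column names are hypothetical, not my exact code; it builds its own inline tally rows rather than relying on a separate fnTally):

CREATE FUNCTION dbo.SplitMulti8K
        (@pString VARCHAR(8000), @pDelimiter VARCHAR(10))
RETURNS TABLE WITH SCHEMABINDING AS
RETURN
WITH E1(N)    AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) v(N)),
     E4(N)    AS (SELECT 1 FROM E1 a, E1 b, E1 c, E1 d), -- up to 10,000 character positions
     Tally(N) AS (SELECT TOP (ISNULL(DATALENGTH(@pString), 0))
                         ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
                  FROM E4),
     Starts(N) AS (SELECT 1                               -- start of the first item
                   UNION ALL
                   SELECT t.N + DATALENGTH(@pDelimiter)   -- start of each later item
                   FROM   Tally t
                   WHERE  SUBSTRING(@pString, t.N, DATALENGTH(@pDelimiter)) = @pDelimiter)
SELECT ItemNumber = ROW_NUMBER() OVER (ORDER BY s.N),     -- position from left to right
       Item       = SUBSTRING(@pString, s.N,
                      ISNULL(NULLIF(CHARINDEX(@pDelimiter, @pString, s.N), 0),
                             DATALENGTH(@pString) + 1) - s.N)
FROM Starts s;

And the join I mentioned is then just a positional join of two parallel inputs on ItemNumber:

-- Example use: join two delimited inputs element-by-element.
SELECT n.Item AS PersonName, p.Item AS Phone
FROM   dbo.SplitMulti8K(@pNames,  '||') n
JOIN   dbo.SplitMulti8K(@pPhones, '||') p
  ON   p.ItemNumber = n.ItemNumber;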
So right now I don't see any benefit in the built-in function except performance. But since I use those functions outside of an OLTP environment, not very frequently, and not with big strings, I don't see any improvement. The results are in microseconds anyway...
It's a funny thing about "fast" code... when you call it 100,000 times a day. 😀 I continue to find and repair such things where I work. Individually, they don't amount to much. Collectively, they take literally hours of CPU time and add substantially to the IO bottleneck.
In the last couple of years, I've eliminated nearly 3 days per day of CPU time and a couple of hundred trillion bytes of IO on our "money maker" system, and it's really helped a whole lot with timeouts and deadlocks on the user end.
That's the problem with such things. No one thinks that "milliseconds matter" and then, when performance tanks (it was seriously tanked when I first joined the company), it's a real bugger to fix because it's a system cancer that has occurred instead of just a handful of places that need help.

It's not just front-end code, either. I've started working on some of the batch jobs that are getting out of hand because of bad practices that people paid little attention to because they didn't look ahead. For example, there are dozens of places where someone populates a Temp Table as a very necessary loop control. Ironically, the loops work fine but the Temp Tables are a killer, because each pass selects the TOP 1 ID and, when that pass completes, DELETEs that row from the Temp Table. These Temp Tables get quite large, and the "find" for the DELETEs causes a full table scan of the Temp Table. I'm sure none of that was a problem per se when the tables were much smaller, but they're in the top 10 of things that take too much CPU/IO now. It's 1) stupid to delete from a Temp Table ((N²+N)/2+N rows read, in this case... one pass worse than a Triangular Join; for N = 10,000 control rows, that's more than 50 million rows read) and 2) it's amazing that they didn't think of using a Clustered Index on an IDENTITY column for the control. The loop, of course, is also a problem, but it pales in comparison to just doing the DELETEs from the control table.
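To make that concrete, here's a hypothetical sketch of the repaired pattern (the table, column, and procedure names are made up for illustration): give the control table a clustered index on an IDENTITY column and walk the key with a seek, instead of DELETEing the TOP 1 row on every pass.

CREATE TABLE #Control
(
    RowNum INT IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED, -- seek target
    ID     INT NOT NULL
);

INSERT INTO #Control (ID)
SELECT ID FROM dbo.SomeWorkQueue;   -- hypothetical source of the work list

DECLARE @RowNum INT = 1,
        @MaxRow INT = (SELECT MAX(RowNum) FROM #Control),
        @ID     INT;

WHILE @RowNum <= @MaxRow
BEGIN
    SELECT @ID = ID
    FROM   #Control
    WHERE  RowNum = @RowNum;        -- single-row clustered index seek, no DELETE

    EXEC dbo.DoSomethingWith @ID;   -- hypothetical per-row work

    SET @RowNum += 1;
END;

Each pass is now a single-row seek and the table never has to be searched or shrunk, so N control rows cost roughly N reads instead of (N²+N)/2+N. The loop itself still deserves scrutiny, of course, but that's a separate fight.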
My recommendation is to never justify poor performance by saying "good enough" based on a small number of rows. Rows have a habit of growing, and someone else will use the code as an example for larger problems (which you have NO control over, ever).
Do like Granny used to say... "Mind the Pennies and the Dollars will take care of themselves". It just doesn't take any extra time and, in the long run, will save a huge amount of time because rework is about 8 times more expensive than doing it right the first time.
RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
First step towards the paradigm shift of writing Set Based code:
________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.
"Change is inevitable... change for the better is not".
"Dear Lord... I'm a DBA so please give me patience because, if you give me strength, I'm going to need bail money too!"
How to post code problems
How to Post Performance Problems
Create a Tally Function (fnTally)