• I don't have access to that code currently, since it was for a prior employer. However, if I remember correctly, it first took a simple count grouped by the TopParentID column mentioned in the article, then did a "running total" type calculation on those counts (I use a CLR function for that; blindingly fast) to get top-level range start and stop values. All of that was very, very fast, like milliseconds. I think it then repeated that for each level until @@ROWCOUNT came back zero, but the lower levels weren't as fast, because they had to actually crawl the hierarchy to get the number of nodes beneath each node instead of just doing a count on TopParentID.
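
    To make that first pass concrete, here's a minimal sketch of the idea. The table and column names (dbo.Hierarchy, NodeID, ParentID, TopParentID) are stand-ins, not the original schema, and a windowed SUM (SQL Server 2012+) stands in for the CLR running-total function:

    -- Minimal sketch, assuming a table like dbo.Hierarchy(NodeID, ParentID, TopParentID);
    -- the real schema differed, and the original used a CLR running-total function
    -- where a windowed SUM is shown here.
    WITH TopCounts AS
    (
        SELECT TopParentID,
               COUNT(*) AS NodeCount   -- simple count per top-level hierarchy
        FROM dbo.Hierarchy
        GROUP BY TopParentID
    )
    SELECT TopParentID,
           NodeCount,
           -- The running total carves out a non-overlapping block of the number
           -- line for each top-level hierarchy: two values (left/right) per node.
           2 * (SUM(NodeCount) OVER (ORDER BY TopParentID ROWS UNBOUNDED PRECEDING)
                - NodeCount) + 1 AS RangeStart,
           2 * SUM(NodeCount) OVER (ORDER BY TopParentID ROWS UNBOUNDED PRECEDING)
               AS RangeStop
    FROM TopCounts
    ORDER BY TopParentID;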

    I tried the update method mentioned in Joe's article and found that it was WAY too slow for an in-use database. My update solution was more complex, but much, much faster.

    I could improve the process immensely given the difference between what I know now and what I knew when I built it, but that's pretty much true of any code I wrote more than about a month ago. Just some simple CROSS APPLY inline queries would make the thing much more efficient.
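
    For example (again with hypothetical names, not the original tables), a CROSS APPLY inline query can fold a per-node child count into the main statement instead of requiring a separate pass:

    -- Hypothetical illustration of the CROSS APPLY idea; dbo.Hierarchy, NodeID,
    -- and ParentID are assumed names. Each node gets its direct-child count
    -- computed inline rather than in a separate query.
    SELECT h.NodeID,
           h.ParentID,
           ca.ChildCount
    FROM dbo.Hierarchy AS h
    CROSS APPLY
    (
        SELECT COUNT(*) AS ChildCount
        FROM dbo.Hierarchy AS c
        WHERE c.ParentID = h.NodeID
    ) AS ca;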

    The 2,700 rows was for one hierarchy within a multi-million-row table. 11 seconds to resolve anything on a 2,700-row table would imply that I was running it on maybe a 286 CPU with 2 Meg of RAM? A TRS-80? A Timex Sinclair 1000? Not sure how far back I'd have to go to get performance that bad, even on an adjacency crawl. If I remember correctly, the table had somewhere around 2 million total rows, and 2,700 nodes, 6 levels deep, was the biggest single hierarchy within it.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon