• That's simply because the software is essentially lazy. The first query, run multiple times, is likely to pull the same data, because both the plan and the data are cached. Even if you clear the cache, you're likely to get the same data on repeated runs of the same query, because of the physical implementation of the database, and because the devs didn't add random number generators just for the heck of it.

    The point is that you are likely to get the same data. You aren't guaranteed to get the same data.

    Thus, a minor, seemingly immaterial change to the query can produce different results. Again, it might, or it might not.

    The point is that you don't know what you'll get. It might give you the same results 10,000 times in a row, and then on the 10,001st, give you something different. No way to know.
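
    To illustrate (a rough sketch against a hypothetical dbo.Orders table): both of these queries ask for "five rows from Orders" with no ordering specified, but the optimizer is free to satisfy each one from a different index, so the five rows you get back can differ.

        -- No ORDER BY: SQL Server can return ANY five rows, in any order.
        -- This version may be answered from a narrow nonclustered index...
        SELECT TOP (5) OrderID
        FROM dbo.Orders;

        -- ...while adding columns may push it to the clustered index instead,
        -- which can hand back a completely different five rows.
        SELECT TOP (5) OrderID, CustomerName, OrderDate
        FROM dbo.Orders;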

    Since most businesses don't want that kind of unpredictability in their data, it's better to force the issue. That means putting an ORDER BY on your query if you want the top X rows. It means avoiding NOLOCK unless you have a real business reason to allow dirty reads. It has a lot of other ramifications, but those are definitely two of them; a rough sketch of both is below.
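
    Something like this (again just a sketch, same hypothetical dbo.Orders table):

        -- Explicit ORDER BY with a unique tie-breaker: the "top 10" is now
        -- well-defined, regardless of caching, plan choice, or storage order.
        SELECT TOP (10) OrderID, OrderDate
        FROM dbo.Orders
        ORDER BY OrderDate DESC, OrderID DESC;

        -- NOLOCK reads uncommitted ("dirty") data and can miss or double-count
        -- rows during page splits. Only use it when the business genuinely
        -- accepts approximate answers.
        SELECT COUNT(*)
        FROM dbo.Orders WITH (NOLOCK);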

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon