• Interesting question.

    Is the correct answer really correct? It's rather naive. All the SP gives you is an estimate of average row size reduction and a count of rows, and that only gives you an estimate of how much less space will be occupied by rows. That's not necessarily space saved - it may just be additional empty space in non-empty pages. Maybe some of it will be space saved, because some rows may be small enough to fit into the newly unused space, but without additional data you can't turn that into an estimate of space actually saved. You would need an estimate of the variance of row sizes in the table before compression, an estimate of the variance of row sizes after compression, and estimates of the mean and variance of the amount of empty space in pages before compression before you could get a good answer. So the SP gives you less than half of what you need to get a decent estimate of the saving (or loss) of space caused by applying decimal compression.
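    To make the point concrete, here is a small toy simulation (my own sketch, not the SP's actual algorithm - the page size, row-size distribution, and compression ratio are all invented for illustration). It packs variable-length rows into fixed-size pages and shows that shrinking every row leaves the page count unchanged unless rows are repacked; the bytes "saved" are at first just slack inside existing pages:

```python
import random

PAGE_SIZE = 4096  # hypothetical page size, bytes

def pack(rows):
    """Sequentially pack row sizes into pages, starting a new page
    whenever the next row would overflow the current one."""
    pages, used = 1, 0
    for r in rows:
        if used + r > PAGE_SIZE:
            pages, used = pages + 1, r
        else:
            used += r
    return pages

random.seed(1)
rows = [random.randint(200, 1800) for _ in range(10_000)]  # variable row sizes

pages_before = pack(rows)
compressed = [int(r * 0.6) for r in rows]  # assume ~40 % average row shrink

# Naive estimate: treat every compressed row as having the average size.
avg = sum(compressed) // len(compressed)
naive_pages_after = pack([avg] * len(compressed))

# Reality if rows stay where they are (no reorg): freed bytes are just
# page-local empty space, so the table still occupies every page it did.
pages_in_place = pages_before

# Best case, after a full reorg that repacks the compressed rows:
pages_after_reorg = pack(compressed)

print(pages_before, naive_pages_after, pages_in_place, pages_after_reorg)
```

    The gap between `naive_pages_after` and `pages_after_reorg` comes from the variance of row sizes (uniform rows pack more predictably than variable ones), which is exactly the information the SP doesn't give you.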

    Even with that extra information you might want to look at whether the clustering of the table is biased towards rows being roughly ordered by length, and if so, to what extent the bias is significant. Of course it's very unlikely that there's an intentional bias of that sort, but if someone has invented a classification system for objects in the table and the classification is the first element of the clustering key, the application designers may have accidentally introduced such a bias without even being aware that they have done so.

    Despite that, I don't think any of the other options is as good an answer as the correct answer, so I guess it's right to call it correct even though it doesn't cover all the nasty nitty-gritty stuff that sometimes intervenes in the real world when people attempt to estimate things based on insufficient data.

    Tom