I have a query that becomes slower as the number of items in the temp table grows.
Here's my code:
CREATE TABLE #regions (
    ipFloat float NULL,
    countryId int NULL
);

INSERT INTO #regions (ipFloat)
SELECT DISTINCT TOP 100 ipFloat FROM tbl_logs;

-- This is where the code gets really slow!
UPDATE #regions
SET countryId = (SELECT countryId FROM tbl_ip2countries
                 WHERE ipFloat BETWEEN ipFrom AND ipTo);

SELECT * FROM #regions;

DROP TABLE #regions;
The query takes IP addresses converted to float numbers and looks up the associated countryId for each one in a lookup table. The tbl_ip2countries table has more than 2,000,000 records. There's a clustered index on the primary key, which is (ipTo, ipFrom), both float.
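For reference, the lookup table looks roughly like this (a sketch reconstructed from the description above; column order and names other than ipFrom, ipTo, and countryId are assumptions):

```sql
CREATE TABLE tbl_ip2countries (
    ipFrom    float NOT NULL,  -- start of the IP range, as a float
    ipTo      float NOT NULL,  -- end of the IP range, as a float
    countryId int   NOT NULL,
    -- clustered PK as described: (ipTo, ipFrom)
    CONSTRAINT PK_tbl_ip2countries PRIMARY KEY CLUSTERED (ipTo, ipFrom)
);
```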
If I take only 10 IPs (SELECT DISTINCT TOP 10 ipFloat FROM tbl_logs), the query executes in less than 1 second.
If I take up to 100 IPs, it takes more than 45 seconds, and in this case the plan contains an index spool (eager spool) that accounts for 83% of the cost. In the execution plan, the arrow coming out of the clustered index shows about 2,000,000 actual rows, but the arrow after the eager spool shows more than 80,000,000 actual rows! Is this normal? There's a filter after that which reduces the row count to only 100...
If I take 1,000 IPs, it takes several minutes.
Is there a way to eliminate the eager spool, or at least to reduce those 80,000,000 actual rows? I don't understand what's going on here and why this is so slow for only a few hundred rows.
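For context, the kind of rewrite that is often suggested for this interval-lookup pattern is to replace the correlated BETWEEN subquery with CROSS APPLY plus TOP 1, so the optimizer can do one index seek per IP instead of spooling the whole range table. A sketch (assuming ranges don't overlap and that an index leading on ipFrom exists, which the clustered PK described above does not provide as-is):

```sql
-- Assumed helper index for the seek: IX_ip2countries_ipFrom (ipFrom) INCLUDE (ipTo, countryId)
UPDATE r
SET r.countryId = c.countryId
FROM #regions AS r
CROSS APPLY (
    -- Seek to the last range starting at or below the IP,
    -- then verify the IP actually falls inside that range.
    SELECT TOP (1) i.countryId
    FROM tbl_ip2countries AS i
    WHERE i.ipFrom <= r.ipFloat
      AND i.ipTo   >= r.ipFloat
    ORDER BY i.ipFrom DESC
) AS c;
```

Whether this actually avoids the eager spool depends on the available indexes, so the index comment above is an assumption rather than a statement about the existing schema.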