Fastest way to query Millions of records with Pagination and Total Record Count

  • Hi,

    We have a table with a huge volume of data inserted per day; it already contains millions of rows. Currently we have about 2 million records in staging, but once we hit production the data will easily exceed 10-20 million rows.

    Previously our pagination query took more than 2 minutes to execute; we then modified it to the query below, which takes 16-20 seconds. We followed this article: https://raresql.com/2015/03/30/sql-server-how-to-get-total-row-count-from-offset-fetch-next-paging/

    Is there any way to further improve performance, so that we get the paginated records together with TotalRecordsCount for a given datetime range?

    We recently created a nonclustered index on the PictureTime column, but when I look at the query execution plan it still reports a missing index.

    DECLARE
        @PageSize INT = 20,
        @PageNum  INT = 1;

    -- Filter the datetime range once, then pair every page row with the total count.
    ;WITH D_CTE AS (
        SELECT LC.Id, LC.Name, LC.Number,
               LC.PictureTime, LC.CreatedOn,
               D.Name AS DeviceName  -- aliased: column names in a CTE must be unique
        FROM CapturedList AS LC
        INNER JOIN Device AS D ON LC.DeviceId = D.Id
        WHERE LC.PictureTime >= 'Oct 26 2019 12:00AM'
          AND LC.PictureTime <= 'Dec 28 2019 11:59PM'
    ),
    Count_CTE AS (
        SELECT COUNT(*) AS [TotalRowCount] FROM D_CTE
    )
    SELECT * FROM D_CTE CROSS JOIN Count_CTE
    ORDER BY D_CTE.PictureTime ASC
    OFFSET (@PageNum - 1) * @PageSize ROWS
    FETCH NEXT @PageSize ROWS ONLY;  -- without FETCH, OFFSET returns every remaining row
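    One variant worth testing (a sketch against the same tables and columns as above, not something from the linked article) folds the total count into the paged query with COUNT(*) OVER(), so the filtered range feeds both the page and the count in a single statement:

    -- Sketch: the window aggregate returns the total filtered row count
    -- on every page row, so no separate Count_CTE pass is needed.
    DECLARE @PageSize INT = 20,
            @PageNum  INT = 1;

    SELECT LC.Id, LC.Name, LC.Number,
           LC.PictureTime, LC.CreatedOn,
           D.Name AS DeviceName,
           COUNT(*) OVER () AS TotalRowCount  -- count of all rows in the range, not just this page
    FROM CapturedList AS LC
    INNER JOIN Device AS D ON LC.DeviceId = D.Id
    WHERE LC.PictureTime >= 'Oct 26 2019 12:00AM'
      AND LC.PictureTime <= 'Dec 28 2019 11:59PM'
    ORDER BY LC.PictureTime ASC
    OFFSET (@PageNum - 1) * @PageSize ROWS
    FETCH NEXT @PageSize ROWS ONLY;

    The window aggregate can introduce a spool in the plan, so it is not guaranteed to beat the two-CTE pattern; it has to be measured against the same data.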


  • I re-created the nonclustered index on PictureTime (including DeviceId); performance increased drastically and the query now executes in about 1 second.

    CREATE NONCLUSTERED INDEX [PicureTime_NCI]
    ON [dbo].[CapturedList] ([PictureTime] ASC)
    INCLUDE ([DeviceId])
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
          SORT_IN_TEMPDB = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
          ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
    ON [PRIMARY];
    GO
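    If the plan still shows a Key Lookup for the other selected columns, a wider covering variant can serve the whole paging query from the index alone. This is a sketch only; the INCLUDE list is assumed from the query above, and Id is assumed to be the clustered primary key (which is carried in every nonclustered index automatically):

    -- Sketch: covering index for the paging query, assuming Id is the
    -- clustered primary key of CapturedList. Index name is hypothetical.
    CREATE NONCLUSTERED INDEX [PictureTime_Covering_NCI]
    ON [dbo].[CapturedList] ([PictureTime] ASC)
    INCLUDE ([DeviceId], [Name], [Number], [CreatedOn]);
    GO

    The trade-off is a larger index and slower inserts on a table with heavy daily write volume, so the extra included columns need to earn their keep.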


    Any more suggestions would be welcome; I think the problem is resolved for now.

    Do you think any problems will arise once the table grows past 100 million rows? One approach for that scale is sketched below.
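    At that scale the main risk is the OFFSET itself: the engine still has to read and discard (@PageNum - 1) * @PageSize rows, so deep pages get slower as the table grows. A minimal sketch of keyset ("seek") pagination, assuming (PictureTime, Id) uniquely orders the rows and the caller passes in the last row of the previous page (@LastPictureTime and @LastId are hypothetical placeholders):

    -- Sketch: seek past the last row of the previous page instead of
    -- discarding N rows; the cost per page stays flat at any depth.
    DECLARE @PageSize        INT      = 20,
            @LastPictureTime DATETIME = '2019-10-26 08:15:00',  -- hypothetical
            @LastId          INT      = 12345;                  -- hypothetical

    SELECT TOP (@PageSize)
           LC.Id, LC.Name, LC.Number, LC.PictureTime, LC.CreatedOn,
           D.Name AS DeviceName
    FROM CapturedList AS LC
    INNER JOIN Device AS D ON LC.DeviceId = D.Id
    WHERE LC.PictureTime <= 'Dec 28 2019 11:59PM'
      AND (LC.PictureTime > @LastPictureTime
           OR (LC.PictureTime = @LastPictureTime AND LC.Id > @LastId))
    ORDER BY LC.PictureTime ASC, LC.Id ASC;

    The trade-off is that it only supports "next page" navigation directly, and the total count has to be maintained or queried separately.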


    Thanks,

    Bhavi

  • I ran into another problem when I applied LIKE conditions on a few columns in the query.

    What does the following warning mean?

    "the query memory grant detected excessive grant which may impact the reliability. grant size initial 1024 kb, final 1024 kb used 1024 kb"



