Select is Slow

  • I have a table with 100 columns and 1 million records.

    Select Count(*) from Mytable;

    COUNT(*)

    ----------

    1001920

    Time taken to return is 15 sec

    select * from Mytable;

    Time taken to return is 02:15 min

    select * from Mytable order by some_column desc;

    Time taken to return is 04:58 min

    There is no primary key or index on the column being ordered by.

    The ORDER BY is what is taking the extra time.

    I need to show the client about 1000 records at a time, ordered first by the client's sort condition and then filtered.

    Conditions:

    Order the data by the sort condition the client specifies

    Then apply the filter condition

    sample

    Select * from mytable order by some_column and Where condition like 1 to 1000 and 1001 to 2000 and so on.....

    Note: there are no joins, just a straight select; the table has one primary key.

    How can I improve the query?

  • You could go for table-level partitioning (splitting the table across different disks); before that, check the fragmentation level of the table and defragment it.

    Is this table used as an archive or in regular use? If it's in regular use, consider table partitioning, and also create a job to rebuild the indexes during low-usage hours.

    Thanks & regards

    MJ

  • How many of those hundred columns does the client want to order by? If the number is limited, then adding a clustered primary key and an index on each sortable column helps.

    If there are a lot of changes to the table, the additional indexes might slow those down.
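
    As a sketch of that indexing suggestion (the column and index names here are hypothetical, since the real DDL was never posted):

    ```sql
    -- Hypothetical names: one nonclustered index per frequently-sorted column.
    -- Indexing all 100 columns would make writes very expensive, so limit this
    -- to the handful of columns the client actually sorts by.
    CREATE NONCLUSTERED INDEX IX_Mytable_Column10
        ON dbo.Mytable (Column10 DESC);
    ```

    An index per sortable column trades write speed for read speed, which is exactly the caveat in the post above.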

  • Does this table consist entirely of varchar/nvarchar fields, or are there multiple data types? I am asking because if all the fields are varchar, that is an issue in itself; and if there are varchar(max)/nvarchar(max) fields, you need to rethink the design.

  • yuvipoy (12/12/2013)


    Having a table with 100 columns and 1 million records.

    Select Count(*) from Mytable;

    COUNT(*)

    ----------

    1001920

    Time taken to return is 15 sec

    select * from Mytable;

    Time taken to return is 02:15 min

    select * from Mytable order by some_column desc;

    Time taken to return is 04:58 min

    There is no primary key for the column which is ordered by

    Order by is taking more time.

    I need to show to client some 1000 records which are order by then filter condition

    Conditions:

    Need to Order by my data based on sort condition client specified

    Then apply filter condition

    sample

    Select * from mytable order by some_column and Where condition like 1 to 1000 and 1001 to 2000 and so on.....

    Note there are no Joins just straight select with one primary key

    How to improve the query ?

    Post the ddl for the table and the indexes. Post a few rows of sample data too if you can.

    One million rows is trivial, the count would normally return within a second or so.

    The timing for "select * from Mytable" is meaningless, most of the time will be network latency and screen refresh.

    This doesn't make sense - "Select * from mytable order by some_column and Where condition like 1 to 1000 and 1001 to 2000 and so on....." - what are you trying to do here?

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • To understand why a SELECT statement is slow, you need to look at the execution plan to understand how the optimizer is resolving the query. Also, you need to look to the system to see if you are experiencing blocking or resource contention.

    Based on what you've posted, that's all I've got.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning
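
    A minimal way to capture the numbers behind such an investigation in SSMS (standard T-SQL; the table name is taken from the original post):

    ```sql
    -- Report logical reads and CPU/elapsed time for each statement in the session.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT COUNT(*) FROM dbo.Mytable;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;
    ```

    Combine this with "Include Actual Execution Plan" in SSMS to see whether the ORDER BY is resolved by an index or by a full sort.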

  • After you address the stuff above about some of the specifics, here's a good article on catch-all queries http://sqlinthewild.co.za/index.php/2009/03/19/catch-all-queries/ written by Gail Shaw. However, it sounds like you don't really know what you want yet.

  • The table consists of numbers only: int, bigint, and float data types.

    The 100 columns are combinations of the above.

    An option will be given to the user, where he can scroll through the 1 million rows.

    * He might order by, say, column 10 descending.

    * So I need to sort descending on column 10 and page through the 1 million records based on it.

    * The user may choose any column to sort on.

    * There may also be multiple columns to sort the data by.
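
    Since the sort column is only known at run time, one common approach is validated dynamic SQL. This is a sketch with hypothetical names; the key point is checking the client-supplied column name against the table's metadata (and quoting it) before concatenating, to rule out SQL injection:

    ```sql
    DECLARE @SortCol sysname = N'Column10';   -- hypothetical: supplied by the client

    IF NOT EXISTS (SELECT 1 FROM sys.columns
                   WHERE object_id = OBJECT_ID(N'dbo.Mytable')
                     AND name = @SortCol)
        RAISERROR('Unknown sort column', 16, 1);
    ELSE
    BEGIN
        DECLARE @sql nvarchar(max) =
            N'SELECT * FROM dbo.Mytable ORDER BY ' + QUOTENAME(@SortCol) + N' DESC;';
        EXEC sys.sp_executesql @sql;
    END;
    ```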

  • yuvipoy (12/14/2013)


    ...where he can scroll through the records of 1 million rows... .

    [sarcasm ON]

    If the company can afford to pay someone to scroll through one million rows, they'll definitely have the time to wait a few minutes each time the sort order is changed...

    [sarcasm OFF]

    More honestly: I'd question the business need to return a million rows in the first place. The time it takes to transfer the data will be significant.

    Example: let's assume all 100 columns are defined as integer. Then one row will use 400 bytes, so you're transferring almost 400 MB each time the sort order is changed.

    Assuming the hardware hosting the SQL Server provides a large enough amount of memory, the data might be cached, leading to a faster return of the rows in a different order.

    But again: What's the purpose of it?



    Lutz
    A pessimist is an optimist with experience.

    How to get fast answers to your question
    How to post performance related questions
    Links for Tally Table, Cross Tabs and Dynamic Cross Tabs, Delimited Split Function

  • Here, by "scroll" I meant the WHERE condition... for every scroll I will return 1000 records.

    The first time I will return rows 1 to 1000, the next time rows 1001 to 2000, out of the 1 million rows sorted by the ORDER BY.
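
    On SQL Server 2012 or later, this "1 to 1000, then 1001 to 2000" pattern maps directly onto OFFSET/FETCH. A sketch, with a hypothetical sort column; the parameters would come from the stored procedure's from_no/to_no:

    ```sql
    DECLARE @PageNumber int = 2,    -- second scroll: rows 1001 to 2000
            @PageSize   int = 1000;

    SELECT *
    FROM dbo.Mytable
    ORDER BY Column10 DESC          -- the client's chosen sort column
    OFFSET (@PageNumber - 1) * @PageSize ROWS
    FETCH NEXT @PageSize ROWS ONLY;
    ```

    Note that without an index on the sort column, the server still has to sort the whole table before it can skip to the requested page.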

  • yuvipoy (12/15/2013)


    here i meant scroll is where condition..... for every scroll i will be giving 1000 records...

    For the 1st time i will return 1000 records and next time 1001 to 2000 on 1 million rows which is sorted (order by )

    1000 rows of INT's, FLOATs and BIGINTs. Is this data processed in some way by a front end?

    “Write the query the simplest way. If through testing it becomes clear that the performance is inadequate, consider alternative query forms.” - Gail Shaw

    For fast, accurate and documented assistance in answering your questions, please read this article.
    Understanding and using APPLY, (I) and (II) Paul White
    Hidden RBAR: Triangular Joins / The "Numbers" or "Tally" Table: What it is and how it replaces a loop Jeff Moden

  • 1000 rows of INT's, FLOATs and BIGINTs. Is this data processed in some way by a front end?

    For each and every scroll the SP will be called, passing from_no and to_no to it.

    There is no processing at the front end; whatever data comes from the database will be displayed.

    Thanks!

  • ChrisM@Work (12/16/2013)


    yuvipoy (12/15/2013)


    here i meant scroll is where condition..... for every scroll i will be giving 1000 records...

    For the 1st time i will return 1000 records and next time 1001 to 2000 on 1 million rows which is sorted (order by )

    1000 rows of INT's, FLOATs and BIGINTs. Is this data processed in some way by a front end?

    Serious language barrier here. I believe "scroll" actually means that this is a paging problem.

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • The page will have 1000 records (rows 1 to 1000) in the first go; for each subsequent scroll there will be another 1000 rows (1001 to 2000) and so on...

  • LutzM (12/15/2013)


    Example: let's assume all 100 columns are defined as integer. Then one row will use 400byte. So you're transferring almost 400MB each time the sort order is changed.

    Assuming the hardware hosting the SQL Server does provide a large enough amount of memory, data might be cached in memory leading to a faster return of the rows in a different order.

    I will be fetching only about 0.38 MB of data (400 bytes × 1000 rows) each time, but the ORDER BY has to be applied to the entire table.

    Say mytable is:

    c1   c2
    1    1
    2    2
    3    3
    4    4

    Select Row_number R1, C2 from mytable order by c2 desc

    R1   c2
    1    4
    2    3
    3    2
    4    1

    Select Row_number R1, C2 from mytable where r1 between 3 and 4 order by c2 desc

    R1   c2
    3    3
    4    4

    Here rows 3 and 4 are filtered first and then the ORDER BY is applied.

    This is what I need:

    Select Row_number R1, c2 from mytable order by c2 desc and where r1 between 3 and 4

    R1   c2
    3    2
    4    1

    First the ORDER BY is applied and then the WHERE condition.

    But doing this over 1 million rows is slow.
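
    The "order first, then filter on the row number" intent above is exactly what ROW_NUMBER() inside a CTE (or derived table) gives you, because the window function is computed over the full ordered set before the outer WHERE filters it:

    ```sql
    -- Number every row in the desired sort order, then keep a slice of rows.
    WITH Ordered AS
    (
        SELECT ROW_NUMBER() OVER (ORDER BY c2 DESC) AS R1, c2
        FROM mytable
    )
    SELECT R1, c2
    FROM Ordered
    WHERE R1 BETWEEN 3 AND 4;
    -- With the four sample rows above this returns (3, 2) and (4, 1):
    -- the ORDER BY is applied before the row-number filter, as required.
    ```

    An index on the sort column lets the optimizer read the rows already in order instead of sorting all 1 million rows on every page request.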
