Table design (newbie)

  • Hello,

    I am creating a simple database to log the number of visits from clients. The table is as follows :

    tbl_statistics

    --------------------------------------------------

    p_key | client_id | client_datetime

    --------------------------------------------------

1     | 4         | 2013-12-05 21:00:00

2     | 3         | 2013-12-05 21:02:11

3     | 1         | 2013-12-05 21:07:31

4     | 3         | 2013-12-05 21:12:42

.....

It will probably be logging about 2-3 rows per second at most, so the table will accumulate quite a few rows. My question to you experts is: what is the best way to design this table, performance-wise, in terms of primary keys, clustered indexes and non-clustered indexes?

The only type of query that will retrieve data from this table is a standard SELECT where client_id has a certain value and client_datetime is within a specific date range.

    example:

SELECT COUNT(client_id) AS totantal FROM tbl_statistics

WHERE client_datetime BETWEEN '2013-01-01 00:00:00' AND '2014-01-01 00:00:00'

    Or is this better?

SELECT COUNT(p_key) AS totantal FROM tbl_statistics

WHERE client_datetime BETWEEN '2013-01-01 00:00:00' AND '2014-01-01 00:00:00'

I'd be extremely grateful for any tips and comments.
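For reference, a sketch of the query shape described above, filtering on both client_id and a date range; the client_id value and the dates are illustrative placeholders:

```sql
-- Hypothetical example: count visits for one client in a date range.
-- Uses a half-open range (>= and <) so rows landing exactly on the
-- upper boundary are not counted twice across adjacent ranges.
SELECT COUNT(*) AS totantal
FROM tbl_statistics
WHERE client_id = 3
  AND client_datetime >= '2013-01-01 00:00:00'
  AND client_datetime <  '2014-01-01 00:00:00';
```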

Not something to worry about much. Just put everything in the clustered key and all is well.

create table tbl_statistics

(

    p_key int identity(1,1),

    client_id int not null,

    client_datetime datetime not null,

    constraint CI_rowId_ClientId primary key clustered (p_key, client_id, client_datetime)

)

If you'd like client_datetime to be nullable, then remove it from the index; that will not cause any problems. The table has just three short columns.
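Since the described query filters on client_id and a date range, a nonclustered index along these lines could let SQL Server satisfy it with a single index seek; the index name is illustrative:

```sql
-- Hypothetical supporting index for the client_id + date-range query.
-- With client_id as the leading key and client_datetime second, the
-- WHERE clause becomes one index seek, and no extra columns are
-- needed for the index to cover a COUNT query.
CREATE NONCLUSTERED INDEX IX_tbl_statistics_client_date
    ON tbl_statistics (client_id, client_datetime);
```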

Igor Micev, My blog: www.igormicev.com

  • Maybe it's irrelevant but you might also want to consider the expected growth of this table and possible archiving scenarios.

  • liebesiech (12/9/2013)


    Maybe it's irrelevant but you might also want to consider the expected growth of this table and possible archiving scenarios.

I think you should not worry even if your table reaches 500 million rows (on a 24-core system with 32 GB of RAM). If your table reaches billions of rows and you have performance issues, then there are other approaches to avoid them. To quite a degree, it depends on the hardware as well.

    Regards,

    IgorMi

Igor Micev, My blog: www.igormicev.com
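If archiving ever does become necessary, one common pattern is to move old rows out in batches so transactions and locking stay short. A hedged sketch, assuming a hypothetical archive table tbl_statistics_archive with the same three columns and an assumed one-year cutoff:

```sql
-- Hypothetical batched archiving: delete rows older than a cutoff
-- in chunks, routing each deleted row into the archive table via
-- the OUTPUT clause so copy and delete happen in one statement.
DECLARE @cutoff datetime = DATEADD(YEAR, -1, GETDATE());

WHILE 1 = 1
BEGIN
    DELETE TOP (10000) s
    OUTPUT deleted.p_key, deleted.client_id, deleted.client_datetime
        INTO tbl_statistics_archive (p_key, client_id, client_datetime)
    FROM tbl_statistics AS s
    WHERE s.client_datetime < @cutoff;

    IF @@ROWCOUNT = 0 BREAK;  -- stop once nothing is left to archive
END
```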

  • Just my 3 cents......

    But do not use that naming convention. You should not prefix your objects with the type of object. Everyone can easily tell it is a table.

    You will have a horrible time trying to find that table among the hundreds of others that also begin with that awful prefix, when listed in alphabetical order.

    Andrew SQLDBA
