Can we know the detailed information of a table?

  • Jeff Moden wrote:

    892717952 wrote:

    I want to know if we can tell how many pieces of data can be saved on each data page. Thanks!

    Thank you, but the question above is basically just a rewording of your original questions and doesn't answer my question. WHY do you want to know "how many pieces of data can be saved on each data page"? What benefit will you gain or want to gain by knowing that information? WHAT will you actually do with that information once you know it?

    Thank you Jeff Moden!

    Since the data length of each field relates to the storage space, in theory a field with a longer data length takes more space to save the data and, correspondingly, more data pages. Each page in SQL Server is 8 KB, and each field of a table has its own data length, so we can work out the total data length of all the fields in a table. Is there a formula to calculate how many records can be saved in one page, given the 8 KB page size and the total data length of all the fields of a table? Thanks!


  • 892717952 wrote:


    Since the data length of each field relates to the storage space, in theory a field with a longer data length takes more space to save the data and, correspondingly, more data pages. Each page in SQL Server is 8 KB, and each field of a table has its own data length, so we can work out the total data length of all the fields in a table. Is there a formula to calculate how many records can be saved in one page, given the 8 KB page size and the total data length of all the fields of a table? Thanks!

    Sure... but it's complex and relatively inaccurate because VARCHARs and other variable-width columns do vary. See the following Google search for the MS documentation on the subject and some scripts in other people's articles (I have NOT tested any of them)... it can really get tough if you have "in-row" blobs, which can put everything at a serious tilt.

    https://www.google.com/search?q=Estimate+the+Size+of+a+Clustered+Index
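
    For a table with only fixed-width columns, the estimate in those docs boils down to something like the sketch below. This is a rough back-of-the-envelope version only; the documented formula also accounts for the exact null bitmap size, variable-length column overhead, and the index tree itself, and @DataBytesPerRow here is just a hypothetical stand-in for the summed column sizes.

        -- Rough rows-per-page estimate for a fixed-width-only row.
        -- A page is 8192 bytes; the 96-byte page header leaves 8096 bytes for rows.
        -- Each row also pays roughly 7 bytes of record overhead (4-byte record
        -- header plus a small null bitmap) and a 2-byte slot array entry.
        DECLARE @DataBytesPerRow int = 100;  -- hypothetical total of the column sizes
        SELECT FLOOR(8096.0 / (@DataBytesPerRow + 7 + 2)) AS EstimatedRowsPerPage;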

    Instead, if you want a min, max, and decent average, just call sys.dm_db_index_physical_stats for the index or heap in the "Sampled" or "Detailed" modes and it'll provide those three columns along with a whole lot more info about the status of the index.
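
    For example, something along these lines (dbo.MyTable is a hypothetical name; min_record_size_in_bytes, max_record_size_in_bytes, and avg_record_size_in_bytes are the three columns meant):

        -- Record-size stats and page density for one table's indexes and heaps.
        -- 'SAMPLED' reads a sample of the pages; 'DETAILED' reads every page.
        SELECT  index_id,
                page_count,
                avg_page_space_used_in_percent,
                min_record_size_in_bytes,
                max_record_size_in_bytes,
                avg_record_size_in_bytes
        FROM    sys.dm_db_index_physical_stats
                (DB_ID(), OBJECT_ID(N'dbo.MyTable'), NULL, NULL, 'SAMPLED');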


    You still haven't answered the ultimate question though... you said in this latest post that you want to know how many "pieces" will be saved on a page and, penultimately, know "the total data length of all the fields of a table".

    WHY?  What are you going to do with THAT bit of computational trivia?


    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • Jeff Moden wrote:


    Sure... but it's complex and relatively inaccurate because VARCHARs and other variable-width columns do vary. See the following Google search for the MS documentation on the subject and some scripts in other people's articles (I have NOT tested any of them)... it can really get tough if you have "in-row" blobs, which can put everything at a serious tilt.

    https://www.google.com/search?q=Estimate+the+Size+of+a+Clustered+Index

    Instead, if you want a min, max, and decent average, just call sys.dm_db_index_physical_stats for the index or heap in the "Sampled" or "Detailed" modes and it'll provide those three columns along with a whole lot more info about the status of the index.

    You still haven't answered the ultimate question though... you said in this latest post that you want to know how many "pieces" will be saved on a page and, penultimately, know "the total data length of all the fields of a table".

    WHY?  What are you going to do with THAT bit of computational trivia?

    Thank you Jeff Moden!

    I want to know this because, if one page can save more pieces of data after we reduce the data length (for example, when we design a table, we can change a BIGINT to an INT, or a VARCHAR(200) to a VARCHAR(100), as long as the minimized data type or data length still meets our requirements), then that helps us optimize the database when there is big data in the table. That is why I want to do this. Thanks!

    Ah, but it appears that you may be operating under a really bad premise. Changing a VARCHAR(200) to a VARCHAR(100) will, by itself, DO NOTHING to save on space. The only way that might help is if you reduce any data that's more than 100 characters to less than 100 characters, and that doesn't even require changing the VARCHAR(200) to a VARCHAR(100). You also have to remember to rebuild any indexes that include the modified column to actually recover the space.
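
    To make that concrete, here's a minimal sketch with hypothetical table and column names (trim the oversized data first or the ALTER will fail with a truncation error, and restate the column's nullability or it can silently change):

        -- Shrinking the declared length alone frees nothing: VARCHAR already
        -- stores only the actual bytes plus a 2-byte length prefix.
        UPDATE dbo.MyTable                        -- hypothetical table/column
           SET SomeText = LEFT(SomeText, 100)
         WHERE LEN(SomeText) > 100;

        ALTER TABLE dbo.MyTable ALTER COLUMN SomeText VARCHAR(100) NOT NULL;

        -- Space only comes back once the indexes that carry the column are rebuilt.
        ALTER INDEX ALL ON dbo.MyTable REBUILD;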

    If you want to optimize for "big data", fixing some datatypes will certainly help a bit, provided that you understand that you have to change the related code so that you don't have a sudden explosion of implicit converts. You might also get better performance by properly normalizing tables, fixing performance-challenged code, and fixing things like "ExpAnsive" Updates and forcing LOBs/RowOverflow columns to go out of row all the time.
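
    As one concrete example of that last item, pushing the large-value types out of row is a documented table option. This is a sketch with a hypothetical table name; note that existing values only move out of row the next time they're updated:

        -- Keep (N)VARCHAR(MAX)/VARBINARY(MAX)/XML values off the in-row pages
        -- so more rows fit on each data page.
        EXEC sys.sp_tableoption
             @TableNamePattern = N'dbo.MyTable',
             @OptionName       = 'large value types out of row',
             @OptionValue      = 1;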



  • Jeff Moden wrote:

    Ah, but it appears that you may be operating under a really bad premise. Changing a VARCHAR(200) to a VARCHAR(100) will, by itself, DO NOTHING to save on space. The only way that might help is if you reduce any data that's more than 100 characters to less than 100 characters, and that doesn't even require changing the VARCHAR(200) to a VARCHAR(100). You also have to remember to rebuild any indexes that include the modified column to actually recover the space.

    If you want to optimize for "big data", fixing some datatypes will certainly help a bit, provided that you understand that you have to change the related code so that you don't have a sudden explosion of implicit converts. You might also get better performance by properly normalizing tables, fixing performance-challenged code, and fixing things like "ExpAnsive" Updates and forcing LOBs/RowOverflow columns to go out of row all the time.

    Jeff Moden, thank you from the bottom of my heart! You have spent a lot of time helping me. Thanks!

    About your last words, "fixing things like 'ExpAnsive' Updates and forcing LOBs/RowOverflow columns to go out of row all the time": could you give me some examples to help me understand it?

    For "LOBs/RowOverflow columns", I guess that if a table has big-data columns like IMAGE/NTEXT/TEXT/NVARCHAR(MAX)/VARCHAR(MAX), we need to separate them from the table and use another table to save these big data types. Is that right?

  • 892717952 wrote:


    Jeff Moden, thank you from the bottom of my heart! You have spent a lot of time helping me. Thanks!

    About your last words, "fixing things like 'ExpAnsive' Updates and forcing LOBs/RowOverflow columns to go out of row all the time": could you give me some examples to help me understand it?

    For "LOBs/RowOverflow columns", I guess that if a table has big-data columns like IMAGE/NTEXT/TEXT/NVARCHAR(MAX)/VARCHAR(MAX), we need to separate them from the table and use another table to save these big data types. Is that right?

    Do you have PowerPoint available to you?


