What would Fill Factor Do?

  • Jeff Moden (3/25/2015)


    Wayne West (3/25/2015)


    PHYData DBA (3/24/2015)


    ... Also, they seem to see this as a "magic bullet" to resolve not a performance issue, but an issue with an increase in table space.

    This is the data size of the tables, not the Index size.

    Very peculiar. Sometimes you wonder if the people you are talking to actually understand the technology they are talking about.

    I'm confused. How is changing the indexes supposed to resolve an issue with an increase in table space?

    And man, you have my sympathy! Oracle -> MySQL -> SQL Server -> MySQL -> SQL Server?! I don't think I'd want to see your schema! :crazy:

    Go back and read what I wrote about them decreasing the fill factor to reduce space used. Seems contrary but can work out in the end. Definitely NOT a panacea, though.

    Brain fart. For some reason my brain was thinking the indexes are stored separately from the data, which they are on a page basis, but still in the same file. Flashbacks to xBase days. 😀

    -----
    Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson

  • Wayne West (3/25/2015)


    I'm confused. How is changing the indexes supposed to resolve an issue with an increase in table space?

    And man, you have my sympathy! Oracle -> MySQL -> SQL Server -> MySQL -> SQL Server?! I don't think I'd want to see your schema! :crazy:

    Wayne - Raising the fill factor (toward 100) decreases the amount of free space left for new entries on each page. Less white or blank space means a smaller overall size.

    As far as the schema goes, those migrations don't really affect the relationship model.

    They will affect the data types and the data stored in them if you do not write specific conversions.

    Also - in MS SQL Server you can put nonclustered indexes in a different filegroup. I am sure there are scenarios where that is a good idea; I'm just not sure OLTP is one of them.
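
    In case it helps anyone, a rough T-SQL sketch of the filegroup approach (the database name, file path, table, and index names are all made up for illustration):

        -- Add a filegroup and a file to hold nonclustered indexes (hypothetical names)
        ALTER DATABASE SalesDb ADD FILEGROUP NCIndexes;

        ALTER DATABASE SalesDb
        ADD FILE (NAME = SalesDb_NCI, FILENAME = 'D:\Data\SalesDb_NCI.ndf', SIZE = 512MB)
        TO FILEGROUP NCIndexes;

        -- The trailing ON clause places the index in the new filegroup
        CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID)
        ON NCIndexes;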

  • PHYData DBA (3/26/2015)


    Wayne West (3/25/2015)


    I'm confused. How is changing the indexes supposed to resolve an issue with an increase in table space?

    And man, you have my sympathy! Oracle -> MySQL -> SQL Server -> MySQL -> SQL Server?! I don't think I'd want to see your schema! :crazy:

    Wayne - Raising the fill factor (toward 100) decreases the amount of free space left for new entries on each page. Less white or blank space means a smaller overall size.

    As far as the schema goes, those migrations don't really affect the relationship model.

    They will affect the data types and the data stored in them if you do not write specific conversions.

    Also - in MS SQL Server you can put nonclustered indexes in a different filegroup. I am sure there are scenarios where that is a good idea; I'm just not sure OLTP is one of them.

    I have pondered putting NCIs in different filegroups, but with the advent of SANs I just don't know how much of a benefit it is, and, as you say, whether OLTP would benefit. The guy who managed the SAN at my previous gig was kind of stingy with information, so I know almost nothing about SANs. How do you argue that you want as many spindles as possible to separate the DB, Log, and TempDB from each other to maximize performance when everything is one giant pool? I have the same problem right now in that my current system is going to be deployed on a cloud VM, so I can configure different volume sizes, but they'll still be running against the same disk/spindle pool. At least I'll have some control over file growth not choking things out if I get unexpected growth in one of the three. I don't think my data will grow very fast, but the change audit system sure will.

    -----
    Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson

  • Wayne West (3/26/2015)


    .....

    How do you argue that you want as many spindles as possible to separate the DB, Log, and TempDB from each other to maximize performance when everything is one giant pool? I have the same problem right now in that my current system is going to be deployed on a cloud VM, so I can configure different volume sizes, but they'll still be running against the same disk/spindle pool.....

    MS white papers and best practices state that they need to be on different LUNs, and that the clustered shared volumes/VHDs/etc. behind those LUNs need to be owned by the same host system that your VM is hosted on. If this is not followed, you will see a large performance hit in disk I/O.

    This is serious stuff, and there is very good information about it on TechNet.

    I have had to use it myself. There are several different ones depending on the hardware and connectivity behind your SAN solution.

    A cloud VM server that you then install MS SQL Server on, as if it were a regular VM or physical hardware, sounds like a terrible idea made by some very misinformed people. Maybe I am the one with bad information, so please, someone correct me, even though this is off topic.

    Considering how cheap an obfuscated, SSL-connected cloud database solution is, why would you waste server space in the cloud creating a VM to install MS SQL Server on? I could see it if it were to support a single-node solution where the app only needs to talk to a local DB instance and you are using SQL Express. However, if you are paying per user or per core for this, you might be doing it wrong.

    Anybody, please let Wayne know if I am wrong here. But this screams bad idea.

    Reminds me of when I was asked to put the tempdb files (on a 4-CPU server running Enterprise Edition) on a drive that was iSCSI-connected over the server's single NIC.

  • Jeff Moden (3/24/2015)


    As for the padding thing in the BTREE, that's done during an index rebuild using the "WITH PADDING" option. Note that it will not help performance of batch runs that should be doing an index seek with a scan of the leaf level when the start of the scan is determined.

    But this still means setting the Fill Factor to something other than 100% or it does nothing really.

    Right?

    Is there some magic I am missing where Fill Factor can be 100 or 0 but PADDING can be set?

    https://technet.microsoft.com/en-us/library/aa258260(v=sql.80).aspx

    Maybe there is something wrong with this article (please let me know), but it says that WITH PADDING causes the intermediate and leaf pages to be created with the specified fill factor setting. Since intermediate pages otherwise only reserve space for a single row, regardless of the fill factor setting, I can't explain to others how this would be better for anything.

    I trust your opinion a LOT. So some more input on that would be awesome.

  • PHYData DBA (3/26/2015)


    Wayne West (3/26/2015)


    .....

    MS white papers and best practices state that they need to be on different LUNs, and that the clustered shared volumes/VHDs/etc. behind those LUNs need to be owned by the same host system that your VM is hosted on. If this is not followed, you will see a large performance hit in disk I/O.

    <many things were here>

    I've neglected MS white papers, having been out of development for a while; SANs came along just as my previous job was winding down, and that was almost 3 years ago. I really should dig back into them.

    The hosting provider that I'm going to be deploying to is CenturyLink, and they claim good SQL performance. It's not going to be super high-transaction; I'm limiting connections (under the current plan) to 100 out of 200 or so potential users. Users will be connecting via VPN. This was decided before I started here; I've never had a database that I wasn't running within my own data center.

    The desire of management here is that everything would be better 'in the cloud', which I do not agree with, but that's the current thinking. Currently the database size will fit entirely in configured RAM, but this is a first for the school district, so we can only guess at usage patterns. I know I'll have peak transactions before the semesters begin and for the first month, but beyond that it's a mystery. It's a weird setup here, we have over a dozen servers with no SAN or consolidated backup, and I'm the low man on the totem pole.

    -----
    Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson

  • PHYData DBA (3/24/2015)


    Alan.B (3/23/2015)


    My apologies - I misread your original post a little. I was under the impression that some of these indexes were 10% fragmented weekly, not more than 10% as you clearly stated. That changes things a little.

    Alan, thanks for your response. I did typo "weekly" when it should have been "daily".

    I also did not mention many other things from the last 9 months that went into my decision to make this update.

    I also did not mention that one of the databases on this server has a table that stores the text BLOB of all our HL7 messages.

    So I liked what Jeff had to say about all of that.

    I'm happy Jeff chimed in. I am no guru in this area and just wanted to add my 2 cents. This has turned out to be a very informative thread for me.

    "I can't stress enough the importance of switching from a sequential files mindset to set-based thinking. After you make the switch, you can spend your time tuning and optimizing your queries instead of maintaining lengthy, poor-performing code."

    -- Itzik Ben-Gan 2001

  • Wayne West (3/26/2015)


    PHYData DBA (3/26/2015)


    Wayne West (3/26/2015)


    .....

    <many things were here>

    I've neglected MS white papers, having been out of development for a while; SANs came along just as my previous job was winding down, and that was almost 3 years ago. I really should dig back into them.

    The hosting provider that I'm going to be deploying to is CenturyLink, and they claim good SQL performance. It's not going to be super high-transaction; I'm limiting connections (under the current plan) to 100 out of 200 or so potential users. Users will be connecting via VPN. This was decided before I started here; I've never had a database that I wasn't running within my own data center.

    The desire of management here is that everything would be better 'in the cloud', which I do not agree with, but that's the current thinking. Currently the database size will fit entirely in configured RAM, but this is a first for the school district, so we can only guess at usage patterns. I know I'll have peak transactions before the semesters begin and for the first month, but beyond that it's a mystery. It's a weird setup here, we have over a dozen servers with no SAN or consolidated backup, and I'm the low man on the totem pole.

    Wow.... where to begin... so many things...

    The biggest one is this: the only solution in the cloud for what you are discussing is a cloud database solution, not a VM on a cloud server. That will give you so much more at every level, and it is why Azure was created in the first place.

    If the things that are going to use this won't support connecting to a cloud SQL solution, then you are going to need a version of SQL Server that will allow you to load the database into RAM. You will also want to make sure you have enough RAM to hold all your indexes and the entire SQL Server memory footprint.

    You will have to find out the maximum RAM your VM is going to be able to use.

    Then make sure you set SQL Server to use no more than 80% of that RAM and to require 70% at startup.

    I would also recommend turning on the Optimize for Ad Hoc Workloads server setting and Forced Parameterization.
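
    For reference, a rough sketch of those settings in T-SQL (the memory numbers assume a hypothetical 16 GB VM, and the database name is made up; size them from your VM's actual RAM):

        EXEC sys.sp_configure 'show advanced options', 1;
        RECONFIGURE;

        -- Cap SQL Server at roughly 80% of a 16 GB VM
        EXEC sys.sp_configure 'max server memory (MB)', 13107;
        -- Floor at roughly 70%; SQL Server grows into this rather than grabbing it at startup
        EXEC sys.sp_configure 'min server memory (MB)', 11469;

        EXEC sys.sp_configure 'optimize for ad hoc workloads', 1;
        RECONFIGURE;

        -- Forced parameterization is set per database, not per server
        ALTER DATABASE YourDb SET PARAMETERIZATION FORCED;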

    Here is the fun part: RAM for a VM was not cheap in the cloud last time I looked. Reserved RAM of 128 gigabytes or more is even more expensive.

    Does that seem like a lot of RAM? Not if your databases add up to around 40GB or more.

    Even if the entire DB file and every tran log and index, along with your tempdb, is loaded into RAM, you will see no performance gain unless you can reuse execution plans from the plan cache. RAM for the buffer pool = power in SQL Server versions 2005 and up.
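
    One rough way to check whether plans are actually being reused is to look at the plan cache DMVs; this sketch just lists cached plans by use count:

        -- Lots of single-use plans piling up suggests little or no plan reuse
        SELECT TOP (50) cp.objtype, cp.usecounts, st.text
        FROM sys.dm_exec_cached_plans AS cp
        CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
        ORDER BY cp.usecounts DESC;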

    The cloud is nice and easy if you want to be lazy about backups and blamestorm any outages onto the cloud provider.

    However, running a server in the cloud in a way that returns results from SQL queries in 5 seconds or less every time takes some knowledge and good planning/testing.

    Good Luck... 😎

  • PHYData DBA (3/26/2015)


    Jeff Moden (3/24/2015)


    As for the padding thing in the BTREE, that's done during an index rebuild using the "WITH PADDING" option. Note that it will not help performance of batch runs that should be doing an index seek with a scan of the leaf level when the start of the scan is determined.

    But this still means setting the Fill Factor to something other than 100% or it does nothing really.

    Right?

    Is there some magic I am missing where Fill Factor can be 100 or 0 but PADDING can be set?

    https://technet.microsoft.com/en-us/library/aa258260(v=sql.80).aspx

    Maybe there is something wrong with this article (please let me know), but it says that WITH PADDING causes the intermediate and leaf pages to be created with the specified fill factor setting. Since intermediate pages otherwise only reserve space for a single row, regardless of the fill factor setting, I can't explain to others how this would be better for anything.

    I trust your opinion a LOT. So some more input on that would be awesome.

    No. There is no magic here. "Padding" is a switch. It's either on or off. If it's on, the intermediate levels of the B-TREE will be padded to the same extent as the FILL FACTOR for the leaf level.
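
    In current syntax that switch is the PAD_INDEX option, and it only does anything when a nondefault fill factor is supplied. A minimal sketch, with hypothetical table and index names:

        -- Leaf pages left 90% full; PAD_INDEX = ON applies the same 90% to the intermediate levels
        ALTER INDEX IX_Orders_CustomerID ON dbo.Orders
        REBUILD WITH (FILLFACTOR = 90, PAD_INDEX = ON);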

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)

  • PHYData DBA (3/26/2015)


    Wow.... where to begin... so many things...

    <many things were here>

    Good Luck... 😎

    It'll be interesting. Right now the database, at about a third of its population, isn't even 10 meg. It's comparatively pretty puny: one main transactional table with 20 or so dinky lookup tables. The full auditing will grow pretty fast, and I'm looking into improving that. With 8 gig of RAM currently planned for the server, that's why I was saying it would all fit in memory: everything should cache well. And yes, I know about setting min/max memory parameters for that.

    My last server was a dual CPU quad core hyperthreaded for an effective 16 cores with gobs of disk and memory. I was able to mix spindles very optimally on that beast. This one, I'm not so sure about. Because of the hard line against us hosting the server inside our data center, I don't have a lot of options available. Backups are always a priority for me so they'll be copying down to my data center after they complete.

    We'll probably get the startup info in the next week or so and then we'll see how this thing performs. Since the app sits on the server, I'm mildly optimistic that it'll perform well. I will ping my engineer at CL on LUNs, I forgot about that when we were talking config over the last couple of months.

    -----
    Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson

  • Wayne West (3/26/2015)


    PHYData DBA (3/26/2015)


    Wow....

    <many things were here>

    Good Luck... 😎

    It'll be interesting. Right now the database, at about a third of its population, isn't even 10 meg. It's comparatively pretty puny: one main transactional table with 20 or so dinky lookup tables. The full auditing will grow pretty fast, and I'm looking into improving that. With 8 gig of RAM currently planned for the server, that's why I was saying it would all fit in memory: everything should cache well. And yes, I know about setting min/max memory parameters for that.

    My last server was a dual CPU quad core hyperthreaded for an effective 16 cores with gobs of disk and memory. I was able to mix spindles very optimally on that beast. This one, I'm not so sure about. Because of the hard line against us hosting the server inside our data center, I don't have a lot of options available. Backups are always a priority for me so they'll be copying down to my data center after they complete.

    We'll probably get the startup info in the next week or so and then we'll see how this thing performs. Since the app sits on the server, I'm mildly optimistic that it'll perform well. I will ping my engineer at CL on LUNs, I forgot about that when we were talking config over the last couple of months.

    Don't get caught up in your spindle count in the cloud. Put tempdb and the tran logs on different LUNs than the system files and the data files if you can. If not, why worry about something you may only use during maintenance cycles?

    Think more about how to tweak settings and add RAM to make this thing work fast, as if one spindle were all you had.

    Again, I recommend turning on the Optimize for Ad Hoc Workloads server setting and Forced Parameterization.

    I cannot stress enough what a difference that could make for what you are doing and how it runs in the cloud.

    Even if the entire DB file, tran log, and tempdb are loaded into RAM, you will see poor performance unless you can reuse execution plans from the plan cache.

    RAM for the buffer pool = power in SQL Server versions 2005 and up.

    Why worry about spindle count if they could stop spinning for an hour and you'd never notice? <true story here>

    I have a server that supports over 1500 TPS for a 90GB database configured like this.

    The disk I/O peaks today averaged 20 Mb (megabits) per second.

    Not because my path to disk is slow; it is just never used except to update the actual data files when the tran log is backed up every hour.

    Of course, this system keeps constant memory pressure at 80% of the system's 64GB.

  • PHYData DBA (3/26/2015)


    Wayne West (3/26/2015)


    PHYData DBA (3/26/2015)


    Wow....

    <many things were here>

    Of course, this system keeps constant memory pressure at 80% of the system's 64GB.

    That's a lot of memory pressure!

    I just got a reply from the CenturyLink engineer:

    You can create all the partitions you need. The CLC architecture will systematically rebalance itself to assure at least 2000 IOPS. The disks are all hybrid SSD/SAS, and I have customers running multi-TB SQL and Exchange in CLC no problem.

    So hopefully things should be good, as long as I'm careful with my config.

    I'm not really obsessed with spindles, it's a habit of old thinking from 20 years ago.

    -----
    Knowledge is of two kinds. We know a subject ourselves or we know where we can find information upon it. --Samuel Johnson

  • Wayne West (3/26/2015)


    That's a lot of memory pressure!

    It will use everything I set max memory to, up to about 136 gigabytes.

    Why?

    Because the total buffer pool size of all executing code during a 7-day cycle is a little under 128 gig.

    That means the plan cache plus the data needed in memory to serve every SQL call that hits this server is 128 gig.

    Did I mention that my cache hit ratio has been at 100% for the past 8 days?

    We are thinking about setting up a betting pool on when the buffer pool hit ratio drops under 100% and it has to actually read something from disk.
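
    For anyone who wants to watch the same number, a rough sketch of reading it from the performance-counter DMV (the ratio counter has to be divided by its base counter):

        SELECT 100.0 * r.cntr_value / NULLIF(b.cntr_value, 0) AS [buffer cache hit ratio %]
        FROM sys.dm_os_performance_counters AS r
        JOIN sys.dm_os_performance_counters AS b
          ON b.object_name = r.object_name
        WHERE r.counter_name = 'Buffer cache hit ratio'
          AND b.counter_name = 'Buffer cache hit ratio base'
          AND r.object_name LIKE '%Buffer Manager%';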

  • To anyone reading this thread: I know we went off topic there for a long period.

    It seemed like an important subject and one I had data to share with the community.

    I will attempt to steer this discussion back to optimum settings for Index Fill Factor.

    In Books Online I found this statement:

    https://technet.microsoft.com/en-us/library/ms177459(v=sql.105).aspx

    If existing rows will be updated with data that lengthens the size of the rows, use a fill factor of less than 100. The extra bytes on each page will help to minimize page splits caused by extra length in the rows.

    Now the database I am working with is 90% varchar and nvarchar values.

    98% of those values are not yet anywhere near their declared max length.

    So this database has the potential to triple the row size of every table (and all of its indexes) with one very, very long-winded entry from the UI.

    It is my understanding from the information above that keeping the fill factor at 90 (leaving 10% of each page free) is not a bad idea for data tables like this.
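
    The change itself is a one-liner per table; a rough sketch (the table name is hypothetical):

        -- FILLFACTOR = 90 leaves 10% of each leaf page free for rows that grow later
        ALTER INDEX ALL ON dbo.Hl7Messages
        REBUILD WITH (FILLFACTOR = 90);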

    What are your experiences?

    PS: I want to mention that the vendor who wants us to change to a fill factor of 0 or 100 has no information on why he thinks this will be good. He just says it will make the indexes and data tables smaller.

    I agree with him.

    I also tested this: the first time the row size for a table gets increased, we will see the app lock up for 10 minutes while 9 gigabytes of pages get split.
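
    A rough way to see how close pages are to splitting is sys.dm_db_index_physical_stats; the SAMPLED mode below is enough to get page density:

        -- avg_page_space_used_in_percent near 100 on tables whose rows still grow means splits are waiting to happen
        SELECT OBJECT_NAME(ips.object_id) AS table_name,
               i.name AS index_name,
               ips.avg_fragmentation_in_percent,
               ips.avg_page_space_used_in_percent
        FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'SAMPLED') AS ips
        JOIN sys.indexes AS i
          ON i.object_id = ips.object_id
         AND i.index_id = ips.index_id;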

  • PHYData DBA (3/26/2015)


    To anyone reading this thread: I know we went off topic there for a long period.

    <many things were here>

    I also tested this: the first time the row size for a table gets increased, we will see the app lock up for 10 minutes while 9 gigabytes of pages get split.

    Like I said in my long-winded write-up, an 80% fill factor is much better than page splits, which create two pages that are each half empty; that wastes 100% (in total) of a given page that's been split, and it can be quite substantial when it comes to the clustered index. I'll also remind you that if the rows of a table average over 4,000 bytes, then no fill factor will help the clustered index, but you also won't get page splits there, either.

    The bottom line is the same as with all else in SQL Server: changing the fill factor is neither a panacea nor a bane to performance. "It depends" on the situation, and each index, especially clustered indexes and covering indexes, should be examined carefully before making a change to the fill factor.

    To answer your question, I've seen changing the fill factor bring quite a bit of relief to insert/update-intensive tables, and I've also seen it provide nothing but wasted space. The same holds true with SELECTs, except I've also seen it hurt SELECTs quite a bit when there is a lot of wasted space. It always requires an educated and informed decision on larger tables.
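
    If it helps with that educated decision, this sketch inventories the current fill factor of every index on user tables (0 means the server default, i.e. full pages):

        SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name, i.fill_factor
        FROM sys.indexes AS i
        WHERE OBJECTPROPERTY(i.object_id, 'IsUserTable') = 1
        ORDER BY table_name, index_name;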

    --Jeff Moden


    RBAR is pronounced "ree-bar" and is a "Modenism" for Row-By-Agonizing-Row.
    First step towards the paradigm shift of writing Set Based code:
    ________Stop thinking about what you want to do to a ROW... think, instead, of what you want to do to a COLUMN.

    Change is inevitable... Change for the better is not.


    Helpful Links:
    How to post code problems
    How to Post Performance Problems
    Create a Tally Function (fnTally)
