UPDATE STATISTICS with FULLSCAN -- implications?

  • I am being asked by our SharePoint people, at the suggestion of a Microsoft engineer, to run an "UPDATE STATISTICS [tableName] WITH FULLSCAN" command on a very large (115 GB) SharePoint database.  I run this command routinely following a nightly database reorganize maintenance task and have for years without issues, but the databases I have charge of are neither large nor busy.  So my questions in this case are:

    - Will running the command "UPDATE STATISTICS [tableName] WITH FULLSCAN" on its own cause any potential problems?  The database is already seriously bogged down with blocking processes and long-running queries against this particular table (AllUserData, if it matters), so it makes sense to update the statistics manually IMHO.  The statistics haven't been updated since last Friday, when I ran the command myself.

    In any event, what are the consequences of running this command against this table in the middle of a production workday, given the database's size and the fact that it is already all but inaccessible?
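    For reference, here is the exact command as I'd run it (a minimal sketch; I'm leaving every other option off):

    -- Run in the SharePoint content database.
    -- FULLSCAN reads every row instead of sampling, so expect a full
    -- scan of the table (and each index with a statistic on it).
    UPDATE STATISTICS [dbo].[AllUserData] WITH FULLSCAN;

    -- Or, to target a single statistics object instead of all of them
    -- ([statistic_name] here is just a placeholder):
    -- UPDATE STATISTICS [dbo].[AllUserData] [statistic_name] WITH FULLSCAN;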

  • How large is the table?

    Updating statistics on a table will only be slow if that table is huge.  Some blocking will likely also occur, but if the table is only a few hundred rows, updating statistics on it should be quick.  If your 115 GB of data is all stored in that one table, then it will be slow.

    The above is all just my opinion on what you should do.
    As with all advice you find on a random internet forum - you shouldn't blindly follow it.  Always test on a test server to see if there are negative side effects before making changes to live!
    I recommend you NEVER run "random code" you found online on any system you care about UNLESS you understand and can verify the code OR you don't care if the code trashes your system.

  • The Disk Usage by Table Report returns this:

    Records: 23,709,423
    Reserved (KB): 27,005,640
    Data (KB): 24,452,976
    Indexes (KB): 2,527,656
    Unused (KB): 25,008

    So it's pretty huge.  I knew it was, but wow.  As I said, another group handles SP end to end, so I am just here for added support.  I believe this will help solve their issue, or at least contribute to it, but I honestly don't have a freaking clue how long this command would take to run, nor its overall impact on the table's accessibility.  I'm guessing a) a long time, and b) probably worse than it already is, which is inaccessible via SharePoint.
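    (If anyone wants the same numbers from T-SQL instead of the report, this sketch should do it, assuming the table is in the dbo schema:)

    -- Returns rows, reserved, data, index_size, and unused for one table,
    -- the same figures the Disk Usage by Table report shows.
    EXEC sp_spaceused N'dbo.AllUserData';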

  • If it is currently inaccessible, how much worse can it be?

    It will just cause some blocking while it runs.

    But 27 GB out of 115 GB isn't horrible.

    How bad are the statistics on that table?  That is, when were they last updated, and how many rows have changed since:
    SELECT OBJECT_NAME(id) AS tablename,
           name AS statname,
           STATS_DATE(id, indid) AS lastupdate,
           rowmodctr AS rowschanged
    FROM sys.sysindexes
    WHERE STATS_DATE(id, indid) <= DATEADD(DAY, -1, GETDATE())
      AND rowmodctr > 0
      AND id IN (SELECT object_id FROM sys.tables)
    ORDER BY rowmodctr DESC;

    (I took that code from mssqltips.com and just added in column aliases).
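    If you're on SQL 2008 R2 SP2 / 2012 SP1 or newer, here's a sketch of the same check using sys.dm_db_stats_properties instead of the deprecated sysindexes view:

    -- Last update time and row-modification counter for each statistic
    -- on the table in question.
    SELECT s.name AS statname,
           sp.last_updated,
           sp.modification_counter,
           sp.rows,
           sp.rows_sampled
    FROM sys.stats AS s
    CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
    WHERE s.object_id = OBJECT_ID(N'dbo.AllUserData')
    ORDER BY sp.modification_counter DESC;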

  • bmg002 - Wednesday, February 22, 2017 1:31 PM

    The statistics on this table's two indexes were last updated on Friday.  I know you are right as far as its impact vs. its potential as a solution, but the people in charge here (and I use that term loosely) wanted estimates, so I asked here on the off chance one of the noted superstars in these parts would just know the answer to within an order of magnitude 😉

    I live vicariously through this place as a SQL DBA, of which I am the accidental variety.  That's what I get for showing an aptitude lol

  • Oh, and where are my manners?  Thanks BMG002 for your answers and help in this!

  • I know that feeling.  I come here to learn too and help where I can.
    I am an accidental DBA as well.

    I would give them a high estimate, as it is tricky to say with any certainty (as far as I am aware).  It depends on a lot of different things: database activity, physical server activity, available RAM/CPU, available tempdb space, disk I/O, etc.

    Do you have a test instance of this database?  If so, you could try running it against that and estimate based on the hardware differences.  In my case, live runs about 3 times faster than test in most cases, but it depends on your systems.
    Or, you said you ran that command last Friday?  How long did it take, roughly?  Because the FULLSCAN option reads the whole table every time, running it twice with a 5-minute wait in between or a 5-week wait in between should take about as long each time (presuming the table hasn't grown much), so I'd expect this run to take roughly what Friday's did.
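    While it runs, you can at least keep an eye on it.  A sketch using the standard DMVs (I believe the command column shows 'UPDATE STATISTICS' while it's running, but verify on your system):

    -- Find the session running UPDATE STATISTICS and see whether
    -- anything is blocking it and how long it has been going.
    SELECT r.session_id,
           r.status,
           r.command,
           r.blocking_session_id,
           r.wait_type,
           r.total_elapsed_time / 1000 AS elapsed_seconds
    FROM sys.dm_exec_requests AS r
    WHERE r.command LIKE 'UPDATE STATIST%';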

  • bmg002 - Wednesday, February 22, 2017 1:43 PM

    I just kicked off the run and I'm heading home.  I'll check it tomorrow morning when I get in.  As you said, it's already Tango Uniform, so there's little to lose at this point really, and the MS engineer DID recommend looking at the statistics.  We'll see...

    Thanks again sir (or ma'am) as the case may be.

  • Well, one thing you might want to know is just how much RAM there is on this server.  I was looking at a few posts on this site during the last week where someone had a fairly active 2 TB database on a server with just 128 GB of RAM and wondered why there was a problem.  You have a 27 GB table and a 115 GB database.  If the server is some under-spec'ed 32 GB box, then it's not too hard to see why a product like SharePoint might drive RAM usage so high that there's nothing left.

    A quick look at PLE (page life expectancy) might reveal that it's rather low, which could be a strong indicator that the box is running out of RAM.  Of course, there are a number of other things to look at, like Disk Queue Length, which might tell you whether you are I/O bound.  SAN fabric speeds of just a gigabit can be a serious I/O bottleneck.

    Just a couple of things that might at least point you in a direction.  I'm sure there are more experienced folks out there than me who can provide additional guidance; those two things are early indicators rather than definitive, as sometimes there are other factors contributing to whatever resource shortage might be in play.
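    For what it's worth, PLE can be read straight from the DMVs.  A quick sketch (values are in seconds, and what counts as "low" depends on how much RAM the box has):

    -- Page life expectancy for the instance ('Buffer Manager') and,
    -- on NUMA hardware, per node ('Buffer Node').
    SELECT [object_name],
           counter_name,
           cntr_value AS ple_seconds
    FROM sys.dm_os_performance_counters
    WHERE counter_name = 'Page life expectancy';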

    Steve (aka sgmunson) 🙂 🙂 🙂
    Rent Servers for Income (picks and shovels strategy)

  • Siberian Khatru - Wednesday, February 22, 2017 2:04 PM

    No problem.  Updating the statistics won't make things any worse; it just might not help.
    But what I've often found when working with any support guys is that unless you do exactly what they ask, they likely won't help you much further.  I imagine the support guy you are working with either ran some commands and saw the stats were out of date, knows more about your system and thinks updating stats will help, OR has a script of things they tell people:
    1 - update statistics
    2 - defragment indexes
    3 - reboot
    (Note: those are not my suggestions, just an example; a quick check for step 2 is sketched below.)  Something I'd be curious about is whether it was SQL being grumpy or SharePoint being grumpy.  Hopefully the updated stats help, but I wouldn't be surprised if this is more of a temporary fix to a larger problem (memory pressure, CPU pressure, insufficient resource allocation, etc.).
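    For step 2, the usual first move is a fragmentation check.  A sketch, assuming the table from this thread ('SAMPLED' keeps the scan relatively cheap):

    -- Average fragmentation per index on the table; the common rule of
    -- thumb is >30% suggests a rebuild and 5-30% a reorganize.
    SELECT i.name AS index_name,
           ips.index_type_desc,
           ips.avg_fragmentation_in_percent,
           ips.page_count
    FROM sys.dm_db_index_physical_stats(
             DB_ID(), OBJECT_ID(N'dbo.AllUserData'), NULL, NULL, 'SAMPLED') AS ips
    JOIN sys.indexes AS i
      ON i.object_id = ips.object_id
     AND i.index_id = ips.index_id
    ORDER BY ips.avg_fragmentation_in_percent DESC;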

  • bmg002 - Wednesday, February 22, 2017 2:17 PM

    SQL is the problem, as my Redgate SQL Monitor installation helped pinpoint the issue: a recurring query against the AllUserData table of a particular SharePoint content database.  This query throws blocking-process and long-running-query alerts nearly perpetually, all on this table and the Workflow tables, which are joined (and rejoined three times, in the case of the AllUserData table) in one massive parameterized query.  So the trouble lies in there somewhere, as I see it.  The MS engineer also wanted the SP people to check the Max Degree of Parallelism (MAXDOP) setting, which could be it as well.

    Tomorrow's another day and the good thing is I'm pushing my own knowledge base!
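    (For anyone wanting to check that setting, a minimal sketch; Microsoft's published guidance for SharePoint's SQL instances is MAXDOP = 1:)

    -- 'run_value' in the output is the active setting.
    EXEC sp_configure 'show advanced options', 1;
    RECONFIGURE;
    EXEC sp_configure 'max degree of parallelism';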

  • Siberian Khatru - Wednesday, February 22, 2017 3:05 PM

    What version of Redgate SQL Monitor are you on?  Is it 7.0.0 or 7.0.1?  If so, you may want to upgrade to 7.0.2.  Redgate SQL Monitor 7.0.0 has a bug that killed performance on our SQL databases to the point where queries that should take less than a second were taking hours to complete, and it required a restart of the SQL instance to fix.
    If it is version 6 or lower, then this likely isn't your issue, and if it is version 7.0.2 then you should be fine as well.

  • bmg002 - Wednesday, February 22, 2017 3:12 PM

    V5, I believe.  I'll double-check tomorrow, but I'm pretty sure it's 5.

  • Siberian Khatru - Wednesday, February 22, 2017 3:17 PM

    Some of the changes in 6 are a bit annoying, especially if you have some form of failover set up... it detects SQL instances differently than 5 does.  7 is nice, though, with the reporting built in.  The email report schedules are a bit laggy, but they are fixing that in 7.0.3 (or so I am told).

    But here's hoping that the stats update helps.  Keep us posted!

  • bmg002 - Wednesday, February 22, 2017 3:23 PM

    I'm at the mercy of the bean counters, I'm afraid, and that means I'm lucky to have what I have lol.
