SQL Overview IV - DBA's Morning Review

  • Hi David,

    I'm receiving the following error when running the checkout package on some of our SQL Server 2000 instances:

    SSIS Error Code DTS_E_CANNOTACQUIRECONNECTIONFROMCONNECTIONMANAGER. The AcquireConnection method call to the connection manager "MultiServer" failed with error code 0xC0202009. There may be error messages posted before this with more information on why the AcquireConnection method call failed.

    I compared server configuration parameters against other SQL Server 2000 instances that executed the checkout successfully, but I didn't find anything different. Any ideas?

    Thank you,

    Chris

  • Never mind the post above; I had to enable Named Pipes in SQL Server Configuration Manager.

    Also, I noticed that I get a duplicate key error with named server instances when running the report of missing database backups. I made the following tweak (the NOT EXISTS check in the query below) to the first step of the SQL Agent job to correct this; it skips Server/DatabaseName pairs that are already in the table:

    INSERT INTO [rep].[Database_List]
        ([Server], DatabaseName, Usage)
    SELECT [Database_Status].[Server]
          ,[Database_Status].[DatabaseName]
          ,ISNULL([SSIS_ServerList].[Usage], 'NA')
    FROM [Database_Status]
    INNER JOIN [SSIS_ServerList]
        ON [SSIS_ServerList].[Server] = [Database_Status].[Server]
    WHERE NOT EXISTS (SELECT *
                      FROM [rep].[Database_List]
                      WHERE [Database_List].[Server] = [Database_Status].[Server]
                        AND [Database_List].[DatabaseName] = [Database_Status].[DatabaseName])

  • Thanks for posting solutions to the problems you experienced with the package.

    David Bird

  • Super series of articles. This has helped me streamline monitoring and also learn SSIS more effectively.

    Many thanks for taking the time to contribute.

    I've found a few things that might be worth looking at:

    1. The list of Report Jobs shows "DBA-SQL Overview - Report Large Log Files", but the script to create the job has "@job_name=N'DBA-SQL Overview - Report Large Log File'".

    2. DBA-SQL Overview - Report Job Failures

    "SELECT @From=@@SERVERNAME + ''@choosebroadspire.com''"

    You actually have this in other places as well; is it just something that wasn't cleaned up?

    EDIT: I believe I know why this is here now.

    "@output_file_name=N'I:\Output\Job\DBA-SQL Overview - Report Job Failures.txt', "

    May need to be changed to an appropriate drive on the Host Server.

  • Hey David, I know this is an old thread, but I really liked your approach and have built on top of it. I had a question about drive space and low drive space reporting, and how you would handle my situation.

    We have a SQL cluster with 12 instances of SQL Server, each assigned its own logical drive and one mount point. So instance SQL1 would be assigned drive H: with mount point H:\LOGS, and instance SQL2 would have drive J: with mount point J:\LOGS.

    Your drive space script lists every drive on the cluster node under each instance's server name. So, for instance, if there are 5 SQL instances on each node, each instance shows the same 5 drives under it. None of the mount points are listed.

    Are there any views or tables that show which drive letters each SQL instance can see or uses? Are there any newer methods that replace sp_OAMethod @fso, 'GetDrive', @odrive OUT so that we can get drive space from mount points?

    Hope you're still around and monitoring this. Thanks

  • Your question got me thinking. I have tested the script on a lab server to verify it would not blow up; currently, we do not have any multiple-instance servers.

    So you are running 12 instances on one Windows cluster. Is the problem that the same drive is being reported twelve times?

    Or do you want to see only the drives listed with their assigned instance?

    It will take me some time to think about it. I do not know of any other scripts to collect space on drives. What release of SQL are you using? Maybe 2008 has a DMV.

    David Bird

  • David Bird (7/31/2009)

    Your question got me thinking. I have tested the script on a lab server to verify it would not blow up; currently, we do not have any multiple-instance servers.

    So you are running 12 instances on one Windows cluster. Is the problem that the same drive is being reported twelve times?

    Or do you want to see only the drives listed with their assigned instance?

    It will take me some time to think about it. I do not know of any other scripts to collect space on drives. What release of SQL are you using? Maybe 2008 has a DMV.

    Yes, but we run an active/active cluster, so there are six instances on each node. As the disk space script runs against each instance, all the drives on the node get listed six times: SQLInstance1 lists drives D-I, SQLInstance2 lists D-I, and so on. It would be "nice" if each drive were listed only once, but it would be better still if SQLInstance1 reported on the drives it was assigned, such as D:, SQLInstance2 reported drive E:, and so on.

    The other issue was the mount points, which act like drives but are mounted in a folder on a drive instead of having a drive letter assigned, like E:\LOGS. Those do not get reported at all.

    We are still using 2005. We have a couple of 2000 servers, but they are about gone.

    Hope we can come up with something.

    Thanks

  • Remove the duplicate drives by deleting them from the Disk_Space table.

    You can use the drive letters identified in the Database_Info column Filename to decide which drives to keep for each instance. For drives that do not have a database file on them, keep the first row and delete the rest. I have not had time to test any sample DELETE SQL, but a rough sketch is below.
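    An untested sketch of that delete (the [Drive] column name is a guess at the series' schema; adjust it, and the drive letter comparison, to match your tables before running):

    -- Untested sketch: keep a Disk_Space row only when one of the instance's
    -- database files (Database_Info.Filename) lives on that drive.
    -- [Drive] is a hypothetical column name; match it to your schema.
    DELETE ds
    FROM [Disk_Space] AS ds
    WHERE NOT EXISTS (SELECT *
                      FROM [Database_Info] AS di
                      WHERE di.[Server] = ds.[Server]
                        AND LEFT(di.[Filename], 1) = LEFT(ds.[Drive], 1))

    Note this would also remove drives with no database file on them at all; the "keep the first, delete the rest" case for those needs separate handling.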

    I have no experience with using mount points on Windows. Maybe Microsoft's Doctor Scripto has an answer. Sorry.
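    On the DMV question: from SQL Server 2008 R2 SP1 onward, sys.dm_os_volume_stats reports the volume that hosts each database file, mount points included, so it can replace the sp_OAMethod/FileSystemObject approach on those builds. A minimal sketch:

    -- One row per volume hosting a file of any database on this instance,
    -- mount points included (requires SQL Server 2008 R2 SP1 or later)
    SELECT DISTINCT
           vs.volume_mount_point
          ,vs.logical_volume_name
          ,vs.total_bytes / 1048576     AS total_mb
          ,vs.available_bytes / 1048576 AS free_mb
    FROM sys.master_files AS mf
    CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs

    Because it is driven by the instance's own files, it also lists only the drives that instance actually uses.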

    David Bird

  • Thanks, I'll check it out.

  • Good stuff,

    However, I noticed that when you use this on SQL Servers with different case sensitivities/collations, you may get errors, because column names are cased differently in the CREATE statements than in the queries that use them (some capitalized, some lowercase). The same applies to the system stored procedure sp_MSforeachdb. I had to change each reference to the exact name as it appears in the master database, and then it ran smoothly on any of my servers.
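    For example, on a server with a case-sensitive collation, identifier casing has to match the CREATE statement exactly (hypothetical table, just for illustration):

    -- Hypothetical table to illustrate the collation issue
    CREATE TABLE dbo.Server_Info (ServerName sysname)

    SELECT servername FROM dbo.server_info  -- fails on a case-sensitive server
    SELECT ServerName FROM dbo.Server_Info  -- works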

    Also, master.dbo.xp_fixedDrives is not accurate for checking disk space, because it does not show mount point info.

    What I'm using for checking disk space accurately is a WMI Data Reader Task; this does not require enabling OLE Automation if you have security restrictions...

    There you may use this query:

    SELECT Capacity, FreeSpace, DriveLetter, Label, SystemName
    FROM Win32_Volume
    WHERE DriveType = 3

    and dump the info into a temporary text file, from which you can load the data into your central location and do the appropriate data massaging.
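    One way to load that file into the central table (the path, delimiters, and layout are placeholders; match them to how your WMI Data Reader Task writes its output):

    -- Placeholder load step: adjust the file path and terminators
    -- to match the WMI output file's actual format
    BULK INSERT [myDB].[dbo].[Disk space report]
    FROM '\\CentralServer\WMI\DiskSpace.txt'
    WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n')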

    Here's an example query against the destination table:

    SELECT [SystemName] AS [System Name]
          ,CASE LEN(DriveLetter)
                WHEN 0 THEN 'Mounting Point'
                ELSE DriveLetter
           END AS DriveLetter
          ,[Label] AS Label
          ,ROUND(CONVERT(float, [Capacity]) / 1024 / 1024 / 1024, 2) AS [Capacity in GB]
          ,ROUND(CONVERT(float, [FreeSpace]) / 1024 / 1024 / 1024, 2) AS [Free Space in GB]
          ,ROUND((CONVERT(float, [FreeSpace]) / 1024 / 1024 / 1024) /
                 (CONVERT(float, [Capacity]) / 1024 / 1024 / 1024) * 100.0, 2) AS [% of available free space]
    FROM [myDB].[dbo].[Disk space report]

    Hope this helps.

    I also found a bug in the code so far.

    In the data flow task that collects server info, Load Server Info contains the wrong code:

    CASE
        WHEN SERVERPROPERTY('EngineEdition') = 1 THEN 'Integrated security'
        WHEN SERVERPROPERTY('EngineEdition') = 2 THEN 'Not Integrated security'

    Should be:

    CASE
        WHEN SERVERPROPERTY('IsIntegratedSecurityOnly') = 1 THEN 'Integrated security'
        WHEN SERVERPROPERTY('IsIntegratedSecurityOnly') = 0 THEN 'Not Integrated security'

    Cheers, 🙂

    Vladimir

    P.S. If anyone is interested in extending the reports with additional criteria, the WMI Data Reader Task may help.


