Logon Triggers. A good idea? Or a nerve-racking experience?

  • Because of security concerns, I was asked to keep track of active login accounts so that unused logins can be cleaned up regularly. My first option was setting SQL Server Login Auditing to track both failed and successful logins, but the huge daily error logs it created were just not practical.

    My second option is Logon Triggers, which I am currently experimenting with, and I have already realized how dangerous they can be (I am now quite familiar with making DAC connections to my server!). I have also realized that the logon trigger info is best captured to a table in the master database, so that the table is available when all sorts of services, including SQL Agent, start up and try to log in.
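
    For reference, the basic shape of what I am experimenting with looks roughly like this (table and trigger names are just from my test setup, not production code):

    -- Rough sketch of the logon-trigger approach.
    -- WARNING: if the INSERT fails for any reason (table missing, permissions,
    -- database offline), logins are blocked server-wide -- hence the DAC practice.
    USE master
    GO
    CREATE TABLE dbo.LoginTracking (LoginName sysname, LoginTime datetime);
    GO
    CREATE TRIGGER LogonAudit_trg
    ON ALL SERVER
    WITH EXECUTE AS 'sa'   -- or a dedicated login with INSERT rights on the table
    FOR LOGON
    AS
    BEGIN
        -- ORIGINAL_LOGIN() still returns the connecting login even under EXECUTE AS
        INSERT INTO master.dbo.LoginTracking (LoginName, LoginTime)
        VALUES (ORIGINAL_LOGIN(), GETDATE());
    END
    GO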

    My questions are:

    How many of you are actually using logon triggers, capturing output to a table?

    What else should I be aware of before implementing these on my production servers?

    What other options are there to track whether a login is still used?

  • I'm not using logon triggers, but I did deploy login auditing using Event Notifications and Service Broker. I liked that it was asynchronous and wouldn't impact the end user if there happened to be a problem in my logic.



    Colleen M. Morrow
    Cleveland DBA

  • I use a trace that captures event ID 14 (Audit Login). Server-side traces are low-impact, easy to manage, and so on. It takes a startup proc in master to make sure the trace starts again after a service restart, but that's easy enough to set up.
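
    Roughly, the setup looks something like this (the path, file size, and proc name are just placeholders; event 14 is Audit Login, columns 11/8/14 are LoginName/HostName/StartTime):

    USE master
    GO
    CREATE PROCEDURE dbo.StartLoginTrace
    AS
    BEGIN
        DECLARE @traceid int, @maxsize bigint, @on bit;
        SET @maxsize = 50;   -- rollover size in MB
        SET @on = 1;

        -- option 2 = TRACE_FILE_ROLLOVER
        EXEC sp_trace_create @traceid OUTPUT, 2, N'X:\Traces\LoginAudit', @maxsize;

        -- capture a few useful columns for event 14 (Audit Login)
        EXEC sp_trace_setevent @traceid, 14, 11, @on;   -- LoginName
        EXEC sp_trace_setevent @traceid, 14, 8,  @on;   -- HostName
        EXEC sp_trace_setevent @traceid, 14, 14, @on;   -- StartTime

        -- start the trace
        EXEC sp_trace_setstatus @traceid, 1;
    END
    GO
    -- rerun the proc automatically after every service restart
    EXEC sp_procoption N'dbo.StartLoginTrace', 'startup', 'on';
    GO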

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Thank you! These both sound like very good ideas and certainly less problematic if something goes wrong. I have no experience with Event Notifications and Service Broker, so this is a nice opportunity to gain some.

    Implementing a server-side trace would be fairly easy, but is there a way to get the results directly into a table, other than regularly importing them from the results file on the file system?

  • henriettewhite (1/12/2012)


    Thank you! These both sound like very good ideas and certainly less problematic if something goes wrong. I have no experience with Event Notifications and Service Broker, so this is a nice opportunity to gain some.

    Implementing a server-side trace would be fairly easy, but is there a way to get the results directly into a table, other than regularly importing them from the results file on the file system?

    You could easily set up a job to run daily (or whatever) to pull the trace data into a table.

    But you can query trace files directly, without the import step, by using fn_trace_gettable(). That's what I usually do.

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • GSquared (1/12/2012)


    But you can query trace files directly, without the import step, by using fn_trace_gettable(). That's what I usually do.

    Taking that one step further: since you can query a trace directly, you can also create a VIEW that contains that same query.

    Then it's just a matter of querying the view on demand whenever you need to review it.

    Here's a simple example using my default trace:

    CREATE VIEW dbo.VW_Trace_1
    AS
    SELECT
        ev.name ,
        tr.*
    FROM ::fn_trace_gettable('C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log\log_63.trc', default) tr
    INNER JOIN sys.trace_events ev
        ON tr.EventClass = ev.trace_event_id

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statement into that table to give the volunteers here representative data. with your description of the problem, we can provide a tested, verifiable solution to your question! asking the question the right way gets you a tested answer the fastest way possible!

  • Here's a snippet that would generate a CREATE VIEW statement for each of your traces;

    the default naming convention is VW_Trace_[traceid]:

    SELECT
        'CREATE VIEW master.dbo.VW_Trace_' + CONVERT(varchar, id) + '
         AS
         SELECT
             ev.name ,
             tr.*
         FROM ::fn_trace_gettable(''' + path + '.trc'', default) tr
         INNER JOIN sys.trace_events ev
             ON tr.EventClass = ev.trace_event_id '
        , *
    FROM sys.traces

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statement into that table to give the volunteers here representative data. with your description of the problem, we can provide a tested, verifiable solution to your question! asking the question the right way gets you a tested answer the fastest way possible!

  • Thanks again! The problem is just that I need to implement this on about 40 servers and schedule a procedure for collecting the data into a table. How do I handle the ever-changing trace file names? The logon event generates lots of events on some servers, so I would have to use rollover, which causes the file name to change.

    The idea is that, after three months, I will implement a procedure that disables all login accounts that have not been used during that period.
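
    What I have in mind is roughly something like this, assuming the collected data ends up in a table called dbo.LoginAudit with a LastLogin column (the names are just placeholders for now):

    -- Generate DISABLE statements for logins with no recorded login in the last 3 months
    SELECT 'ALTER LOGIN ' + QUOTENAME(p.name) + ' DISABLE;'
    FROM sys.server_principals p
    LEFT JOIN dbo.LoginAudit a
        ON a.LoginName = p.name
    WHERE p.type IN ('S', 'U')           -- SQL logins and Windows logins
      AND p.is_disabled = 0
      AND p.name NOT LIKE '##%'          -- skip the internal certificate-based logins
      AND (a.LastLogin IS NULL OR a.LastLogin < DATEADD(MONTH, -3, GETDATE()));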

  • I'm under the impression that by using the default parameter, fn_trace_gettable reads all the automatically created rollover files.

    http://msdn.microsoft.com/en-us/library/aa258856(v=sql.80).aspx


    Users may specify the default value "default" to tell SQL Server to read all rollover files until the end of the trace.

    So if you use the example I made, in theory you should get all the trace data from all the rollover files as well as the current one.

    Lowell


    --help us help you! If you post a question, make sure you include a CREATE TABLE... statement and INSERT INTO... statement into that table to give the volunteers here representative data. with your description of the problem, we can provide a tested, verifiable solution to your question! asking the question the right way gets you a tested answer the fastest way possible!

  • Thanks again! And I think you are quite right, because even when reading from the files manually, it automatically rolls over when it reaches the end of a file.

  • In case you decide to go the Event Notification route, here's a generic sample of how to implement it. All this does is maintain a table of logins and the last time they were used. Though not done in this example, you can also configure all your servers to log to a central repository.

    USE [master]
    GO

    IF EXISTS (SELECT name FROM sys.databases WHERE name = N'Audit')
        DROP DATABASE [Audit]
    GO

    USE [master]
    GO

    CREATE DATABASE [Audit]
    GO

    ALTER DATABASE [Audit]
    SET ENABLE_BROKER;
    GO

    USE [Audit];
    GO

    CREATE TABLE AuditLoginsLog (
        LoginName varchar(128),
        LastLogin datetime)
    GO

    --CREATE SERVICE PROCEDURE
    CREATE PROCEDURE [dbo].[auditLogins_usp]
    AS
    BEGIN
        SET NOCOUNT ON;
        SET ARITHABORT ON;

        DECLARE @message XML,
                @messageName NVARCHAR(256),
                @dialogue UNIQUEIDENTIFIER ;

        BEGIN TRY
            --Continuous loop
            WHILE (1 = 1)
            BEGIN
                BEGIN TRANSACTION;

                --Retrieve the next message from the queue
                SET @dialogue = NULL;
                WAITFOR (
                    GET CONVERSATION GROUP @dialogue FROM dbo.AuditLoginsQueue
                ), TIMEOUT 2000;

                IF @dialogue IS NULL
                BEGIN
                    ROLLBACK;
                    BREAK;
                END

                ;RECEIVE TOP(1)
                    @messageName = message_type_name,
                    @message = message_body,
                    @dialogue = conversation_handle
                FROM dbo.AuditLoginsQueue
                WHERE conversation_group_id = @dialogue;

                IF EXISTS (SELECT LoginName FROM AuditLoginsLog WHERE LoginName = @message.value('(/EVENT_INSTANCE/LoginName)[1]', 'VARCHAR(128)'))
                    UPDATE AuditLoginsLog SET LastLogin = @message.value('(/EVENT_INSTANCE/PostTime)[1]', 'DATETIME')
                    WHERE LoginName = @message.value('(/EVENT_INSTANCE/LoginName)[1]', 'VARCHAR(128)')
                ELSE
                    INSERT INTO AuditLoginsLog VALUES (
                        @message.value('(/EVENT_INSTANCE/LoginName)[1]', 'VARCHAR(128)'),
                        @message.value('(/EVENT_INSTANCE/PostTime)[1]', 'DATETIME') )

                COMMIT;
            END
        END TRY
        BEGIN CATCH
            DECLARE @errorNumber INT, @errorMessage NVARCHAR(MAX), @errorState INT, @errorSeverity INT, @errorLine INT, @errorProcedure NVARCHAR(128)

            SET @errorNumber = error_number();
            SET @errorMessage = error_message();
            SET @errorState = error_state();
            SET @errorSeverity = error_severity();
            SET @errorLine = error_line();
            SET @errorProcedure = error_procedure();

            IF NOT (xact_state() = 0)
                ROLLBACK;

            RAISERROR('%s:%d %s (%d)', @errorSeverity, @errorState, @errorProcedure, @errorLine, @errorMessage, @errorNumber) WITH LOG;
        END CATCH
    END
    GO

    --CREATE QUEUE
    CREATE QUEUE AuditLoginsQueue
        WITH STATUS = ON
        ,ACTIVATION (
            PROCEDURE_NAME = dbo.auditLogins_usp ,
            MAX_QUEUE_READERS = 2, EXECUTE AS SELF)
    GO

    --CREATE SERVICE
    CREATE SERVICE AuditLoginsService
        ON QUEUE [AuditLoginsQueue]
        ([http://schemas.microsoft.com/SQL/Notifications/PostEventNotification])
    GO

    --CREATE ROUTE
    CREATE ROUTE AuditLoginsRoute
        WITH SERVICE_NAME = 'AuditLoginsService',
        ADDRESS = 'Local'
    GO

    --CREATE EVENT
    CREATE EVENT NOTIFICATION AuditLogins_event
    ON SERVER
    WITH FAN_IN
    FOR AUDIT_LOGIN
    TO SERVICE 'AuditLoginsService', 'current database'
    GO



    Colleen M. Morrow
    Cleveland DBA

  • Leave off the file number, and use "default" for the number of files to read, and it will start with the first existing file and read through to the end.

    That can be a problem if you don't roll files over, or if you have huge files, in which case reading one at a time might be better. SSIS can run a Foreach Loop over the files in a directory and feed each file name into a variable used in a dynamic query. That might be better if you have a lot of big files and need to break the job down a bit for hardware-resource reasons.

    If the files are created with a standard name on every server, it should be possible to query them in such a way that you can dump them easily into a warehouse on a central server. Add @@servername as one of the columns in the query, to make sure you can tell what server the data came from.

    If you're going to harden the data into a table, I suggest setting a relatively low rollover threshold for the trace and importing to the table frequently. That way the table will grow, but the files won't. Otherwise both will grow, and you'll have to allocate storage to files you won't really be querying directly anyway, since you'll use the table for that.

    You've probably already thought of this, but I suggest importing fast into a staging table and then loading from there into a more permanent table. You can dedupe in that process, instead of adding to the trace-query overhead with dynamic WHERE clauses. The final table can have whatever indexes, constraints, and so on you find you need for whatever you're using the trace data for, while the staging table can be optimized for fast bulk loading.
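
    A rough sketch of that kind of load step (the table names and trace path are just placeholders):

    -- 1) Fast bulk load into a staging table, tagging the source server
    INSERT INTO dbo.LoginTrace_Staging (ServerName, LoginName, LoginTime)
    SELECT @@SERVERNAME, LoginName, StartTime
    FROM fn_trace_gettable(N'X:\Traces\LoginAudit.trc', default)
    WHERE EventClass = 14;   -- Audit Login

    -- 2) Harden into the permanent table, keeping one row per server/login
    MERGE dbo.LoginAudit AS tgt
    USING (SELECT ServerName, LoginName, MAX(LoginTime) AS LastLogin
           FROM dbo.LoginTrace_Staging
           GROUP BY ServerName, LoginName) AS src
        ON tgt.ServerName = src.ServerName AND tgt.LoginName = src.LoginName
    WHEN MATCHED THEN
        UPDATE SET tgt.LastLogin = src.LastLogin
    WHEN NOT MATCHED THEN
        INSERT (ServerName, LoginName, LastLogin)
        VALUES (src.ServerName, src.LoginName, src.LastLogin);

    TRUNCATE TABLE dbo.LoginTrace_Staging;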

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • Thanks again for the very useful replies. I am currently implementing the Event Notifications option on my management server. The reason I chose this instead of traces is that I do not need to create directories and organize access on the file systems of all servers. (Our servers are unfortunately not standardized, and I would therefore have to do this manually on many of the 50 servers!)

    And thank you so much for the scripts, Colleen. They saved me a lot of googling and BOL-ing! I do have one more question about the CREATE QUEUE statement. From BOL I understand that 'EXECUTE AS SELF' means the stored procedure will be executed with the privileges of the user who created the queue. If that is me, does it mean this will cause a problem when I leave the company and my AD account gets deleted? If so, what is best practice, keeping in mind that some servers run under the Local System account and others under AD startup accounts?

  • You could also use EXECUTE AS OWNER or specify another login. I use a generic "auditor" login that owns the Audit database; that allows it to create all of the Service Broker objects. To create the event notification, the login will also need CREATE TRACE EVENT NOTIFICATION permission and a corresponding user in the msdb database (no special permissions needed there, it just needs to exist).
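
    Setting up such a login might look roughly like this (the "auditor" name and the password are placeholders):

    USE master
    GO
    CREATE LOGIN [auditor] WITH PASSWORD = N'<strong password here>';
    -- needed to create the server-level event notification
    GRANT CREATE TRACE EVENT NOTIFICATION TO [auditor];
    GO
    -- make the login own the Audit database so it can create the Service Broker objects
    ALTER AUTHORIZATION ON DATABASE::[Audit] TO [auditor];
    GO
    -- the corresponding user in msdb just needs to exist
    USE msdb
    GO
    CREATE USER [auditor] FOR LOGIN [auditor];
    GO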



    Colleen M. Morrow
    Cleveland DBA
