Need the final push on this ...
I have a requirement at work to control or kill ad-hoc queries that have been affecting performance on a specific client or database.
I would prefer to avoid the bad queries altogether, but I can't. The reason this is happening in the first place is that the developers had the idea (not sure if good or bad) of giving users the flexibility to write their own customized reports. Nice from a user or business perspective, but now I have the database problem on my hands. Anyway ...
I narrowed the problem down to this T-SQL query, which lets me see what's taking more than X amount of time (sharing it in case someone else needs it) ...
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

SELECT TOP 1
      [Spid]            = er.session_id
    , [Database]        = DB_NAME(sp.dbid)
    , [User]            = sp.nt_username
    , [Elapsed Seconds] = er.total_elapsed_time / 1000   -- total_elapsed_time is in milliseconds
    , [Query]           = qt.text                        -- statement text from the CROSS APPLY
FROM sys.dm_exec_requests er
INNER JOIN sys.sysprocesses sp ON er.session_id = sp.spid
CROSS APPLY sys.dm_exec_sql_text(er.sql_handle) AS qt
WHERE er.session_id > 50                    -- skip system sessions
  AND er.session_id <> @@SPID               -- skip this monitoring session itself
  AND DB_NAME(sp.dbid) = 'MyDatabaseHere'
  AND sp.nt_username = 'Joe'
ORDER BY er.total_elapsed_time DESC;
The question I have is how to handle the second part, which is killing the process. I guess I can put the above in a job, insert the results into a temp table, and kill anything running longer than 120 seconds, something like the sketch right below? However, I do not want to end up with a bunch of rollback processes if the rogue transaction is an INSERT, UPDATE, or DELETE; then I would hurt performance instead of improving it.
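This is roughly what I had in mind for the job step (a minimal sketch, reusing the 'MyDatabaseHere' / 'Joe' placeholders from above and a 120-second threshold; note that er.open_transaction_count requires SQL Server 2012+, and on older versions sp.open_tran from sysprocesses would do the same check):

DECLARE @spid INT, @cmd NVARCHAR(20);

DECLARE offenders CURSOR LOCAL FAST_FORWARD FOR
    SELECT er.session_id
    FROM sys.dm_exec_requests er
    INNER JOIN sys.dm_exec_sessions es ON er.session_id = es.session_id
    WHERE er.session_id > 50
      AND er.session_id <> @@SPID
      AND DB_NAME(er.database_id) = 'MyDatabaseHere'
      AND es.nt_user_name = 'Joe'
      AND er.total_elapsed_time / 1000 > 120   -- running longer than 120 seconds
      AND er.open_transaction_count = 0;       -- skip open transactions to avoid long rollbacks

OPEN offenders;
FETCH NEXT FROM offenders INTO @spid;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @cmd = N'KILL ' + CAST(@spid AS NVARCHAR(10));
    EXEC (@cmd);   -- KILL does not accept a variable, so build the command dynamically
    FETCH NEXT FROM offenders INTO @spid;
END;
CLOSE offenders;
DEALLOCATE offenders;

The open_transaction_count filter is the part meant to address the rollback worry: only sessions with nothing to roll back get killed, but then a rogue INSERT/UPDATE/DELETE inside a transaction would slip through, which is exactly my dilemma.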
By the way, I also thought about using the query governor cost limit, but I'm afraid that would affect the whole SQL instance. I need this for a particular SQL login and a specific client or database.
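I did notice that Resource Governor (Enterprise Edition, 2008+), unlike the instance-wide query governor cost limit, can be scoped to one login through a classifier function. A rough sketch of what I mean, assuming a hypothetical SQL login named ReportUser and that the objects are created in master:

USE master;
GO
CREATE RESOURCE POOL ReportPool WITH (MAX_CPU_PERCENT = 20);   -- throttle instead of kill
CREATE WORKLOAD GROUP ReportGroup USING ReportPool;
GO
-- The classifier runs at login time and routes each session to a workload group.
CREATE FUNCTION dbo.rg_classifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    DECLARE @grp SYSNAME = N'default';
    IF SUSER_SNAME() = N'ReportUser'   -- hypothetical reporting login
        SET @grp = N'ReportGroup';
    RETURN @grp;
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.rg_classifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;

That throttles the offending login rather than killing its sessions, so there are no rollbacks at all, but I'm not sure it's available on this instance's edition, hence the KILL approach above.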