February 24, 2009 at 2:25 pm
I'll ask my question first, then tell you why:
Regarding SQL Server 2005.
Is there a way to tell if a table (and even which field) has been written to, whether by inserting a new record or updating an existing one? I have about 20 tables I need to watch during each step of my project.
I'm fairly new to SQL Server, so you can safely assume I have little to no idea what I'm talking about.
I inherited a project (an ERP package) that uses SQL Server 2005. One of the challenges is using some of our existing hardware to interface with the ERP software. The ERP vendor isn't interested in doing this, or in providing source code so we can do it ourselves.
So, basically, I need to be able to enter data into the ERP system by writing directly to the tables, yet do it in such a fashion that the ERP system is happy. For example, entering an item into inventory would probably write to at least four tables.
My plan is to do one of the transactions (e.g., enter an item into inventory) and then look to see which tables have been accessed, written to, updated, etc.
I plan to do this on a weekend when I can isolate the system so I will be the only one accessing the SQL Server via the ERP.
So, is there any way to look at a table (or tables) and see if it has been accessed in the last few minutes?
Thanks
John
NOTE - Just about everything I've learned about SQL Server to date has come from this forum. Thanks.
February 24, 2009 at 2:28 pm
If you can add triggers to the tables, you can log the activity that way.
If you can get a log parser, you can read the SQL Server transaction log files.
If neither of those will do, you can create a trace that will show you what your commands do. Check out sp_trace_create in Books Online.
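To illustrate the trigger approach: a minimal sketch of an audit trigger that records inserts and updates into a log table. The table name `dbo.Inventory` and the `AuditLog` schema are placeholders, not the ERP's actual objects.

```sql
-- Hypothetical audit table; adapt names to your own schema.
CREATE TABLE dbo.AuditLog (
    LogID     int IDENTITY(1,1) PRIMARY KEY,
    TableName sysname     NOT NULL,
    Action    char(1)     NOT NULL,   -- 'I' = insert, 'U' = update
    LoggedAt  datetime    NOT NULL DEFAULT GETDATE()
);
GO

-- One trigger per monitored table; dbo.Inventory is a placeholder.
CREATE TRIGGER trg_Inventory_Audit
ON dbo.Inventory
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- If the deleted pseudo-table has rows, this was an UPDATE.
    INSERT INTO dbo.AuditLog (TableName, Action)
    SELECT 'dbo.Inventory',
           CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END;
END;
```

With 20 tables this means 20 triggers, which is why the trace approach below usually wins for a one-off investigation.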
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
February 24, 2009 at 2:57 pm
I'd go the other way around. Start a trace and add a filter so that only your connection (by username) is monitored.
Then go enter that item into inventory and see what SQL is being used. I don't know any faster or easier way than this (besides watching the source code execute).
Plan B would be triggers with history tables... but then you don't see the joins or the logic implemented in the queries and code.
Plan C would be a log parser. It has the same problems as triggers, but it costs a lot more.
Plan A.1 would be to simply ask the users ("what happens when you do xyz?"). They often know their apps really well if they've been using them for a while (especially if they are competent). Keep in mind that even with the best intentions and the most competent users, they can't know or see what's going on under the hood, so there are often a few missing links to figure out, but that's still easier than starting from scratch.
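A minimal sketch of that filtered trace, done server-side with the sp_trace_* procedures rather than in Profiler. The output path `C:\Traces\erp_trace` and the login `MyErpLogin` are placeholders; event 12 is SQL:BatchCompleted, and the column IDs are the standard trace columns from Books Online.

```sql
-- Server-side trace sketch for SQL Server 2005; path and login are placeholders.
DECLARE @TraceID int, @maxsize bigint;
SET @maxsize = 5;  -- max trace file size in MB

-- Create the trace definition (file gets a .trc extension automatically).
EXEC sp_trace_create @TraceID OUTPUT, 0, N'C:\Traces\erp_trace', @maxsize;

-- Capture event 12 (SQL:BatchCompleted) with a few useful columns.
EXEC sp_trace_setevent @TraceID, 12, 1,  1;  -- column 1  = TextData
EXEC sp_trace_setevent @TraceID, 12, 11, 1;  -- column 11 = LoginName
EXEC sp_trace_setevent @TraceID, 12, 14, 1;  -- column 14 = StartTime

-- Filter to my own login only (logical op 0 = AND, comparison op 6 = LIKE).
EXEC sp_trace_setfilter @TraceID, 11, 0, 6, N'MyErpLogin';

EXEC sp_trace_setstatus @TraceID, 1;  -- start the trace
-- ...perform the ERP transaction, then:
-- EXEC sp_trace_setstatus @TraceID, 0;  -- stop
-- EXEC sp_trace_setstatus @TraceID, 2;  -- close and delete the definition
```

The same filter can be set in Profiler's GUI (Edit Filter on LoginName), but a scripted trace is lighter and easy to rerun.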
February 25, 2009 at 6:00 am
Yesterday I didn't know how to spell "trace" now I. R. one.
I played with the trace functions in Profiler. The default trace is running, but it really didn't help me.
So I jumped in and created a generic trace from one of the templates (T-SQL). It kind of looked like what I wanted, but WOW, was there a lot of stuff in there. (I only ran it for about two minutes.)
So I went in deeper and created a custom trace based on what I saw when I reviewed the template results. I got everything ready and prepared all the entries on the "Misc Receipt" screen in the ERP. I started the trace, hit ENTER on the ERP screen, gave the trace a couple of seconds to settle down, then stopped it. Total trace run time: about 20 seconds.
PRESTO - it took me about thirty minutes to go through the resulting trace file and find out which tables are being modified by this ERP system for a simple "Misc Receipt" transaction. (As I suspected, there were about 16 tables; two of them I would never have guessed, but I now understand why they are being modified.)
I still have more testing to do, but I now have total confidence that we can build this external interface and "trick" the ERP system into thinking that it actually entered the data.
THANKS for the replies. As I say, every time I come up here, I learn something new.
John
February 25, 2009 at 6:09 am
Just make sure you don't put too many filters out there.
You're better off grabbing everything, importing it all into a table, and then filtering manually based on connections, applications, workstations, etc.
If you filter too much out of the trace, you may miss something important (yup, got burned on that one).
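The import step above can be done with fn_trace_gettable, which reads a trace file straight into a result set. The file path, target table name, and login value here are placeholders.

```sql
-- Load the whole trace file into a table, then filter after the fact.
-- Path and table name are placeholders.
SELECT *
INTO dbo.ErpTrace
FROM sys.fn_trace_gettable(N'C:\Traces\erp_trace.trc', DEFAULT);

-- Now slice it however you like, without losing anything up front:
SELECT StartTime, LoginName, ApplicationName, TextData
FROM dbo.ErpTrace
WHERE LoginName = N'MyErpLogin'
ORDER BY StartTime;
```

Since the raw capture lands in a table, you can re-filter by workstation, application, or SPID later instead of deciding the filters before the trace runs.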
February 25, 2009 at 7:08 am
Yep, traces rock.
- Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
Property of The Thread
"Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon
March 3, 2009 at 7:25 pm
I've had to do the same thing with some uncooperative vendors, and I agree that traces are the best way to handle this. Triggers are an option, but they can be a lot of work if you've got a lot of tables to monitor. When you're trying to isolate or reproduce database activity, nothing beats a trace.
Tim Mitchell
TimMitchell.net | @tmitch.net | Tyleris.com
ETL Best Practices