
Data Distribution - Part 2
Posted Wednesday, October 13, 2010 8:01 PM
SSC-Enthusiastic


Group: General Forum Members
Last Login: Wednesday, August 13, 2014 7:25 AM
Points: 138, Visits: 272
Comments posted to this topic are about the item Data Distribution - Part 2


Post #1004089
Posted Thursday, October 14, 2010 2:18 AM
SSC Journeyman


Group: General Forum Members
Last Login: Sunday, August 31, 2014 10:04 AM
Points: 89, Visits: 294
Hello,

The article is interesting, but I believe we have to avoid triggers at all costs for performance and maintainability reasons. I usually put a timestamp or datetime column such as ModificationDate on each table I want to extract data from, and I avoid deletions at all costs as well.
No matter what, even if you have triggers, you will not get a continuous flow; it is not possible.
If your job is already running, it will not be triggered again even though you send an alert to trigger it.

Overall you are left with an asynchronous mechanism (replication is somewhat asynchronous as well, since it writes to a log that the distribution agent reads before applying the changes).
In my previous work, I would rather set up a datetime column and extract the data, storing the extract date on the local server to filter what has changed. This way you can manage the frequency, and you do not touch the OLTP performance of your most important server, the one up front.

If you want to improve on it, you can add a tinyint column recording the operation (insert, update or delete). That way you know what to do when you extract.
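A minimal sketch of the datetime-plus-operation approach described above; the table, column, and control-table names here are illustrative, not from the article:

```sql
-- Hypothetical source table gets two change-tracking columns
ALTER TABLE dbo.Customer ADD
    ModificationDate datetime NOT NULL
        CONSTRAINT DF_Customer_ModificationDate DEFAULT (GETDATE()),
    OperationType tinyint NOT NULL
        CONSTRAINT DF_Customer_OperationType DEFAULT (1);  -- 1=insert, 2=update, 3=delete (soft)

-- On the extracting server, remember when we last pulled...
DECLARE @LastExtract datetime;
SELECT @LastExtract = LastExtractDate
FROM   dbo.ExtractControl          -- hypothetical control table
WHERE  TableName = 'Customer';

-- ...and pull only rows changed since then
SELECT CustomerID, Name, OperationType, ModificationDate
FROM   dbo.Customer
WHERE  ModificationDate > @LastExtract;

-- Record the new high-water mark for the next run
UPDATE dbo.ExtractControl
SET    LastExtractDate = GETDATE()
WHERE  TableName = 'Customer';
```

Because the extract runs on the subscriber's schedule, the OLTP server only pays the cost of maintaining the two extra columns.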



Clement
Post #1004170
Posted Thursday, October 14, 2010 6:53 AM
SSC Rookie


Group: General Forum Members
Last Login: Monday, January 30, 2012 5:12 AM
Points: 40, Visits: 143
clementhuge (10/14/2010)
Hello,

The article is interesting, but I believe we have to avoid triggers at all costs for performance and maintainability reasons. I usually put a timestamp or datetime column such as ModificationDate on each table I want to extract data from, and I avoid deletions at all costs as well.
No matter what, even if you have triggers, you will not get a continuous flow; it is not possible.
If your job is already running, it will not be triggered again even though you send an alert to trigger it.

Overall you are left with an asynchronous mechanism (replication is somewhat asynchronous as well, since it writes to a log that the distribution agent reads before applying the changes).
In my previous work, I would rather set up a datetime column and extract the data, storing the extract date on the local server to filter what has changed. This way you can manage the frequency, and you do not touch the OLTP performance of your most important server, the one up front.

If you want to improve on it, you can add a tinyint column recording the operation (insert, update or delete). That way you know what to do when you extract.



I think it is a misconception that "we have to avoid triggers at all costs for performance and maintainability reasons." We should avoid lengthy triggers and avoid placing business logic in a trigger. This article is talking about very simple triggers that help the process and do not pose performance issues. We have many databases and tables with millions of rows, and this mechanism is very robust. For three years it has been synchronizing data from one server to 10-12 servers, and those servers run 10,000 clients at peak time. We have not experienced any performance hits or any other issues in those three years.
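For context, the kind of "very simple" trigger being defended here only records which keys changed, leaving the actual data movement to a separate job. A sketch, with made-up table and column names:

```sql
-- Narrow change-queue table: just the key, a flag, and a timestamp
CREATE TABLE dbo.Customer_Changes (
    CustomerID    int      NOT NULL,
    Processed_FLG char(1)  NOT NULL DEFAULT 'N',
    ChangeDate    datetime NOT NULL DEFAULT GETDATE()
);
GO
CREATE TRIGGER trg_Customer_Track ON dbo.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Record only the affected keys; the sync job does the heavy work later
    INSERT dbo.Customer_Changes (CustomerID)
    SELECT CustomerID FROM inserted
    UNION                       -- UNION also covers deletes and dedups keys
    SELECT CustomerID FROM deleted;
END;
```

A trigger this thin adds one narrow insert per DML statement, which is why its cost stays small even on busy tables.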
Post #1004322
Posted Thursday, October 14, 2010 9:08 AM
Ten Centuries


Group: General Forum Members
Last Login: Thursday, August 21, 2014 12:08 PM
Points: 1,414, Visits: 4,540
I think that if you're adding triggers to try to keep track of every single change, you have problems, security problems. It's a huge performance hit to have triggers on every table, and for those of us with databases on SANs it's a lot of money for the extra storage.

If you have people making changes they shouldn't, then you need to go to management to make security changes. We used to be all SQL logins, and people would always share passwords, so there was no control. We've mostly changed to Windows authentication now, and it's a lot easier to control things.


https://plus.google.com/100125998302068852885/posts?hl=en
http://twitter.com/alent1234
x-box live gamertag: i am null
http://live.xbox.com/en-US/MyXbox/Profile
Post #1004467
Posted Thursday, October 14, 2010 9:25 AM
SSC Rookie


Group: General Forum Members
Last Login: Monday, January 30, 2012 5:12 AM
Points: 40, Visits: 143
alen teplitsky (10/14/2010)
I think that if you're adding triggers to try to keep track of every single change, you have problems, security problems. It's a huge performance hit to have triggers on every table, and for those of us with databases on SANs it's a lot of money for the extra storage.

If you have people making changes they shouldn't, then you need to go to management to make security changes. We used to be all SQL logins, and people would always share passwords, so there was no control. We've mostly changed to Windows authentication now, and it's a lot easier to control things.


It all depends on the level of the changes and the data you need to have in real time. For example, if you only need demographic data such as address, phone, and email in real time, then such a process is reasonable. If you need many tables, then this approach may not work. If the rate of changes is low, it may work even for big databases; if the rate of changes is extensive, it may not work at all. This article shows one of the methodologies, and if it is used the wrong way it will cause harm, like anything else. With improperly defined indexes, performance can be killed, but no one complains that indexes are a bad idea or will not work. As always, you need to do an impact analysis and understand what to use and how.
Post #1004492
Posted Thursday, October 14, 2010 9:46 AM
SSC Veteran


Group: General Forum Members
Last Login: Today @ 3:02 PM
Points: 261, Visits: 908
...And if it is used the wrong way it will cause harm, like anything else. With improperly defined indexes, performance can be killed, but no one complains that indexes are a bad idea or will not work. As always, you need to do an impact analysis and understand what to use and how.


Seconded.

Self-referential best practice suggestion: Unconditional application of a rule is always wrong.
Post #1004518
Posted Thursday, October 14, 2010 9:52 AM
Ten Centuries


Group: General Forum Members
Last Login: Thursday, August 21, 2014 12:08 PM
Points: 1,414, Visits: 4,540
I have a DB that I designed to hold data from security logs from servers, and security was a big concern for me. It's not the most glamorous solution, but what I did was create staging tables to temporarily hold the data coming in via ETL. There is a SQL login that has write access to these tables, with the password in plain text on 30-some servers if not more. Every hour or so a process moves the data from the staging tables to the real tables that the reports run from. These real tables are locked down to the point where only a few logins even have read access. Most of the data is presented via email.

It doesn't work for everyone, but for ETL, instead of keeping track of the data changes, what about staging the data first and then inserting it into tables with more security on them, to negate the need for auditing?
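The staging pattern described above can be sketched roughly like this; the table and login names are made up for illustration:

```sql
-- The widely-shared ETL login can only write to the staging table;
-- it has no access at all to the locked-down reporting tables
GRANT INSERT ON dbo.SecurityLog_Staging TO etl_writer;

-- Hourly agent job: move staged rows into the real table, then clear staging
BEGIN TRANSACTION;

INSERT dbo.SecurityLog (ServerName, EventTime, EventData)
SELECT ServerName, EventTime, EventData
FROM   dbo.SecurityLog_Staging;

DELETE dbo.SecurityLog_Staging;

COMMIT TRANSACTION;
```

The security boundary sits between the two tables: even if the plain-text password leaks, the exposed login can only append to staging, never read or alter the reporting data.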


https://plus.google.com/100125998302068852885/posts?hl=en
http://twitter.com/alent1234
x-box live gamertag: i am null
http://live.xbox.com/en-US/MyXbox/Profile
Post #1004528
Posted Thursday, October 14, 2010 9:59 AM
SSC Rookie


Group: General Forum Members
Last Login: Monday, January 30, 2012 5:12 AM
Points: 40, Visits: 143
There is always a load zone for the data. The article does not suggest that the trigger enters data directly from server to server. It says that there is a job on each server that is started by the real-time notification. How you organize the job and the level of protection is defined by the implementation; it can be a very simple or a very complex validation layer before the data is synchronized. But remember, it is synchronized in near real time (1-3 seconds).
Post #1004540
Posted Thursday, October 14, 2010 11:52 AM
SSC Journeyman


Group: General Forum Members
Last Login: Sunday, August 31, 2014 10:04 AM
Points: 89, Visits: 294
I do understand that everything, handled properly, will not hurt performance too much, but I still do not think the use of triggers is good practice. I prefer ETL/asynchronous processes that handle the extract via a datetime column on the table you want to extract.

I disagree with architecting around triggers, that is all.


Clement
Post #1004685
Posted Friday, October 15, 2010 10:04 AM
Mr or Mrs. 500


Group: General Forum Members
Last Login: Yesterday @ 3:24 PM
Points: 536, Visits: 762
Thank you for the article, Leo! I found it interesting. A couple of questions, please. Could you clarify what is behind this line:
raiserror (51080, 10, 1); -- Alert that will start job
I am just curious about possible implementations. I have been starting jobs from T-SQL code using the sp_start_job procedure, but is there a way to actually trigger a job from the alert?

Secondly, I would like to better understand the mechanism for transferring changed records. Here is my understanding: we have a control table on the destination server and, based on this table, we know the IDs of the records we need to bring in (where Processed_FLG is 'N'). We then wrap these IDs into an XML document and pass it to the stored procedures on the source server. When these procedures run, they grab the necessary records and somehow push them to the destination (via a linked server, or how?). Please let me know if this is correct.
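On the first question, SQL Server Agent alerts can be attached to a user-defined message number and configured to start a job, which is presumably what the raiserror line relies on. A sketch of that wiring; the message number matches the article, but the alert and job names are hypothetical, and the message must be written to the event log for the alert to fire:

```sql
-- Register the user-defined message once; @with_log makes every raise
-- go to the Windows application log, which is what agent alerts watch
EXEC sp_addmessage
     @msgnum   = 51080,
     @severity = 10,
     @msgtext  = N'Data distribution: changes pending',
     @with_log = 'true',
     @replace  = 'replace';

-- Create an agent alert on that message and point it at the sync job
EXEC msdb.dbo.sp_add_alert
     @name       = N'DataDistributionAlert',   -- hypothetical alert name
     @message_id = 51080,
     @job_name   = N'SyncChangedRows';         -- hypothetical job name
```

With this in place, "raiserror (51080, 10, 1);" in the trigger raises the logged message, the alert catches it, and the agent starts the job, so no sp_start_job call is needed in the trigger itself.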

Thank you!



Post #1005329