February 19, 2018 at 12:12 pm
Do you think transactional replication is a good choice for 500 tables? Any performance impact of one vs. the other?
Thanks!
February 19, 2018 at 12:15 pm
That is right. We are using the audit columns to help identify inserted/modified data. The data load is planned to run daily using SSIS. Thanks!
February 19, 2018 at 12:18 pm
Our warehouse is implemented on Oracle. Not sure if that makes a difference. Thanks!
February 19, 2018 at 12:23 pm
Think this through a bit.
You are creating a process to save changed records, then another process to read these records, and still another one to do a transform and load into a data warehouse.
Would it make more sense to add a few new columns to each table, created and modified times, so that your ETL simply looks for new or changed records and moves them into the data warehouse? (A sketch of that kind of extract is below.)
Why the insistence on adding a completely new sub-system to get changes? Are you worried about performance?
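To make that concrete, here is a rough, untested sketch. The table, column, and watermark names are all made up:

DECLARE @LastLoad datetime2(3);

-- Read the high-water mark saved by the previous successful load.
SELECT @LastLoad = LastLoadTime
FROM   etl.LoadWatermark
WHERE  TableName = N'dbo.Orders';

-- Pull only rows inserted or changed since then.
SELECT o.*
FROM   dbo.Orders AS o
WHERE  o.CreatedDate  > @LastLoad
   OR  o.ModifiedDate > @LastLoad;

The SSIS package would then advance the watermark only after a clean load, so a failed run just re-reads the same window.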
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
February 19, 2018 at 12:24 pm
SQL!$@w$0ME - Monday, February 19, 2018 12:18 PM
Our warehouse is implemented on Oracle. Not sure if that makes a difference. Thanks!
You can replicate to Oracle, if that's the route you choose.
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
February 19, 2018 at 12:32 pm
Hi Mike, we are planning to implement what you have mentioned: adding a few audit columns for insert and update, and using those columns for SSIS to extract/load the changed data. The concern is with adding new triggers versus modifying existing triggers. Which one would be the easier implementation? Thanks!
February 19, 2018 at 12:56 pm
SQL!$@w$0ME - Monday, February 19, 2018 12:12 PM
Do you think transactional replication is a good choice for 500 tables? Any performance impact of one vs. the other? Thanks!
Heh... "It Depends". Will the Oracle box always be online? And, no... not trying to impart any irony there. I'd ask the same question about another SQL Server. Log files can explode on a busy system if the target goes down.
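If you do go the replication route, keep an eye on what's holding up log truncation. A quick check (the database name is just a placeholder):

-- REPLICATION here means the log reader hasn't caught up, so the
-- log can't be truncated... that's how those "explosions" start.
SELECT name, log_reuse_wait_desc
FROM   sys.databases
WHERE  name = N'YourSourceDb';  -- made-up name, use your own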
--Jeff Moden
Change is inevitable... Change for the better is not.
February 19, 2018 at 1:32 pm
SQL!$@w$0ME - Monday, February 19, 2018 12:32 PM
Hi Mike, we are planning to implement what you have mentioned: adding a few audit columns for insert and update, and using those columns for SSIS to extract/load the changed data. The concern is with adding new triggers versus modifying existing triggers. Which one would be the easier implementation? Thanks!
It depends. If you are adding a created time field, that can, and should, be populated with a default.
The modified time can be done with a trigger. As for separate or the same, who wrote and controls the existing triggers? If it's the devs, then I would probably keep them separate. Let their code do its thing.
Keep something in mind: an update trigger will fire, and you may end up stamping the modified field, even when the data has in fact not changed. For example, the application may blindly issue an update even though the user has not actually changed anything. The sketch below guards against that.
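Roughly like this. Untested sketch, and dbo.Orders and OrderID are stand-ins for your own table and key:

ALTER TABLE dbo.Orders ADD
    CreatedDate  datetime2(3) NOT NULL
        CONSTRAINT DF_Orders_CreatedDate  DEFAULT SYSDATETIME(),
    ModifiedDate datetime2(3) NOT NULL
        CONSTRAINT DF_Orders_ModifiedDate DEFAULT SYSDATETIME();
GO

CREATE TRIGGER dbo.trg_Orders_Modified
ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Set-based, so it handles multi-row updates. The EXISTS/EXCEPT
    -- test skips rows where nothing actually changed, so blind updates
    -- from the application don't produce false deltas for the ETL.
    -- Assumes RECURSIVE_TRIGGERS is OFF (the default), or this would re-fire.
    UPDATE o
    SET    ModifiedDate = SYSDATETIME()
    FROM   dbo.Orders AS o
    JOIN   inserted    AS i ON i.OrderID = o.OrderID
    JOIN   deleted     AS d ON d.OrderID = i.OrderID
    WHERE  EXISTS (SELECT i.* EXCEPT SELECT d.*);
END;

If the devs own the existing triggers, this one can live alongside theirs without touching their code.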
As for the impact of transactional replication on 500 tables, I'm with Jeff. It depends.
I'm not with Jeff on the "well designed triggers". Well designed triggers exist in one place in the world. On the systems he supports.
Why do you need all 500 tables replicated? I have to think that the data warehouse does not contain every piece of data from every table. There has to be a smaller subset that would be more manageable.
Michael L John
If you assassinate a DBA, would you pull a trigger?
To properly post on a forum:
http://www.sqlservercentral.com/articles/61537/
February 19, 2018 at 1:39 pm
Michael L John - Monday, February 19, 2018 1:32 PM
I'm not with Jeff on the "well designed triggers". Well designed triggers exist in one place in the world. On the systems he supports.
If the last part is true, then the first part is not. I don't use triggers often, but when I do, they are well-designed, able to efficiently handle multi-row changes. And I'm confident there are others who also create efficient triggers, just as I'm confident that all too many don't.
February 19, 2018 at 1:50 pm
RonKyle - Monday, February 19, 2018 1:39 PM
I'm not with Jeff on the "well designed triggers". Well designed triggers exist in one place in the world. On the systems he supports.
If the last part is true, then the first part is not. I don't use triggers often, but when I do, they are well-designed, able to efficiently handle multi-row changes. And I'm confident there are others who also create efficient triggers, just as I'm confident that all too many don't.
Michael John just happens to not like triggers at all. Heh... look at what he has in his signature. When I'm in Pittsburgh, I like to say the word "trigger" while he's trying to swallow food.
If you want to see him glow in the dark, mention "cascading triggers" and he goes all alien on ya. :alien:
--Jeff Moden
Change is inevitable... Change for the better is not.
February 19, 2018 at 4:19 pm
The Oracle box has 99.99% availability. Thanks!
February 19, 2018 at 5:29 pm
SQL!$@w$0ME - Monday, February 19, 2018 4:19 PM
The Oracle box has 99.99% availability. Thanks!
K. Are you looking to copy all changes to a table or are you talking about just some columns from each table?
And I'm still having a really hard time understanding why someone would put a DW and the source data on two different RDBMSs and then also have the requirement to copy at least some columns of 500+ tables to a disparate system. Seems like a mighty big sword to fall on.
--Jeff Moden
Change is inevitable... Change for the better is not.
February 19, 2018 at 6:20 pm
Jeff Moden - Monday, February 19, 2018 5:29 PM
And I'm still having a really hard time understanding why someone would put a DW and the source data on two different RDBMSs and then also have the requirement to copy at least some columns of 500+ tables to a disparate system. Seems like a mighty big sword to fall on.
Easy enough to understand if it were the other way around. I worked in an Oracle shop that used MS's SSAS because of cost and because it was a better product. Harder to understand in this direction.
February 19, 2018 at 7:27 pm
RonKyle - Monday, February 19, 2018 6:20 PM
And I'm still having a really hard time understanding why someone would put a DW and the source data on two different RDBMSs and then also have the requirement to copy at least some columns of 500+ tables to a disparate system. Seems like a mighty big sword to fall on.
Easy enough to understand if it were the other way around. I worked in an Oracle shop that used MS's SSAS because of cost and because it was a better product. Harder to understand in this direction.
Even then, you have two different paradigms that people need to be good at, not to mention moving a shedload of data on a regular basis, or smaller but continuous amounts.
--Jeff Moden
Change is inevitable... Change for the better is not.
February 19, 2018 at 7:49 pm
Jeff Moden - Monday, February 19, 2018 7:27 PM
Even then, you have two different paradigms that people need to be good at, not to mention moving a shedload of data on a regular basis, or smaller but continuous amounts.
The integration could be challenging at times, as no one person understood both sides. But better that than having to work with Oracle's data warehouse tools.