DSV Named Query vs SSIS Data Flow

  • I have a group of tables that need to form my Fact table for my cube. I can write up a Named Query for the DSV or I can create an SSIS package and have it send all the data to a brand new table.

    Are there any opinions or hard-won lessons that can help me figure out which would be better for my cube processing?
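
    For context, the two options boil down to this: a DSV named query is just a SELECT statement defined inside the data source view, while the SSIS route materializes the same result into a physical table that the cube reads directly. A minimal sketch of both, using hypothetical table and column names:

    ```sql
    -- Option 1: the SELECT that would back a DSV named query
    -- (table and column names here are hypothetical)
    SELECT  s.SaleID,
            s.SaleDate,
            s.ProductID,
            s.Quantity,
            s.Quantity * p.UnitPrice AS SaleAmount
    FROM    dbo.Sales   AS s
    JOIN    dbo.Product AS p
      ON    p.ProductID = s.ProductID;

    -- Option 2: the staging table an SSIS data flow would load instead,
    -- so the cube processes against a plain physical table
    CREATE TABLE dbo.FactSales
    (
        SaleID     INT   NOT NULL,
        SaleDate   DATE  NOT NULL,
        ProductID  INT   NOT NULL,
        Quantity   INT   NOT NULL,
        SaleAmount MONEY NOT NULL
    );
    ```

    With option 1 the join runs every time the cube is processed; with option 2 it runs once, when the SSIS package loads the table.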

    Brandie Tarvin, MCITP Database Administrator
    LiveJournal Blog: http://brandietarvin.livejournal.com/
    On LinkedIn!, Google+, and Twitter.
    Freelance Writer: Shadowrun
    Latchkeys: Nevermore, Latchkeys: The Bootleg War, and Latchkeys: Roscoes in the Night are now available on Nook and Kindle.

  • My two cents,

    The SSAS approach will be elegant and less painful, provided the staged data is already ready for OLAP analysis. Additional columns can be added using named calculations or a named query as needed.

    Raunak J

  • So.... You're advocating that I use SSIS to create the table first?

    Brandie Tarvin, MCITP Database Administrator

  • Perhaps, yes...for ease of design

    Raunak J

  • I would use the SSIS option, but that's a personal choice. I try to avoid putting many calculations or custom queries in the DSV, because they seem to be 'hidden' in your cube, whereas an SSIS package is much more visible to other developers.

    Need an answer? No, you need a question
    My blog at https://sqlkover.com.
    MCSE Business Intelligence - Microsoft Data Platform MVP

  • I too have always favored staging the data into a new table.

  • So it's not a performance issue so much as it is a transparency issue?

    Thanks for the advice, everyone. I appreciate it.

    Brandie Tarvin, MCITP Database Administrator

  • I think there is one unmentioned thing we are all assuming. My comments, at least, were based on requirements that find a delay of roughly one day between data updates acceptable. If your requirements are closer to real time, you may find that you need to query the source directly rather than staging tables.

  • Well, the source database is already a day behind production. So I need to find out how often the cube will be reprocessed. Or, more accurately, how often the reports based on it will be run.

    Brandie Tarvin, MCITP Database Administrator
