Data warehousing - Disk Storage - Thoughts on architecture
SQL_Hound
Hello,

I am a SQL developer and a newbie in the data warehousing world. I am on my first data warehousing project and have some concerns over best practices. I would like to hear the opinions of more experienced members.
In particular, our architecture team has designed a data warehouse solution based on Data Vault, but with a lot of customizations. The sources are mainly small (5-50 GB) SQL Server databases.

The warehouse is comprised of multiple staging layers, the data vault, some data marts, plus a custom metadata framework.

The issue I see is that when loading data, the data volume tends to multiply because the data is repeated in many layers. For example, an initial load of a 10 GB dataset results in a data file of 100+ GB, plus a quite large log file. And that's just an initial load, no history included. Note that this covers just the multiple staging layers and the data vault itself; no data marts included.

In my view this design is quite inefficient: it works for the small datasets we use now, but it wouldn't scale if we later included larger sources. Plus, loads tend to be slow. Our architect team thinks this pattern is fine because storage is cheap and it is a very common pattern in the data warehousing (DW) industry.

As I mentioned, though, I am quite fresh in the DW world and I cannot judge whether this is normal or not, although my intuition says that the design is inefficient.
Any expert opinions welcome!

Cheers,
xsevensinzx
SQL_Hound - Wednesday, February 7, 2018 9:06 AM

The questions you should be asking yourself start with why. Why is this a problem from your standpoint? When someone simply responds with "disk is cheap," why is that still a problem for you? I'm not trying to be smart here, just trying to get you to articulate why it's really an issue to have large data files or slow load times or whatever.

In general, I personally find minimal is best. I don't over-index my data warehouse, nor do I use the full recovery model, because I am bulk loading data once a day. I also take billion-row files and break them up into smaller files, then load those fragments in parallel, and even distribute those loads across many machines to make loading extremely fast. The same goes for transformation: most of it happens on disk, where it's super fast, before SQL Server even touches it. This is something I feel most strive to do in order to scale, because outside of disk, other resources like compute and memory are expensive.
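To make that a bit more concrete, here is a minimal sketch of a minimally logged bulk load into a staging table. The database, table, and file names are hypothetical; the assumption is simple recovery plus TABLOCK so the engine can minimally log the insert, with the other file fragments loaded the same way in parallel sessions:

ALTER DATABASE StagingDW SET RECOVERY SIMPLE;   -- hypothetical DB; avoids full-recovery log growth during loads

BULK INSERT stg.SalesFact_Part01                -- one file fragment; the others load in parallel sessions
FROM 'D:\loads\sales_part01.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    FIRSTROW        = 2,        -- skip header row
    TABLOCK,                    -- enables minimal logging under simple/bulk-logged recovery
    BATCHSIZE       = 500000
);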

But others have different approaches. Different models and different methodologies for different reasons. Some good, some silly.

ZZartin
Well, storage is cheap, and yes, if you're trying to keep historical dimension data, it's entirely expected that you might end up with a much larger data warehouse than your sources. Now, when you say multiple staging layers, what do you mean? I would expect the staging areas to stay a relatively similar size to the source data, since those are generally truncated as part of the load. But you likely shouldn't be pushing all the data through multiple layers and ultimately down to the individual data marts; that would definitely lead to bloat and a slow load, as well as not scaling well.
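As a rough illustration of the truncate-and-reload staging pattern described above (object names are made up for the example):

TRUNCATE TABLE stg.Customer;                    -- staging holds only the current extract

INSERT INTO stg.Customer (CustomerID, Name, City, ModifiedDate)
SELECT CustomerID, Name, City, ModifiedDate
FROM SourceDB.dbo.Customer;                     -- or an incremental slice filtered on ModifiedDate

-- History lives in the vault / marts downstream, so staging stays roughly
-- the size of the source extract rather than accumulating copies.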
Chris Harshman
Do you compress the tables in your staging area? I've seen some significant space savings using PAGE-level compression on the tables in our data warehouse staging area, ranging anywhere from one half down to only one tenth of the space of the original table from the OLTP system. We also only update our staging data once a day, so the savings from all the reads that follow outweigh any initial write penalty.
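For anyone wanting to try this, here is a hedged sketch (object names are hypothetical) of first estimating the savings and then applying PAGE compression to a staging table:

EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = 'stg',
     @object_name      = 'SalesFact',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = 'PAGE';

ALTER TABLE stg.SalesFact
REBUILD WITH (DATA_COMPRESSION = PAGE);   -- compresses the heap or clustered index; nonclustered indexes need their own rebuilds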