I am a SQL developer and new to the data warehousing world. I am on my first data warehousing project and have some concerns over best practices. I would like to hear the opinions of more experienced members.
In particular, our architecture team has designed a data warehouse solution based on Data Vault, but with a lot of customizations. The sources are mainly small (5-50 GB) SQL Server databases.
The warehouse comprises multiple staging layers, the data vault, some data marts, plus a custom metadata framework.
The issue I see is that when loading data, the data volume tends to multiply, because the data is repeated in many layers. For example, an initial load of a 10 GB dataset results in a data file of 100+ GB plus a quite large log file. And that's just an initial load, no history included. Note that this covers only the multiple staging layers and the data vault itself — no data marts included.
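To make the multiplication concrete, here is a simplified, hypothetical sketch of what one load looks like in our setup (the table and column names are invented for illustration; our real framework generates this kind of code from metadata):

```sql
-- Hypothetical load pattern: the same source rows are copied into every layer.

-- Layer 1: transient staging (full copy of the source extract)
INSERT INTO stg.Customer_Raw (CustomerId, Name, Email, LoadDate)
SELECT CustomerId, Name, Email, GETDATE()
FROM   src.Customer;

-- Layer 2: persistent staging (another full copy, plus audit columns)
INSERT INTO psa.Customer (CustomerId, Name, Email, LoadDate, RecordSource)
SELECT CustomerId, Name, Email, LoadDate, 'CRM'
FROM   stg.Customer_Raw;

-- Layer 3: data vault hub + satellite (keys and attributes copied yet again)
INSERT INTO dv.Hub_Customer (CustomerHashKey, CustomerId, LoadDate, RecordSource)
SELECT HASHBYTES('SHA2_256', CAST(CustomerId AS varchar(20))),
       CustomerId, LoadDate, RecordSource
FROM   psa.Customer;

INSERT INTO dv.Sat_Customer (CustomerHashKey, Name, Email, LoadDate, RecordSource)
SELECT HASHBYTES('SHA2_256', CAST(CustomerId AS varchar(20))),
       Name, Email, LoadDate, RecordSource
FROM   psa.Customer;
```

Every one of those `INSERT ... SELECT` statements rewrites essentially the same payload, so once indexes and the transaction log are counted, a 10 GB extract can easily end up many times its original size.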
In my view this design is quite inefficient: it works for the small datasets we use now, but it wouldn't scale if we later included larger sources. Loads also tend to be slow. Our architect team thinks this pattern is fine because storage is cheap and it is a very common pattern in the data warehousing (DW) industry.
As I mentioned, though, I am quite fresh in the DW world and cannot judge whether this is normal or not, although my intuition says the design is inefficient.
Any expert opinions welcome!