I'd be interested to hear some of the best ways to approach this situation.
I've recently started as a DBA at a company that has lots of developers, but has never had a DBA or database developer. They have recently started developing products which receive huge amounts of data very quickly (300 transactions a second), which needs to be stored and remain queryable for 5 years. They had already created the database before I came along, and are now looking to test it and see if improvements can be made.
The insert process for the main bulk of the transactions is implemented as follows. In an effort to keep the solution "generic", so that they can adapt it to fit multiple solutions, they have the following process flow.
A top-level products table receives the product line information: an Id, product type, etc.
Each product has attributes associated with it. A lookup is done on the product attributes table to get just the attributes for that product type.
Then, in a cursor loop, a row is inserted into the productdetails table for each product attribute: for each attribute type, the product's corresponding value for that attribute is inserted into the table.
In short:

1. Product header inserted.
2. Select all attribute types from the attributes table based on the product type.
3. For each attribute type, insert the product's value for that attribute into the productdetails table.
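In rough T-SQL, that flow looks something like this. They haven't shown me the exact code, so the table, column, and variable names below are my guesses, and I'm assuming SQL Server since they talk about cursor loops:

```
-- My reading of their flow; hypothetical names throughout.
DECLARE @ProductId int = 1, @ProductTypeId int = 42;

-- Raw attribute values arriving with this one transaction.
DECLARE @Incoming TABLE (AttributeTypeId int, AttributeValue nvarchar(100));

-- 1. Product header inserted.
INSERT INTO dbo.Products (ProductId, ProductTypeId)
VALUES (@ProductId, @ProductTypeId);

-- 2. All attribute types defined for this product type.
DECLARE @AttributeTypeId int;
DECLARE attr_cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT AttributeTypeId
    FROM dbo.AttributeTypes
    WHERE ProductTypeId = @ProductTypeId;

OPEN attr_cur;
FETCH NEXT FROM attr_cur INTO @AttributeTypeId;

-- 3. One productdetails insert per attribute type: ~10 inserts per product.
WHILE @@FETCH_STATUS = 0
BEGIN
    INSERT INTO dbo.ProductDetails (ProductId, AttributeTypeId, AttributeValue)
    SELECT @ProductId, @AttributeTypeId, i.AttributeValue
    FROM @Incoming AS i
    WHERE i.AttributeTypeId = @AttributeTypeId;

    FETCH NEXT FROM attr_cur INTO @AttributeTypeId;
END

CLOSE attr_cur;
DEALLOCATE attr_cur;
```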
The idea is that if a new product comes along with a whole new set of unique attributes, they just add the new attribute types to the attributes table and the information can be stored.
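So onboarding a new product type is purely a data change, not a schema change; something like this (again, hypothetical names):

```
-- A new product type arrives: register its attributes as rows, no DDL needed.
INSERT INTO dbo.AttributeTypes (AttributeTypeId, ProductTypeId, AttributeName)
VALUES (501, 99, 'VoltageRating'),
       (502, 99, 'Colour');
```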
To me this seems very inefficient, as one product with lots of details is being split into, on average, 10 inserts into one table.
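For comparison, the same ~10 detail rows could go in as a single set-based statement instead of a loop (reusing my hypothetical names and variables from the sketch above):

```
-- Same ~10 detail rows as the cursor loop, but one set-based statement.
INSERT INTO dbo.ProductDetails (ProductId, AttributeTypeId, AttributeValue)
SELECT @ProductId, at.AttributeTypeId, i.AttributeValue
FROM dbo.AttributeTypes AS at
JOIN @Incoming AS i
    ON i.AttributeTypeId = at.AttributeTypeId
WHERE at.ProductTypeId = @ProductTypeId;
```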
They predict about 9 billion transactions a year (which roughly matches 300 a second), so at ~10 detail rows each that's around 90 billion rows a year going into one table, and with 5 years' retention getting on for half a trillion rows!! This has to be queried too.
I'm thinking it's going to fall on its face!
I realise this is a very open question with not much information, but any pointers on the best way to improve this would be great!