Why Object Databases will always be Tomorrow's Technology

  • Tony Davis

    SSCarpal Tunnel

    Points: 4385

    Comments posted to this topic are about the item Why Object Databases will always be Tomorrow's Technology

  • Grant Fritchey

    SSC Guru

    Points: 396617

    Excellently stated. I wish I had been able to come up with these exact arguments a year ago, when one of our major projects started down the object database path. Now... we're expecting their delivery in about a year and no one has a clue how we're going to integrate it with all the other systems. It was developed with the idea that integration wasn't a need for the initial delivery, and therefore it has no integration points at all. It's going to be the perfect silo, but instead of making it more useful to the enterprise, it will be less useful. Fun times ahead. Thanks for posting such a concise summary of the issues.

    ----------------------------------------------------
    The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood...
    Theodore Roosevelt

    The Scary DBA
    Author of: SQL Server 2017 Query Performance Tuning, 5th Edition and SQL Server Execution Plans, 3rd Edition
    Product Evangelist for Red Gate Software

  • Florian Reischl

    SSC-Dedicated

    Points: 37299

    Nice article!

    Another reason for the "tomorrow technology" argument that I have sometimes heard:

    "Tomorrow's hardware will make up for the performance shortcomings of object databases."

    The problem:

    "Tomorrow's data will have tomorrow's storage and performance requirements."

    😉

    Again, really nice article!

    Flo

  • GSquared

    SSC Guru

    Points: 260824

    Actually, some of the uses I've seen for Caché (http://www.intersystems.com/cache/) are quite functional. It's a pain to work with, at least in what little I've done with it and from what I've read, but as far as scalability, stability, security and performance go, it seems to be a very solid product.

    Rows and columns are very easy on the human mind. Breaking them down into normal forms takes training, discipline and experience, but the basic concept is easy. On the other hand, in a class-based object model, does a door inherit characteristics from a car, or is it the other way around? Do all doors have the same methods and properties? Does the presence of a lock on a door come about through polymorphism, or are locks their own object with their own methods and properties? Where does one even start?
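To make the ambiguity concrete, here is a hypothetical sketch of two equally defensible object models for the same door/lock domain; none of these class names come from any real system, and neither model is obviously "the" right one:

```python
class Lock:
    def __init__(self, keyed=True):
        self.keyed = keyed

# Model A: the lock is a component a door may or may not own (composition).
class Door:
    def __init__(self, lock=None):
        self.lock = lock  # a door *has a* lock, maybe

class Car:
    def __init__(self):
        # one keyed door, three plain ones
        self.doors = [Door(Lock()), Door(), Door(), Door()]

# Model B: locking is a behavior mixed into anything lockable (inheritance).
class Lockable:
    locked = False
    def lock(self):
        self.locked = True
    def unlock(self):
        self.locked = False

class LockableDoor(Lockable):
    pass

car = Car()
print(len(car.doors))   # 4
d = LockableDoor()
d.lock()
print(d.locked)         # True
```

Both versions run and both "model reality"; a rows-and-columns design would have forced the has-a vs. is-a question into the open before any data was stored.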

    - Gus "GSquared", RSVP, OODA, MAP, NMVP, FAQ, SAT, SQL, DNA, RNA, UOI, IOU, AM, PM, AD, BC, BCE, USA, UN, CF, ROFL, LOL, ETC
    Property of The Thread

    "Nobody knows the age of the human race, but everyone agrees it's old enough to know better." - Anon

  • rboggess

    SSC-Addicted

    Points: 447

    I've never been able to fully understand this drive, beyond the obvious. If you want object-relational access, write stored procedures.

    It appears, from the ORDB argument and from the primary arguments against stored procedures, that many of the folks who develop software and create the need for databases don't seem to understand that their application, although it uses the data and may even be the primary driver for storing it, is not the owner of that information. And the owner of that information may have other uses for the data beyond the scope of the original application. Enterprise Data Warehousing is one example. Manufacturing Automation, although less obvious, is another.

    I can see where someone might create a wizard or some other technology, similar to the query analyzer, that will recommend or even automatically generate OR functions (.NET or Java?) or stored procedures, but to sacrifice orthogonality for the convenience of the developer? Isn't that rather short-sighted?

  • Steve Jones - SSC Editor

    SSC Guru

    Points: 720436

    rboggess (6/17/2009)


    I've never been able to fully understand this drive, beyond the obvious. If you want object-relational access, write stored procedures.

    For the most part I'm with you on this. Developers complain that it's too hard to work with, and I think they want to be able to store an object structure without having to worry about how it's stored. However, it's the same type of mapping they often have to do when working with the data on a screen, so I'm not sure why it's that hard.

    And if it's too time consuming, why not have one person dedicated to writing the mapping in a data layer?

    I just don't buy that it's a ton of time.

    It is a knowledge investment, and maybe that's an issue. We don't have a way to quickly and easily train someone how to design a db.
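The "one person dedicated to writing the mapping in a data layer" idea can be sketched in a few lines. This is an illustrative toy, not anyone's production code; the `Customer` table and class names are made up:

```python
import sqlite3
from dataclasses import dataclass

@dataclass
class Customer:
    id: int
    name: str

class CustomerRepository:
    """The whole object-to-row mapping lives in this one class."""
    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS customer "
            "(id INTEGER PRIMARY KEY, name TEXT)")

    def save(self, c):
        self.conn.execute(
            "INSERT OR REPLACE INTO customer (id, name) VALUES (?, ?)",
            (c.id, c.name))

    def get(self, cid):
        row = self.conn.execute(
            "SELECT id, name FROM customer WHERE id = ?", (cid,)).fetchone()
        return Customer(*row)

conn = sqlite3.connect(":memory:")
repo = CustomerRepository(conn)
repo.save(Customer(1, "Ada"))
print(repo.get(1))   # Customer(id=1, name='Ada')
```

The rest of the application only ever sees `Customer` objects; the SQL is confined to the repository, which is exactly the knowledge investment being described.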

  • DangerMouseKaBoom

    SSC Enthusiast

    Points: 191

    Hello,

    I actually have nothing to add, just to say thanks. This has opened my eyes a little more; it educated me... but I do wish I had had these options, say, 3 months ago.

  • peter-757102

    SSCertifiable

    Points: 6877

    I am running the risk of stating nonsense here as honestly I have no real experience with object database implementations today.

    To my possibly outdated knowledge, the technical mechanism supporting object databases is virtualization of local memory. Instead of having an object in local memory, you now indirectly work with an object that is persisted elsewhere. Thus a linked list, or any other structure in local memory, is persisted as-is in the database... it's persistent memory. This is what I got from it in the early 90s, so correct me if I am horribly off. The idea, to me, is just completely illogical from a quality and manageability point of view.

    For me, any persistent storage has to:

    * Be decoupled from the processes acting on it;

    * Have a state you can snapshot easily and independently verify;

    * Be accessible by many processes concurrently, and handle operations in a way that will not result in a corrupt state;

    * Be based on a low-complexity model, thus supporting only a limited number of relation types between entities and data types;

    * Not require domain-specific software for accessing the data, and be maintainable by standardized software.

    Any deviation from these things will result in errors accumulating and persisting, among other problems. Say I, as a programmer, have everything I do in memory persisted instantly for objects I tagged as needing this. Then the data and its relationships are tightly coupled to the algorithm that operates on them: if I use a linked list in memory, the data is structured that way. Now say I make a mistake somewhere, or I need to do an optimization... that will likely result in a change to both. But wait, what if there are multiple, different clients acting on the data... oh, you are so horribly screwed.

    On top of that, locality of reference will be totally out of whack. The algorithms programmers use are in general NOT optimized to be efficient on remote virtual memory. They often aren't optimized for locality at all, incurring penalties for accesses outside the processor's level 1 cache. How this would translate to memory across a network, even further away from the CPU... not good.

    You end up in a situation where you have to make mappers and wrappers for past mistakes, in effect persisting the problems of an inadequate design or implementation. And because everyone programs to their own view, you can say goodbye to interoperability of software too. Code can't become any cleaner for any significantly complex project... it's all smoke and mirrors.

    And the fact that a physics lab used an object database says nothing to me, nor does the fact that it is the biggest database in the world. It says nothing about scaling. Scaling is not just a function of size in storage; even a flat file with fixed record lengths is scalable in that sense. Scaling applies to operations in the face of accumulating data: things like concurrent access and the efficiency of retrieval and mutation (the algorithms). Take an object database with algorithms that aren't going to change; say they hash everything, and this is persisted as the access structure in their database. Let's assume this suits the work they do perfectly. Then yes, it will be amazingly fast at retrieval, but that is a very specialized case.

    I believe object databases are by their very definition domain-specific beasts and as a result will never find broad acceptance. I saw the OO hype in the early 90s and always felt people thought more of it than it actually proved to be. Now a new generation is at it, doing exactly the same things but with improved technology at its base. I still haven't seen the fundamentals change, and thus I expect a similar outcome.

    Instead of learning how to do things right and understanding the data they operate on, people take any solution that "promises" to take away that pain. OOP is always perceived as having this "promise", but I know the hard-core OOP folks are pretty smart and put a lot more emphasis on design than the vast majority of developers do. The rest just want the promise of not having to do any design beforehand in the first place and to start implementing quickly.

    I would classify object databases as having the promise of really rapid prototyping. But I don't think they are a good idea to adopt as a replacement for actual software that is going to be used.

    Now, prove me totally wrong! 🙂

  • Dennis Puliwingeefarger

    SSC Enthusiast

    Points: 101

    I can give you a better reason: the relational model is the most powerful ad hoc querying system ever created. The output of a query is just another table, which can itself be queried; this closure property is not present in any object system I've seen. When an object database can query several classes of objects and present you with a brand new class of objects, they'll start to have something interesting. Until then, relational databases will rule wherever ad hoc querying is important... i.e., most businesses.
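That closure property is easy to see in miniature: the inner query below produces a brand-new "table" of per-region totals, and the outer query treats it exactly like any base table. (A sketch using Python's bundled sqlite3 as a stand-in for any relational engine; the `orders` table is made up.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, region TEXT, amount REAL);
    INSERT INTO orders VALUES (1,'east',100),(2,'east',250),(3,'west',80);
""")

# Query the result of a query: the derived table is just another table.
rows = conn.execute("""
    SELECT region, total
    FROM (SELECT region, SUM(amount) AS total
          FROM orders GROUP BY region)
    WHERE total > 100
""").fetchall()
print(rows)   # [('east', 350.0)]
```

Nothing special had to be declared for the derived table to be queryable; that uniformity is the point being made.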

    That said, for specific applications, alternative database systems can do very well. For business applications, the relational database is usually the main point of it all, and websites are just front ends to get data in and out. But for some projects, the website is the whole deal. For those, there are sometimes options to get dramatically better performance or scalability by using alternatives: object databases, graph databases like Neo4j, or distributed systems like Cassandra (which runs part of Facebook), Scalaris, Google's Bigtable (available on Google App Engine), and Amazon's SimpleDB.

  • Irish Flyer

    SSCrazy

    Points: 2245

    Those who persist in advocating OODBs truly show either (1) a complete lack of practical experience in actual operational management of an implemented system, or (2) a failure to fully think out the basic principles of good design. I realize that statement is provocative, but many years of experience with both relational and object DBs lead me to make it.

    IMHO, the single biggest problem with the OODB model is a complete lack of integrity control. An object has attributes which in turn may also be objects. I cannot tell you how many times I have seen circular ownership rings, where ten or fifteen levels in, an attribute object claims the original object as one of its attributes.

    This is, admittedly, a product of poor design, but I have had many developers declare that it is a true model of their system's reality. What it actually does is create numerous untestable pathways in the supporting application code, and databases that go belly up through no fault of the DBA trying to keep that illogical DB afloat.

    A nearly perfect example in support of the case against the use of OODBs is the registry DB in MS Windows. Anyone who has much experience with Windows internals knows just how fragile and corruptible that puppy is. MS keeps multiple copies of the registry DB just so they have a reference point for attempts to fix it when it inevitably breaks. I am also convinced that the OODB underlayment is one of the reasons Windows is such "bloatware." In my experience, the application code built around OODBs has been some of the worst, most inefficient code I've ever seen.

    Relational databases, on the other hand, can use built-in relational integrity checks to prevent this type of foul design from ever becoming reality. Yes, you can turn integrity checks off, but any decent DBA, faced with that need for performance reasons, would also write scripts to do periodic and frequent integrity checks. Another benefit of using a relational DB to store objects is the design discipline it imposes: you must accurately and logically define an object and its attributes. Tables are objects; columns are attributes; attributes that are also objects are foreign keys. What could be simpler? All of the esoteric object-model terminology really boils down to the previously stated basics.
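The "built-in relational integrity" point can be shown in a few lines: a foreign key refuses a dangling reference outright, so the circular-ownership mess described above can never reach the stored data. (A sketch with Python's bundled sqlite3, which requires foreign keys to be switched on per connection; the car/door tables echo the earlier example and are made up.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite enforces FKs only when asked
conn.executescript("""
    CREATE TABLE car  (id INTEGER PRIMARY KEY);
    CREATE TABLE door (id INTEGER PRIMARY KEY,
                       car_id INTEGER NOT NULL REFERENCES car(id));
    INSERT INTO car  VALUES (1);
    INSERT INTO door VALUES (10, 1);      -- fine: car 1 exists
""")

try:
    conn.execute("INSERT INTO door VALUES (11, 99)")  # no car 99
except sqlite3.IntegrityError as e:
    print("rejected:", e)
```

The bad row never lands; there is no later "integrity check script" needed to discover it.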

  • Chris Harshman

    SSC-Forever

    Points: 42145

    The only Object Relational Database technology that I've had experience with was Oracle's implementation of this. From what I've seen personally, and from what I've read and heard from others, the main problem is that everything an object data model can represent can also be represented in a relational data model, and usually more easily. But not everything that can be represented in a relational data model can be represented in an object data model.

    Probably the best concept of object-based thinking for databases is already implemented in SQL Server: domains, or what SQL Server calls alias data types. It's simple, doesn't require any odd conversions or extra "." notation in the column reference, and can simplify setting up standards for your database.

  • Gift Peddie

    SSC Guru

    Points: 73570

    This is a manifestation of the YAGNI (You Ain't Gonna Need It) principle that emerged from the bowels of the Extreme Programming movement. It refers to the idea that you should only add a feature if there is an immediate need for it, not because you can predict a future need for it. Some advocates of Domain Driven Design (DDD) apply this principle to the relational database.

    We all remember Borland: a British COBOL company bought it for a few dollars. Microsoft recently dropped the Agile component from VS2010. In software, all you need is implementation, so once Microsoft has used Agile to RTM VS, SQL Server, SharePoint and Office, then Agile should be included in VS, because there will be implementation detail.

    After all, it has found a place in a few industries, such as telecoms; the largest database underpins the Stanford linear accelerator system, and it's an object database, so there seems to be no issue with scalability.

    Oracle 8i was object-relational, and according to Jim Melton, scalability was an issue. Just number crunching, without the tedious repetitive tasks of an RDBMS, is a partial implementation, because when it is used for current business needs, scalability becomes an issue.

    Kind regards,
    Gift Peddie

  • cy-dba

    SSCarpal Tunnel

    Points: 4149

    Fyi...for those needing a basic understanding of object databases (like me), this page is a good start:

    http://www.service-architecture.com/object-oriented-databases/articles/index.html

  • peter-757102

    SSCertifiable

    Points: 6877

    cy (6/17/2009)


    Fyi...for those needing a basic understanding of object databases (like me), this page is a good start:

    The specified request cannot be executed from current Application Pool

    Good one! 😉

    I think you made a mistake in formatting the URL there!

  • vliet

    SSCommitted

    Points: 1986

    First, my compliments to you for an excellent article.

    Relational databases have their foundation in the mathematical theory of tuples and sets. You might think that today's engines are miles away from this original foundation, but it's still the cornerstone of any relational solution: you can prove it's correct. The results of a query might not be what you expected, but they will always follow very strict rules and are thus 100% predictable. Those guys building the engines know exactly what the results should be, and can focus entirely on the most efficient way to gather those results from the stored data.

    Another feature of most relational databases is the set of ACID properties: Atomicity, Consistency, Isolation, Durability. This is already much harder to achieve with the advent of declarative referential integrity, where an action on a single table might cascade to multiple tables. Despite its name, a relational database usually has far fewer intrinsic relations than an object database. Notice that any serious object-oriented language supports garbage collection, to let the developer move away from the question: is this object still referenced? By the way, you can't remove a row that is referenced by a foreign key unless the delete is propagated, but a row that is not referenced will not be deleted or even tagged as garbage.
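The point about referenced rows and propagated deletes looks like this in miniature: with `ON DELETE CASCADE` declared, removing the parent row silently removes its children too, instead of being refused. (A sketch using Python's bundled sqlite3; the parent/child tables are made up.)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # needed for sqlite to enforce FKs
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child  (id INTEGER PRIMARY KEY,
                         parent_id INTEGER REFERENCES parent(id)
                                   ON DELETE CASCADE);
    INSERT INTO parent VALUES (1);
    INSERT INTO child  VALUES (10, 1);
""")

# The delete on parent propagates to the referencing child row.
conn.execute("DELETE FROM parent WHERE id = 1")
print(conn.execute("SELECT COUNT(*) FROM child").fetchone()[0])   # 0
```

Without the `ON DELETE CASCADE` clause, the same `DELETE` would raise an integrity error while the child row exists, which is the "propagated or refused" behavior described above.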

    A good object-relational framework knows, for every query, how much it should do with the objects and how much should be forwarded in a query to the relational engine. It's still hard to find such frameworks, but with LINQ it is at least possible to create one for the .NET environment. Bridging the gap between object and relational should IMHO be done at this level, not within the database itself. Allowing instances of objects to be bound to database rows (using .NET within SQL Server) shows the power of something as well designed as a relational engine to store something as versatile as an instance of an object. Is SQL Server an object database? Certainly not, but it is another approach to joining the object and the relational worlds.

    I could tell you so much about both worlds, because I reside in both worlds frequently, being both a .NET developer and a SQL Server DBA. But that's too much for a comment, so maybe I'll submit an article about this instead of bloating your forum.

Viewing 15 posts - 1 through 15 (of 43 total)
