• NJDave (3/26/2013)


    Hello

    I was recently pulled into an ongoing project that is running into problems. The old project manager told me there were problems failing over databases larger than 500 GB, and the new project manager has it listed as a "Windows problem". His boss is calling it a "SQL Server problem". I also heard from the old project manager that there are SAN/Hitachi replication issues, so the storage team is looking to blame SQL somehow.

    Is it true that a failover cluster has problems at the SQL or Windows level when failing over a database > 500 GB? How about 1 TB or 10 TB?

    Does anyone have a good article on this? It's hard to find documentation of something that is possibly not true.

    Any help is appreciated.

    Thanks

    Dave

    That sounds familiar to me, lol ... I mean, the finger-pointing. Do you work for the famous company that makes printers and PCs? DO NOT REPLY! lol ...

    MS-SQL 2008 and above (I don't remember about SQL 2005) does not have such a limitation. You can put up to 32,767 databases on an instance if you want; the real problem is how much RAM they need so they can run properly.
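
    If it helps your case with the storage team, here is a quick sanity check (just a sketch using the standard catalog views; adjust to your instance). It shows the memory cap the instance is configured with and the allocated size of each database:

        -- Instance-wide memory cap (the default 2147483647 MB means "unlimited")
        SELECT name, value_in_use
        FROM sys.configurations
        WHERE name = N'max server memory (MB)';

        -- Allocated size per database, in MB (size is stored in 8 KB pages)
        SELECT DB_NAME(database_id) AS database_name,
               SUM(CAST(size AS bigint)) * 8 / 1024 AS size_mb
        FROM sys.master_files
        GROUP BY database_id
        ORDER BY size_mb DESC;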

    Also, I once faced an issue where the SAN could only allocate a maximum of 500 GB. I do not remember the specifics, but it was a SAN hardware limitation, so managing the databases was a little tricky because we were forced to use that single data LUN.
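
    If you suspect a similar LUN limitation, one thing you can check from SQL Server itself (SQL Server 2008 R2 SP1 and later; again, just a sketch) is how much space is left on the volume that hosts each database file:

        -- Free space on the volume/LUN behind every data and log file
        SELECT DB_NAME(f.database_id)       AS database_name,
               f.physical_name,
               vs.volume_mount_point,
               vs.total_bytes / 1048576     AS volume_total_mb,
               vs.available_bytes / 1048576 AS volume_free_mb
        FROM sys.master_files AS f
        CROSS APPLY sys.dm_os_volume_stats(f.database_id, f.file_id) AS vs;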

    Now, I am also familiar with Veritas Cluster (not SQL failover clustering). Because of the SAN-to-SAN replication across regions (one site was in Texas, the other in GA, I think), we limited the amount of data we put there, simply because of the huge amount of data that has to be moved in case of a crash. Even so, we were able to fail over with Veritas in a matter of minutes, which is actually amazingly good for mission-critical databases.

    Bottom line: recent SQL Server versions do not have such a limitation, but the SAN and the replication setup may impose one.