Filestream, AlwaysOn and large files

  • hi all,

    I'm dealing with some new topics. The more I read about them, the more questions I have, and I'm not finding answers.

    Maybe some of the questions can be answered here. Would be great 🙂

    Background:

    we are starting a new project using SQL 2012 and AlwaysOn High Availability Groups.

    One of the requests: we have to store files.

    What for? A customer, using a frontend, can execute different queries and save the results to different files. The customer can then see a list of their files and download them at any time.

    Some of the files to store can grow to a size of 200-500 MB.

    Our database will be placed in an availability group using synchronous-commit mode.

    The questions

    1) Will the files on the file system be copied to the secondary replicas too?

    2) If we add a 500 MB file to a FILESTREAM table, will it automatically be transferred to the storage of the secondary replica(s)?

    3) If yes, will this be done as a single transaction? Would that take some time and block all the other transactions being sent to the secondary replicas?

    4) What difference would it make to use the FileTable feature with NON_TRANSACTED_ACCESS instead of FILESTREAM only? (A rough sketch of both options follows the questions.)

    5) Are there other sensible options for storing files of this size that keep a link to the database?
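
    To make questions 2) and 4) more concrete, here is roughly the kind of setup I mean. All names below are placeholders, not our real schema, and the database is assumed to already have a FILESTREAM filegroup:

    ```sql
    -- FILESTREAM column: the varbinary(max) data lives on the file system,
    -- but inserts and updates are fully transactional and logged.
    CREATE TABLE dbo.CustomerFiles
    (
        FileId   UNIQUEIDENTIFIER ROWGUIDCOL NOT NULL UNIQUE DEFAULT NEWSEQUENTIALID(),
        FileName NVARCHAR(260)  NOT NULL,
        FileData VARBINARY(MAX) FILESTREAM NULL
    );

    -- Adding a 500 MB result file is just an ordinary (large) insert:
    INSERT INTO dbo.CustomerFiles (FileName, FileData)
    SELECT N'report.csv', BulkColumn
    FROM OPENROWSET(BULK N'C:\exports\report.csv', SINGLE_BLOB) AS f;

    -- FileTable variant: a fixed schema exposed as a Windows share.
    -- NON_TRANSACTED_ACCESS (set per database) lets clients write to the
    -- share outside of a SQL transaction.
    ALTER DATABASE MyDatabase
        SET FILESTREAM (NON_TRANSACTED_ACCESS = FULL, DIRECTORY_NAME = N'FileTableRoot');

    CREATE TABLE dbo.CustomerFileTable AS FILETABLE
    WITH (FILETABLE_DIRECTORY = 'CustomerFiles');
    ```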

    Thank you for any help 🙂

  • Short answer, yeah, the files will be available. Longer answer, go here to get Microsoft's word on how this works.

    "The credit belongs to the man who is actually in the arena, whose face is marred by dust and sweat and blood"
    - Theodore Roosevelt

    Author of:
    SQL Server Execution Plans
    SQL Server Query Performance Tuning

  • Thank you for your answer 🙂 I already read the page you suggested. Unfortunately it gives me no information about how handling large files will affect the whole system and the transaction log. I'll keep on searching 😉

  • AFAIK, it works like everything else in synchronous-commit mode, i.e. the transaction won't commit until the log has been hardened on the secondary. This could obviously lead to some slowdown with large files (it's the price you pay for synchronous).
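
    If you want to watch that while testing, the AG DMVs show per-database send and redo queues on each replica; something along these lines is a starting point (not an exhaustive column list):

    ```sql
    -- Rough look at how far behind each replica is for each database:
    SELECT ar.replica_server_name,
           DB_NAME(drs.database_id)        AS database_name,
           drs.synchronization_state_desc,  -- SYNCHRONIZING / SYNCHRONIZED
           drs.log_send_queue_size,         -- KB of log not yet sent to this replica
           drs.redo_queue_size              -- KB of log received but not yet redone
    FROM sys.dm_hadr_database_replica_states AS drs
    JOIN sys.availability_replicas AS ar
         ON ar.replica_id = drs.replica_id;
    ```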

    Also, make sure you're at least on SP1 as there was a serious bug affecting failover times in RTM.
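
    A quick way to check the patch level on each replica:

    ```sql
    -- 2012 RTM reports ProductLevel = 'RTM'; SP1 reports 'SP1' (build 11.0.3000 or later).
    SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
           SERVERPROPERTY('ProductLevel')   AS ProductLevel;
    ```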

    This functionality is all pretty new and there isn't a lot of data on how this performs in the wild yet, so I'd plan some time for thorough performance testing before committing.
