Manipulating Filetable Files Programmatically

  • Comments posted to this topic are about the item Manipulating Filetable Files Programmatically

  • Thanks for the article, Steve. Very cool that these files can be manipulated either from Windows or from SQL Server and they remain in sync.

    It may be my browser but it appears that the last part of the Moving Files to a Subfolder section was lost. The last thing I see is this:

    "Let's move the two from the Covers to the Content folder. To do this, we need to understand the hierarchyID and use a few of its functions: Get_Level() and GetReparentedValue(). We will use the GetLevel "

    I was really interested in the remainder of this paragraph! 🙂

  • Apologies, I think I published the wrong version. I'll go dig out the original on my machine and get the rest up, or I'll just rewrite it. It's not too hard, but a neat bit of code.
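    Until I dig out the original, here's a rough sketch of the idea (a sketch only; the FileTable and folder names below are placeholders, assuming a FileTable called dbo.Documents with Covers and Content folders at the root):

        -- Grab the path_locator of the old and new parent folders.
        -- Table and folder names are illustrative, not from the article.
        DECLARE @Covers hierarchyid, @Content hierarchyid;

        SELECT @Covers = path_locator
          FROM dbo.Documents
         WHERE name = 'Covers' AND is_directory = 1;

        SELECT @Content = path_locator
          FROM dbo.Documents
         WHERE name = 'Content' AND is_directory = 1;

        -- GetLevel() can be used to sanity-check that both folders sit at the
        -- same depth; GetReparentedValue() then rewrites each file's
        -- path_locator so it hangs under Content instead of Covers.
        UPDATE dbo.Documents
           SET path_locator = path_locator.GetReparentedValue(@Covers, @Content)
         WHERE parent_path_locator = @Covers
           AND is_directory = 0;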

  • Thanks for the article.

  • Not sure why we don't see more published about Filetable (I don't even recall a session at PASS Summit about it). To me this is the coolest capability in SQL Server behind Service Broker. I've been using it since it came out and still find interesting things to do with it. The ability to manipulate files within DML gives me ways to easily stay within T-SQL. A current process intakes ~40,000 JPGs, compares the contents with existing files, and renames those with like names that are new images. I also run PowerShell scripts stored in a Filetable that is in the same database as the stored procedures for an ETL, so that the process objects are in the same place and under SQL Server's control. – Thanks for writing
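    For anyone curious, pulling one of those stored scripts back out of the FileTable is straightforward; this is just a sketch with a made-up table and file name:

        -- Hypothetical FileTable dbo.EtlScripts holding .ps1 files.
        -- file_stream is the varbinary(max) column that holds the file's bytes.
        SELECT name,
               CAST(file_stream AS varchar(max)) AS script_text
          FROM dbo.EtlScripts
         WHERE name = 'LoadStaging.ps1'
           AND is_directory = 0;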

  • Warren Bauer - Friday, January 18, 2019 11:24 AM

    Not sure why we don't see more published about Filetable (I don't even recall a session at PASS Summit about it). To me this is the coolest capability in SQL Server behind Service Broker. I've been using it since it came out and still find interesting things to do with it. The ability to manipulate files within DML gives me ways to easily stay within T-SQL. A current process intakes ~40,000 JPGs, compares the contents with existing files, and renames those with like names that are new images. I also run PowerShell scripts stored in a Filetable that is in the same database as the stored procedures for an ETL, so that the process objects are in the same place and under SQL Server's control. – Thanks for writing

    Perhaps you should write up how you are using Filetable.  Others may draw inspiration from it.

  • Filestream is being used. It's part of the basis for MOT.

    I think the bigger issue is that Filetable doesn't extend well to the cloud, which means this isn't a feature that sees much work. There is also the issue of relatively low adoption. Some customers love it, but I think the move to PolyBase and external tables might supersede this over time and become the de facto way of handling files.

  • Steve Jones - SSC Editor - Sunday, January 20, 2019 10:26 AM

    Filestream is being used. It's part of the basis for MOT.

    I think the bigger issue is that Filetable doesn't extend well to the cloud, which means this isn't a feature that sees much work. There is also the issue of relatively low adoption. Some customers love it, but I think the move to PolyBase and external tables might supersede this over time and become the de facto way of handling files.

    Care to define MOT?  What I am finding makes no sense.

  • Memory Optimized Tables. The Hekaton technology. These are essentially filestream filegroups.

    Filestream (and Filetable SMB access) is great for people who store a lot of documents in the database (PDFs, images, etc.). The access and storage/retrieval performance above 1 MB is much better than going through TDS and T-SQL.
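    If you want to hand a client the SMB path instead of streaming the bytes over TDS, the catalog gives it to you (a sketch only; the table name is a placeholder):

        -- Build the full UNC path for each file in a hypothetical dbo.Documents FileTable.
        -- FileTableRootPath() returns the share root for the table;
        -- GetFileNamespacePath() returns the item's path relative to that root.
        SELECT name,
               FileTableRootPath(N'dbo.Documents')
                 + file_stream.GetFileNamespacePath() AS unc_path
          FROM dbo.Documents
         WHERE is_directory = 0;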

  • Steve Jones - SSC Editor - Tuesday, January 22, 2019 9:52 AM

    Memory Optimized Tables. The Hekaton technology. These are essentially filestream filegroups.

    Filestream (and Filetable SMB access) is great for people who store a lot of documents in the database (PDFs, images, etc.). The access and storage/retrieval performance above 1 MB is much better than going through TDS and T-SQL.

    Memory Optimized Tables was not what I got when I Googled the TLA.  That makes sense.  Thank you.
