Previously I wrote about In-Memory Optimized Tables; in this blog I am going to discuss how to determine which tables could benefit from being In-Memory by using a tool called the Memory Optimization Advisor (MOA). This is a tool built into SQL Server Management Studio (SSMS) that will tell you which tables could benefit from In-Memory OLTP capabilities and which use unsupported features. Once a candidate is identified, MOA will help you actually migrate that table and its data to a memory-optimized table. Let’s see how it works by walking through it with a table I use for demonstrations in AdventureWorks2016CTP3. Since this is a smaller table and doesn’t incur a ton of writes, it is not a good real-world use case; however, for simplicity I am using it for this demo.
To start, right-click on the table and choose Memory Optimization Advisor.
Click Next in the subsequent window that appears.
Next it will validate whether your table is able to be migrated. It looks for blockers such as unsupported data types, sparse columns, seeded identity columns, foreign keys, constraints, triggers, replication and partitioning, just to name a few. If any item fails validation, you must make changes and/or remove those features before the table can be moved to a memory-optimized table.
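Before running the wizard, you can spot a couple of the common blockers yourself with a quick catalog query. This is a sketch only; `dbo.MyTable` is a placeholder name, and the advisor checks many more conditions than these two.

```sql
-- Hypothetical pre-check against a placeholder table (dbo.MyTable):
-- lists sparse columns and identity columns whose seed/increment
-- is not (1,1), two blockers the advisor will flag.
SELECT c.name AS column_name, 'sparse column' AS issue
FROM sys.columns AS c
WHERE c.object_id = OBJECT_ID('dbo.MyTable')
  AND c.is_sparse = 1
UNION ALL
SELECT ic.name, 'identity seed/increment is not (1,1)'
FROM sys.identity_columns AS ic
WHERE ic.object_id = OBJECT_ID('dbo.MyTable')
  AND (CAST(ic.seed_value AS bigint) <> 1
       OR CAST(ic.increment_value AS bigint) <> 1);
```

Zero rows back doesn't guarantee a clean validation pass, but any rows returned point to work you'll need to do first.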
Next it will go over some warnings. These items, unlike the ones in the validation screen, won’t stop the migration process, but they can cause the affected feature to fail or behave abnormally afterward, so be aware of them. Microsoft goes a step further here and gives us links to more information so that we can make an informed decision as to whether or not to move forward with the migration.
The next screen is very important as it lets you choose options for your migration to a memory-optimized table. I want to point out a few things in this next screenshot.
First, in the RED box, you will see a requirement for a filegroup name. A database must have a special memory-optimized filegroup before any table can be migrated. You will also note in this screen that you can choose what to rename the original table; renaming it lets the wizard keep the original in place while avoiding a naming conflict with the new table.
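If the database doesn't already have a memory-optimized filegroup, the wizard creates one for you. A rough sketch of the equivalent T-SQL is below; the filegroup name and file path are illustrative, not what the wizard will necessarily use.

```sql
-- Illustrative names/paths: what adding a memory-optimized
-- filegroup looks like if you do it by hand.
ALTER DATABASE AdventureWorks2016CTP3
    ADD FILEGROUP AdventureWorks2016CTP3_mod
    CONTAINS MEMORY_OPTIMIZED_DATA;

ALTER DATABASE AdventureWorks2016CTP3
    ADD FILE (NAME = 'AdventureWorks2016CTP3_mod',
              FILENAME = 'C:\Data\AdventureWorks2016CTP3_mod')
    TO FILEGROUP AdventureWorks2016CTP3_mod;
```

Note the container added to the filegroup is a folder, not a single data file, which is why the FILENAME has no extension.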
Next, in the PURPLE box, you will see a checkbox to also have the data moved to the new table. If you do not check this option, your table will be created with no rows and you will have to move your data manually.
Next, in the YELLOW box, is the durability option; this is the equivalent of the DURABILITY = SCHEMA_ONLY versus SCHEMA_AND_DATA choice I talked about in my last blog. If you check this box, the table is created SCHEMA_ONLY: you will not have any data durability, and your data will disappear on events like a restart of the SQL Server service or a reboot (this may be what you want if you are using the table as though it were a temp table and the data is not needed). Be very aware of this option, because by default it is not checked. If you are not sure which option to choose, don’t check the box. That will ensure the data is durable. Click NEXT.
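For reference, here is the durability choice expressed in DDL. This is a minimal sketch with a made-up table, not the script the wizard generates; the point is only which DURABILITY value the checkbox maps to.

```sql
-- Checkbox checked   -> DURABILITY = SCHEMA_ONLY     (data lost on restart)
-- Checkbox unchecked -> DURABILITY = SCHEMA_AND_DATA (fully durable, the default)
CREATE TABLE dbo.DemoTable  -- hypothetical table for illustration
(
    Id INT NOT NULL PRIMARY KEY NONCLUSTERED
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```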
Remember, this process makes a copy of your table for migration, so the new optimized table cannot have the same primary key name. This next screen assists with renaming that key as well as setting up your index and bucket counts. I’ll explain bucket counts more below.
Note the screen above provides a space to rename your primary key and create a new index. As we know, a primary key is an index, so you must set that up. We have two options: a NONCLUSTERED index, which is great for tables with many range queries or that need a sort order, or a NONCLUSTERED HASH index, which is better for direct point lookups. If you choose the latter, you also need to provide a value for the bucket count. Bucket count can dramatically impact the performance of the table, and you should read the documentation on how to properly set this value. In the case above I am leaving it at the pre-populated value and choosing NEXT.
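The two index choices look like this in DDL. The table, index name, and bucket count below are all hypothetical; the common guidance is to size BUCKET_COUNT at roughly one to two times the expected number of unique key values, but check the documentation for your workload.

```sql
-- Sketch of both memory-optimized index types on a made-up table.
CREATE TABLE dbo.DemoLookup
(
    Id INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576), -- point lookups
    LastName NVARCHAR(50) NOT NULL
        INDEX IX_DemoLookup_LastName NONCLUSTERED  -- range queries / ordered scans
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
```

A hash index degrades badly if the bucket count is far too low (long chains) or wastes memory if far too high, which is why this value deserves real thought rather than a guess.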
This table has existing indexes, so the next step is to set those up for conversion. If you do not have any existing indexes, this part is bypassed.
Note the two index migration options on the left. This means I have two indexes to migrate.
The next screen is just a summary of all the migration options we chose during setup. By choosing to migrate, you will migrate your table and its data to an In-Memory optimized table, so proceed with caution. This may be a good time to hit the Script button and script this out for later use. Keep in mind that I already have a memory-optimized filegroup for this database, so one is not created for me. If one didn’t already exist, you would see its creation in the Summary screen.
As shown below, the migration was successful. A new table was created while the old table was renamed and the data was copied over.
Let’s look at the results.
If I script out the new table now, you will see that it is notated as a memory-optimized table and has the appropriate bucket counts. Also note that I DID NOT check the box that would have made my table SCHEMA_ONLY durable, and you can see that reflected in the DURABILITY = SCHEMA_AND_DATA below.
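Besides scripting the table, you can confirm the result from the catalog views. This query lists every memory-optimized table in the current database along with its durability setting:

```sql
-- Verify the migration: which tables are memory-optimized, and how durable.
SELECT name, is_memory_optimized, durability_desc
FROM sys.tables
WHERE is_memory_optimized = 1;
```

For the table migrated above you would expect to see durability_desc of SCHEMA_AND_DATA, since the SCHEMA_ONLY box was left unchecked.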
As you can see, the Memory Optimization Advisor makes it fairly simple to identify and migrate tables to In-Memory Optimized Tables. I highly advise testing this process before trying to convert any tables in your databases. Not all workloads are viable candidates for this feature, so be sure to do your due diligence before implementation. When you are ready to implement, this tool can make that process a lot easier for you.