OK, scaled up my Azure VM to 8 cores / 14 GB and the row count to 50 million. Created similar(ish) test rigs for a normal disk-based table, a clustered columnstore table, and an in-memory OLTP table, and the results are as follows:
COUNT(*) on 50 million rows
Clustered Columnstore - 0.11s
Normal table (cdx) - 1.7s
Normal table (ncdx) - 1.8s
In-memory OLTP - native compiled proc with range index hint - 8.8s
In-memory OLTP - interop SQL with range index hint - 12s
In-memory OLTP - native compiled proc with hash index hint - 18s
In-memory OLTP - interop SQL with hash index hint - 21s
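For anyone wanting to reproduce something like the rigs above, here's a rough sketch of the three table types. All names, column definitions, and the bucket count are made up for illustration; the in-memory table also needs a memory-optimized filegroup set up on the database first (SQL Server 2016+ syntax):

```sql
-- Normal disk-based table with a clustered index (the "cdx" case above)
CREATE TABLE dbo.normalTable (
    rowId    INT IDENTITY PRIMARY KEY CLUSTERED,
    someData INT NOT NULL
);

-- Clustered columnstore table
CREATE TABLE dbo.columnstoreTable (
    rowId    INT IDENTITY NOT NULL,
    someData INT NOT NULL,
    INDEX ccs CLUSTERED COLUMNSTORE
);

-- In-memory OLTP table with both a hash index and a range
-- (nonclustered) index, so either can be hinted at query time
CREATE TABLE dbo.inMemTable (
    rowId    INT IDENTITY NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH ( BUCKET_COUNT = 67108864 ),
    someData INT NOT NULL,
    INDEX ix_range NONCLUSTERED ( rowId )
) WITH ( MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA );
```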
Notes: Treat the results as approximate since I'm on a VM. Also worth noting: loading the normal table took significantly longer (~30 minutes) than loading the in-memory OLTP table (~1 minute).
So I think we can all agree COUNT(*) is not a great idea for in-memory OLTP tables; expect it to be slower than on disk-based and columnstore tables. If you do have to do it, use a natively compiled stored proc with a range index hint. And consider using in-memory OLTP for what it's designed for, e.g. the shock-absorber pattern, high-concurrency workloads, etc.
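The fastest in-memory option above (native proc + range index hint) could be sketched roughly like this, assuming the hypothetical dbo.inMemTable and ix_range index names from my rig:

```sql
-- Natively compiled proc that forces the range (nonclustered) index.
-- SCHEMABINDING and the ATOMIC block are required for native compilation.
CREATE PROCEDURE dbo.countInMemRows
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH
    ( TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english' )

    SELECT COUNT(*) AS rowCnt
    FROM dbo.inMemTable WITH ( INDEX ( ix_range ) );
END;
```

Without the hint the optimizer may pick the hash index, which (per the numbers above) roughly doubles the scan time, since hash indexes are built for point lookups rather than full scans.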