I could be mistaken, but I thought Luis was talking about the performance of an in-memory tally table vs. a traditional tally table.
I was looking for this yesterday and just found it: https://www.sqlservercentral.com/Forums/1101315/Tally-OH-An-Improved-SQL-8K-CSV-Splitter-Function?PageIndex=36 It would appear that, in that case, the memory-optimized table was faster, but I have not had the same level of success. In my personal experience I have never seen a performance improvement from switching from a CTE tally table to a memory-optimized tally table.
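For reference, here is the kind of CTE tally I mean, in the cascading CROSS JOIN style Itzik Ben-Gan popularized. This is a sketch; the CTE names and the one-million-row cap are illustrative, not from the thread:

```sql
-- Cascading CTE tally: generates N = 1..1,000,000 with zero table reads.
WITH
E1(N)       AS (SELECT 1 FROM (VALUES (1),(1),(1),(1),(1),(1),(1),(1),(1),(1)) AS v(N)), -- 10 rows
E3(N)       AS (SELECT 1 FROM E1 a CROSS JOIN E1 b CROSS JOIN E1 c),                     -- 1,000 rows
cteTally(N) AS (SELECT ROW_NUMBER() OVER (ORDER BY (SELECT NULL))
                FROM E3 a CROSS JOIN E3 b)                                               -- 1,000,000 rows
SELECT TOP (10) t.N
FROM cteTally AS t
ORDER BY t.N;
```

Because the rows are manufactured on the fly, there is no I/O at all, which is why it is hard for a physical table (memory-optimized or not) to beat it.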
That said, I have never had a primary key on mine; here's the DDL for the one I use:
CREATE TABLE dbo.eTally
(
    N INT NOT NULL,
    UNIQUE NONCLUSTERED (N ASC)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
The one Magoo used in his testing had a PK (nonclustered).
On a separate note, here's a great example of "there is no spoon (or default ORDER BY) in SQL Server":
SELECT TOP (10) t.N
FROM dbo.eTally AS t
Returns: 998753, 998754 ... 998762
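Without an ORDER BY, TOP is free to return any ten rows. If you actually want the first ten values of N, you have to say so (assuming eTally is populated with 1..n):

```sql
SELECT TOP (10) t.N
FROM dbo.eTally AS t
ORDER BY t.N; -- deterministic: the ten smallest values of N
```

The unique nonclustered index on N means this ORDER BY costs essentially nothing; it just makes the result deterministic.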
-- Alan Burstein
Helpful links:
Best practices for getting help on SQLServerCentral -- Jeff Moden
How to Post Performance Problems -- Gail Shaw
Nasty fast set-based string manipulation functions:
For splitting strings try DelimitedSplit8K or DelimitedSplit8K_LEAD (SQL Server 2012+)
To split strings based on patterns try PatternSplitCM
Need to clean or transform a string? Try NGrams, PatExclude8K, PatReplace8K, DigitsOnlyEE, or Translate8K
"I can't stress enough the importance of switching from a sequential files mindset to set-based thinking. After you make the switch, you can spend your time tuning and optimizing your queries instead of maintaining lengthy, poor-performing code." -- Itzik Ben-Gan 2001