Organizations increasingly want Snowflake and Microsoft Fabric to coexist without duplicating data or fragmenting governance. With Fabric OneLake and open table formats like Iceberg and Delta, there are now multiple ways to make Snowflake data available inside Fabric—each with different tradeoffs around cost, performance, and ownership.
This post walks through three practical architectures for using Snowflake data in Fabric OneLake, when each option makes sense, and the key tradeoffs to consider.
The most important decision across all three options is which platform is the system of record and primary writer—most tradeoffs flow directly from that choice.
1) Snowflake-managed Iceberg table + OneLake shortcut
In this pattern, Snowflake reads and writes an Iceberg table stored in external object storage such as ADLS or S3. Microsoft Fabric does not copy the data; instead, Fabric creates a OneLake shortcut to the Iceberg table’s storage location so Fabric engines can query the data in place. Snowflake remains the system of record and the primary writer.
This option is best when you already have Iceberg tables managed by Snowflake and want zero data duplication while still enabling Fabric analytics. The tradeoff is that Fabric access is largely read-oriented, and the experience is less native than Delta format (OneLake's native storage format) in terms of table discovery and management.
From a cost perspective, you pay for Snowflake compute to create and maintain the Iceberg tables and for Fabric compute when querying the data. You do not pay for OneLake storage, ingestion pipelines, or replication. Any additional cost depends on where the external storage lives and whether cross-service or cross-region reads apply.
When to use this option
- You already have Snowflake-managed Iceberg tables in external object storage
- Snowflake must remain the system of record and primary writer
- You want zero data duplication and minimal architectural change
- Fabric is used mainly for read-only analytics or exploration
When to avoid this option
- You want Power BI Direct Lake or a fully native Fabric experience
- You expect Fabric to write, optimize, or manage the tables
- You want first-class table discovery and lifecycle management inside Fabric
One-line summary:
Best for quick, low-friction access to existing Snowflake Iceberg data with minimal change, but limited Fabric-native capabilities.
More info: Unify data sources with OneLake shortcuts – Microsoft Fabric | Microsoft Learn
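To make the shortcut step concrete, here is a minimal sketch of building the request body for the OneLake Shortcuts REST API (POST .../workspaces/{workspaceId}/items/{itemId}/shortcuts). The workspace, lakehouse, connection ID, and storage paths are placeholders, not real values—substitute your own, and treat the payload shape as illustrative rather than authoritative.

```python
import json

def build_adls_shortcut_payload(name: str, connection_id: str,
                                account_url: str, subpath: str) -> dict:
    """Build a shortcut under the Lakehouse 'Tables' folder that points at
    an ADLS Gen2 location holding a Snowflake-managed Iceberg table."""
    return {
        "path": "Tables",          # where the shortcut appears in the Lakehouse
        "name": name,              # shortcut (table) name shown in Fabric
        "target": {
            "adlsGen2": {
                "connectionId": connection_id,            # placeholder GUID
                "location": account_url,                  # storage account endpoint
                "subpath": subpath,                       # container + Iceberg table folder
            }
        },
    }

payload = build_adls_shortcut_payload(
    name="orders_iceberg",
    connection_id="00000000-0000-0000-0000-000000000000",
    account_url="https://mystorageacct.dfs.core.windows.net",
    subpath="/lake/iceberg/orders",
)
print(json.dumps(payload, indent=2))
```

The key point the payload illustrates: Fabric records only a pointer (connection + location + subpath); no data is copied, which is why Snowflake stays the sole writer.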
2) Snowflake writes Iceberg tables directly into OneLake
In this model, Snowflake is configured to write Iceberg tables directly into OneLake, creating a single shared physical copy of the data that both Snowflake and Fabric can access. No shortcut and no replication are required, and Fabric works directly against the Iceberg data where it lives.
This is the cleanest long-term architecture for open table interoperability, assuming current Iceberg support in OneLake meets your needs and Snowflake remains the primary writer (or you are comfortable defining shared write semantics).
Cost-wise, you pay for Snowflake compute to write and maintain the Iceberg tables and for Fabric compute to run analytics workloads. You do not pay for duplicate storage, replication, or ingestion jobs. This is often the most cost-efficient option at scale when both platforms need access to the same data.
When to use this option
- You want a single shared physical copy of data across Snowflake and Fabric
- Snowflake remains the primary writer, but Fabric needs deep analytical access
- You are intentionally adopting open table formats for long-term interoperability
- You want to minimize storage duplication and ongoing data movement costs
When to avoid this option
- You need Fabric-native Delta features like Direct Lake today
- You want Fabric to be the primary write engine
- Your organization is not ready to define clear write and schema ownership
One-line summary:
Best long-term architecture for shared analytics using open table formats, with strong cost efficiency and clean data ownership.
More info: Use Snowflake with Iceberg tables in OneLake – Microsoft Fabric | Microsoft Learn, Building an Iceberg Lakehouse with Snowflake and Microsoft OneLake
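On the Snowflake side, this pattern comes down to two DDL statements: an external volume pointing at the OneLake DFS endpoint, and an Iceberg table (Snowflake catalog) created on that volume. A minimal sketch follows, with the DDL held in Python strings so its shape can be checked offline; the workspace, lakehouse, tenant ID, and table names are placeholders.

```python
# External volume targeting OneLake's ADLS-compatible endpoint.
# Replace myworkspace / mylakehouse / <tenant-id> with your own values.
ONELAKE_VOLUME_DDL = """
CREATE EXTERNAL VOLUME onelake_vol
  STORAGE_LOCATIONS = ((
    NAME = 'onelake'
    STORAGE_PROVIDER = 'AZURE'
    STORAGE_BASE_URL = 'azure://onelake.dfs.fabric.microsoft.com/myworkspace/mylakehouse.Lakehouse/Files/iceberg/'
    AZURE_TENANT_ID = '<tenant-id>'
  ));
"""

# Iceberg table managed by the Snowflake catalog, physically stored in OneLake.
ICEBERG_TABLE_DDL = """
CREATE ICEBERG TABLE orders_iceberg (
  order_id   NUMBER,
  order_date DATE,
  amount     NUMBER(12, 2)
)
  CATALOG = 'SNOWFLAKE'          -- Snowflake remains the (only) writer
  EXTERNAL_VOLUME = 'onelake_vol'
  BASE_LOCATION = 'orders';
"""

print(ONELAKE_VOLUME_DDL)
print(ICEBERG_TABLE_DDL)
```

Because `CATALOG = 'SNOWFLAKE'` keeps catalog ownership with Snowflake, the shared-write-semantics question mentioned above is resolved by construction: Fabric engines read the Iceberg data in place but do not write it.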
3) Fabric mirroring from Snowflake to OneLake (Delta format)
With mirroring, Fabric continuously replicates Snowflake tables into OneLake and stores them as Delta tables, delivering the most seamless Fabric-native experience across Lakehouse, Warehouse, Power BI Direct Lake, and AI workloads.
This option is ideal when you want maximum simplicity and performance inside Fabric and are comfortable with Snowflake no longer being the primary analytics engine for those datasets.
From a cost standpoint, you pay for Snowflake compute and cloud services to read source tables and capture changes, and for Fabric capacity to perform replication, optimization, and downstream analytics. You do not pay for OneLake storage for the mirrored data (mirroring storage is free up to a limit based on capacity; for details, see Cost of mirroring and Microsoft Fabric Pricing), nor do you pay for building or operating ingestion pipelines. While storage in OneLake is effectively free, mirroring is typically the most compute-intensive option.
When to use this option
- You want the most seamless, high-performance Fabric experience
- Power BI Direct Lake, Warehouse, and AI workloads are top priorities
- Fabric will be the primary analytics platform going forward
- You prefer simplicity over shared-write complexity
When to avoid this option
- Snowflake must remain the primary analytics engine
- You want to avoid continuous replication compute costs
- You are standardizing on open formats like Iceberg across platforms
One-line summary:
Best for Fabric-first analytics teams that want maximum performance and simplicity, at the cost of higher compute usage.
More info: Microsoft Fabric Mirrored Databases From Snowflake – Microsoft Fabric | Microsoft Learn, Tutorial: Configure a Microsoft Fabric Mirrored Database From Snowflake – Microsoft Fabric | Microsoft Learn
Summary comparison table
| Dimension | Snowflake-managed Iceberg + Shortcut | Snowflake writes Iceberg to OneLake | Fabric Mirroring (Delta) |
|---|---|---|---|
| Primary writer | Snowflake | Snowflake | Fabric |
| Physical data duplication | None (shared external storage) | None (shared OneLake storage) | Yes (replicated into OneLake) |
| Format in OneLake | Iceberg | Iceberg | Delta |
| Fabric experience | Read-oriented, less native | Good, improving | Best / most native |
| Power BI Direct Lake | Limited | Limited | Yes |
| Ongoing sync | Manual / external | Native (shared) | Automatic (CDC-based) |
| Separate storage charges | Yes (external storage) | No (included with Fabric capacity) | No (included with Fabric capacity) |
| Compute cost | Snowflake + Fabric | Snowflake + Fabric | Snowflake + Fabric (highest) |
| Best for | Minimal change, zero-copy | Long-term open architecture | Fabric-first analytics |
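The table above can be collapsed into a small decision helper. This is only a sketch of the article's own "when to use / when to avoid" logic, with simplified inputs; real decisions will weigh more factors (governance, cost, team skills).

```python
def recommend(primary_writer: str,
              need_direct_lake: bool,
              data_already_iceberg: bool) -> str:
    """Encode the decision logic of the three options above.

    primary_writer: 'snowflake' or 'fabric' -- who owns writes going forward
    need_direct_lake: Power BI Direct Lake / fully native Fabric required today
    data_already_iceberg: Snowflake-managed Iceberg already in external storage
    """
    # Fabric-first (or Direct Lake today) points at mirroring into Delta.
    if primary_writer == "fabric" or need_direct_lake:
        return "Option 3: Fabric mirroring (Delta)"
    # Existing Iceberg in external storage -> zero-copy shortcut, minimal change.
    if data_already_iceberg:
        return "Option 1: Snowflake-managed Iceberg + OneLake shortcut"
    # Otherwise, share one physical copy by writing Iceberg into OneLake.
    return "Option 2: Snowflake writes Iceberg directly into OneLake"

# Example: Snowflake stays the writer, no Direct Lake need, greenfield tables.
print(recommend("snowflake", False, False))
```

The ordering matters: the Direct Lake requirement overrides everything else, mirroring the article's point that only the Delta-based mirroring path is fully Fabric-native today.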
Iceberg support in OneLake is evolving quickly, and the gap between Iceberg and Delta experiences in Fabric is narrowing. Over time, Option 2 becomes increasingly attractive as a long-term, open-table architecture—while mirroring remains the fastest path to a fully Fabric-native experience today.
The post Three Ways to Use Snowflake Data in Microsoft Fabric first appeared on James Serra's Blog.