Microsoft is pushing hard for everyone to move from Azure Synapse Analytics to Microsoft Fabric. The messaging is clear: Fabric is the future, Synapse is the past. But when you strip away the marketing and look at the actual pricing, the picture is far less straightforward.
We have worked with data teams running both platforms. The answer to "should we migrate?" is almost never a simple yes or no. It depends entirely on what you are running, how you are running it, and whether anyone has actually optimised your current Synapse spend.
How Synapse Pricing Works
Synapse pricing is a patchwork of different billing models stitched together, which is part of the problem and part of the opportunity.
Dedicated SQL Pools are billed by DWU (Data Warehouse Units) on a per-hour basis. A DW100c starts around $1.20/hour. A DW30000c will set you back roughly $360/hour. These pools run continuously unless you explicitly pause them.
Serverless SQL Pools charge per TB of data scanned, currently around $5 per TB. No infrastructure to manage, but costs scale with query volume and data layout.
Apache Spark Pools are billed per node-hour based on the node size you choose. A small node runs around $0.22/hour, while a large node is closer to $0.88/hour. Multiply that by however many nodes are actually running, anywhere between your configured minimum node count and your auto-scale ceiling.
Data Integration charges per pipeline activity run and per data movement unit. Individual runs are cheap, but high-frequency pipelines add up quickly.
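To see how these meters combine, here is a rough monthly cost model using the illustrative list prices quoted above. The function name and inputs are our own sketch, and real rates vary by region and commitment tier, so treat the output as an order-of-magnitude estimate, not a quote.

```python
# Rough monthly cost model for a Synapse environment, combining the
# article's illustrative list prices. Rates vary by region; this is
# an estimate for comparison purposes only.

def synapse_monthly_cost(dwu_rate_per_hour, pool_hours_per_day,
                         serverless_tb_scanned, spark_node_hours,
                         spark_rate_per_node_hour, days=30):
    """Estimate monthly spend across the main Synapse billing meters."""
    dedicated = dwu_rate_per_hour * pool_hours_per_day * days
    serverless = 5.0 * serverless_tb_scanned          # ~$5 per TB scanned
    spark = spark_rate_per_node_hour * spark_node_hours
    return round(dedicated + serverless + spark, 2)

# Example: a DW200c pool (~$2.40/hour) running 24/7, 10 TB scanned by
# serverless SQL, and 200 small-node Spark hours (~$0.22/node-hour).
print(synapse_monthly_cost(2.40, 24, 10, 200, 0.22))
```

Because each meter is a separate term in the sum, you can see immediately which component dominates your bill, which is exactly the independent right-sizing advantage described below.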
The complexity here is actually an advantage for FinOps teams. Each component can be independently right-sized, paused, or replaced.
How Fabric Pricing Works
Fabric takes a fundamentally different approach. You purchase a capacity tier (F2 through F2048) and everything runs against that shared pool of Capacity Units (CUs).
An F2 capacity starts at roughly $263/month. An F64 runs around $8,400/month. An F2048 will cost you north of $268,000/month. Everything, from Data Factory pipelines and Spark notebooks to SQL analytics endpoints, Power BI, and real-time analytics, draws from the same CU pool.
OneLake storage is billed separately at around $0.023 per GB/month, comparable to Azure Data Lake Storage Gen2 pricing.
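The equivalent Fabric model is shorter, which reflects the pricing design: one capacity fee plus storage. The figures below are the article's illustrative prices, and the function is our own sketch, not a Microsoft calculator.

```python
# Rough monthly Fabric cost: one capacity fee plus OneLake storage,
# using the article's illustrative prices (~$0.023 per GB/month).

def fabric_monthly_cost(capacity_fee, onelake_gb, storage_rate=0.023):
    """Capacity fee is flat regardless of utilisation; storage scales."""
    return round(capacity_fee + onelake_gb * storage_rate, 2)

# Example: an F64 (~$8,400/month) with 10 TB sitting in OneLake.
print(fabric_monthly_cost(8400, 10_240))
```

Note that the capacity fee is the same whether you use 5% or 95% of your CUs, which is why the utilisation questions later in this article matter so much.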
The simplicity is appealing on paper. One bill, one capacity, one pool. But simplicity and predictability are not the same thing.
The Synapse Cost Traps
Before you consider migrating, make sure you are not falling into these common Synapse cost traps. Fixing these alone could save you 30-50% on your current bill.
Dedicated SQL Pools Running 24/7
This is the single biggest waste we see. Teams provision a dedicated SQL pool for a reporting workload that runs for four hours a day, then leave it running the other twenty. At DW200c (roughly $2.40/hour), those twenty idle hours cost about $48/day, over $1,400/month for nothing.
The fix: Pause dedicated SQL pools outside of active query windows. Use Azure Automation runbooks or Logic Apps to schedule pause/resume cycles. If your workload is genuinely intermittent, consider whether serverless SQL could replace it entirely.
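The savings from a pause schedule are easy to quantify. This sketch models compute only (storage is still billed while a pool is paused) and uses the article's illustrative DW200c rate:

```python
# Monthly compute savings from pausing a dedicated SQL pool outside
# its active query window. Storage continues to accrue while paused,
# so this models the compute meter only.

def pause_savings(rate_per_hour, active_hours_per_day, days=30):
    always_on = rate_per_hour * 24 * days
    paused_schedule = rate_per_hour * active_hours_per_day * days
    return always_on - paused_schedule

# DW200c (~$2.40/hour) that only needs to run 4 hours a day:
print(round(pause_savings(2.40, 4), 2))
```

For the four-hour reporting workload above, that is roughly five-sixths of the compute bill recovered with nothing more than a scheduled runbook.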
Spark Pools with High Minimum Node Counts
Setting a minimum node count of 5 or 10 "just in case" means you are paying for those nodes from the moment the pool starts, regardless of workload. If auto-pause is not configured or the timeout is too generous, those nodes sit idle for hours.
The fix: Set minimum nodes to 3 (the lowest allowed), configure aggressive auto-pause timeouts, and review whether your Spark workloads actually need the node sizes you have selected.
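The cost of a generous minimum is worth putting in numbers. Using the article's small-node rate, here is a sketch of what idle minimum nodes cost when auto-pause never fires:

```python
# Cost of idle minimum nodes in a Spark pool with no effective
# auto-pause, at the article's illustrative small-node rate.

def idle_min_node_cost(min_nodes, rate_per_node_hour,
                       idle_hours_per_day, days=30):
    """Spend on nodes that are provisioned but doing no work."""
    return min_nodes * rate_per_node_hour * idle_hours_per_day * days

# Ten small nodes (~$0.22/node-hour) idling 20 hours a day,
# versus the minimum allowed pool of 3 nodes:
print(round(idle_min_node_cost(10, 0.22, 20), 2))
print(round(idle_min_node_cost(3, 0.22, 20), 2))
```

Dropping from a "just in case" minimum of 10 to the floor of 3 cuts that idle spend by 70% before you even touch auto-pause timeouts.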
Pipelines Scanning More Data Than Needed
Data integration pipelines that use SELECT * or scan entire partitions when they only need a slice of the data generate unnecessary costs on both the integration and serverless SQL side.
The fix: Partition your data by date. Use folder pruning and predicate pushdown. Only scan what you need.
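As a sketch of what "only scan what you need" looks like in practice, the helper below builds the exact set of date-partitioned folders a job should touch. The `year=/month=/day=` layout and the container path are hypothetical examples, not a prescribed structure:

```python
from datetime import date, timedelta

# Build the list of date-partitioned folders a job actually needs,
# assuming a hypothetical year=/month=/day= folder layout in the lake.
def partition_paths(base, start, end):
    """Return one folder path per day in the inclusive date range."""
    days = (end - start).days + 1
    return [
        f"{base}/year={d.year}/month={d.month:02d}/day={d.day:02d}"
        for d in (start + timedelta(n) for n in range(days))
    ]

# A three-day incremental load touches three folders, not the table:
for p in partition_paths("abfss://lake/sales", date(2024, 3, 1), date(2024, 3, 3)):
    print(p)
```

Pointing a serverless SQL query or pipeline at these folders instead of the table root keeps the per-TB charge proportional to the slice you actually need.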
Managed VNet Overhead
Enabling the Managed Virtual Network on Spark pools adds cost for the managed private endpoints and can increase Spark pool start-up times. If you are not connecting to private-endpoint-protected resources, you may not need it.
The Fabric Cost Traps
Fabric has its own set of pitfalls, and because the platform is newer, teams often discover these after committing to a capacity tier.
Capacity Runs 24/7 by Default
When you provision a Fabric capacity, it runs continuously. Unlike Synapse dedicated SQL pools where pausing is a well-known pattern, many teams do not realise Fabric capacities can also be paused. If your data workloads only run during business hours, you could be paying for 16 hours of unused capacity every day.
The fix: Use Azure Automation or the Fabric REST API to pause capacity outside business hours. Microsoft has published guidance on this, but it is not the default configuration.
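The scheduling logic itself is simple; the actual suspend and resume calls go through the capacity REST API or an Azure Automation runbook, which we leave out here. This sketch only models the decision, with the business-hours window as an assumed example:

```python
from datetime import datetime, time

# Decide whether a Fabric capacity should be running right now, given
# an assumed weekday business-hours window. The actual suspend/resume
# is issued separately via the capacity REST API or a runbook.
def should_be_running(now, start=time(7, 0), end=time(19, 0)):
    """True only on weekdays within the active window."""
    is_weekday = now.weekday() < 5
    return is_weekday and start <= now.time() < end

print(should_be_running(datetime(2024, 6, 3, 9, 30)))   # a Monday morning
print(should_be_running(datetime(2024, 6, 8, 9, 30)))   # a Saturday
```

Run on a timer (every 15 minutes is plenty), this turns the 24/7 default into a 12x5 schedule, roughly a 64% reduction in billed capacity hours if your workloads fit the window.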
Throttling Instead of Overage Charges
When your workloads exceed your capacity tier, Fabric does not charge you extra. Instead, it throttles your workloads, queuing them or reducing their performance. This sounds like a cost protection mechanism, but in practice it means your reporting is late, your pipelines miss SLAs, and your data scientists are waiting around. Time is money too.
The implication: You need to right-size your capacity tier proactively, which is difficult when you do not yet understand your workload patterns on the new platform.
Unpredictable Capacity Requirements
With Synapse, you can look at each component independently. With Fabric, all workloads compete for the same CU pool. A heavy Spark job can starve your Power BI reports. A burst of pipeline activity can throttle your SQL analytics. Capacity planning requires understanding the aggregate demand across all workloads, which is significantly harder to model upfront.
OneLake Storage is Separate
The capacity fee covers compute only. OneLake storage is billed separately and can grow quickly if you are consolidating multiple data sources into a lakehouse architecture. Teams that assumed "one bill" sometimes miss this.
When Fabric is Cheaper
Fabric tends to win on cost in specific scenarios:
- Multiple Synapse workloads that could share capacity. If you are running separate Spark pools, dedicated SQL pools, and data integration pipelines, consolidating onto a single Fabric capacity can eliminate the overhead of multiple always-on resources.
- Teams already paying for Power BI Premium. Fabric capacity includes Power BI workload support. If you are paying separately for Power BI Premium (P1 starts at $4,995/month), folding that into a Fabric capacity can offset a significant portion of the cost.
- Greenfield data projects. If you are building from scratch, Fabric's integrated experience means less time stitching services together and fewer moving parts to manage.
When Synapse is Cheaper
Synapse holds the cost advantage in other scenarios:
- Single-workload environments. If you are only using serverless SQL to query a data lake, a Fabric capacity is overkill. Serverless SQL at $5/TB scanned is hard to beat for ad-hoc or low-volume analytics.
- Already-optimised Synapse deployments. If you have already right-sized your pools, implemented pause schedules, and tuned your pipelines, your current costs may be lower than the equivalent Fabric capacity.
- Workloads requiring dedicated SQL pool performance guarantees. Dedicated SQL pools offer predictable, isolated performance. Fabric's shared capacity model cannot make the same guarantee.
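The single-workload case above has a clean breakeven point. Using the article's illustrative prices for serverless SQL ($5/TB scanned) and the smallest Fabric capacity (F2 at ~$263/month, compute only):

```python
# Breakeven between Synapse serverless SQL (~$5/TB scanned) and an
# always-on F2 Fabric capacity (~$263/month), compute costs only.
F2_MONTHLY = 263.0
PER_TB_SCANNED = 5.0

breakeven_tb = F2_MONTHLY / PER_TB_SCANNED
print(round(breakeven_tb, 1))  # TB scanned per month where the costs match
```

Below roughly 50 TB scanned per month, pay-per-query serverless wins; and the real gap is wider than this suggests, because an F2 shared across other workloads would leave little headroom for the queries themselves.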
The Migration Reality
Even if the numbers favour Fabric, migration is not a lift-and-shift exercise.
Synapse dedicated SQL pools use a T-SQL dialect with specific distribution and indexing strategies. There is no direct equivalent in Fabric. Your warehouse code will need refactoring, not just copying.
Synapse pipelines are compatible with Fabric Data Factory, but there are differences in supported connectors, linked service configurations, and runtime behaviour. Plan for testing, not just deployment.
Spark notebooks are the closest to a like-for-like migration, but differences in runtime versions, library management, and environment configuration mean you should expect adjustments.
And throughout all of this, your existing Synapse environment needs to keep running. Nobody wants a gap in their data platform.
Our Recommendation
Do not migrate to Fabric just because Microsoft is pushing it. Run the numbers first.
If your Synapse environment is already cost-optimised and meeting your needs, there is no urgency. Microsoft has not announced an end-of-life date for Synapse Analytics as a whole, and the pricing is not changing unfavourably. The one exception is Dedicated SQL Pools, which have reached end-of-life for new feature development (security updates continue, but the engine is frozen). If you're heavily reliant on Dedicated SQL Pools specifically, that's a stronger argument for planning a migration path. But Synapse pipelines, serverless SQL, and Spark pools all continue to be fully supported with active roadmaps.
If you do decide to migrate, do it incrementally. Start new workloads on Fabric to build experience and understand your capacity requirements. Keep your existing Synapse running in parallel. Migrate workloads one at a time, validating costs at each step.
And above all, measure before and after. The only way to know if migration makes financial sense is to have clear visibility of your current spend and a realistic model of your projected Fabric costs.
Not sure whether your Synapse spend is optimised or if Fabric would save you money? Our free cloud cost assessment will give you the numbers you need to make the right decision.