FinOps
8 min read

Azure Monitor vs Log Analytics: Untangling the Monitoring Bill

Azure Monitor · Log Analytics · Observability · Cost Optimisation

If there's one area of the Azure bill that reliably confuses everyone, it's monitoring. Not because the services are particularly exotic, but because the billing model is a tangled mess of overlapping meters, shared infrastructure, and naming conventions that seem designed to prevent anyone from working out what they're actually paying for.

Azure Monitor. Log Analytics. Application Insights. Microsoft Sentinel. Four names that appear as separate line items on your invoice. Four services that all feed data into the same underlying infrastructure. Good luck figuring out which one is responsible for that five-thousand-pound monthly monitoring line.

We see this confusion in virtually every cost review we do. The monitoring bill is always higher than expected, and nobody can explain where the money is going. So let's untangle it.

The Relationship Nobody Explains Clearly

Here's the fundamental thing to understand: Azure Monitor is the umbrella service. It's the overarching platform that collects metrics, logs, and traces from your Azure resources. Log Analytics workspaces are where the log data actually lives. You pay for data ingestion into the workspace, plus certain Azure Monitor features on top.

Think of it this way. Azure Monitor is the front door. Log Analytics is the warehouse behind it. The data flows in through Azure Monitor's various mechanisms (diagnostic settings, agents, SDKs) and lands in a Log Analytics workspace where it's stored and queried. The billing, however, doesn't follow that neat architecture. Instead, it's split across multiple meters that obscure the true picture.

What Actually Appears on Your Bill

When you look at your Azure cost breakdown for monitoring, you'll typically see some combination of these:

Log Analytics covers the core data ingestion and retention charges. Every gigabyte of log data that lands in a workspace gets metered here. This is usually the largest monitoring cost, and we've covered it in detail in our Log Analytics deep dive.

Azure Monitor covers metrics collection, alert rules, diagnostic settings configuration, and notification actions. Metrics are free for most Azure resources, but the moment you start creating alert rules with frequent evaluation schedules, or exporting metrics to other destinations, the costs appear.

Application Insights is the application performance monitoring layer. It collects request traces, dependency calls, exceptions, and custom telemetry from your applications. Here's the key detail: workspace-based Application Insights instances ingest their data into a Log Analytics workspace. So you're paying Log Analytics ingestion rates, but the data shows up under the Application Insights meter. Classic Application Insights instances have their own separate ingestion pricing.

Microsoft Sentinel is the SIEM layer. It also ingests data into Log Analytics workspaces, with its own pricing model on top. Sentinel data appears under the Sentinel meter, not Log Analytics, even though it physically lives in the same workspace.

This is where it gets properly confusing. Data from Sentinel, Application Insights, and diagnostic settings all ends up in the same Log Analytics workspace. But on your bill, they're split across different meters. You can't just look at your Log Analytics costs and know what your monitoring is costing you. You need to add up four different service categories to get the full picture.
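To make the "add up four categories" point concrete, here's a minimal sketch. The figures are invented for illustration and don't reflect real Azure pricing; in practice you'd pull these values from an Azure Cost Management export filtered by service name.

```python
# Hypothetical monthly costs per service category, as they might appear
# in a cost export. All figures are invented for illustration.
monitoring_meters = {
    "Log Analytics": 2950.00,
    "Azure Monitor": 610.00,
    "Application Insights": 1180.00,
    "Microsoft Sentinel": 940.00,
}

# The true monitoring spend is the sum across all four categories,
# not any single line item.
total = sum(monitoring_meters.values())
print(f"Total monitoring spend: £{total:,.2f}")
```

Looking at any one meter in isolation, you'd conclude monitoring costs roughly half of what it actually does.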

The Double-Counting Trap

This overlapping billing model creates a genuine risk of paying twice for the same data. The most common scenario involves Sentinel.

When Sentinel is enabled on a Log Analytics workspace, it has its own ingestion pricing. Certain data types (security events, syslog, common event format logs) can qualify for free ingestion under Sentinel's included data allowance. But if you're also sending the same security events to a separate Log Analytics workspace without Sentinel, you're paying full Log Analytics ingestion rates for what is essentially duplicate data.

We've seen organisations running Sentinel on one workspace while a separate "operational" workspace collects the same security events from the same sources through different diagnostic settings. Two ingestion charges for the same data, with nobody realising because they appear under different meters in different parts of the bill.

The Application Insights Problem

Application Insights deserves its own section because it's responsible for some of the most dramatic cost surprises we encounter.

Microsoft is retiring classic (standalone) Application Insights instances in favour of workspace-based ones. The workspace-based model is cleaner: your APM data flows into a Log Analytics workspace and is billed at Log Analytics rates. But during this transition, many organisations have both classic and workspace-based instances running simultaneously. Some applications were migrated, others weren't, and nobody has a clear picture of which is which.

Classic instances have their own ingestion pricing and their own data caps. Workspace-based instances share the Log Analytics pricing model. If you're managing both, you're dealing with two different billing structures for the same type of telemetry.

But the bigger issue, regardless of which flavour you're running, is sampling.

Application Insights defaults to collecting everything. Every request, every dependency call, every trace. For a low-traffic internal application, that's fine. For a high-traffic production web application handling thousands of requests per second, it's a cost disaster. We routinely find Application Insights instances ingesting 20, 30, even 50 GB per day because nobody configured adaptive sampling.

The fix is straightforward: set the sampling rate to 10 to 25 percent for production workloads. You'll still get statistically meaningful telemetry for performance analysis and debugging. You'll just stop paying to record every single HTTP 200 response from a health check endpoint.
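The arithmetic behind that fix is worth seeing. This back-of-the-envelope sketch uses an assumed pay-as-you-go ingestion rate and invented volumes, not actual Azure prices:

```python
# Back-of-the-envelope savings from sampling. All figures are
# illustrative assumptions, not Azure's published pricing.
daily_ingest_gb = 40          # unsampled Application Insights ingestion
price_per_gb = 2.30           # assumed pay-as-you-go ingestion rate
sampling_rate = 0.15          # keep 15% of telemetry

monthly_cost_unsampled = daily_ingest_gb * price_per_gb * 30
monthly_cost_sampled = monthly_cost_unsampled * sampling_rate

print(f"Unsampled:       {monthly_cost_unsampled:,.2f}/month")
print(f"At 15% sampling: {monthly_cost_sampled:,.2f}/month")
```

At these assumed numbers, sampling turns a four-figure monthly charge into a three-figure one, with no change to the application itself.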

Common Cost Drivers (and What to Do About Them)

Beyond the structural confusion, there are specific patterns that inflate monitoring bills. These are the ones we flag most often:

Diagnostic settings sending data to multiple destinations. It's common to see verbose logs being sent to both a Log Analytics workspace and a storage account. That's two ingestion charges for the same data. If you need long-term archival, send logs to storage. If you need queryable operational data, send them to Log Analytics. Rarely do you need both for the same log categories.

Metrics export to Log Analytics. Azure Monitor metrics are free to collect and query within the Metrics Explorer. But the moment you export those metrics into a Log Analytics workspace, whether through diagnostic settings or a data collection rule, you start paying Log Analytics ingestion fees. We've seen organisations exporting platform metrics into workspaces "just in case," adding gigabytes of billable data per day for metrics that were already freely available in the portal.
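A quick sketch shows how fast a "just in case" export compounds. Volume and rate below are assumptions for illustration only:

```python
# Annual cost of exporting platform metrics into a workspace
# "just in case". Rates and volumes are invented assumptions.
metrics_gb_per_day = 3.0       # exported platform metrics
ingestion_price_per_gb = 2.30  # assumed Log Analytics pay-as-you-go rate

annual_cost = metrics_gb_per_day * ingestion_price_per_gb * 365
print(f"Annual cost of the export: {annual_cost:,.2f}")
```

A few gigabytes a day of metrics you could already query for free in Metrics Explorer quietly becomes thousands per year.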

High-frequency alert rule evaluation. Azure Monitor charges for alert rules, and for log search alerts the evaluation frequency affects the price: a rule that evaluates every minute costs more than one that evaluates every five or fifteen minutes. If you have dozens of alert rules all set to one-minute frequency on resources that don't require that level of responsiveness, the charges add up quietly.
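The frequency effect is easy to quantify. A one-minute rule runs fifteen times as many evaluations as a fifteen-minute one:

```python
# Evaluations per month at different alert rule frequencies
# (assuming a 30-day month for simplicity).
minutes_per_month = 60 * 24 * 30
evaluations = {freq: minutes_per_month // freq for freq in (1, 5, 15)}

for freq_min, count in sorted(evaluations.items()):
    print(f"Every {freq_min:>2} min: {count:>6,} evaluations/month")
```

Multiply that difference across dozens of rules and the evaluation volume, and whatever pricing applies to it, diverges dramatically.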

Sentinel ingesting data that's also going elsewhere. As mentioned above, duplicate data flows between Sentinel workspaces and standalone Log Analytics workspaces are more common than you'd expect. Map your data flows before assuming your monitoring architecture is efficient.

How to Actually Untangle Your Bill

The practical first step is to open Azure Cost Management and filter for all four service categories simultaneously: Log Analytics, Azure Monitor, Application Insights, and Microsoft Sentinel. Add them together. That's your true monitoring cost.

Then break it down:

  1. Identify your workspaces. List every Log Analytics workspace across your subscriptions. For each one, note whether Sentinel is enabled and which Application Insights instances are connected to it.

  2. Map your data flows. For each workspace, understand what's ingesting into it: diagnostic settings, agents, Application Insights, Sentinel connectors. Look for overlaps where the same data arrives through multiple paths.

  3. Check Application Insights sampling rates. For every Application Insights instance, verify that adaptive sampling is configured. If any production instance is running at 100 percent collection, that's an immediate cost reduction opportunity.

  4. Review diagnostic settings destinations. If logs are going to both a workspace and a storage account, decide which you actually need. Eliminate the duplicate.

  5. Audit metrics exports. Check whether platform metrics are being exported into Log Analytics workspaces. If you're not actively querying those metrics via KQL, stop exporting them.

  6. Check alert rule frequencies. Review your Azure Monitor alert rules and reduce evaluation frequency where real-time alerting isn't critical.
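The mapping exercise in steps 1 and 2 amounts to a duplicate-flow check, which can be sketched in a few lines. The workspace names and data-flow records below are invented; in practice you'd assemble them from your diagnostic settings, agent configurations, and Sentinel connector inventory:

```python
from collections import defaultdict

# Each record: (workspace, data source, data type).
# Invented example data for illustration.
data_flows = [
    ("sentinel-ws", "vm-prod-01", "SecurityEvent"),
    ("ops-ws",      "vm-prod-01", "SecurityEvent"),   # same data, twice
    ("ops-ws",      "appgw-01",   "AzureDiagnostics"),
    ("sentinel-ws", "fw-01",      "CommonSecurityLog"),
]

# Group by (source, data type); any group spanning more than one
# workspace means the same data is being ingested -- and billed --
# more than once.
by_source = defaultdict(set)
for workspace, source, data_type in data_flows:
    by_source[(source, data_type)].add(workspace)

duplicates = {k: v for k, v in by_source.items() if len(v) > 1}
for (source, data_type), workspaces in duplicates.items():
    print(f"{source}/{data_type} flows into: {sorted(workspaces)}")
```

Even a crude inventory like this surfaces the Sentinel-plus-operational-workspace duplication described earlier, which is exactly the pattern that hides across separate meters on the bill.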

For workspace-specific optimisation (commitment tiers, Basic Logs, retention policies, and ingestion analysis) we've covered the details in our Log Analytics cost guide. The two posts are complementary: this one helps you understand what's driving the overall monitoring bill, that one helps you optimise the biggest component of it.

The Monitoring Bill Is a Systems Problem

The reason monitoring costs are so hard to control isn't that any single service is expensive. It's that the billing model fragments a fundamentally interconnected system into separate meters that obscure the relationships between them. Data flows from your applications through Azure Monitor into Log Analytics workspaces, gets picked up by Sentinel, and generates charges under four different service names. Without understanding the architecture, you're optimising in the dark.

The organisations that keep their monitoring costs under control are the ones that treat it as a single, connected system rather than a collection of independent line items. They map their data flows, eliminate duplicates, configure sampling, and review the whole picture regularly.

The ones that don't end up paying thousands a month for logs nobody queries, metrics nobody looks at, and alerts nobody responds to.


Not sure what's driving your Azure monitoring bill? Book a free cost assessment and we'll map out exactly where your monitoring spend is going and where to cut it.

How mature is your cloud cost management?

Take our free 2-minute FinOps maturity test and get a personalised improvement roadmap.