There is a particular line item that shows up in almost every Azure cost review we do. Nobody panics about it. But month after month, it grows — quietly, reliably, and with nobody paying attention.
That line item is Log Analytics data ingestion.
We regularly see monitoring costs accounting for three to five percent of total Azure spend. For a larger environment, that easily runs into thousands per month. And the trajectory is always the same: up. Every time you onboard a new resource, enable another diagnostic setting, or deploy another monitoring agent, the ingestion volume ticks upward. Nobody ever goes back and reviews what they're collecting.
How Ingestion Costs Add Up
Log Analytics charges roughly £2.30 per GB for data ingestion. A few gigabytes sounds manageable, until "a few" becomes fifty gigabytes per day without anyone noticing. At 50 GB/day, you're looking at roughly £3,450 per month: over £41,000 per year, for storing logs that in many cases nobody is querying.
The growth pattern is predictable. You start with a handful of VMs and basic diagnostics. Then someone enables verbose container logging. Then security tooling starts pushing events. Then an audit requirement triggers "enable diagnostic logging on everything." Each addition is small. The cumulative effect is not.
The "Log Everything" Problem
A compliance requirement comes down: "We need diagnostic logging on all resources." Perfectly reasonable. The implementation usually looks like this: someone enables diagnostic settings on every resource with the category set to "allLogs."
Nobody asks how much that costs. Nobody asks which log categories are useful. Nobody asks whether the same data is already being collected through another mechanism.
The irony is that logging everything often makes monitoring worse. When every table is flooded with noise, the signal gets buried. Your security team can't find genuine threats because they're drowning in routine audit events. You've increased costs and decreased effectiveness simultaneously.
Commitment Tiers: The Discount Most People Miss
Azure offers commitment tiers for Log Analytics ingestion. If you're consistently ingesting 100 GB/day or more, a commitment tier reduces your per-GB cost by roughly thirty percent. Higher tiers offer steeper discounts.
Yet most organisations are still on pay-as-you-go, even when their volumes clearly justify a commitment. The reason: nobody is looking. The workspace was set up once and became part of the furniture.
If your daily ingestion averages above 100 GB, you should be on a commitment tier. Full stop.
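A sketch of the pay-as-you-go versus commitment-tier comparison, using the article's figures (£2.30/GB pay-as-you-go, a 100 GB/day entry tier at roughly 30% off). The rates and the overage-billing assumption are illustrative; check the published tier prices for your region.

```python
# Compare pay-as-you-go against a 100 GB/day commitment tier.
# All rates are illustrative assumptions, not published Azure prices.
PAYG_PER_GB = 2.30       # GBP/GB, pay-as-you-go
TIER_GB_PER_DAY = 100    # commitment tier entry point
TIER_DISCOUNT = 0.30     # ~30% off, per the article

def daily_cost(gb_per_day: float) -> dict:
    payg = gb_per_day * PAYG_PER_GB
    # Commitment tiers bill the full tier even when you ingest less;
    # here overage above the tier is assumed billed at the tier's
    # effective per-GB rate.
    tier_rate = PAYG_PER_GB * (1 - TIER_DISCOUNT)
    commitment = max(gb_per_day, TIER_GB_PER_DAY) * tier_rate
    return {"payg": payg, "commitment": commitment}

for gb in (60, 100, 150):
    c = daily_cost(gb)
    better = "commitment" if c["commitment"] < c["payg"] else "payg"
    print(f"{gb:>3} GB/day: payg £{c['payg']:.0f}, "
          f"tier £{c['commitment']:.0f} -> {better}")
```

Under these assumptions the tier wins comfortably at 100 GB/day and above, which is exactly why leaving a high-volume workspace on pay-as-you-go is money left on the table.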
Data Retention: The Hidden Multiplier
You get 30 days of interactive retention included with ingestion. After that, you pay per GB per month. We've seen organisations retaining twelve months of data across multiple workspaces because somebody set it that way two years ago and nobody questioned it.
If you're keeping data beyond 30 days, ask: do we actually query it? If the answer is no — and it usually is — archive to storage at a fraction of the cost.
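To see why retention is a multiplier, consider the steady state: every extra month of retention keeps another month's worth of ingestion on the books. The per-GB/month rates below are hypothetical placeholders, not Azure's actual retention or archive prices; look up the real rates for your region before deciding.

```python
# Steady-state retention cost: interactive retention vs archive.
# Both per-GB/month rates are hypothetical placeholders.
INTERACTIVE_PER_GB_MONTH = 0.10  # GBP, assumed
ARCHIVE_PER_GB_MONTH = 0.02      # GBP, assumed

def steady_state_monthly_cost(gb_per_day: float, retained_months: int,
                              rate_per_gb_month: float) -> float:
    """Monthly cost of data held beyond the included 30 days,
    once retention has fully ramped up."""
    included_gb = gb_per_day * 30          # first 30 days are included
    total_gb = gb_per_day * 30 * retained_months
    return max(total_gb - included_gb, 0) * rate_per_gb_month

interactive = steady_state_monthly_cost(50, 12, INTERACTIVE_PER_GB_MONTH)
archive = steady_state_monthly_cost(50, 12, ARCHIVE_PER_GB_MONTH)
print(f"interactive: £{interactive:,.0f}/month, archive: £{archive:,.0f}/month")
```

At 50 GB/day and twelve months of retention, the carried volume is over sixteen terabytes; the gap between the two rates is the bill for data nobody queries.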
Workspace Sprawl
The pattern we keep finding: multiple workspaces scattered across subscriptions, often collecting overlapping data. We've seen three different workspaces all ingesting the same security events from the same resources.
Workspace consolidation isn't always simple — there are legitimate reasons for separation. But duplicate ingestion is never a good reason. A workspace architecture review can eliminate significant redundant costs.
Basic Logs: Pay Less for Tables You Rarely Query
The Basic Logs tier drops ingestion to roughly £0.60 per GB — a significant reduction. The trade-off is a per-query charge and queries limited to eight days.
For high-volume, low-value tables — verbose container logs, raw network flow data, detailed trace telemetry — Basic Logs can dramatically reduce costs. If a table only gets queried during incident investigations, this tier was designed for exactly that.
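Whether Basic Logs pays off depends on how often the table is actually queried. A sketch using the article's £2.30 and £0.60 per-GB figures, plus a hypothetical £0.005 per-GB-scanned query charge (the real search pricing differs; the point is the shape of the trade-off):

```python
# Basic Logs vs the standard Analytics plan for one table.
ANALYTICS_PER_GB = 2.30   # standard ingestion, per the article
BASIC_PER_GB = 0.60       # Basic Logs ingestion, per the article
QUERY_PER_GB = 0.005      # per-GB-scanned query charge -- hypothetical

def monthly_cost_basic(ingest_gb: float, scanned_gb: float) -> float:
    """Basic Logs: cheap ingestion plus a charge on data scanned."""
    return ingest_gb * BASIC_PER_GB + scanned_gb * QUERY_PER_GB

def monthly_cost_analytics(ingest_gb: float) -> float:
    """Analytics plan: queries are included, ingestion is not."""
    return ingest_gb * ANALYTICS_PER_GB

ingest = 300  # a 300 GB/month verbose-log table
for scanned in (0, 500, 200_000):
    basic = monthly_cost_basic(ingest, scanned)
    analytics = monthly_cost_analytics(ingest)
    print(f"scanned {scanned:>7} GB: basic £{basic:,.0f} "
          f"vs analytics £{analytics:,.0f}")
```

Under these assumptions you would have to scan an enormous amount of data before Basic Logs loses, which is why incident-only tables are the obvious candidates.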
Common Sources of Excessive Ingestion
These are the usual suspects we find:
VM monitoring agents sending everything. Default data collection rules can be surprisingly verbose. Review them and filter out what you don't need.
Diagnostic settings on "allLogs". Not every resource needs every log category. Some services generate enormous access log volumes that would be better sent directly to storage.
Container logging with no filtering. Kubernetes clusters can generate staggering stdout and stderr volumes. Filter to the namespaces and log levels that matter.
High sampling rates on application telemetry. Adaptive sampling should control volume for production workloads. Ingesting every request from a high-traffic application is almost never necessary.
Security event collection on "All Events". The "Common" or "Minimal" settings cover what matters for most threat detection scenarios.
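Before tuning any of these, find out which tables dominate your own bill. A minimal sketch that ranks per-table monthly volume; the table names are real Log Analytics table names, but every number here is made up for illustration. Pull actual per-table usage from your workspace's usage data.

```python
# Rank tables by monthly ingestion to find cut candidates.
# Volumes below are invented for illustration, not from a real workspace.
table_gb = {
    "ContainerLog": 820.0,
    "SecurityEvent": 410.0,
    "AzureDiagnostics": 260.0,
    "AppTraces": 190.0,
    "Perf": 45.0,
}

total = sum(table_gb.values())
for name, gb in sorted(table_gb.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name:<18} {gb:7.1f} GB  {gb / total:5.1%}")
```

In almost every review, two or three tables account for the large majority of ingestion, so that's where the tuning effort should go first.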
Where to Start
- Know your number. Check daily ingestion volume in the workspace's Usage and estimated costs blade.
- Evaluate commitment tiers. Consistently above 100 GB/day? Switch immediately.
- Review diagnostic settings. Disable log categories that provide no operational or security value.
- Move low-query tables to Basic Logs. The savings are immediate.
- Reduce retention. If you're beyond 30 days, ensure there's a documented reason. Archive for long-term needs.
- Consolidate workspaces and eliminate duplicate ingestion.
- Tune collection rules. Collect what you need, not everything available.
The goal isn't to stop logging. Monitoring is essential. The goal is to stop paying for data that provides no value — and in most environments, there's a lot of it.
Not sure where your cloud cost management stands? Take our 2-minute FinOps maturity test — 10 questions, instant results, no sign-up required.
Want a deeper look? Get a free FinOps assessment and we'll show you exactly where the savings are.