The request comes in casually via Teams: "Can you provide the storage account access key for the analytics connector?" Sounds reasonable. An access key is the fastest way to make it work. Ship the key, close the ticket.
Do not do this.
## Why Storage Account Keys Are Dangerous
Each key provides full read-write access to everything in the storage account: every container, every blob, every queue, every table, every file share. No granularity. A key requested for read access to one container grants write access to every container.
Storage account keys don't expire. They can't be scoped to specific operations, containers, or IP addresses. They're the master keys to the entire account.
When a key is shared, it gets copied into config files, committed to Git repos, cloned by other developers, pasted into Teams messages. We've seen storage account keys in public GitHub repos, Confluence pages, and shared OneNote notebooks.
## What to Send Back
Thanks for the request. Before we provision access, I need a few details:
- Which storage account? (Name and resource group)
- Which specific container or file share?
- Read, write, or both?
- Where does the connector run? (Azure, on-prem, SaaS, local machine)
- Temporary or ongoing?
We don't provide storage account keys directly, as they grant unrestricted access to the entire account. Depending on your answers, we'll set up one of:
- Managed Identity (if Azure-hosted)
- Service Principal with RBAC (if outside Azure)
- SAS token (if time-limited access is sufficient)
This gathers the right information, explains why keys aren't appropriate, and presents alternatives that are better for the developer too: no key rotation headaches, scoped access, time-limited credentials.
## The Better Alternatives
Managed Identity. For anything running on Azure (VMs, App Services, Functions, AKS). The application authenticates using its platform identity, so there are zero credentials to manage, rotate, or store. Scope RBAC to a specific container with read-only access. This should be your default for every Azure-hosted workload: no secrets means no rotation, no expiry, no ops overhead.
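As a sketch, container-scoped RBAC for a managed identity can be granted with the Azure CLI. The subscription, resource group, account, container, and principal ID below are placeholders, not values from a real environment:

```shell
# Grant the app's managed identity read-only access to ONE container,
# not the whole storage account. Placeholder names throughout.
az role assignment create \
  --assignee "<managed-identity-principal-id>" \
  --role "Storage Blob Data Reader" \
  --scope "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default/containers/<container>"
```

Note the scope ends at the container, so even a compromised workload can only read that one container.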
Service Principal with RBAC. For applications running outside Azure where Managed Identity isn't available. Create an app registration, assign a role scoped to exactly what's needed. If the credential is compromised, blast radius is one container with read-only access, not the entire estate. Use certificates over client secrets where possible.
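A minimal sketch of provisioning such a service principal with the Azure CLI; the name and scope are illustrative placeholders:

```shell
# Create an app registration with a certificate credential (preferred
# over client secrets) and a role scoped to a single container.
az ad sp create-for-rbac \
  --name "analytics-connector" \
  --create-cert \
  --role "Storage Blob Data Reader" \
  --scopes "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/blobServices/default/containers/<container>"
```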
SAS Tokens. Avoid if possible. They're time-limited and scoped, which sounds good, but in practice they end up hardcoded in config files and nobody notices when they expire until something breaks at 2am. If you must use them, keep expiry short and treat them as temporary bridges, not permanent solutions.
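If a SAS token is unavoidable, prefer a user delegation SAS (signed with Entra ID credentials rather than an account key) with a short expiry. A sketch, with placeholder names and a few-hour lifetime:

```shell
# Generate a read-only user delegation SAS for one container,
# expiring in 4 hours. Requires the caller to hold an RBAC role
# on the account; no storage key is involved.
az storage container generate-sas \
  --account-name "<account>" \
  --name "<container>" \
  --permissions r \
  --expiry "$(date -u -d '+4 hours' '+%Y-%m-%dT%H:%MZ')" \
  --auth-mode login \
  --as-user
```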
| Scenario | Method | Preference |
|---|---|---|
| App runs on Azure | Managed Identity | Ideal. Always use this |
| App outside Azure, ongoing | Service Principal + RBAC | OK. Best option for external access |
| Temporary / one-off access | SAS Token | Avoid if possible. Use only as a bridge |
| Third-party SaaS connector | Service Principal or SAS | Depends on connector support |
| Storage account key | Access keys | Never. Disable completely |
## Disable Access Keys Entirely
Here's the step most teams don't take: once you've moved everything to Managed Identity or Service Principals, disable storage account key access altogether. Microsoft now recommends this as a security baseline. Defender for Cloud will flag storage accounts with key access enabled.
With key access disabled, even if someone obtains a key through a leaked config file or compromised repo, it won't work. The storage account simply refuses key-based authentication. Only Entra ID-based access (Managed Identity, Service Principal) is accepted.
This is the end state you should be working towards for every storage account. Move workloads to identity-based access, then turn off the keys.
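Turning off key-based auth is a single property on the storage account. A sketch with placeholder names:

```shell
# Disable shared key authorization entirely; from this point the
# account accepts only Entra ID-based requests.
az storage account update \
  --name "<account>" \
  --resource-group "<rg>" \
  --allow-shared-key-access false
```

Roll this out after confirming nothing still authenticates with keys; diagnostic logs on the storage account will show any remaining shared-key requests.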
## "But the Connector Only Supports Keys"
Sometimes true with older tools. Check if it supports SAS-based connection strings or service principal auth first. Newer versions often add these. If keys are genuinely the only option, provision a dedicated storage account for that integration containing nothing beyond what the connector needs. If the key is compromised, the blast radius is one account with one dataset.
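Creating that quarantine account is straightforward; a sketch with placeholder names and a locked-down baseline:

```shell
# Dedicated account for one legacy connector: nothing else lives here,
# so a leaked key exposes only this one dataset.
az storage account create \
  --name "<connectoraccount>" \
  --resource-group "<rg>" \
  --location "<region>" \
  --sku Standard_LRS \
  --min-tls-version TLS1_2 \
  --allow-blob-public-access false
```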
## The Broader Pattern
The key request is a symptom: defaulting to the most permissive access because it's the fastest. Every time you push back with the right questions and alternatives, the environment gets incrementally more secure. Over time, the team learns to ask for scoped access by default.
Want to review access patterns across your Azure storage accounts? Request a FinOps assessment. We identify over-permissive access alongside cost optimisation opportunities.