When Marketplace contracts or subscriptions expire or change without visibility, Azure may automatically continue billing at higher on-demand or list prices. These lapses often go unnoticed due to a lack of proactive tracking, ownership, or renewal alerts, resulting in substantial cost increases. The issue is amplified when contract records are siloed across procurement, finance, and engineering teams, with no centralized mechanism to monitor entitlement status or reconcile expected versus actual billing.
In many organizations, AWS Marketplace purchases are lumped into a single consolidated billing line without visibility into individual vendors. This lack of transparency makes it difficult to identify which Marketplace spend is eligible to count toward the EDP cap. As a result, teams may either overspend on direct AWS services to fulfill their commitment unnecessarily or miss the opportunity to right-size new commitments based on existing Marketplace purchases. In both cases, the absence of vendor-level detail hinders optimization.
Azure Marketplace offers two types of listings: transactable and non-transactable. Only transactable purchases contribute toward a customer’s MACC commitment. However, many teams mistakenly assume that all Marketplace spend counts, leading to missed opportunities to burn down commitments and risking budget inefficiencies. Selecting a non-transactable listing, when a transactable equivalent exists, can result in identical services being acquired at higher effective cost due to lost discounts. This confusion is exacerbated when procurement and engineering teams do not coordinate or consult Microsoft's guidance.
Many organizations mistakenly believe that all AWS Marketplace spend automatically contributes to their EDP commitment. In reality, only certain Marketplace transactions (those involving EDP-eligible vendors and transactable SKUs) count, and typically only toward a capped portion of the commitment. This misunderstanding can lead to double counting: forecasting based on the assumption that both native AWS usage and Marketplace purchases will fully draw down the commitment. If those assumptions are incorrect, the organization risks failing to meet its EDP threshold, incurring penalties or losing expected discounts.
Organizations frequently inherit continuous recording by default (e.g., through landing zones) without validating the business need for per-change granularity across all resource types and environments. In change-heavy accounts (ephemeral resources, CI/CD churn, autoscaling), continuous mode drives very high CIR volumes with limited additional operational value. Selecting periodic recording for lower-risk resource types and/or non-production environments can maintain necessary visibility while reducing CIR volume and cost. Recorder settings are account/region scoped, so you can apply continuous in production where required and periodic elsewhere.
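As a sketch of what a split configuration might look like, assuming the recorder is managed with boto3 (the recorder name, role ARN, and chosen resource types below are illustrative, not a recommendation), periodic daily recording can be the baseline, with continuous recording reserved for a few high-risk resource types:

```python
# Sketch: AWS Config recorder using periodic (DAILY) recording by default,
# with continuous recording only for selected high-risk resource types.
# The dict follows the shape expected by PutConfigurationRecorder;
# the name, role ARN, and override resource types are placeholders.
recorder = {
    "name": "default",
    "roleARN": "arn:aws:iam::123456789012:role/aws-config-role",  # placeholder
    "recordingGroup": {
        "allSupported": True,
        "includeGlobalResourceTypes": True,
    },
    "recordingMode": {
        "recordingFrequency": "DAILY",  # periodic recording as the baseline
        "recordingModeOverrides": [
            {
                "description": "Keep per-change history for IAM roles",
                "resourceTypes": ["AWS::IAM::Role"],
                "recordingFrequency": "CONTINUOUS",
            }
        ],
    },
}

# Applying it requires credentials and the target account/region, e.g.:
# import boto3
# boto3.client("config").put_configuration_recorder(ConfigurationRecorder=recorder)
```

Because recorder settings are account/region scoped, the same dict can be applied with `recordingFrequency: "CONTINUOUS"` in production accounts and left as `DAILY` elsewhere.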
AWS Fargate supports both x86 and Graviton2 (ARM64) CPU architectures, but by default, many workloads continue to run on x86. Graviton2 delivers significantly better price-performance, especially for stateless, scale-out container workloads. Teams that fail to configure task definitions with the `ARM64` architecture miss out on meaningful efficiency gains. Because this setting is not enabled automatically and is often overlooked, it results in higher compute costs for functionally equivalent workloads.
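The architecture is selected per task definition via the `runtimePlatform` field. A minimal sketch (family, image, and sizing values are placeholders) of a Fargate task definition opting into ARM64:

```python
# Sketch: ECS task definition fragment opting a Fargate task into ARM64
# (Graviton). runtimePlatform.cpuArchitecture selects the architecture;
# family, image, and sizing values below are placeholders.
task_definition = {
    "family": "web-api",  # placeholder
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "512",
    "memory": "1024",
    "runtimePlatform": {
        "cpuArchitecture": "ARM64",        # defaults to X86_64 if omitted
        "operatingSystemFamily": "LINUX",
    },
    "containerDefinitions": [
        {"name": "app", "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/app:latest"}
    ],
}

# Registering it requires credentials, e.g.:
# import boto3
# boto3.client("ecs").register_task_definition(**task_definition)
```

Note that the container image itself must be built for ARM64 (or published as a multi-architecture image); switching `cpuArchitecture` alone is not enough.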
S3 buckets configured with SSE-KMS but without Bucket Keys generate a separate KMS request for each object operation. This behavior results in disproportionately high KMS request costs for data-intensive workloads such as analytics, backups, or frequently accessed objects. Bucket Keys allow S3 to cache KMS data keys at the bucket level, reducing the volume of KMS calls and cutting encryption costs—often with no impact on security or performance.
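Bucket Keys are enabled in the bucket's encryption configuration. A sketch of the relevant settings (bucket name and KMS key ARN are placeholders):

```python
# Sketch: enable S3 Bucket Keys on a bucket already using SSE-KMS.
# BucketKeyEnabled lets S3 cache a bucket-level data key instead of
# calling KMS per object operation. Key ARN and bucket are placeholders.
encryption_config = {
    "Rules": [
        {
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE",  # placeholder
            },
            "BucketKeyEnabled": True,
        }
    ]
}

# Applying it requires credentials, e.g.:
# import boto3
# boto3.client("s3").put_bucket_encryption(
#     Bucket="my-logs-bucket",  # placeholder
#     ServerSideEncryptionConfiguration=encryption_config,
# )
```

Bucket Keys take effect for objects written after the setting is enabled; existing objects keep their original per-object keys unless rewritten.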
By default, AWS Config can be set to record changes across all supported resource types, including those that change frequently, such as security group rules, IAM role policies, route tables, and network interfaces (often ephemeral resources in containerized or auto-scaling setups). These high-churn resources can generate an outsized number of configuration items and inflate costs — especially in dynamic or large-scale environments.
This inefficiency arises when recording is enabled indiscriminately across all resources without evaluating whether the data is necessary. Without targeted scoping, teams may incur large charges for configuration data that provides minimal value, especially in non-production environments. This can also obscure meaningful compliance signals by introducing noise.
Audit logs are often retained longer than necessary, especially in environments where the logging destination is not carefully selected. Projects that initially route SQL Audit Logs or other high-volume sources to a Log Analytics Workspace (LAW) or Azure Storage may forget to revisit their retention strategy. Without policies in place, logs can accumulate unchecked—particularly problematic with SQL logs, which can generate significant volume. Lifecycle Management Policies in Azure Storage are a key tool for addressing this inefficiency but are often overlooked.
However, tier transitions are not always cost-saving. For example, in cases where log data consists of extremely large numbers of very small files (such as AKS audit logs across many pods), the transaction charges incurred when moving objects between storage tiers may exceed the potential savings from reduced storage rates. In these scenarios, it can be more cost-effective to retain logs in Hot tier until deletion, rather than moving them to lower-cost tiers first.
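A rough break-even check makes this concrete. The sketch below uses illustrative numbers, not current Azure prices; substitute your own rates. Tiering pays off only when storage savings over the remaining retention period exceed the one-time per-object transition charges:

```python
def tiering_saves_money(
    object_count: int,
    total_gb: float,
    months_remaining: float,
    hot_rate_gb_month: float = 0.018,      # illustrative price, not current
    cool_rate_gb_month: float = 0.010,     # illustrative price, not current
    write_cost_per_10k_ops: float = 0.10,  # illustrative transition cost
) -> bool:
    """Return True if moving blobs Hot -> Cool beats leaving them in Hot."""
    storage_savings = total_gb * (hot_rate_gb_month - cool_rate_gb_month) * months_remaining
    transition_cost = (object_count / 10_000) * write_cost_per_10k_ops
    return storage_savings > transition_cost

# 10 million tiny AKS audit-log blobs totalling 50 GB with 3 months left:
# transition cost (~$100) dwarfs storage savings (~$1.20), so stay in Hot.
print(tiering_saves_money(10_000_000, 50.0, 3.0))   # False
# 10,000 large blobs totalling 5 TB: storage savings dominate.
print(tiering_saves_money(10_000, 5_000.0, 3.0))    # True
```

The deciding ratio is objects per GB: many small files push the per-object transition cost past the per-GB storage savings.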
Detection
Identify resources with Audit Logging enabled
Determine whether logs are routed to Log Analytics Workspace or Azure Storage
Assess whether current retention aligns with compliance or operational needs
Evaluate volume and cost of logs retained beyond required periods
Review whether lifecycle policies or retention settings are currently configured
Check if any projects have a “set and forget” logging configuration that has never been reviewed
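Parts of the checklist above can be automated. As a minimal sketch, assuming an inventory of logging configurations has been exported (the record fields below are an assumed shape, not an Azure API response), configurations retaining logs beyond the agreed requirement can be flagged:

```python
# Sketch: flag audit-log configurations that retain data beyond the
# agreed requirement, or that have no retention policy at all.
# The record fields below are assumptions, not an Azure API shape.
def flag_over_retention(settings: list[dict], required_days: int) -> list[str]:
    """Return names of configurations retaining logs longer than
    required_days, or with no retention set (retention_days=None)."""
    flagged = []
    for s in settings:
        retention = s.get("retention_days")  # None means "keep forever"
        if retention is None or retention > required_days:
            flagged.append(s["name"])
    return flagged

settings = [
    {"name": "sql-audit-to-law", "destination": "LogAnalytics", "retention_days": 730},
    {"name": "sql-audit-to-storage", "destination": "Storage", "retention_days": None},
    {"name": "app-audit", "destination": "Storage", "retention_days": 90},
]
print(flag_over_retention(settings, required_days=365))
# ['sql-audit-to-law', 'sql-audit-to-storage']
```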
Remediation
Apply Azure Storage Lifecycle Management Policies to transition older logs to lower-cost tiers or delete them after a set retention period. Before implementing tier transitions, assess whether the additional transaction costs from moving large volumes of small log files could outweigh potential storage savings. In such cases, consider retaining logs in Hot tier until deletion if that results in lower overall cost.
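As a sketch, a management policy that cools audit-log blobs after 30 days and deletes them after one year follows the Azure Storage management-policy schema; the rule name, prefix, and day thresholds below are illustrative and should match your agreed retention requirements:

```python
# Sketch: Azure Storage lifecycle management policy tiering audit-log
# blobs to Cool after 30 days and deleting them after 365 days.
# Rule name, prefix, and thresholds are illustrative placeholders.
lifecycle_policy = {
    "rules": [
        {
            "enabled": True,
            "name": "age-out-audit-logs",
            "type": "Lifecycle",
            "definition": {
                "filters": {
                    "blobTypes": ["blockBlob"],
                    "prefixMatch": ["insights-logs-"],  # typical diagnostic-log prefix
                },
                "actions": {
                    "baseBlob": {
                        "tierToCool": {"daysAfterModificationGreaterThan": 30},
                        "delete": {"daysAfterModificationGreaterThan": 365},
                    }
                },
            },
        }
    ]
}

# Applied with e.g.:
#   az storage account management-policy create \
#       --account-name <account> --resource-group <rg> --policy @policy.json
```

For the many-small-files case discussed above, drop the `tierToCool` action and keep only `delete`, so blobs stay in Hot until their retention period ends.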
For logs in Log Analytics Workspace, assess whether they can be moved to Basic tables or stored in Storage Accounts instead
Establish project-specific retention requirements with stakeholders and enforce them across all audit logging configurations
Periodically audit logging destinations and lifecycle settings to prevent silent cost creep
VPC Flow Logs configured with the ALL filter and delivered to CloudWatch Logs often result in unnecessarily high log ingestion volumes — especially in high-traffic environments. This setup is rarely required for day-to-day monitoring or security use cases but is commonly enabled by default or for temporary debugging and then left in place. As a result, teams incur excessive CloudWatch charges without realizing the logging configuration is misaligned with actual needs.
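If flow logs are still required, a common right-sizing is to capture only rejected traffic and deliver to S3 rather than CloudWatch Logs. A sketch of the replacement configuration (the VPC ID and bucket ARN are placeholders; the old ALL/CloudWatch flow log must be deleted separately):

```python
# Sketch: create VPC Flow Logs capturing only REJECT traffic, delivered
# to S3 instead of CloudWatch Logs. VPC ID and bucket ARN are placeholders.
flow_log_params = {
    "ResourceIds": ["vpc-0123456789abcdef0"],  # placeholder
    "ResourceType": "VPC",
    "TrafficType": "REJECT",                   # instead of ALL
    "LogDestinationType": "s3",
    "LogDestination": "arn:aws:s3:::my-flow-logs-bucket",  # placeholder
}

# Requires credentials, e.g.:
# import boto3
# boto3.client("ec2").create_flow_logs(**flow_log_params)
```

Capturing REJECT-only traffic preserves the most common security use case (spotting blocked connection attempts) while dropping the bulk of the ingestion volume that ALL generates.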