This inefficiency occurs when Savings Plans are purchased within the final days of a calendar month, reducing or eliminating the ability to reverse the purchase if errors are discovered. Because the refund window is constrained to both a 7-day period and the same month, late-month purchases materially limit correction options. This increases the risk of locking in misaligned commitments (e.g., incorrect scope, amount, or term), which can lead to sustained underutilization and unnecessary long-term spend.
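The timing constraint above can be sketched as a simple date check. This is an illustrative rule only, assuming the return window closes seven days after the purchase date and must also fall within the purchase month; confirm the exact refund conditions against current AWS policy.

```python
from datetime import date, timedelta
import calendar

def full_return_window(purchase: date) -> bool:
    """Return True if a Savings Plan bought on `purchase` would keep its
    full 7-day return window inside the same calendar month (illustrative
    rule mirroring the constraint described above)."""
    # Last calendar day of the purchase month.
    last_day = date(purchase.year, purchase.month,
                    calendar.monthrange(purchase.year, purchase.month)[1])
    # Day 7 after purchase must still fall within the purchase month.
    return purchase + timedelta(days=7) <= last_day
```

A check like this could gate purchase approvals so that late-month buys are deferred to the start of the next month.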
This inefficiency occurs when licensed Azure DevOps users remain assigned after individuals leave the organization or stop using the platform. These inactive users continue to generate recurring per-user charges despite providing no ongoing value, leading to unnecessary spend over time.
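Detecting these users typically means comparing last-access dates against an idle threshold. A minimal sketch, assuming user records have hypothetical `name` and `last_access` fields (the real Azure DevOps user entitlements API exposes similar data) and an assumed 90-day threshold:

```python
from datetime import datetime, timedelta

def inactive_users(users, as_of, max_idle_days=90):
    """Flag licensed users whose last access is older than `max_idle_days`.
    `users` is a list of dicts with hypothetical keys 'name' and
    'last_access' (datetime); the 90-day threshold is an assumption."""
    cutoff = as_of - timedelta(days=max_idle_days)
    return [u["name"] for u in users if u["last_access"] < cutoff]
```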
This inefficiency occurs when teams assume AWS Marketplace SaaS purchases will contribute toward EDP or PPA commitments, but the SaaS product is not eligible under AWS’s “Deployed on AWS” standard. As of May 1, 2025, AWS Marketplace allows SaaS products regardless of where they are hosted, while separately identifying products that qualify for commitment drawdown via a visible “Deployed on AWS” badge.
Eligibility is determined by the invoice date, not the contract signing date. As a result, Marketplace SaaS contracts signed before the policy change may still generate invoices after May 1, 2025, that no longer qualify for commitment retirement. This can lead to Marketplace spend appearing on AWS invoices without reducing commitments, creating false confidence in commitment progress and increasing the risk of end-of-term shortfalls.
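The invoice-date rule described above can be expressed as a small eligibility check. This is a simplification for illustration: it assumes pre-change invoices counted toward commitments and that the badge is the sole post-change criterion; verify specifics against the current EDP/PPA terms.

```python
from datetime import date

POLICY_CHANGE = date(2025, 5, 1)  # effective date cited above

def counts_toward_commitment(invoice_date: date, deployed_on_aws: bool) -> bool:
    """Sketch of the eligibility logic: after the policy change, only
    products carrying the 'Deployed on AWS' badge retire commitment;
    invoices dated before the change are unaffected by the new rule."""
    if invoice_date < POLICY_CHANGE:
        return True
    return deployed_on_aws
```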
This inefficiency occurs when workloads are constrained to run only on Spot-based capacity with no viable path to standard nodes when Spot capacity is reclaimed or unavailable. While Spot reduces unit cost, rigid dependence can create hidden costs by requiring standby standard capacity elsewhere, delaying deployments, or increasing operational intervention to keep environments usable. GKE explicitly recommends mixing Spot and standard node pools for continuity when Spot is unavailable.
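A first-pass audit for this pattern is checking whether any node pool in a cluster offers standard capacity. A minimal sketch, assuming node pools are represented as dicts with a hypothetical `spot` flag (not the actual GKE API shape):

```python
def has_on_demand_fallback(node_pools):
    """Return True only if at least one pool offers standard (non-Spot)
    capacity, per the mixed-pool pattern GKE recommends. `node_pools` is
    a list of dicts with a hypothetical 'spot' boolean flag."""
    return any(not p.get("spot", False) for p in node_pools)
```

Clusters where this returns False have no path to standard nodes when Spot capacity is reclaimed.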
This inefficiency occurs when Kubernetes Jobs or CronJobs running on EKS Fargate leave completed or failed pod objects in the cluster indefinitely. Although the workload execution has finished, AWS keeps the underlying Fargate microVM running to allow log inspection and final status checks. As a result, vCPU, memory, and networking resources remain allocated and billable until the pod object is explicitly deleted.
Over time, large numbers of stale Job pods can generate direct compute charges as well as consume ENIs and IP addresses, leading to both unnecessary spend and capacity pressure. This pattern is common in batch-processing and scheduled workloads that lack automated cleanup.
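A cleanup job for this pattern only needs to find finished pods past an age threshold. A sketch under assumed field names (`name`, `phase`, `finished_at`; real pod objects expose this via the Kubernetes API) with an assumed 24-hour threshold:

```python
from datetime import datetime, timedelta

def stale_job_pods(pods, now, max_age_hours=24):
    """Identify completed or failed pods older than `max_age_hours` so they
    can be deleted and stop accruing Fargate charges. `pods` is a list of
    dicts with hypothetical keys 'name', 'phase', and 'finished_at'."""
    cutoff = now - timedelta(hours=max_age_hours)
    return [p["name"] for p in pods
            if p["phase"] in ("Succeeded", "Failed")
            and p["finished_at"] < cutoff]
```

Kubernetes also offers a built-in alternative: setting `ttlSecondsAfterFinished` on the Job spec deletes finished Jobs and their pods automatically.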
This inefficiency occurs when ElastiCache clusters continue running engine versions that have moved into extended support. While the service remains functional, AWS charges an ongoing premium for extended support that provides no added performance or capability. These costs are typically avoidable by upgrading to a version within standard support.
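Flagging affected clusters amounts to matching running engine versions against the set that has entered extended support. A sketch with hypothetical field names; populate the version set from AWS's published end-of-standard-support schedule:

```python
def extended_support_clusters(clusters, extended_versions):
    """Return IDs of clusters running an engine version in
    `extended_versions` (versions past standard support). `clusters` is a
    list of dicts with hypothetical 'id' and 'engine_version' keys."""
    return [c["id"] for c in clusters
            if c["engine_version"] in extended_versions]
```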
This inefficiency occurs when workloads with predictable, long-running compute usage continue to run entirely on on-demand pricing instead of leveraging Committed Use Discounts. For stable environments, such as production services or continuously running batch workloads, failing to apply CUDs results in materially higher compute spend without any operational benefit. The inefficiency is driven by pricing choice, not resource overuse.
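The scale of this inefficiency is straightforward arithmetic. A sketch of the estimate; the discount rate shown is purely illustrative and must be taken from current GCP pricing for the specific machine family and commitment term:

```python
def cud_monthly_savings(on_demand_hourly, hours_per_month, discount_rate):
    """Estimate monthly savings from a Committed Use Discount for a
    steadily running workload. `discount_rate` is the fractional discount
    (e.g. 0.37 as a hypothetical 1-year rate) from current pricing."""
    on_demand_cost = on_demand_hourly * hours_per_month
    return on_demand_cost * discount_rate
```

For example, a workload costing $0.10/hour running all month (~730 hours) at a hypothetical 37% discount would save roughly $27/month per instance.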
This inefficiency occurs when backup data persists longer than intended due to misaligned or outdated retention policies. It often arises when retention requirements change over time, but older recovery points are not evaluated or cleaned up accordingly. In some cases, manually configured backups or legacy policies remain in place even after operational or compliance needs have been reduced.
As a result, backup storage continues to grow and incur cost without delivering additional recovery value.
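Auditing for this pattern means comparing each recovery point's age against the currently intended retention period. A minimal sketch, assuming recovery points are dicts with hypothetical `id` and `created_at` fields:

```python
from datetime import datetime, timedelta

def expired_recovery_points(points, now, retention_days):
    """List recovery points older than the intended retention period so
    they can be reviewed and deleted. `points` is a list of dicts with
    hypothetical keys 'id' and 'created_at' (datetime)."""
    cutoff = now - timedelta(days=retention_days)
    return [p["id"] for p in points if p["created_at"] < cutoff]
```

Running this against the current retention target, rather than the one in effect when the backups were created, surfaces recovery points that outlived their requirement.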
This inefficiency occurs when Amazon Aurora database clusters are intentionally stopped to avoid compute costs but are automatically restarted by the service after the maximum allowed stop period (currently seven days). Once restarted, the database instances begin accruing instance-hour charges even if the database is not needed.
Because Aurora does not provide native lifecycle controls to keep clusters stopped indefinitely, this behavior can result in recurring, unintended compute spend, particularly in non-production, seasonal, or infrequently accessed environments where clusters are stopped and forgotten.
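The common workaround is a scheduled job that detects auto-restarted clusters and stops them again. A sketch of the selection logic only, with illustrative field names rather than the actual Aurora API response shape:

```python
def clusters_to_restop(clusters, should_be_stopped):
    """Given cluster dicts with hypothetical 'id' and 'status' keys,
    return the IDs a scheduled job should stop again: clusters tagged as
    should-be-stopped but currently 'available', i.e. auto-restarted."""
    return [c["id"] for c in clusters
            if c["id"] in should_be_stopped and c["status"] == "available"]
```

In practice this logic would run on a schedule (e.g. a daily Lambda function) and issue a stop call for each returned cluster.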
This inefficiency occurs when automated Cloud SQL backups are retained longer than required by recovery objectives or governance needs. Because backups accumulate over the retention window (and can grow quickly for high-change databases), excessive retention drives ongoing backup storage charges without improving practical recoverability.
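How retention multiplies storage cost can be shown with a rough upper-bound estimate: one full backup plus incremental change for each retained day. Real Cloud SQL backup sizing differs (backups are incremental and compressed); this only illustrates the relationship between retention length and retained storage:

```python
def backup_storage_gb(full_backup_gb, daily_change_gb, retention_days):
    """Rough upper-bound estimate of retained backup storage: one full
    backup plus the daily change volume for each retained day. Inputs
    are illustrative, not derived from actual Cloud SQL billing."""
    return full_backup_gb + daily_change_gb * retention_days
```

For a 100 GB database changing 5 GB/day, cutting retention from 30 days to 7 days would reduce this estimate from 250 GB to 135 GB of billable backup storage.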