AWS Marketplace Annual Subscriptions Reverting to Pay-As-You-Go Rates
Other
Cloud Provider
AWS
Service Name
AWS Marketplace
Inefficiency Type
Suboptimal Pricing Model

When organizations purchase third-party software through AWS Marketplace using annual subscriptions, they typically receive meaningful discounts compared to hourly pay-as-you-go (PAYG) pricing. However, when these annual subscriptions expire without active renewal, billing automatically reverts to the default hourly PAYG rate — which can be substantially higher. This is not a renewal at a higher rate; it is the absence of a renewal action that causes the subscription to lapse and the costlier pricing tier to take effect. Because the subscription simply expires silently, many teams do not realize they have lost their discounted rate until the cost increase appears in the next billing cycle.

This inefficiency is especially difficult to manage in enterprise environments where multiple Marketplace subscriptions are purchased at irregular intervals throughout the year, each with its own expiration date. Private offers — which provide custom-negotiated pricing — add further complexity because they cannot auto-renew by design; when a private offer expires, the customer either moves to the product's higher public pricing or loses the subscription entirely. The financial impact can be severe: in some cases, the licensing cost at PAYG rates can exceed the cost of the underlying compute infrastructure itself, as commonly seen with enterprise software such as SUSE Linux for SAP workloads.

Additionally, for AMI-based products, annual subscriptions are tied to specific instance types. Changing instance types during the subscription period causes billing to revert to hourly rates for the new type, creating another avenue for unintended cost increases even before the subscription formally expires.

Orphaned Azure Function Apps with No Active Functions or Triggers
Compute
Cloud Provider
Azure
Service Name
Azure Functions
Inefficiency Type
Unused Resource

Azure Function apps can persist long after the applications or workflows they supported have been retired — particularly in development, testing, and experimentation environments where cleanup is often overlooked. Even when no functions are deployed or no triggers are active, the underlying infrastructure dependencies continue to generate charges. The nature and severity of this waste depends heavily on the hosting plan type: function apps on Premium or Dedicated (App Service) plans incur continuous compute charges for allocated instances regardless of activity, while even Consumption plan function apps still require an associated storage account that accrues transaction and capacity costs from internal runtime operations.

Each function app is provisioned with a required Azure Storage account used for storing function code, managing triggers, and maintaining execution state. This storage account generates costs through read/write transactions and capacity usage even when the function app is completely idle — driven by the Functions runtime's internal health checks and state management. Additionally, if Application Insights was enabled for monitoring, telemetry data ingestion charges can accumulate silently in the background. Across an organization with dozens of abandoned function apps spanning multiple subscriptions, these individually modest charges compound into meaningful and entirely avoidable waste.
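A cleanup pass over an inventory of function apps can surface these orphans. The sketch below assumes a simple exported inventory format (name, plan type, deployed function count), for example assembled from `az functionapp list` output plus a per-app function count; the dictionary keys are illustrative, not a real Azure SDK structure.

```python
# Triage sketch: flag Azure Function apps with zero deployed functions.
# The inventory format (list of dicts with these keys) is an assumption
# for illustration, not an Azure API schema.

def find_orphaned_function_apps(inventory: list[dict]) -> list[str]:
    """Return names of function apps with no deployed functions.

    Apps on Premium/Dedicated plans are listed first, since they bill for
    allocated compute even when idle; Consumption apps still accrue storage
    and telemetry costs, so they remain cleanup candidates too.
    """
    candidates = [app for app in inventory if app["function_count"] == 0]
    # Premium/Dedicated plans incur the largest idle cost, so sort them first.
    candidates.sort(key=lambda app: app["plan"] not in ("Premium", "Dedicated"))
    return [app["name"] for app in candidates]

inventory = [
    {"name": "orders-api", "plan": "Consumption", "function_count": 3},
    {"name": "poc-ingest", "plan": "Premium", "function_count": 0},
    {"name": "old-webhook", "plan": "Consumption", "function_count": 0},
]
print(find_orphaned_function_apps(inventory))  # ['poc-ingest', 'old-webhook']
```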

Fixed Instance Count on Virtual Machine Scale Set Without Autoscaling
Compute
Cloud Provider
Azure
Service Name
Azure Virtual Machine Scale Sets
Inefficiency Type
Inefficient Configuration

Azure Virtual Machine Scale Sets can operate in two modes: manual scaling with a fixed instance count, or autoscaling with dynamic instance counts that respond to demand. When a scale set is configured with manual scaling, it maintains the same number of VM instances at all times — regardless of whether those instances are actively processing workload. Every provisioned instance continues to incur per-second compute charges, meaning the organization pays for full capacity even during off-peak hours, weekends, or seasonal lulls when only a fraction of that capacity is needed.

This pattern is especially wasteful for workloads with variable demand — web applications with daily traffic cycles, batch processing jobs that run at specific intervals, or services with clear seasonal peaks. If a scale set is sized for peak demand but runs at that capacity around the clock, the gap between provisioned resources and actual utilization translates directly into unnecessary spend. Microsoft explicitly identifies autoscaling as a mechanism to reduce scale set costs by running only the number of instances required to meet current demand.

There are legitimate reasons to maintain fixed capacity — stateful applications that cannot tolerate dynamic instance changes, workloads with licensing constraints tied to specific instance counts, or scenarios where consistent performance without scale-up latency is critical. However, many scale sets running at fixed capacity do so simply because autoscaling was never configured, not because it was deliberately excluded. Identifying and addressing these cases represents a significant cost optimization opportunity.
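The gap between provisioned and needed capacity translates directly into spend, and a back-of-the-envelope estimate makes the opportunity concrete. The hourly rate below is an assumed placeholder, not a real VM price.

```python
# Rough monthly waste estimate for a scale set pinned at peak capacity
# instead of autoscaling to demand. The $0.20/hour rate is hypothetical.

def fixed_capacity_waste(fixed_instances: int, avg_needed: float,
                         hourly_rate: float, hours: int = 730) -> float:
    """Monthly spend on instances beyond the average number actually needed."""
    idle_instances = max(fixed_instances - avg_needed, 0)
    return idle_instances * hourly_rate * hours

# Example: sized for a peak of 20 instances, but averaging 8 in use.
print(f"${fixed_capacity_waste(20, 8, 0.20):,.2f} wasted per month")
```

Even at a modest per-instance rate, a dozen idle instances running around the clock add up to four figures per month for a single scale set.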

Azure Firewall Premium SKU Deployed Without Using Premium Features
Networking
Cloud Provider
Azure
Service Name
Azure Firewall
Inefficiency Type
Overprovisioned Resource

Azure Firewall is available in three SKUs — Basic, Standard, and Premium — each designed for different security requirements and priced accordingly. The Premium SKU includes advanced threat protection capabilities such as TLS inspection, signature-based intrusion detection and prevention (IDPS), URL filtering, and web categories. These features are designed for highly sensitive and regulated environments, such as those processing payment card data or requiring PCI DSS compliance. However, many organizations deploy the Premium SKU by default — often during initial provisioning or as a precautionary measure — without actively configuring or requiring any of these Premium-exclusive features.

The cost impact is significant because the Premium SKU carries a substantially higher fixed hourly deployment charge compared to the Standard SKU — approximately 40% more — while the per-gigabyte data processing rate remains the same across both tiers. Since this hourly charge accrues continuously regardless of whether Premium features are enabled or traffic is flowing, every firewall instance running on the Premium SKU without leveraging its advanced capabilities represents a persistent and avoidable cost premium. In organizations with multiple firewall deployments across subscriptions and environments, this waste compounds quickly.

This pattern is especially common in non-production environments such as development and staging, where advanced threat protection features like TLS inspection and IDPS provide little practical value. Microsoft has recognized this as a frequent optimization opportunity and introduced a zero-downtime SKU change feature specifically to simplify the downgrade process from Premium to Standard.
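Because the Premium surcharge is a fixed hourly deployment fee, the annual premium per firewall is easy to estimate. The hourly rates below are illustrative placeholders; check current Azure Firewall pricing for real numbers.

```python
# Back-of-the-envelope annual cost of running Azure Firewall Premium without
# using any Premium features, vs. Standard. Rates are hypothetical.

def annual_sku_premium(standard_hourly: float, premium_hourly: float,
                       hours_per_year: int = 8760) -> float:
    """Extra fixed deployment cost per year of Premium over Standard."""
    return (premium_hourly - standard_hourly) * hours_per_year

# Example: Standard at an assumed $1.25/hour, Premium ~40% higher at $1.75/hour.
print(f"${annual_sku_premium(1.25, 1.75):,.2f} extra per firewall per year")
```

Multiplied across several firewall deployments in dev, staging, and production, the downgrade-eligible surcharge alone can justify using the zero-downtime SKU change.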

Idle or Underutilized Azure Bastion Deployment
Networking
Cloud Provider
Azure
Service Name
Azure Bastion
Inefficiency Type
Underutilized Resource

Azure Bastion incurs continuous hourly charges from the moment it is deployed until the resource is deleted — regardless of whether any connections are actively being made. This means a Bastion host sitting idle in a development or test environment generates the same cost as one actively serving remote sessions. Because there is no ability to pause or stop a Bastion deployment, the only way to eliminate charges is to delete the resource entirely.

This inefficiency is especially common in non-production environments where Bastion may have been provisioned for occasional troubleshooting or administrative access but then left running indefinitely. Teams often deploy Bastion during initial environment setup and forget about it, or assume it only costs money when sessions are active. Over time, these idle deployments quietly accumulate significant charges — particularly when deployed at the Basic, Standard, or Premium SKU tiers, which use dedicated infrastructure and carry meaningful hourly rates.

The cost impact compounds across an organization with multiple subscriptions or environments. A single idle Bastion host may seem modest in isolation, but dozens of forgotten deployments across dev, test, staging, and sandbox environments can represent a substantial and entirely avoidable expense.
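Since Bastion cannot be paused, triage comes down to finding hosts with no recent sessions and deleting them. The sketch below assumes a last-connection timestamp per host has been exported (for example from diagnostics logs); the field names are illustrative, not an Azure API schema.

```python
# Triage sketch for idle Azure Bastion hosts, given exported last-connection
# timestamps. Field names are hypothetical, not an Azure API schema.
from datetime import date, timedelta

def deletion_candidates(hosts: list[dict], today: date,
                        idle_days: int = 60) -> list[str]:
    """Hosts with no connections in `idle_days` days. Since Bastion cannot be
    paused or stopped, deletion (with redeployment when access is next needed)
    is the only way to stop the hourly charge."""
    cutoff = today - timedelta(days=idle_days)
    return [h["name"] for h in hosts if h["last_connection"] < cutoff]

hosts = [
    {"name": "bastion-prod", "last_connection": date(2024, 5, 28)},
    {"name": "bastion-dev", "last_connection": date(2024, 1, 10)},
]
print(deletion_candidates(hosts, today=date(2024, 6, 1)))  # ['bastion-dev']
```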

Overprovisioned Azure NetApp Files Capacity Pools
Storage
Cloud Provider
Azure
Service Name
Azure NetApp Files
Inefficiency Type
Overprovisioned Resource

Azure NetApp Files bills based on provisioned capacity pool size — not on the actual data stored within volumes. This means that when a capacity pool is provisioned at a size significantly larger than the sum of volume quotas allocated within it, the organization pays for stranded, unallocated capacity every hour. For example, a 10 TiB capacity pool with only 6 TiB of volume quotas allocated has 4 TiB of capacity that generates cost but serves no purpose.

This overprovisioning commonly occurs for several reasons. Capacity pools do not automatically shrink — since April 2021, pool sizing is entirely a manual customer responsibility. When volumes are deleted, the freed capacity remains in the pool unless an administrator explicitly resizes it downward. Additionally, with auto QoS pools, volume quotas directly determine throughput performance, which incentivizes teams to set larger quotas than their data requires, further inflating pool sizes. Over time, these dynamics create a growing gap between provisioned pool capacity and what is actually needed, resulting in persistent, avoidable charges that compound across multiple pools and regions.
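The 10 TiB / 6 TiB example above can be expressed as a simple stranded-capacity check. The per-TiB-hour rate is a placeholder; the point is that any gap between pool size and allocated volume quotas is billed without serving any purpose.

```python
# Stranded-capacity cost for an Azure NetApp Files capacity pool.
# The $0.20 per TiB-hour rate is a hypothetical placeholder.

def stranded_capacity_cost(pool_tib: float, allocated_quota_tib: float,
                           rate_per_tib_hour: float, hours: int = 730) -> float:
    """Monthly cost of pool capacity not allocated to any volume quota."""
    stranded = max(pool_tib - allocated_quota_tib, 0)
    return stranded * rate_per_tib_hour * hours

# The example from above: a 10 TiB pool with 6 TiB of volume quotas allocated.
print(f"${stranded_capacity_cost(10, 6, 0.20):,.2f} per month for 4 TiB stranded")
```

Running this check per pool, per region, after every volume deletion is a cheap way to catch pools that were never resized downward.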

Overprovisioned Azure Cache for Redis Instance
Databases
Cloud Provider
Azure
Service Name
Azure Cache for Redis
Inefficiency Type
Overprovisioned Resource

Azure Cache for Redis is billed at a fixed rate determined entirely by the provisioned tier and cache size — not by actual utilization. A cache instance that consumes only a fraction of its available memory and throughput incurs the same cost as one running at full capacity. This means that when a cache is sized larger than the workload demands, the unused memory and throughput headroom represent pure waste with no corresponding benefit.

Overprovisioning commonly occurs when teams size caches for anticipated peak loads that never materialize, or when workload patterns shift over time — such as after a migration, application refactor, or traffic decline — without a corresponding review of cache sizing. Because there is no option to stop or pause billing on a cache instance, and charges accrue continuously from the moment the cache is created until it is deleted, oversized caches quietly accumulate unnecessary costs around the clock.

An important constraint compounds this issue: scaling down between tiers is not supported. An organization that initially provisions a Premium-tier cache but later determines that a Standard tier would suffice cannot simply downgrade in place — it must create a new cache at the appropriate tier and migrate data. This friction often delays right-sizing efforts and prolongs overspend.
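A utilization check against provisioned size is the starting point for right-sizing. The sketch below assumes peak used-memory metrics have been pulled (for example from Azure Monitor); the 40% threshold and the cache list format are illustrative choices, not Azure-defined values.

```python
# Right-sizing sketch for Azure Cache for Redis, given exported peak
# used-memory metrics. Threshold and data format are illustrative.

def oversized_caches(caches: list[dict], max_utilization: float = 0.4) -> list[str]:
    """Flag caches whose peak memory use stays under `max_utilization` of the
    provisioned size. These are candidates for a smaller cache size, or, if a
    tier change is needed, for a new cache plus data migration, since in-place
    downgrades between tiers are not supported."""
    return [
        c["name"] for c in caches
        if c["peak_used_gb"] / c["provisioned_gb"] < max_utilization
    ]

caches = [
    {"name": "cache-sessions", "provisioned_gb": 26, "peak_used_gb": 22.0},
    {"name": "cache-legacy", "provisioned_gb": 53, "peak_used_gb": 6.5},
]
print(oversized_caches(caches))  # ['cache-legacy']
```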

Idle or Untriggered Azure Logic Apps Generating Continuous Charges
Other
Cloud Provider
Azure
Service Name
Azure Logic Apps
Inefficiency Type
Unused Resource

Azure Logic Apps can quietly accumulate costs even when no workflows are actively executing, but the mechanism differs significantly depending on the deployment model. In the Consumption (multitenant) plan, Logic Apps with polling triggers continue to generate billable trigger executions every time the trigger checks for events — even when no events are found and no workflow runs are initiated. A polling trigger configured to check every 30 seconds produces thousands of billable executions per day, all charged at the per-execution rate, regardless of whether any useful work is performed. Webhook or push-based triggers avoid this particular waste, but retained run history and storage operations can still accrue minor costs over time.

In the Standard (single-tenant) plan, the cost driver is fundamentally different. Customers pay for reserved compute capacity — vCPU and memory — on an hourly basis, whether or not any workflows execute. An idle Standard Logic App incurs the full hosting plan charges around the clock. Disabling a Standard Logic App prevents triggers from firing but does not stop the hosting plan billing; only deletion or consolidation of the underlying plan reduces costs.

These idle Logic Apps commonly arise after application decommissioning, migration projects, or proof-of-concept work that was never cleaned up. At enterprise scale, where dozens or hundreds of Logic Apps may exist across multiple environments, the cumulative waste from untriggered workflows and unused hosting plans can become substantial — particularly when the resources are spread across teams and subscriptions with no centralized review process.
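The Consumption-plan polling arithmetic above is easy to quantify: a trigger that polls every 30 seconds bills an execution per check even when nothing is found. The per-execution price below is an assumed placeholder, not the current Azure rate.

```python
# Monthly cost of an idle Consumption-plan polling trigger that bills one
# execution per check. The per-execution price is a hypothetical placeholder.

def monthly_polling_cost(interval_seconds: int, price_per_execution: float,
                         days: int = 30) -> tuple[int, float]:
    """(billable trigger executions per month, cost) for an idle polling trigger."""
    executions = (86_400 // interval_seconds) * days
    return executions, executions * price_per_execution

execs, cost = monthly_polling_cost(30, 0.000025)  # assumed $0.000025/execution
print(f"{execs:,} executions ~ ${cost:.2f}/month doing no useful work")
```

A single idle trigger at this cadence produces 2,880 billable executions per day; the per-app cost is small, which is exactly why fleets of forgotten Logic Apps escape notice until the totals are aggregated.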

ECR Archive Storage Class Used Below 150 TB Threshold
Storage
Cloud Provider
AWS
Service Name
AWS ECR
Inefficiency Type
Inefficient Configuration

In November 2025, AWS introduced an Archive storage class for private ECR repositories, marketed as a way to reduce storage costs for large volumes of rarely used container images. However, Archive storage pricing is identical to Standard storage pricing for the first 150 TB per month. Below this threshold, Archive provides no storage savings yet introduces a per-gigabyte retrieval charge, a retrieval delay of up to 20 minutes, and a 90-day minimum storage duration. Adopting the Archive storage class before meeting the 150 TB threshold means paying the same storage price but taking on additional fees and operational overhead.

This inefficiency is easy to miss because the AWS announcement emphasized cost savings for "large volumes" without quantifying "large" or prominently disclosing the retrieval charge or the minimum storage duration. In other AWS services, optional storage classes typically offer a storage price reduction from the first byte, in exchange for access penalties. With ECR, however, access penalties apply as described, but the storage price is unchanged for the first 150 TB, a container storage volume that few organizations achieve.
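Below the threshold, the net effect of adopting Archive reduces to added retrieval fees with zero storage savings. The sketch below makes that explicit; the retrieval price is a hypothetical placeholder, and the 90-day minimum duration and restore delay are noted but not priced.

```python
# Net monthly cost delta of moving ECR images to the Archive storage class
# while under the 150 TB threshold. Retrieval price is hypothetical; the
# 90-day minimum storage duration and up-to-20-minute restore delay are
# additional penalties this sketch does not price.

def archive_monthly_delta(stored_tb: float, retrieved_gb_per_month: float,
                          retrieval_price_per_gb: float) -> float:
    """Extra cost vs. Standard, assuming storage stays under 150 TB/month."""
    if stored_tb >= 150:
        raise ValueError("above 150 TB the Archive class does discount storage")
    # Below 150 TB, Archive and Standard storage rates are identical,
    # so the delta is purely the retrieval charge.
    return retrieved_gb_per_month * retrieval_price_per_gb

# Example: 5 TB archived, 200 GB of images pulled back per month,
# at an assumed $0.01/GB retrieval fee.
print(f"${archive_monthly_delta(5, 200, 0.01):,.2f}/month extra, no storage savings")
```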

S3 Standard - Infrequent Access Used Where Intelligent Tiering Would Be Cheaper
Storage
Cloud Provider
AWS
Service Name
AWS S3
Inefficiency Type
Suboptimal Pricing Model

Organizations often use the Standard - Infrequent Access (Standard-IA) storage class based on documentation and code that predate 2021 updates to the Intelligent Tiering storage class. After those updates, Intelligent Tiering became suitable as the initial S3 storage class even for objects that are small or will be deleted early, and it gained a heavily discounted access tier. Older internal runbooks, lifecycle policies (including ones specified in infrastructure-as-code templates), scripts, programs, and public examples may still default to Standard-IA, inflating storage costs.

This inefficiency report compares Standard-IA with Intelligent Tiering; it is not intended to cover other storage classes. Note that S3 storage is billed per gibibyte (GiB, a power of 2) rather than per gigabyte (GB, a power of 10); since a GiB is roughly 7.4% larger than a GB, the distinction matters when estimating costs for small objects and for large volumes of storage.

Relative to the Standard storage class, the Standard-IA storage class offers a moderate, constant storage price discount but imposes a minimum billable object size of 128 KiB, a minimum storage duration of 30 days, and a per-GiB retrieval charge.

In contrast, AWS updated the Intelligent Tiering storage class in September 2021, eliminating the minimum storage duration and exempting small objects from the monthly per-object monitoring and automation charge. Intelligent Tiering has never had retrieval charges. In November 2021, AWS added the heavily discounted Archive Instant Access tier.

For objects stored beyond a few months, Intelligent Tiering's progressive storage price discounts surpass Standard-IA's constant discount. Storage savings accumulate each month. Objects in the Intelligent Tiering storage class automatically move through progressively cheaper access tiers unless the objects are accessed. Intelligent Tiering also avoids Standard-IA's minimum billable object size and minimum storage duration penalties.
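Standard-IA's 128 KiB minimum billable object size is one of the penalties Intelligent Tiering avoids, and its effect on small-object workloads is dramatic. The sketch below models only the billed-size rule, not the per-GiB prices themselves.

```python
# Effect of Standard-IA's 128 KiB minimum billable object size on a bucket
# of small objects. This models only the billed-size rule, not prices.

MIN_BILLABLE_KIB = 128

def billed_kib_standard_ia(object_kib: float) -> float:
    """Standard-IA bills each object as if it were at least 128 KiB."""
    return max(object_kib, MIN_BILLABLE_KIB)

# A bucket of 1,000,000 small 16 KiB objects:
actual_gib = 1_000_000 * 16 / 1024 / 1024
billed_gib = 1_000_000 * billed_kib_standard_ia(16) / 1024 / 1024
print(f"actual: {actual_gib:.1f} GiB, billed as: {billed_gib:.1f} GiB "
      f"({billed_gib / actual_gib:.0f}x inflation)")
```

For 16 KiB objects, Standard-IA bills eight times the stored capacity; Intelligent Tiering bills the actual size, with small objects also exempt from its monitoring charge.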
