Excessive KMS Charges from Missing S3 Bucket Key Configuration
Storage
Cloud Provider
AWS
Service Name
AWS S3
Inefficiency Type
Misconfiguration

S3 buckets configured with SSE-KMS but without Bucket Keys generate a separate KMS request for each object operation. This behavior results in disproportionately high KMS request costs for data-intensive workloads such as analytics, backups, or frequently accessed objects. Bucket Keys allow S3 to cache KMS data keys at the bucket level, reducing the volume of KMS calls and cutting encryption costs—often with no impact on security or performance.

Detection

• Identify S3 buckets with SSE-KMS encryption enabled

• Check if Bucket Keys are disabled or not configured

• Analyze object access frequency and KMS request volume

• Estimate potential cost savings by enabling Bucket Keys

• Prioritize buckets with high object counts or frequent read/write operations

Remediation

• Enable S3 Bucket Keys for eligible buckets using SSE-KMS

• Document any security exceptions or requirements that prevent Bucket Key use

• Note: Enabling Bucket Keys applies only to newly encrypted objects; existing objects must be re-encrypted or re-uploaded to benefit

• Track KMS request metrics before and after rollout to validate cost impact
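The checks and the fix above can be sketched with the AWS CLI; the bucket name and KMS key ARN are placeholders:

```shell
# Check whether Bucket Keys are enabled for a bucket's default SSE-KMS rule.
aws s3api get-bucket-encryption --bucket my-bucket \
  --query 'ServerSideEncryptionConfiguration.Rules[0].BucketKeyEnabled'

# Enable Bucket Keys while keeping SSE-KMS as the default encryption.
aws s3api put-bucket-encryption --bucket my-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"
      },
      "BucketKeyEnabled": true
    }]
  }'
```

Note that this changes the default encryption setting only; as stated above, objects already in the bucket must be re-encrypted or re-uploaded to benefit.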

Unfiltered Recording of High-Churn Resource Types in AWS Config
Other
Cloud Provider
AWS
Service Name
Inefficiency Type

By default, AWS Config can be set to record changes across all supported resource types, including those that change frequently, such as security group rules, IAM role policies, route tables, and network interfaces, which are often ephemeral in containerized or auto-scaling setups. These high-churn resources can generate an outsized number of configuration items and inflate costs, especially in dynamic or large-scale environments.

This inefficiency arises when recording is enabled indiscriminately across all resources without evaluating whether the data is necessary. Without targeted scoping, teams may incur large charges for configuration data that provides minimal value, especially in non-production environments. It can also obscure meaningful compliance signals by introducing noise.
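As a sketch of targeted scoping, the recorder can be switched to an exclusion-based strategy so the highest-churn types are skipped; the recorder name, role ARN, and excluded types below are illustrative:

```shell
# Record everything except selected high-churn resource types.
aws configservice put-configuration-recorder \
  --configuration-recorder name=default,roleARN=arn:aws:iam::111122223333:role/config-role \
  --recording-group '{
    "allSupported": false,
    "exclusionByResourceTypes": {
      "resourceTypes": ["AWS::EC2::NetworkInterface", "AWS::EC2::SecurityGroup"]
    },
    "recordingStrategy": {"useOnly": "EXCLUSION_BY_RESOURCE_TYPES"}
  }'
```

In non-production accounts, the inverse approach (recording only an explicit allow-list of compliance-relevant types) may cut costs further.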

Excessive Retention of Audit Logs
Storage
Cloud Provider
Azure
Service Name
Azure Blob Storage
Inefficiency Type
Over-Retention of Data

Audit logs are often retained longer than necessary, especially in environments where the logging destination is not carefully selected. Projects that initially route SQL Audit Logs or other high-volume sources to a Log Analytics Workspace (LAW) or Azure Storage may forget to revisit their retention strategy. Without policies in place, logs can accumulate unchecked—particularly problematic with SQL logs, which can generate significant volume. Lifecycle Management Policies in Azure Storage are a key tool for addressing this inefficiency but are often overlooked.

However, tier transitions are not always cost-saving. For example, in cases where log data consists of extremely large numbers of very small files (such as AKS audit logs across many pods), the transaction charges incurred when moving objects between storage tiers may exceed the potential savings from reduced storage rates. In these scenarios, it can be more cost-effective to retain logs in Hot tier until deletion, rather than moving them to lower-cost tiers first.

Detection

• Identify resources with Audit Logging enabled

• Determine whether logs are routed to a Log Analytics Workspace or Azure Storage

• Assess whether current retention aligns with compliance or operational needs

• Evaluate volume and cost of logs retained beyond required periods

• Review whether lifecycle policies or retention settings are currently configured

• Check if any projects have a “set and forget” logging configuration that has never been reviewed

Remediation

• Apply Azure Storage Lifecycle Management Policies to transition older logs to lower-cost tiers or delete them after a set retention period. Before implementing tier transitions, assess whether the additional transaction costs from moving large volumes of small log files could outweigh potential storage savings; where they would, retaining logs in the Hot tier until deletion may be cheaper overall

• For logs in a Log Analytics Workspace, assess whether they can be moved to Basic tables or stored in Storage Accounts instead

• Establish project-specific retention requirements with stakeholders and enforce them across all audit logging configurations

• Periodically audit logging destinations and lifecycle settings to prevent silent cost creep
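The lifecycle-policy remediation can be sketched with the Azure CLI; the account, resource group, blob prefix, and retention windows are assumptions, and the tierToCool rule should be dropped where small-file transaction charges would outweigh the savings:

```shell
# Hypothetical policy: cool audit logs after 30 days, delete after 180.
cat > audit-log-lifecycle.json <<'EOF'
{
  "rules": [{
    "enabled": true,
    "name": "expire-audit-logs",
    "type": "Lifecycle",
    "definition": {
      "filters": {"blobTypes": ["blockBlob"], "prefixMatch": ["insights-logs-"]},
      "actions": {
        "baseBlob": {
          "tierToCool": {"daysAfterModificationGreaterThan": 30},
          "delete": {"daysAfterModificationGreaterThan": 180}
        }
      }
    }
  }]
}
EOF

az storage account management-policy create \
  --account-name mystorageacct --resource-group my-rg \
  --policy @audit-log-lifecycle.json
```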

Overly Permissive VPC Flow Log Filters Sent to CloudWatch Logs
Other
Cloud Provider
AWS
Service Name
AWS CloudWatch
Inefficiency Type

VPC Flow Logs configured with the ALL filter and delivered to CloudWatch Logs often result in unnecessarily high log ingestion volumes — especially in high-traffic environments. This setup is rarely required for day-to-day monitoring or security use cases but is commonly enabled by default or for temporary debugging and then left in place. As a result, teams incur excessive CloudWatch charges without realizing the logging configuration is misaligned with actual needs.
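One remediation sketch: replace the ALL filter with REJECT-only delivery, which is often sufficient for security monitoring. Flow log filters cannot be changed in place, so the log is deleted and re-created; all IDs, names, and the role ARN below are placeholders:

```shell
# Remove the over-broad flow log, then re-create it capturing rejects only.
aws ec2 delete-flow-logs --flow-log-ids fl-0123456789abcdef0

aws ec2 create-flow-logs \
  --resource-type VPC --resource-ids vpc-0123456789abcdef0 \
  --traffic-type REJECT \
  --log-destination-type cloud-watch-logs \
  --log-group-name vpc-flow-rejects \
  --deliver-logs-permission-arn arn:aws:iam::111122223333:role/flow-logs-role
```

Where full capture is genuinely needed, delivering to S3 instead of CloudWatch Logs usually lowers ingestion cost substantially.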

Overprovisioned Throughput in EFS
Storage
Cloud Provider
AWS
Service Name
AWS EFS
Inefficiency Type

When file systems are launched with Provisioned Throughput, teams often overestimate future demand — especially in environments cloned from production or sized “just to be safe.” Over time, many workloads consume far less throughput than allocated, especially in dev/test environments or during periods of reduced usage. These overprovisioned settings can silently accrue substantial monthly charges that go unnoticed without intentional review.

This inefficiency is not flagged by AWS Trusted Advisor and is easy to miss. Elastic Throughput mode now offers a scalable alternative that automatically adjusts capacity — but isn’t always cheaper, depending on the workload’s sustained throughput.
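A review sketch using the AWS CLI: compare metered usage against the provisioned rate, then switch modes if the gap is large. The file system ID and time window are placeholders, and Elastic mode should be priced against the workload's sustained throughput before switching:

```shell
# Pull a week of metered I/O to see how much throughput is actually consumed.
aws cloudwatch get-metric-statistics \
  --namespace AWS/EFS --metric-name MeteredIOBytes \
  --dimensions Name=FileSystemId,Value=fs-0123456789abcdef0 \
  --start-time 2024-06-01T00:00:00Z --end-time 2024-06-08T00:00:00Z \
  --period 3600 --statistics Sum

# If sustained usage sits well below the provisioned rate, consider Elastic.
aws efs update-file-system \
  --file-system-id fs-0123456789abcdef0 \
  --throughput-mode elastic
```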

Inefficient Use of Azure Pipelines
Other
Cloud Provider
Azure
Service Name
Inefficiency Type

Teams often overuse Microsoft-hosted agents by running redundant or low-value jobs, failing to configure pipelines efficiently, or neglecting to use self-hosted agents for steady workloads. These inefficiencies result in unnecessary cost and delivery friction, especially when pipelines create queues due to limited agent availability.
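As a sketch, a steady high-volume build can be routed to a self-hosted pool in the pipeline definition rather than consuming Microsoft-hosted parallel jobs; the pool name SelfHostedLinux is an assumption:

```shell
# Write a minimal pipeline definition that targets a self-hosted agent pool.
cat > azure-pipelines.yml <<'EOF'
trigger:
  branches:
    include: [main]
pool:
  name: SelfHostedLinux   # assumed self-hosted pool; replace with your own
steps:
  - script: make build
    displayName: Build
EOF
```

Scoping triggers to the branches that matter, as above, also trims redundant runs on their own.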

Overreliance on Lambda at Sustained Scale
Compute
Cloud Provider
AWS
Service Name
AWS Lambda
Inefficiency Type
Suboptimal Pricing Model

Lambda is designed for simplicity and elasticity, but its pricing model becomes expensive at scale. When a function runs frequently (e.g., millions of invocations per day) or for extended durations, the cumulative cost may exceed that of continuously running infrastructure. This is especially true for predictable workloads that don’t require the dynamic scaling Lambda provides.

Teams often continue using Lambda out of convenience or architectural inertia, without revisiting whether the workload would be more cost-effective on EC2, ECS, or EKS. This inefficiency typically hides in plain sight—functions run correctly and scale as needed, but the unit economics are no longer favorable.
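A back-of-envelope check makes the break-even concrete. This sketch uses illustrative us-east-1 x86 figures; the per-GB-second and per-million-request rates should be verified against current pricing:

```shell
# Rough monthly cost of a steady Lambda workload (all inputs illustrative).
awk 'BEGIN {
  invocations_per_day = 5000000
  duration_s          = 0.2          # average duration per invocation
  memory_gb           = 0.5
  gb_second_price     = 0.0000166667 # assumed per GB-second rate
  per_million_reqs    = 0.20         # assumed per-million-request rate

  compute  = invocations_per_day * duration_s * memory_gb * gb_second_price * 30
  requests = invocations_per_day / 1000000 * per_million_reqs * 30
  printf "monthly Lambda cost: $%.2f\n", compute + requests
}'
```

A workload like this runs steadily at roughly 58 invocations per second; comparing the resulting figure against the monthly cost of always-on capacity on EC2, ECS, or EKS handling the same rate is the core of the review.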

Excessive Lambda Duration from Synchronous Waiting
Compute
Cloud Provider
AWS
Service Name
AWS Lambda
Inefficiency Type
Inefficient Configuration

Some Lambda functions perform synchronous calls to other services, APIs, or internal microservices and wait for the response before proceeding. During this time, the Lambda is idle from a compute perspective but still fully billed. This anti-pattern can lead to unnecessarily long durations and elevated costs, especially when repeated across high-volume workflows or under memory-intensive configurations.

While this behavior might be functionally correct, it is rarely optimal. Asynchronous invocation patterns—such as decoupling downstream calls with queues, events, or callbacks—can reduce runtime and avoid charging for waiting time. However, detecting this inefficiency is nontrivial, as high duration alone doesn’t always indicate synchronous waiting. Understanding function logic and workload patterns is key.
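One way to stop paying for the wait is to hand the downstream call off asynchronously rather than blocking on it; the function name and payload here are placeholders:

```shell
# Fire-and-forget: an Event invocation queues the downstream work and returns
# immediately, so the caller is not billed for the downstream duration.
aws lambda invoke \
  --function-name downstream-worker \
  --invocation-type Event \
  --payload '{"orderId": "123"}' \
  --cli-binary-format raw-in-base64-out \
  response.json
```

The same decoupling can be achieved with an SQS queue or EventBridge rule between the two functions when delivery guarantees or retries matter.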

Oversized Hosting Plan for Azure Functions
Compute
Cloud Provider
Azure
Service Name
Inefficiency Type

Teams often choose the Premium or App Service Plan for Azure Functions to avoid cold start delays or enable VNET connectivity, especially early in a project when performance concerns dominate. However, these decisions are rarely revisited—even as usage patterns change.

In practice, many workloads running on Premium or App Service Plans have low invocation frequency, minimal execution time, and no strict latency requirements. This leads to consistent spend on compute capacity that is largely idle. Because these plans still “work” and don’t cause reliability issues, the inefficiency is easy to overlook. Over time, this misalignment between hosting tier and actual usage creates significant invisible waste.
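A review sketch with the Azure CLI; names and locations are placeholders, and moving an app between plan types may require re-creating it rather than simply re-pointing it:

```shell
# List plans and their SKUs to spot Premium/App Service Plans hosting
# low-traffic function apps.
az functionapp plan list --query "[].{name:name, sku:sku.name}" -o table

# Sketch: stand the low-traffic app up on a Consumption plan instead.
az functionapp create \
  --name my-func-app --resource-group my-rg \
  --storage-account myfuncstorage \
  --consumption-plan-location westeurope \
  --runtime dotnet --functions-version 4
```

Before migrating, confirm the app really has no cold-start latency or VNET requirement, since those were usually the original reasons for the Premium choice.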

Overbilling Due to Tier Switches and Allocation Overlaps in DTU Model
Databases
Cloud Provider
Azure
Service Name
Azure SQL
Inefficiency Type
Suboptimal Pricing Model

Workloads that frequently scale up and down within the same day—whether manually, via automation, or platform-managed—can encounter hidden cost amplification under the DTU model. When a database changes tiers (e.g., S7 → S4), Azure treats each tiered segment as a separate allocation and applies full-hour rounding independently. In some cases, both tiers may be billed for the same time period due to failover, reallocation delays, or timing mismatches during transitions.

This behavior is opaque to most users because billing granularity is daily, and Azure does not explicitly surface overlapping charges. The result is unexpected overbilling where a single database may appear to consume 28 or more “hours” of DTU in a single calendar day. While technically aligned with Azure’s billing design, this creates inefficiencies when tier switches are frequent and uncoordinated.
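The independent full-hour rounding can be illustrated with a small worked example; the segment durations are invented for illustration, not taken from Azure billing data:

```shell
# One calendar day split into three tier segments (S7, S4, S7) lasting
# 9.5 h, 8.75 h, and 5.75 h. Each segment rounds up to a whole hour on its own.
awk 'BEGIN {
  n = split("9.5 8.75 5.75", dur, " ")
  total = 0
  for (i = 1; i <= n; i++) {
    h = dur[i]
    total += (h == int(h)) ? h : int(h) + 1   # round each segment up
  }
  printf "billed hours in a 24-hour day: %d\n", total
}'
```

Three segments totaling exactly 24 hours bill as 25 tier-hours once each rounds up independently; more frequent switching, plus any overlap during transitions, pushes the total higher, which is how a single day can show 28 or more billed hours.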
