Idle SageMaker Notebook Instances Left Running Continuously
Category: AI | Cloud Provider: AWS | Service Name: AWS SageMaker | Inefficiency Type: Unused Resource

SageMaker notebook instances are billed continuously while in an active state — and critically, they do not automatically shut down when idle. Closing a browser tab, shutting down a Jupyter kernel, or simply walking away does not stop the underlying compute instance. This creates a pervasive waste pattern in ML and data science teams: a developer spins up a powerful GPU instance for experimentation, finishes their work, closes the browser, and assumes the resource is no longer running. In reality, the instance continues accruing per-second charges around the clock until it is explicitly stopped.

This is particularly costly because ML workloads often require high-performance instance types with GPUs. A single forgotten GPU notebook instance can generate thousands of dollars in monthly charges with zero productive use. The problem is compounded in team environments where multiple data scientists each maintain their own notebook instances, and there is no organizational process for reviewing or reclaiming idle resources. The classic scenario — an instance left running over a weekend or holiday — is one of the most common and avoidable sources of ML infrastructure waste.

Unlike SageMaker Studio, which offers native automatic shutdown of idle applications, traditional notebook instances have no built-in idle detection or auto-stop capability. Without explicit lifecycle configuration scripts or external automation, these instances will run indefinitely. The user experience itself is deceptive: the act of closing a notebook feels like shutting down, but the billable compute continues silently in the background.
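External automation for this pattern is typically a small script run on a schedule. The sketch below, which assumes an idle heuristic based on the age of `LastModifiedTime` (real idle detection usually inspects Jupyter kernel activity from a lifecycle-configuration script instead), lists `InService` notebook instances and stops those that look abandoned. The 12-hour cutoff is an arbitrary placeholder.

```python
# Sketch: stop notebook instances that have been "InService" past a cutoff.
# The LastModifiedTime-based idle heuristic is an assumption; tune the cutoff
# for your team's working patterns.
from datetime import datetime, timedelta, timezone

IDLE_CUTOFF = timedelta(hours=12)

def select_idle(instances, now, cutoff=IDLE_CUTOFF):
    """Return names of InService instances not modified within `cutoff`."""
    return [
        inst["NotebookInstanceName"]
        for inst in instances
        if inst["NotebookInstanceStatus"] == "InService"
        and now - inst["LastModifiedTime"] > cutoff
    ]

def stop_idle_notebooks():
    import boto3  # deferred so select_idle is testable without AWS credentials
    sm = boto3.client("sagemaker")
    pages = sm.get_paginator("list_notebook_instances").paginate(
        StatusEquals="InService"
    )
    instances = [i for page in pages for i in page["NotebookInstances"]]
    for name in select_idle(instances, datetime.now(timezone.utc)):
        sm.stop_notebook_instance(NotebookInstanceName=name)

if __name__ == "__main__":
    stop_idle_notebooks()
```

A stopped instance keeps its attached EBS volume, so work is preserved and the instance can be restarted when needed; only the compute billing stops.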

Orphaned MLflow Training Artifacts and Model Checkpoints in Object Storage
Category: AI | Cloud Provider: AWS | Service Name: AWS S3 | Inefficiency Type: Unused Resource

Machine learning experimentation workflows — particularly those managed through experiment tracking platforms — generate large volumes of artifacts in object storage. Every training run produces model checkpoints, evaluation outputs, feature snapshots, and tensor logs. Hyperparameter tuning and AutoML workflows amplify this by creating hundreds or thousands of individual runs, each writing its own set of artifacts to locations in S3. When experiments are abandoned, models are never promoted to production, or team members depart, these artifacts remain in storage indefinitely because there is no native lifecycle management for ML experiment artifacts — cleanup must be implemented manually.

The cost impact is driven entirely by object storage capacity charges, which accumulate per GB-month regardless of whether the artifacts are referenced, the experiments are active, or the models are registered. Critically, even when experiment metadata is deleted through the tracking platform, the underlying artifacts in object storage are not automatically purged — they must be removed separately. For organizations training large models, checkpoint files alone can reach hundreds of gigabytes each, and production training pipelines may checkpoint every few hours. Without retention policies, it is common for ML artifact storage costs to grow unchecked and eventually rival or exceed compute costs.
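One common mitigation is an S3 lifecycle rule that expires objects under the experiment-artifact prefix after a retention window. The sketch below is a minimal version of that idea; the bucket name, the `mlflow-artifacts/` prefix, and the 90-day window are placeholder assumptions to adapt to wherever your tracking server actually writes.

```python
# Sketch: attach a lifecycle rule that expires experiment artifacts after a
# retention window. Prefix, bucket, and retention period are placeholders.

def retention_rule(prefix, days):
    """Build one S3 lifecycle rule expiring objects under `prefix` after `days`."""
    return {
        "ID": f"expire-{prefix.strip('/').replace('/', '-')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Expiration": {"Days": days},
        # Also reap failed multipart uploads, a common hidden storage cost.
        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
    }

def apply_retention(bucket, prefix, days=90):
    import boto3  # deferred so the rule builder is testable without AWS
    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket=bucket,
        LifecycleConfiguration={"Rules": [retention_rule(prefix, days)]},
    )

if __name__ == "__main__":
    apply_retention("my-ml-artifact-bucket", "mlflow-artifacts/", days=90)
```

Note that `put_bucket_lifecycle_configuration` replaces the bucket's entire lifecycle configuration, so merge this rule with any existing rules rather than overwriting them. Blanket expiration is also too blunt for registered production models, which should live under a prefix the rule does not cover.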

Using High-Cost Models for Low-Complexity Tasks
Category: AI | Cloud Provider: GCP | Service Name: GCP Vertex AI | Inefficiency Type: Overpowered Model Selection

Vertex AI workloads often include low-complexity tasks such as classification, routing, keyword extraction, metadata parsing, document triage, or summarization of short and simple text. These operations do **not** require the advanced multimodal reasoning or long-context capabilities of larger Gemini model tiers. When organizations default to a single high-end model (such as Gemini Ultra or Pro) across all applications, they incur elevated token costs for work that could be served efficiently by **Gemini Flash** or smaller task-optimized variants. This mismatch is a common pattern in early deployments where model selection is driven by convenience rather than workload-specific requirements. Over time, this creates unnecessary spend without delivering measurable value.
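The fix is usually a thin routing layer that picks the model tier from the task type rather than hardcoding one model everywhere. A minimal sketch, in which the task labels and the model IDs are illustrative assumptions (verify current Vertex AI model names and pricing for your region):

```python
# Sketch: choose a Gemini tier per task instead of defaulting to the largest
# model. Task labels and model IDs below are illustrative assumptions.

LOW_COMPLEXITY = {
    "classification", "routing", "keyword_extraction",
    "metadata_parsing", "document_triage",
}

def pick_model(task_type: str) -> str:
    """Return a cheaper Flash-tier model for simple tasks, Pro otherwise."""
    if task_type in LOW_COMPLEXITY:
        return "gemini-1.5-flash"  # assumed model ID; confirm availability
    return "gemini-1.5-pro"        # assumed model ID
```

Centralizing the mapping in one function also makes model upgrades a one-line change instead of a sweep across every calling application.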

Using High-Cost Bedrock Models for Low-Complexity Tasks
Category: AI | Cloud Provider: AWS | Service Name: AWS Bedrock | Inefficiency Type: Overpowered Model Selection

Many Bedrock workloads involve low-complexity tasks such as tagging, classification, routing, entity extraction, keyword detection, document triage, or lightweight summarization. These tasks **do not require** the advanced reasoning or generative capabilities of higher-cost models such as Claude 3 Opus or comparable premium models. When organizations default to a high-end model across all applications—or fail to periodically reassess model selection—they pay elevated costs for work that could be performed effectively by smaller, lower-cost models such as Claude Haiku or other compact model families. This inefficiency becomes more pronounced in high-volume, repetitive workloads where token counts scale quickly.
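The scale of the gap is easy to estimate before migrating anything. The sketch below compares monthly spend for a high-volume workload on two Bedrock model tiers; the per-million-token prices and model IDs are illustrative placeholders, so substitute current Bedrock pricing for your region before drawing conclusions.

```python
# Sketch: estimate monthly spend per model tier for a repetitive workload.
# Prices are illustrative placeholders (USD per 1M input/output tokens).

PRICE_PER_M_TOKENS = {
    "anthropic.claude-3-opus-20240229-v1:0": (15.00, 75.00),   # assumed
    "anthropic.claude-3-haiku-20240307-v1:0": (0.25, 1.25),    # assumed
}

def monthly_cost(model_id, requests, in_tokens, out_tokens):
    """Cost of `requests` calls averaging the given token counts per call."""
    p_in, p_out = PRICE_PER_M_TOKENS[model_id]
    return requests * (in_tokens * p_in + out_tokens * p_out) / 1_000_000
```

Under these placeholder rates, a million classification calls averaging 500 input and 100 output tokens differ by roughly 60x between the two tiers, which is why routing low-complexity traffic to a compact model dominates most other optimizations for this pattern.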

Always-On PTUs for Seasonal or Cyclical Azure OpenAI Workloads
Category: AI | Cloud Provider: Azure | Service Name: Azure Cognitive Services | Inefficiency Type: Unnecessary Continuous Provisioning

Many Azure OpenAI workloads—such as reporting pipelines, marketing workflows, batch inference jobs, or time-bound customer interactions—only run during specific periods. When PTUs remain fully provisioned 24/7, organizations incur continuous fixed cost even during extended idle time. Although Azure does not offer native PTU scheduling, teams can use automation to provision and deprovision PTUs based on predictable cycles. This allows them to retain performance during peak windows while reducing cost during low-activity periods.
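In practice the automation updates the deployment's SKU capacity through the Azure management API on a schedule. The sketch below shows the shape of that call; the API version, model name and version, and capacity values are placeholder assumptions, and the bearer token is assumed to come from your usual authentication path (e.g. azure-identity).

```python
# Sketch: rescale an Azure OpenAI provisioned (PTU) deployment by PUTting a
# new SKU capacity. API version, model details, and capacity are assumptions.

API_VERSION = "2023-05-01"  # assumed; check current Cognitive Services versions

def deployment_update(subscription, resource_group, account, deployment,
                      capacity, model_name, model_version):
    """Build the management-API URL and body for a capacity change."""
    url = (
        f"https://management.azure.com/subscriptions/{subscription}"
        f"/resourceGroups/{resource_group}"
        f"/providers/Microsoft.CognitiveServices/accounts/{account}"
        f"/deployments/{deployment}?api-version={API_VERSION}"
    )
    body = {
        "sku": {"name": "ProvisionedManaged", "capacity": capacity},
        "properties": {"model": {"format": "OpenAI",
                                 "name": model_name,
                                 "version": model_version}},
    }
    return url, body

def set_ptu_capacity(token, *args, **kwargs):
    import requests  # deferred so the request builder is testable offline
    url, body = deployment_update(*args, **kwargs)
    resp = requests.put(url, json=body,
                        headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
```

A scheduler (cron, Azure Automation, a timer-triggered function) would call `set_ptu_capacity` with a higher capacity before the known peak window and a lower one, or a teardown, after it.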

Non-Production Azure OpenAI Deployments Using PTUs Instead of PAYG
Category: AI | Cloud Provider: Azure | Service Name: Azure Cognitive Services | Inefficiency Type: Misaligned Pricing Model

Development, testing, QA, and sandbox environments rarely have the steady, predictable traffic patterns needed to justify PTU deployments. These workloads often run intermittently, with lower throughput and shorter usage windows. When PTUs are assigned to such environments, the fixed hourly billing generates continuous cost with little utilization. Switching non-production workloads to PAYG aligns cost with actual usage and eliminates the overhead of managing PTU quota in low-stakes environments.
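Whether PAYG wins is a straightforward break-even calculation: fixed PTU hours versus metered tokens. A minimal sketch, in which all rates are placeholder assumptions to be replaced with current Azure OpenAI pricing for your model and region:

```python
# Sketch: compare monthly cost of token-metered (PAYG) vs. provisioned (PTU)
# billing for an intermittent environment. All rates are placeholders.

def monthly_payg_cost(m_tokens_in, m_tokens_out, rate_in, rate_out):
    """Metered cost: token volumes in millions, rates in USD per 1M tokens."""
    return m_tokens_in * rate_in + m_tokens_out * rate_out

def monthly_ptu_cost(ptus, hourly_rate_per_ptu, hours=730):
    """Fixed cost: PTUs bill every hour whether or not traffic arrives."""
    return ptus * hourly_rate_per_ptu * hours

def cheaper_option(payg_cost, ptu_cost):
    return "PAYG" if payg_cost < ptu_cost else "PTU"
```

For a dev environment pushing a few million tokens a month, the metered total is typically a rounding error next to a PTU deployment's fixed monthly bill, which is what makes the switch so low-risk.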

Underutilized PTU Quota for Azure OpenAI Deployments
Category: AI | Cloud Provider: Azure | Service Name: Azure Cognitive Services | Inefficiency Type: Overprovisioned Capacity Allocation

When organizations size PTU capacity based on peak expectations or early traffic projections, they often end up with more throughput than regularly required. If real-world usage plateaus below provisioned levels, a portion of the PTU capacity remains idle but still generates full spend each hour. This is especially common shortly after production launch or during adoption of newer GPT-4 class models, where early conservative sizing leads to long-term over-allocation. Rightsizing PTUs based on observed usage patterns ensures that capacity matches actual demand.
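A rightsizing recommendation can be derived directly from observed utilization samples (for example, the provisioned-utilization metric exposed through Azure Monitor). The sketch below assumes a 95th-percentile statistic and an 80% target utilization; both are tuning knobs, not prescribed values.

```python
# Sketch: recommend a PTU count from observed utilization samples (fractions
# in [0, 1]). The p95 statistic and 80% target headroom are assumptions.
import math

def rightsize(provisioned_ptus, util_samples, target=0.80, pct=0.95):
    """Recommend a capacity that keeps percentile utilization near `target`."""
    s = sorted(util_samples)
    p = s[min(len(s) - 1, int(pct * len(s)))]
    needed = provisioned_ptus * p / target
    return max(1, math.ceil(needed))
```

For example, a deployment sized at 100 PTUs whose utilization plateaus around 30% would be recommended down to roughly 38 PTUs, keeping ~20% headroom above the observed peak percentile. Any downsizing should respect the model's minimum PTU increment, which this sketch ignores.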

Suboptimal Bedrock Inference Profile Model
Category: AI | Cloud Provider: AWS | Service Name: AWS Bedrock | Inefficiency Type: Outdated Model Selection

AWS frequently updates Bedrock with improved foundation models, offering higher quality and better cost efficiency. When workloads remain tied to older model versions, token consumption may increase, latency may be higher, and output quality may be lower. Using outdated models leads to avoidable operational costs, particularly for applications with consistent or high-volume inference activity. Regular modernization ensures applications take advantage of new model optimizations and pricing improvements.

Missing Reserved PTUs for Steady-State Azure OpenAI Workloads
Category: AI | Cloud Provider: Azure | Service Name: Azure Cognitive Services | Inefficiency Type: Unoptimized Pricing Model

Many production Azure OpenAI workloads—such as chatbots, inference services, and retrieval-augmented generation (RAG) pipelines—use PTUs consistently throughout the day. When usage stabilizes after initial experimentation, continuing to rely on on-demand PTUs results in ongoing unnecessary spend. These workloads are strong candidates for reserved PTUs, which provide identical performance guarantees at a substantially reduced hourly rate. Migrating to reservations usually requires no architectural changes and delivers immediate cost savings.

Suboptimal Azure OpenAI Model Type
Category: AI | Cloud Provider: Azure | Service Name: Azure Cognitive Services | Inefficiency Type: Outdated Model Selection

Azure releases newer OpenAI models that provide better performance and cost characteristics compared to older generations. When workloads remain on outdated model versions, they may consume more tokens to produce equivalent output, run slower, or miss out on quality improvements. Because customers pay per token, using an older model can lead to unnecessary spending and reduced value. Aligning deployments to the most current, efficient model types helps reduce spend and improve application performance.
