CER-0330
Machine learning experimentation workflows, particularly those managed through experiment tracking platforms, generate large volumes of artifacts in object storage. Every training run produces model checkpoints, evaluation outputs, feature snapshots, and tensor logs. Hyperparameter tuning and AutoML workflows amplify this by creating hundreds or thousands of individual runs, each writing its own artifact set to its own S3 prefix. When experiments are abandoned, models are never promoted to production, or team members depart, these artifacts remain in storage indefinitely: there is no native lifecycle management for ML experiment artifacts, so cleanup must be implemented manually.
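As a concrete illustration of what manual cleanup involves, the sketch below cross-references a tracking server's live runs against top-level prefixes in an artifact bucket to flag orphan candidates. MLflow is assumed as the tracking platform purely for illustration, along with the common `s3://<bucket>/<root>/<run_id>/...` artifact layout; the bucket name, artifact root, and tracking URI are hypothetical placeholders, and pagination of run results beyond the client defaults is elided.

```python
"""Sketch: flag S3 artifact prefixes not referenced by any live MLflow run.

Assumes artifacts are laid out as s3://<bucket>/<root>/<run_id>/... --
verify against your own tracking server before acting on the output.
"""
import boto3
from mlflow.tracking import MlflowClient

BUCKET = "ml-artifacts-example"   # hypothetical bucket name
ARTIFACT_ROOT = "mlflow/"         # hypothetical artifact root prefix

# Collect the run IDs the tracking server still knows about.
# (search_runs returns active runs only by default, which is what we want;
# paging past the default result limit is elided in this sketch.)
client = MlflowClient(tracking_uri="http://mlflow.example.internal:5000")
live_run_ids = {
    run.info.run_id
    for exp in client.search_experiments()
    for run in client.search_runs([exp.experiment_id])
}

# Each top-level "directory" under the artifact root is a run ID in this
# layout; any prefix without a matching live run is an orphan candidate.
s3 = boto3.client("s3")
paginator = s3.get_paginator("list_objects_v2")
orphans = []
for page in paginator.paginate(Bucket=BUCKET, Prefix=ARTIFACT_ROOT, Delimiter="/"):
    for cp in page.get("CommonPrefixes", []):
        run_id = cp["Prefix"][len(ARTIFACT_ROOT):].rstrip("/")
        if run_id not in live_run_ids:
            orphans.append(cp["Prefix"])

# Report only; deletion should be a separate, reviewed step.
print(f"{len(orphans)} orphan candidate prefix(es)")
```

Reporting and deleting are deliberately separated here so a reviewer can confirm that none of the flagged prefixes back a registered or production model before anything is removed.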
The cost impact is driven entirely by object storage capacity charges, which accumulate per GB-month regardless of whether the artifacts are referenced, the experiments are active, or the models are registered. Critically, even when experiment metadata is deleted through the tracking platform, the underlying artifacts in object storage are not automatically purged; they must be removed separately. For organizations training large models, checkpoint files alone can reach hundreds of gigabytes each, and production training pipelines may checkpoint every few hours. Without retention policies, it is common for ML artifact storage costs to grow unchecked and eventually rival or exceed compute costs.
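To make those numbers concrete, here is a back-of-envelope growth estimate under loudly stated assumptions: a hypothetical 200 GB checkpoint written every 4 hours and never deleted, priced at an illustrative $0.023 per GB-month (actual rates vary by provider, region, and storage class).

```python
# Back-of-envelope estimate; every number below is a hypothetical input.
checkpoint_gb = 200            # size of one checkpoint
checkpoints_per_day = 24 / 4   # one checkpoint every 4 hours
rate_per_gb_month = 0.023      # illustrative capacity rate, USD

added_gb_per_month = checkpoint_gb * checkpoints_per_day * 30  # 36,000 GB
# Retained storage compounds: month N pays for all N months of checkpoints.
month_12_bill = 12 * added_gb_per_month * rate_per_gb_month    # ~ $9,936
print(f"+{added_gb_per_month:,.0f} GB/month; month-12 bill ~ ${month_12_bill:,.0f}")
```

The compounding is the point: the monthly bill keeps rising even if experimentation activity stays flat, because every prior month's checkpoints are still being billed.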
The billing waste from orphaned ML artifacts occurs in the underlying cloud provider's object storage, not in the ML platform itself; storage is billed separately by the cloud provider. The key cost dimension is the capacity charge, billed per GB-month on every retained object.
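Because capacity is the dominant dimension, the waste is directly observable: S3 publishes a daily BucketSizeBytes metric to CloudWatch per storage class. A minimal monitoring sketch, assuming a hypothetical bucket name and the Standard storage class:

```python
"""Sketch: measure 30-day growth of an artifact bucket from S3's daily
CloudWatch storage metrics. The bucket name is a hypothetical placeholder."""
from datetime import datetime, timedelta, timezone

import boto3

cw = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)
resp = cw.get_metric_statistics(
    Namespace="AWS/S3",
    MetricName="BucketSizeBytes",
    Dimensions=[
        {"Name": "BucketName", "Value": "ml-artifacts-example"},  # hypothetical
        {"Name": "StorageType", "Value": "StandardStorage"},
    ],
    StartTime=now - timedelta(days=30),
    EndTime=now,
    Period=86400,            # S3 reports this metric once per day
    Statistics=["Average"],
)
points = sorted(resp["Datapoints"], key=lambda p: p["Timestamp"])
if len(points) >= 2:
    growth_gb = (points[-1]["Average"] - points[0]["Average"]) / 1e9
    print(f"30-day growth: {growth_gb:,.1f} GB")
```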
Because each training run, checkpoint, and experiment version writes separate objects, artifact counts and total storage volume grow rapidly during active experimentation and persist indefinitely unless governed by retention policies.
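One baseline form of governance is an S3 lifecycle configuration on the artifact prefix; the sketch below transitions cold artifacts to a cheaper tier, expires them if never promoted, and reclaims failed multipart uploads. The bucket name, prefix, and retention windows are hypothetical, and a blanket rule like this is only safe if registered or production model artifacts live outside the filtered prefix (or are copied elsewhere before expiration).

```python
"""Sketch: a blanket retention policy via S3 lifecycle rules.
Bucket, prefix, and day counts are hypothetical placeholders."""
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="ml-artifacts-example",            # hypothetical
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-stale-experiment-artifacts",
                "Filter": {"Prefix": "mlflow/"},  # hypothetical artifact root
                "Status": "Enabled",
                # Shift cold checkpoints to a cheaper tier first...
                "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
                # ...then delete outright if never promoted.
                "Expiration": {"Days": 180},
                # Also reclaim parts left by interrupted checkpoint uploads.
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)
```

A lifecycle rule has the advantage of running inside the storage service itself, so retention keeps being enforced even if the cleanup scripts above are never scheduled.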