Amazon FSx file systems are designed for performance-sensitive workloads such as shared enterprise file systems, high-performance computing, analytics, and machine learning. Storage costs are driven by provisioned storage capacity (measured in GB-months) and provisioned throughput capacity (measured in MBps-months), and both charges accrue regardless of how frequently the stored data is actually accessed. When datasets become archival, historical, or reference-only, typically after project completion, workload migration, or a data lifecycle change, retaining them on high-performance FSx storage means paying sustained premium rates for data that could reside on far cheaper storage such as Amazon S3.
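
To gauge the size of this fixed-cost exposure, it helps to inventory file systems alongside their provisioned capacity and throughput. The sketch below does this with boto3, assuming configured AWS credentials and region; the dollar rates are illustrative placeholders, not actual AWS prices, which vary by region, FSx variant, storage type, and deployment type.

```python
"""Rough inventory of fixed FSx cost exposure.

Minimal sketch assuming boto3 with configured credentials and region.
The rates below are illustrative placeholders, not real AWS prices.
"""
import boto3

ASSUMED_GB_MONTH_RATE = 0.13    # USD per GB-month of provisioned storage (placeholder)
ASSUMED_MBPS_MONTH_RATE = 2.20  # USD per MBps-month of provisioned throughput (placeholder)

# Provisioned throughput lives under a per-variant configuration key.
CONFIG_KEYS = {
    "WINDOWS": "WindowsConfiguration",
    "ONTAP": "OntapConfiguration",
    "OPENZFS": "OpenZFSConfiguration",
    "LUSTRE": "LustreConfiguration",  # exposes PerUnitStorageThroughput instead
}

fsx = boto3.client("fsx")

for page in fsx.get_paginator("describe_file_systems").paginate():
    for fs in page["FileSystems"]:
        fs_type = fs["FileSystemType"]
        storage_gb = fs["StorageCapacity"]
        cfg = fs.get(CONFIG_KEYS.get(fs_type, ""), {})
        throughput = cfg.get("ThroughputCapacity", 0)  # MBps; absent on Lustre

        est_monthly = (storage_gb * ASSUMED_GB_MONTH_RATE
                       + throughput * ASSUMED_MBPS_MONTH_RATE)
        print(f"{fs['FileSystemId']} ({fs_type}, {fs.get('StorageType', '-')}): "
              f"{storage_gb} GB, {throughput} MBps, ~${est_monthly:,.2f}/month")
```

Because both inputs are provisioned rather than usage-based, the estimate is identical whether the data is read constantly or never, which is precisely the problem for archival datasets.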
The severity of this inefficiency varies by FSx variant. FSx for Windows File Server is the most directly exposed because it lacks native automatic tiering to an external cold tier: all data remains on provisioned SSD or HDD capacity, with no built-in mechanism to move cold data to lower-cost object storage. FSx for NetApp ONTAP, by contrast, can automatically tier data to a lower-cost capacity pool, but the savings depend on the tiering policy configured per volume; the default SNAPSHOT_ONLY policy tiers only snapshot blocks, so under it (or under NONE) cold primary data continues to occupy expensive SSD storage. FSx for Lustre and FSx for OpenZFS offer the Intelligent-Tiering storage class, which moves data between access tiers automatically, but only when that storage class is selected at deployment. In all cases, the waste stems from the same root cause: high-performance storage capacity consumed by data that no longer requires, or never required, that level of performance.
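
For the ONTAP case specifically, tiering policies can be audited and corrected through the FSx API. The sketch below flags volumes whose policy keeps cold primary data on SSD; treating AUTO with a 31-day cooling period as the remediation target is an assumption for illustration, not a one-size-fits-all recommendation.

```python
"""Audit FSx for NetApp ONTAP volumes for tiering policies that keep
cold data on SSD. Minimal sketch assuming boto3 with configured
credentials; AUTO with a 31-day cooling period is an illustrative
remediation target, not a universal recommendation.
"""
import boto3

fsx = boto3.client("fsx")

params = {}
while True:  # follow NextToken pagination manually
    resp = fsx.describe_volumes(**params)
    for vol in resp["Volumes"]:
        ontap = vol.get("OntapConfiguration")
        if not ontap:
            continue  # not an ONTAP volume
        policy = ontap.get("TieringPolicy", {}).get("Name", "SNAPSHOT_ONLY")

        # NONE pins all blocks to SSD; SNAPSHOT_ONLY tiers snapshot
        # blocks only, so cold primary data still consumes SSD capacity.
        if policy in ("NONE", "SNAPSHOT_ONLY"):
            print(f"{vol['VolumeId']} ({vol.get('Name')}): policy={policy}")
            # Uncomment to tier blocks untouched for 31 days to the
            # capacity pool (verify against each volume's access pattern):
            # fsx.update_volume(
            #     VolumeId=vol["VolumeId"],
            #     OntapConfiguration={
            #         "TieringPolicy": {"Name": "AUTO", "CoolingPeriod": 31}
            #     },
            # )
    token = resp.get("NextToken")
    if not token:
        break
    params["NextToken"] = token
```

For volumes that are purely archival, the ALL policy tiers every user-data block to the capacity pool immediately, which comes closest to object-storage economics while keeping the volume mounted and accessible.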