Cloud object storage powers a wide range of workloads, from AI training datasets to customer-facing media libraries. As your data grows into the petabyte scale, managing storage costs and ensuring reliability requires fine-grained visibility. You need answers to questions like: Which specific teams, services, workloads, or datasets are driving spend? Which data is cold and should be archived? What fixes will have the biggest impact on cost and performance?
By surfacing metrics that connect data in your buckets to teams, pipelines, and datasets—and tracking access patterns—Datadog Storage Management provides the visibility and guidance you need to answer these questions. Storage Management also delivers cost-cutting recommendations that help you make confident decisions about your cloud object storage to improve efficiency.
In this post, we’ll show you how to:
- Pinpoint which teams, services, workloads, or datasets are driving cost
- Identify cold data and move it to cheaper tiers
- Use recommendations to act on savings opportunities
Pinpoint which teams, services, workloads, and datasets are driving cost
Bucket-level metrics—which apply to the overall storage container—can tell you how much you’re paying for storage, but they don’t give fine-grained visibility into the specific drivers of those costs. Datadog Storage Management breaks down your cost drivers to the prefix level so you can attribute spend directly to particular teams, services, and workloads. For example, if you run a data warehouse, prefix-level visibility lets you associate prefixes with database tables. If you manage media workloads, you can see whether prefixes like images/raw/ or video/ingest/ are ballooning and decide whether to archive original assets after processing is complete.
To learn more about prefix-level metrics, see our blog post about optimizing and troubleshooting cloud storage at scale using Storage Management.
Identify cold data and move it to cheaper tiers
Most organizations store data in the default Amazon S3 Standard class, even when large portions are rarely accessed. Determining what can be archived usually requires deep analysis of access logs and metadata. Without prefix-level insight, you can’t separate hot production paths from idle datasets that are stored in the same bucket.
Storage Management enables request metrics by prefix, so you can visualize which datasets are actively used and which remain untouched. Combined with object count and age metrics, these views make it easy to spot opportunities to transition cold data into lower-cost tiers.
Imagine that you’ve used terabytes of Amazon S3 data to train an LLM. Storage Management can show that your training dataset prefixes were heavily accessed during initial runs but haven’t been touched since deployment 5 months ago. You now have the evidence to justify moving those objects to Amazon S3 Glacier or other archive tiers, reducing storage costs without risking availability.
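Once you have that evidence, the archival itself is typically an S3 lifecycle rule scoped to the cold prefix. As a minimal sketch (the bucket name, prefix, and day thresholds below are hypothetical placeholders, not values from Storage Management), here is what such a rule could look like, built as the configuration dict that boto3's `put_bucket_lifecycle_configuration` accepts:

```python
# Sketch: an S3 lifecycle rule that transitions objects under a
# hypothetical cold training-data prefix into archive tiers.
# Prefix, bucket, and day counts are placeholders for illustration.

lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-cold-training-data",
            "Status": "Enabled",
            "Filter": {"Prefix": "datasets/llm-training/"},  # hypothetical prefix
            "Transitions": [
                # Move to S3 Glacier Flexible Retrieval after 90 days
                {"Days": 90, "StorageClass": "GLACIER"},
                # Move to S3 Glacier Deep Archive after 180 days
                {"Days": 180, "StorageClass": "DEEP_ARCHIVE"},
            ],
        }
    ]
}

# To apply (requires AWS credentials and permissions):
# import boto3
# s3 = boto3.client("s3")
# s3.put_bucket_lifecycle_configuration(
#     Bucket="my-training-data-bucket",  # hypothetical bucket
#     LifecycleConfiguration=lifecycle_config,
# )
```

Scoping the rule with a prefix filter is what makes prefix-level insight actionable: only the idle dataset transitions, while hot paths in the same bucket stay in S3 Standard.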
Use recommendations to act on savings opportunities
Identifying inefficiencies is only part of the challenge. For customers who are using Cloud Cost Management (CCM) Recommendations, Storage Management provides a prioritized queue of cost-saving recommendations that you can immediately act on. Each recommendation lists impacted prefixes, criteria, and estimated savings.
Examples of actions you can take based on recommendations include:
- Transition infrequently accessed objects to Amazon S3 Glacier or Deep Archive
- Add expiration rules for noncurrent versions in versioning-enabled prefixes and clean up expired delete markers
- Consolidate small files to minimize per-object storage charges
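For the versioning cleanup in particular, the fix is usually a single lifecycle rule. As a hedged sketch (the prefix and retention values are illustrative, not prescriptive), a rule that expires old noncurrent versions and removes leftover delete markers could look like this:

```python
# Sketch: a lifecycle rule for a versioning-enabled prefix that expires
# stale noncurrent versions and cleans up expired object delete markers.
# Prefix and retention values are illustrative placeholders; apply with
# boto3's put_bucket_lifecycle_configuration.

versioning_cleanup_rule = {
    "ID": "expire-noncurrent-log-versions",
    "Status": "Enabled",
    "Filter": {"Prefix": "logs/"},  # hypothetical prefix
    # Keep the 3 newest noncurrent versions; expire the rest after 30 days
    "NoncurrentVersionExpiration": {
        "NoncurrentDays": 30,
        "NewerNoncurrentVersions": 3,
    },
    # Remove delete markers with no remaining noncurrent versions
    "Expiration": {"ExpiredObjectDeleteMarker": True},
}
```

Pairing `NoncurrentVersionExpiration` with `ExpiredObjectDeleteMarker` in the same rule keeps versioned prefixes from accumulating invisible storage charges for objects that were "deleted" long ago.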
You can also use the Bits AI Dev Agent (currently in Preview) to act on a recommendation, such as by automatically generating or updating an Infrastructure as Code (IaC) component (such as an Amazon S3 lifecycle policy), opening a PR, and routing the PR for review.
If you are not ready to act, you can manage recommendations just as you would a backlog by acknowledging, snoozing, or integrating them with Jira or other workflow tools. This lets your team immediately capture quick wins while planning longer-term improvements.
Customer scenario: Investigating a sudden storage cost spike
To get a sense of how this works in practice, consider a DevOps team that recently noticed that their monthly AWS bill had jumped by 40%. At first glance, the Amazon S3 cost report showed that the increase was spread across several buckets, with no single obvious culprit.
Using Storage Management, the team filtered by the team and service tags to discover that among the buckets they owned, one shared bucket had grown substantially. Because the bucket was shared, it wasn't immediately clear where the increase was coming from. They were able to narrow it down to a single prefix, logs/ingestion/, which had grown by several terabytes in just one week. Drilling down further, they saw that the data was stored in the Amazon S3 Glacier Instant Retrieval tier and consisted of thousands of small files generated by a new deployment, which was amplifying per-object overhead due to minimum storage and request costs.
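To see why small files amplify costs here, note that S3 Glacier Instant Retrieval bills each object as at least 128 KB. A quick back-of-the-envelope calculation (the file count, average size, and $/GB-month rate below are illustrative assumptions, not figures from this scenario) shows how quickly the gap between actual and billed storage grows:

```python
# Back-of-the-envelope sketch of per-object overhead in S3 Glacier
# Instant Retrieval, which has a 128 KB minimum billable object size.
# Object count, sizes, and pricing below are illustrative assumptions.

KB = 1024
GB = 1024**3

num_objects = 500_000        # e.g., small log files from a new deployment
avg_size_kb = 8              # actual average object size
min_billable_kb = 128        # Glacier Instant Retrieval billing floor
price_per_gb_month = 0.004   # illustrative rate; check current AWS pricing

actual_gb = num_objects * avg_size_kb * KB / GB
billed_gb = num_objects * min_billable_kb * KB / GB

print(f"Actual data stored: {actual_gb:.1f} GB")
print(f"Billed as:          {billed_gb:.1f} GB")
print(f"Overhead factor:    {billed_gb / actual_gb:.0f}x")
print(f"Monthly cost:       ${billed_gb * price_per_gb_month:.2f}")
```

With 8 KB objects billed at the 128 KB floor, the team pays for 16x the data it actually stores, which is exactly the kind of overhead that consolidating small files eliminates.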
The Storage Management recommendations panel flagged two clear savings opportunities:
- Consolidate the small files to reduce per-object overhead
- Transition older logs to an infrequent access tier (because request metrics showed almost no reads after 30 days)
By following these recommendations, the team was able to cut their projected storage costs by thousands of dollars per month without impacting application performance or compliance requirements.
Get started with Storage Management
Datadog Storage Management provides insights into exactly where your spend is going so you can move cold data to cheaper tiers, fix retention policy gaps, and optimize costs with confidence. To get started, see the Storage Management documentation. Or, if you’re new to Datadog, sign up for a 14-day free trial.