In the last blog (Day 12), I explored how networking in AWS forms the foundation for secure and scalable compute - using VPCs, Subnets, Gateways, and Security Layers.
But then I faced a new challenge:
“Where do I keep the data that shouldn’t vanish after my EC2 shuts down?”
That’s when I discovered Amazon S3 (Simple Storage Service) - the unsung hero of the cloud.
In this chapter, I’ll cover:
- S3 Buckets and Objects
- Regions
- S3 Storage Classes
- Versioning
- Access Control
- Buckets and Keys
- S3 Data Consistency Models
- Object Lifecycle Management
- Encryption
And I’ll share how I applied these in my AWS Sandbox project.
1. What Is S3?
Amazon S3 (Simple Storage Service) is AWS’s object storage platform - used to store any amount of data, anywhere, at any time.
Unlike local drives or block storage, S3 isn’t tied to an instance - it’s durable, highly available, and accessible from anywhere via APIs, SDKs, or the console.
Think of it like this:
“EC2 runs your application, S3 remembers everything it does.”
It’s where you store logs, backups, images, configurations, or even entire static websites.
2. Buckets and Objects - The Core of S3
Everything in S3 lives inside Buckets (top-level containers, loosely like folders), and the data inside them is stored as Objects.
Each object can be up to 5TB in size and includes:
- Data (file itself)
- Metadata (info like type, owner, last-modified)
- Key (unique identifier within the bucket)
In my setup:
I created a bucket named my-s3-bucket-mumbai-13 to store log files and system backups from my EC2 sandbox.
Command:
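A minimal AWS CLI sketch - the bucket name is from my setup, while the local file path is just illustrative:

```bash
# Create the bucket in the Mumbai region (ap-south-1)
aws s3 mb s3://my-s3-bucket-mumbai-13 --region ap-south-1

# Upload a log file from the EC2 instance (local path is illustrative)
aws s3 cp /var/log/syslog s3://my-s3-bucket-mumbai-13/logs/syslog
```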
Practical takeaway:
Once uploaded, the object gets a unique key path - for example, something like s3://my-s3-bucket-mumbai-13/logs/syslog.
3. Regions - Where Your Data Lives
Each S3 bucket exists in a specific AWS region (e.g., ap-south-1 for Mumbai).
This helps reduce latency and ensures compliance with local data laws.
For instance, my project bucket was created in the Mumbai region (ap-south-1), keeping latency low for my EC2 instance running in the same region.
Tip: Always keep S3 and EC2 in the same region to avoid unnecessary data transfer costs.
4. S3 Storage Classes - Cost Meets Performance
AWS offers multiple storage classes based on how frequently you access data:
| Storage Class | Description | Use Case |
|---|---|---|
| Standard | High availability, low latency | Frequently accessed data |
| Intelligent-Tiering | Auto-moves data between classes | Mixed access patterns |
| Standard-IA | Lower cost, higher retrieval time | Infrequent access |
| One Zone-IA | Stored in a single AZ | Re-creatable, infrequently accessed data |
| Glacier / Deep Archive | Very low cost, long retrieval | Long-term backups |
In my setup, I used Standard for live log files and Glacier for archived reports.
Command Example:
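A sketch of how the storage class can be set at upload time with the --storage-class flag (the file names here are illustrative):

```bash
# Default uploads land in the Standard class
aws s3 cp app.log s3://my-s3-bucket-mumbai-13/logs/app.log

# Archived reports can go straight to Glacier
aws s3 cp report-2024.pdf s3://my-s3-bucket-mumbai-13/archive/report-2024.pdf \
  --storage-class GLACIER
```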
5. S3 Versioning - Protecting from Mistakes
Ever deleted or overwritten something important?
That’s where Versioning saves lives.
Enabling it keeps every version of every object - even if you delete or overwrite it.
Enable via CLI:
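Using the bucket from my setup:

```bash
aws s3api put-bucket-versioning \
  --bucket my-s3-bucket-mumbai-13 \
  --versioning-configuration Status=Enabled
```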
Lesson: In DevOps, versioning isn’t just for code - it’s for everything.
6. S3 Access Control - The Gatekeepers
Security is crucial when multiple users or applications access your S3.
You can manage access using:
- IAM Policies: Control access via user roles (recommended).
- Bucket Policies: JSON rules directly applied to a bucket.
- ACLs (Access Control Lists): Legacy but still useful for object-level control.
Example Bucket Policy (read-only public access):
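A sketch of such a policy (the bucket name is from my setup, the Sid is arbitrary):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadOnly",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-s3-bucket-mumbai-13/*"
    }
  ]
}
```

It can be applied with aws s3api put-bucket-policy --bucket my-s3-bucket-mumbai-13 --policy file://policy.json - keep in mind that the bucket’s Block Public Access settings must allow public policies before this takes effect.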
Note: Always use IAM roles for internal apps - never expose access keys in code.
7. Buckets and Keys - The Naming Convention
Each object inside S3 is identified by a key - similar to a file path.
Example:
s3://my-s3-bucket-mumbai-13/reports/2025/summary.csv
Here:
- my-s3-bucket-mumbai-13 - Bucket
- reports/2025/summary.csv - Key
Understanding this helps when automating tasks like uploading logs or parsing S3 URLs in scripts.
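For instance, here is one quick way to split an S3 URL into bucket and key in a shell script, using plain parameter expansion:

```shell
# Split an s3:// URL into its bucket and key parts
url="s3://my-s3-bucket-mumbai-13/reports/2025/summary.csv"
path="${url#s3://}"    # drop the scheme
bucket="${path%%/*}"   # text before the first slash
key="${path#*/}"       # text after the first slash
echo "$bucket"   # my-s3-bucket-mumbai-13
echo "$key"      # reports/2025/summary.csv
```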
8. S3 Data Consistency Models
S3 provides:
- Strong read-after-write consistency for all operations - new uploads, overwrites, and deletes.
(Older material often mentions eventual consistency for overwrites and deletes, but AWS upgraded S3 to strong consistency in December 2020.)
In simpler terms:
Once a write succeeds, any read that follows returns the latest version of the object.
Why it matters:
Pipelines and backup scripts no longer need artificial delays after overwrites - a successful write is immediately visible to subsequent reads.
9. Object Lifecycle Management - Automate Archival
Storage is cheap - but at scale, even “cheap” becomes expensive.
Lifecycle policies help automatically transition or delete old objects.
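In my sandbox, a lifecycle rule along these lines moved logs to cheaper tiers over time - the prefix, day counts, and rule ID here are just an example:

```json
{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

Apply it with aws s3api put-bucket-lifecycle-configuration --bucket my-s3-bucket-mumbai-13 --lifecycle-configuration file://lifecycle.json.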
Result: Fully automated cost control without manual cleanup.
10. S3 Encryption - Security Beyond Access Control
AWS offers three types of server-side encryption:
| Encryption Type | Description |
|---|---|
| SSE-S3 | AWS manages encryption keys |
| SSE-KMS | Customer-managed keys (via AWS KMS) |
| SSE-C | You manage your own keys |
Enable Server-Side Encryption (KMS):
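A sketch of setting KMS encryption as the bucket default - alias/my-s3-key is a placeholder for your own KMS key alias or ARN:

```bash
aws s3api put-bucket-encryption \
  --bucket my-s3-bucket-mumbai-13 \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "alias/my-s3-key"
      }
    }]
  }'
```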
Tip: Always use SSE-KMS for enterprise-grade compliance and auditing.
My Takeaways
- S3 is not just storage - it’s the foundation for data resilience in AWS.
- Buckets and objects are simple, but their policies and lifecycle define your security posture.
- Versioning + Encryption + Lifecycle rules = A production-grade storage strategy.
“If compute is the brain of cloud, storage is its memory - precise, persistent, and priceless.”
What’s Next (Day 14 - AWS Storage & Distribution Deep Dive)
After mastering object storage with S3, I’ll now explore persistent, file, and database-level storage, and how AWS distributes content globally.
In Day 14, I’ll cover:
- Elastic Block Store (EBS)
- Elastic File System (EFS)
- EBS Snapshots & FSx
- Relational Database Service (RDS)
- DynamoDB
- Route 53
- CloudFront
“S3 taught me how to store data. The next step is learning how to serve it - faster, smarter, and everywhere.”