Modern applications don’t need servers for every problem. In fact, many systems are still overengineered with backend servers for use cases that don’t justify the operational overhead.
In this article, I’ll walk through how I designed and built a fully serverless resume upload portal on AWS, focusing on:
- Architecture decisions
- Lambda implementation
- IAM and security
- Deployment steps
- Real problems solved
This project is practical, production-oriented, and intentionally simple.
The Problem
Organizations need to collect resumes in a way that is:
- Secure
- Scalable
- Cost-efficient
- Easy to maintain
Traditional approaches usually involve backend servers, load balancers, and database management. For a simple resume submission workflow, this introduces unnecessary complexity.
The challenge was to eliminate server management entirely while still keeping the system secure and reliable.
The Solution: A Fully Serverless Architecture
I designed a solution using only AWS managed services:
User (Browser)
   ↓
Amazon S3 (Static Website)
   ↓
AWS Lambda Function URL
   ↓
Amazon S3 (Resume Storage)
   ↓
Amazon DynamoDB (Metadata)
   ↓
Amazon SNS (Admin Notification)

No EC2. No containers. No API Gateway.
Why Lambda Function URL Instead of API Gateway?
API Gateway is powerful, but for this use case it adds:
- Additional cost
- Configuration overhead
- Complexity for multipart file uploads
Lambda Function URLs provide:
- Native HTTPS endpoints
- Simple CORS configuration
- Lower cost for low-traffic APIs
- Easier handling of multipart/form-data
For a resume upload system, Lambda Function URLs were a better fit.
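For illustration, here is roughly what creating a Function URL with CORS looks like in boto3 (the function name and allowed origin are placeholders, not this project's actual values):

import boto3

lambda_client = boto3.client("lambda")

# Create a public HTTPS endpoint for the function (hypothetical name).
lambda_client.create_function_url_config(
    FunctionName="resume-upload-handler",
    AuthType="NONE",  # public endpoint; the function itself validates input
    Cors={
        "AllowOrigins": ["https://example-frontend.example.com"],
        "AllowMethods": ["POST"],
        "AllowHeaders": ["content-type"],
        "MaxAge": 3600,
    },
)

# With AuthType NONE, a resource policy must also allow public invocation.
lambda_client.add_permission(
    FunctionName="resume-upload-handler",
    StatementId="AllowPublicFunctionUrl",
    Action="lambda:InvokeFunctionUrl",
    Principal="*",
    FunctionUrlAuthType="NONE",
)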
Frontend: Static and Simple
The frontend is a static website hosted on Amazon S3, built using:
- HTML
- Tailwind CSS
- JavaScript
The frontend:
- Collects name, email, and resume file
- Contains no AWS credentials
- Sends requests directly to the Lambda Function URL
Because it’s static:
- It scales automatically
- It’s cheap to host
- There’s nothing to patch or maintain
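For reference, enabling static website hosting on the frontend bucket is a single one-time call; a sketch with a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

# Serve index.html at the site root (hypothetical bucket name).
s3.put_bucket_website(
    Bucket="example-resume-frontend",
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "error.html"},
    },
)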
Backend: AWS Lambda as the Control Layer
The Lambda function handles all backend responsibilities:
- Parse multipart/form-data
- Validate name, email, and file
- Enforce file type and size limits
- Upload the resume to a private S3 bucket
- Store metadata in DynamoDB
- Send an admin notification via SNS
The function is written in Python 3.11 and designed defensively.
import boto3
import uuid
import datetime
import base64
import cgi  # deprecated since Python 3.11 (removed in 3.13), but available in the 3.11 runtime
import os
from io import BytesIO

# AWS clients
s3 = boto3.client("s3")
ddb = boto3.resource("dynamodb")
sns = boto3.client("sns")

# CONFIG (ENV first, fallback to working values)
UPLOAD_BUCKET = os.getenv("UPLOAD_BUCKET")  # set the bucket name in environment variables
TABLE_NAME = os.getenv("DDB_TABLE")  # set the table name in environment variables
TOPIC_ARN = os.getenv("SNS_TOPIC_ARN")  # set your topic ARN in environment variables
MAX_FILE_SIZE_MB = int(os.getenv("MAX_FILE_MB", "5"))
ALLOWED_CONTENT_TYPE = os.getenv("ALLOWED_CONTENT_TYPE", "application/pdf")

table = ddb.Table(TABLE_NAME)


def lambda_handler(event, context):
    print("Incoming request")

    body = event.get("body")
    if not body:
        return response(400, "Request body missing")

    # Function URLs deliver binary bodies base64-encoded
    if event.get("isBase64Encoded"):
        body = base64.b64decode(body)
    elif isinstance(body, str):
        body = body.encode("utf-8")  # cgi.FieldStorage expects bytes

    headers = event.get("headers") or {}
    content_type = headers.get("content-type") or headers.get("Content-Type")

    if not content_type or "multipart/form-data" not in content_type:
        return response(400, "Invalid Content-Type")

    form = cgi.FieldStorage(
        fp=BytesIO(body),
        environ={
            "REQUEST_METHOD": "POST",
            "CONTENT_TYPE": content_type
        },
        keep_blank_values=True
    )

    name = form.getvalue("name")
    email = form.getvalue("email")
    file_item = form["file"] if "file" in form else None

    if not name or not email or file_item is None:
        return response(400, "Missing required fields")

    # Validate file type
    if file_item.type != ALLOWED_CONTENT_TYPE:
        return response(400, "Only PDF files are allowed")

    file_bytes = file_item.file.read()
    file_size_mb = len(file_bytes) / (1024 * 1024)
    if file_size_mb > MAX_FILE_SIZE_MB:
        return response(400, "File size exceeds limit")

    resume_id = str(uuid.uuid4())
    filename = f"{resume_id}.pdf"

    # Upload to S3
    s3.put_object(
        Bucket=UPLOAD_BUCKET,
        Key=filename,
        Body=file_bytes,
        ContentType=ALLOWED_CONTENT_TYPE
    )

    # Store metadata
    table.put_item(Item={
        "resumeId": resume_id,
        "name": name,
        "email": email,
        "filename": filename,
        "uploadedAt": datetime.datetime.utcnow().isoformat()
    })

    # SNS notification (NON-BLOCKING)
    try:
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="New Resume Uploaded",
            Message=f"""New resume uploaded
Name  : {name}
Email : {email}
File  : {filename}
Time  : {datetime.datetime.utcnow().isoformat()}"""
        )
    except Exception as e:
        print("SNS failed:", str(e))

    return response(200, "Resume uploaded successfully")


def response(status, msg):
    return {
        "statusCode": status,
        "body": msg
    }
Secure Resume Storage with Amazon S3
Resume files are stored in a private S3 bucket with:
- Public access fully blocked
- Server-side encryption enabled
- Access restricted to the Lambda IAM role
Files are named using UUIDs to prevent collisions and enumeration.
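Those controls can be applied with two boto3 calls; a minimal sketch, assuming a placeholder bucket name:

import boto3

s3 = boto3.client("s3")
bucket = "example-private-resume-uploads"  # placeholder name

# Block every form of public access on the uploads bucket.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)

# Encrypt all objects at rest by default (SSE-S3).
s3.put_bucket_encryption(
    Bucket=bucket,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)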
Metadata Persistence Using DynamoDB
Resume metadata is stored in Amazon DynamoDB using an on-demand billing model.
Stored attributes include:
- Resume ID
- Candidate name
- Candidate email
- File name
- Upload timestamp
DynamoDB was chosen because it:
- Scales automatically
- Requires no schema migrations
- Fits serverless workloads naturally
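As a sketch of that table definition (using the ResumeUploads name that appears in the IAM policy later in this article, and the resumeId key written by the Lambda function):

import boto3

dynamodb = boto3.client("dynamodb")

# On-demand table keyed on resumeId, matching the item shape
# written by the Lambda function.
dynamodb.create_table(
    TableName="ResumeUploads",
    AttributeDefinitions=[
        {"AttributeName": "resumeId", "AttributeType": "S"}
    ],
    KeySchema=[
        {"AttributeName": "resumeId", "KeyType": "HASH"}
    ],
    BillingMode="PAY_PER_REQUEST",  # on-demand: no capacity planning
)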
Notifications with Amazon SNS
Amazon SNS is used to notify administrators when a resume is uploaded.
A key design decision:
Notification failures do not block uploads
If SNS fails:
- The resume is still stored
- Metadata is preserved
- Errors are logged for visibility
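For context, the topic and its admin subscription are one-time setup; a boto3 sketch (the email address is a placeholder, and the topic name matches the ARN in the IAM policy below):

import boto3

sns = boto3.client("sns")

# Create the notification topic.
topic = sns.create_topic(Name="resume-upload-alerts")

# Email subscriptions must be confirmed by the recipient
# before notifications are delivered.
sns.subscribe(
    TopicArn=topic["TopicArn"],
    Protocol="email",
    Endpoint="admin@example.com",
)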
IAM and Security Model
Security was a first-class concern.
The Lambda function runs with a least-privilege IAM role, granting only:
- s3:PutObject for the resume bucket
- dynamodb:PutItem for the metadata table
- sns:Publish for notifications
- CloudWatch logging permissions
There are:
- No wildcard admin permissions
- No AWS credentials in the frontend
- No public access to sensitive data
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:PutObject", "Resource": "arn:aws:s3:::<private_bucket_name>/*" }, { "Effect": "Allow", "Action": "dynamodb:PutItem", "Resource": "arn:aws:dynamodb:*:*:table/ResumeUploads" }, { "Effect": "Allow", "Action": "sns:Publish", "Resource": "arn:aws:sns:<region>:<account_ID>:resume-upload-alerts" }, { "Effect": "Allow", "Action": [ "logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents" ], "Resource": "*" } ]}
Deployment Walkthrough (High-Level)
The system was deployed using the AWS Console:
- Create S3 buckets (frontend + private uploads)
- Create DynamoDB table
- Create SNS topic and confirm email subscription
- Create Lambda function and attach IAM role
- Configure environment variables
- Enable Lambda Function URL with CORS
- Upload frontend files to S3
- Test end-to-end upload flow
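The final step can be scripted; a minimal smoke test using the requests library (the Function URL below is a placeholder for the real endpoint):

import requests

FUNCTION_URL = "https://<url-id>.lambda-url.<region>.on.aws/"  # placeholder

# POST name, email, and a PDF as multipart/form-data, exactly
# as the frontend does.
with open("sample-resume.pdf", "rb") as f:
    resp = requests.post(
        FUNCTION_URL,
        data={"name": "Test Candidate", "email": "test@example.com"},
        files={"file": ("sample-resume.pdf", f, "application/pdf")},
    )

print(resp.status_code, resp.text)
# Expected: 200 "Resume uploaded successfully"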
Failure Handling and Observability
The system is designed to handle failures gracefully:
- Invalid uploads are rejected early
- SNS failures don’t break uploads
- Lambda logs are written to CloudWatch
- Errors can be traced via request IDs
This ensures data integrity and operational visibility.
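As a sketch of what that tracing can look like (the structured-logging helper below is illustrative, not part of the deployed function), each log line can carry the Lambda request ID so a failed upload can be matched to its CloudWatch entry:

import json

def log_event(context, level, message, **fields):
    # Emit one JSON log line per event; CloudWatch Logs Insights
    # can then filter on requestId to reconstruct an invocation.
    print(json.dumps({
        "requestId": context.aws_request_id,  # unique per invocation
        "level": level,
        "message": message,
        **fields,
    }))

# Inside lambda_handler:
#   log_event(context, "ERROR", "SNS publish failed", error=str(e))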
What Problems This Project Solved
- Eliminated server management for resume uploads
- Secured file uploads without exposing infrastructure
- Simplified multipart uploads in a serverless environment
- Applied least-privilege IAM correctly
- Designed for failure without data loss
Future Improvements
Planned enhancements include:
- User authentication with Amazon Cognito
- SES-based confirmation emails to candidates
- User dashboards for uploaded resumes
- Presigned URL downloads (see the sketch after this list)
- Infrastructure as Code using Terraform
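For the presigned-download item, a minimal sketch (the bucket name and key are placeholders):

import boto3

s3 = boto3.client("s3")

# Time-limited download link for a stored resume; the bucket stays private.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-private-resume-uploads", "Key": "<resume-id>.pdf"},
    ExpiresIn=3600,  # link valid for one hour
)
print(url)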
Final Thoughts
This project demonstrates how real-world problems can be solved using serverless architecture without overengineering. By leaning on AWS managed services, it’s possible to build systems that are secure, scalable, and easy to operate.
The focus wasn’t on building something flashy — it was on building something correctly.