In this article, I present a reliable configuration for deploying React applications to AWS. This approach utilizes Terraform to manage infrastructure as code, Amazon S3 for static asset storage, Amazon CloudFront for efficient content distribution, and AWS CodePipeline to automate the continuous integration and continuous deployment (CI/CD) process.
The resulting pipeline provides scalable global performance, robust security, and zero-downtime updates. Operational costs typically range from $5 to $20 per month, depending on traffic volume.
Rationale for the Selected Technologies
Contemporary web applications require low-latency access, consistent reliability, and streamlined automation. The AWS services employed here address these requirements comprehensively:
- AWS CodePipeline coordinates the entire deployment workflow, ensuring orderly progression through each stage.
- Amazon S3 offers economical and resilient storage for static files.
- Amazon CloudFront employs edge caching to achieve sub-second load times worldwide.
- Terraform facilitates declarative infrastructure provisioning, promoting repeatability and auditability.
Architectural Overview
The workflow initiates upon commits to the GitHub repository: the code is built using AWS CodeBuild, artifacts are synchronized to S3, CloudFront caches are invalidated through a Lambda function, and content is served via the CloudFront distribution. This design inherently supports zero-downtime deployments.
End users access the application with immediate availability of updated content, free from caching inconsistencies.
Essential Components
- GitHub Repository: Acts as the version control system, triggering the pipeline upon pushes to the main branch.
- AWS CodePipeline: Serves as the orchestration layer, sequencing stages for source retrieval, building, deployment, and cache invalidation.
- AWS CodeBuild: Executes Node.js-based builds, including dependency installation and the production bundling command (npm ci && npm run build).
- Amazon S3 Bucket: Provides private, versioned storage for deployment artifacts, with strict access controls.
- Amazon CloudFront with Origin Access Control (OAC): Delivers content securely from S3, incorporating single-page application (SPA) error handling to route 403 and 404 responses appropriately.
- AWS Lambda: Performs automated cache invalidation for CloudFront following each deployment, ensuring content freshness.
Prerequisites
To proceed, ensure the following are in place:
- An active AWS account, preferably utilizing the free tier for initial validation.
- A GitHub repository containing a React application developed with Vite.
- Required tools: AWS Command Line Interface (CLI) with configured credentials, Terraform version 1.0 or later, and Node.js version 18 or higher.
- Proficiency with basic command-line operations.
A detailed checklist is provided in the repository’s README file.
Step 1: Prepare the React Application
Configure the Vite build settings for production deployment by updating vite.config.ts:
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';

export default defineConfig({
  plugins: [react()],
  base: '/',
  build: {
    outDir: 'dist',
  },
});
Execute npm run build locally to confirm that the dist directory contains the optimized assets. A sample application is available for cloning: Repository.
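Before wiring up the pipeline, it can help to verify the build output programmatically. The sketch below is an illustrative helper (not part of the sample repository): it checks that a Vite-style dist directory contains index.html and at least one content-hashed asset, and demonstrates itself against a stand-in directory so it runs without an actual build.

```python
import re
import tempfile
from pathlib import Path

def check_build_output(dist: Path) -> list[str]:
    """Return a list of problems found in a Vite-style build directory."""
    problems = []
    if not (dist / "index.html").is_file():
        problems.append("missing index.html")
    # Vite emits content-hashed filenames such as assets/index-Bx3k9a2f.js
    hashed = re.compile(r".+-[A-Za-z0-9_-]{8}\.(js|css)$")
    assets = list((dist / "assets").glob("*")) if (dist / "assets").is_dir() else []
    if not any(hashed.match(p.name) for p in assets):
        problems.append("no content-hashed assets found under assets/")
    return problems

# Demonstrate against a stand-in dist directory.
with tempfile.TemporaryDirectory() as tmp:
    dist = Path(tmp) / "dist"
    (dist / "assets").mkdir(parents=True)
    (dist / "index.html").write_text("<!doctype html>")
    (dist / "assets" / "index-Bx3k9a2f.js").write_text("// bundle")
    print(check_build_output(dist))  # []
```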
Step 2: Infrastructure Definition Using Terraform
Obtain the Terraform configuration files (.tf) from the repository. The following sections highlight the primary resources.
Configuration of the Private S3 Bucket
resource "random_id" "suffix" {
  byte_length = 4
}

# S3 Bucket (Private and Versioned)
resource "aws_s3_bucket" "website_bucket" {
  bucket        = "${var.project_name}-website-${random_id.suffix.hex}"
  force_destroy = true
}

resource "aws_s3_bucket_public_access_block" "website_bucket_public_access" {
  bucket = aws_s3_bucket.website_bucket.id

  block_public_acls       = true
  ignore_public_acls      = true
  block_public_policy     = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_versioning" "website_bucket_versioning" {
  bucket = aws_s3_bucket.website_bucket.id

  versioning_configuration {
    status = "Enabled"
  }
}
This configuration enforces privacy, permitting access exclusively through authorized CloudFront origins.
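The random_id suffix exists because S3 bucket names are globally unique; two deployments of the same project name would otherwise collide. The naming pattern can be sketched in Python (illustrative only; the actual naming happens in Terraform):

```python
import re
import secrets

def bucket_name(project_name: str) -> str:
    """Mimic Terraform's "${var.project_name}-website-${random_id.suffix.hex}"."""
    suffix = secrets.token_hex(4)  # 4 random bytes -> 8 hex chars, like byte_length = 4
    return f"{project_name}-website-{suffix}"

name = bucket_name("myapp")
# S3 bucket names must be 3-63 characters of lowercase letters, digits, and hyphens.
assert re.fullmatch(r"[a-z0-9-]{3,63}", name)
print(name)  # e.g. myapp-website-3f9c2a1b
```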
Amazon CloudFront Distribution with Origin Access Control
# Origin Access Control for Secure S3 Integration
resource "aws_cloudfront_origin_access_control" "s3_oac" {
  name                              = "${var.project_name}-oac"
  description                       = "OAC for ${var.project_name} S3 bucket"
  origin_access_control_origin_type = "s3"
  signing_behavior                  = "always"
  signing_protocol                  = "sigv4"
}
# CloudFront Distribution
resource "aws_cloudfront_distribution" "website" {
  origin {
    domain_name              = aws_s3_bucket.website_bucket.bucket_regional_domain_name
    origin_id                = "S3Origin"
    origin_access_control_id = aws_cloudfront_origin_access_control.s3_oac.id
  }

  enabled             = true
  is_ipv6_enabled     = true
  default_root_object = "index.html"

  default_cache_behavior {
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3Origin"
    compress         = true

    forwarded_values {
      query_string = true

      cookies {
        forward = "none"
      }

      headers = [
        "Accept", "Accept-Charset", "Accept-Encoding", "Accept-Language",
        "Origin", "Referer", "User-Agent"
      ]
    }

    min_ttl                = 0
    default_ttl            = 300   # 5 minutes
    max_ttl                = 86400 # 24 hours
    viewer_protocol_policy = "redirect-to-https"
  }

  # Extended Caching for Assets
  ordered_cache_behavior {
    path_pattern     = "/assets/*"
    allowed_methods  = ["GET", "HEAD", "OPTIONS"]
    cached_methods   = ["GET", "HEAD"]
    target_origin_id = "S3Origin"
    compress         = true

    forwarded_values {
      query_string = false

      cookies {
        forward = "none"
      }
    }

    min_ttl                = 0
    default_ttl            = 31536000 # 1 year
    max_ttl                = 31536000
    viewer_protocol_policy = "redirect-to-https"
  }

  # Error Responses for Single-Page Applications
  custom_error_response {
    error_code            = 404
    response_code         = 200
    response_page_path    = "/index.html"
    error_caching_min_ttl = 300
  }

  custom_error_response {
    error_code            = 403
    response_code         = 200
    response_page_path    = "/index.html"
    error_caching_min_ttl = 300
  }

  restrictions {
    geo_restriction {
      restriction_type = "none"
    }
  }

  viewer_certificate {
    cloudfront_default_certificate = true
  }

  tags = {
    Name = "${var.project_name}-distribution"
  }
}
This setup ensures compatibility with React Router by answering 404 and 403 errors with index.html and a 200 status, so routing is handled on the client side.
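The effect of those custom_error_response blocks can be modeled as a small routing function: any path that does not resolve to an object in the bucket is answered with index.html and a 200 status, so React Router can take over in the browser. A minimal sketch, purely for intuition:

```python
def serve(path: str, objects: set[str]) -> tuple[int, str]:
    """Simulate CloudFront + SPA error handling in front of a private S3 origin."""
    key = path.lstrip("/") or "index.html"  # default_root_object = "index.html"
    if key in objects:
        return 200, key
    # S3 answers unknown keys with 403/404; the custom_error_response
    # rules rewrite both into a 200 that serves /index.html.
    return 200, "index.html"

bucket = {"index.html", "assets/index-abc123.js"}
print(serve("/", bucket))                        # (200, 'index.html')
print(serve("/assets/index-abc123.js", bucket))  # (200, 'assets/index-abc123.js')
print(serve("/dashboard/settings", bucket))      # (200, 'index.html'), rendered by React Router
```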
S3 Bucket Policy for CloudFront Access
resource "aws_s3_bucket_policy" "website_bucket_policy" {
  bucket = aws_s3_bucket.website_bucket.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "AllowCloudFrontOAC"
        Effect    = "Allow"
        Principal = { Service = "cloudfront.amazonaws.com" }
        Action    = ["s3:GetObject", "s3:ListBucket"]
        Resource  = [aws_s3_bucket.website_bucket.arn, "${aws_s3_bucket.website_bucket.arn}/*"]
        Condition = {
          StringEquals = { "AWS:SourceArn" = aws_cloudfront_distribution.website.arn }
        }
      }
    ]
  })
}
Including the s3:ListBucket action (with the bucket ARN itself listed under Resource) lets S3 return an accurate 404, rather than a blanket 403, when a requested object does not exist.
AWS CodePipeline and Lambda Integration
The pipeline comprises stages for GitHub source retrieval, CodeBuild execution, S3 synchronization, and Lambda-triggered invalidation.
The Lambda function, implemented in Python with comprehensive error handling, is as follows:
import json
import boto3
import os
from botocore.exceptions import ClientError


def handler(event, context):
    """
    Performs CloudFront cache invalidation following deployment.
    """
    cloudfront = boto3.client('cloudfront')
    codepipeline = boto3.client('codepipeline')
    job_id = event['CodePipeline.job']['id']

    try:
        distribution_id = os.environ.get('DISTRIBUTION_ID')
        if not distribution_id:
            raise ValueError("DISTRIBUTION_ID environment variable not set")

        print(f"Invalidating distribution: {distribution_id}")
        response = cloudfront.create_invalidation(
            DistributionId=distribution_id,
            InvalidationBatch={
                'Paths': {'Quantity': 1, 'Items': ['/*']},
                'CallerReference': f"pipeline-{job_id}-{context.aws_request_id}"
            }
        )
        invalidation_id = response['Invalidation']['Id']
        print(f"Invalidation created: {invalidation_id}")

        codepipeline.put_job_success_result(jobId=job_id)
        return {
            'statusCode': 200,
            'body': json.dumps({'invalidation_id': invalidation_id})
        }
    except ClientError as e:
        error_msg = f"AWS Client Error: {str(e)}"
        print(error_msg)
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={'message': error_msg}
        )
        return {'statusCode': 500, 'body': json.dumps({'error': error_msg})}
    except Exception as e:
        error_msg = f"Unexpected error: {str(e)}"
        print(error_msg)
        codepipeline.put_job_failure_result(
            jobId=job_id,
            failureDetails={'message': error_msg}
        )
        return {'statusCode': 500, 'body': json.dumps({'error': error_msg})}
Appropriate IAM permissions must be assigned to the CodePipeline execution role, including lambda:InvokeFunction.
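The success/failure reporting contract between the Lambda and CodePipeline (put_job_success_result versus put_job_failure_result) can be exercised locally with stub clients before deploying. The sketch below is a simplified stand-in for the real function, with the clients injected as parameters so no AWS access or boto3 installation is required:

```python
import json

class FakeCloudFront:
    """Stub standing in for boto3's CloudFront client."""
    def create_invalidation(self, DistributionId, InvalidationBatch):
        return {"Invalidation": {"Id": "I2EXAMPLE"}}

class FakeCodePipeline:
    """Stub recording which job result was reported."""
    def __init__(self):
        self.result = None
    def put_job_success_result(self, jobId):
        self.result = ("success", jobId)
    def put_job_failure_result(self, jobId, failureDetails):
        self.result = ("failure", jobId, failureDetails["message"])

def invalidate(job_id, distribution_id, cloudfront, codepipeline):
    """Simplified version of the pipeline Lambda: invalidate, then report back."""
    try:
        if not distribution_id:
            raise ValueError("DISTRIBUTION_ID environment variable not set")
        resp = cloudfront.create_invalidation(
            DistributionId=distribution_id,
            InvalidationBatch={
                "Paths": {"Quantity": 1, "Items": ["/*"]},
                "CallerReference": f"pipeline-{job_id}",
            },
        )
        codepipeline.put_job_success_result(jobId=job_id)
        return {"statusCode": 200,
                "body": json.dumps({"invalidation_id": resp["Invalidation"]["Id"]})}
    except Exception as e:
        codepipeline.put_job_failure_result(jobId=job_id,
                                            failureDetails={"message": str(e)})
        return {"statusCode": 500, "body": json.dumps({"error": str(e)})}

cp = FakeCodePipeline()
print(invalidate("job-1", "E123ABC", FakeCloudFront(), cp)["statusCode"])  # 200
print(cp.result[0])  # success
cp2 = FakeCodePipeline()
print(invalidate("job-2", "", FakeCloudFront(), cp2)["statusCode"])  # 500
```

Missing either call leaves the pipeline stage hanging until it times out, which is why both the success and failure paths must report back.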
Step 3: Provision the Infrastructure
cd terraform
cp example.terraform.tfvars terraform.tfvars # Update project_name="myapp"
terraform init
terraform plan
terraform apply
Upon completion, Terraform will output the CloudFront distribution URL and pipeline details for reference.
Step 4: Integrate the GitHub Repository
In the AWS Management Console, navigate to CodePipeline > Connections. Authorize the GitHub App and select the target repository and branch to enable webhook-based triggers.
Step 5: Initiate the Deployment
Commit and push changes to the main branch. Monitor progress in the AWS Console:
- Source Stage: Retrieves the latest code from GitHub.
- Build Stage: Installs dependencies and executes npm run build.
- Deploy Stage: Synchronizes the dist directory to the S3 bucket using aws s3 sync.
- Invalidation Stage: Invokes the Lambda function to clear CloudFront caches.
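The headers applied during the deploy stage's sync matter: index.html should never be cached long-term (or new deployments will not show up), while the hashed files under assets/ can be cached for a year because their names change on every build. A hedged sketch of that per-file policy (the function and mapping are illustrative, not part of the pipeline code):

```python
import mimetypes

def upload_headers(key: str) -> dict:
    """Choose Content-Type and Cache-Control for a file synced to S3."""
    content_type = mimetypes.guess_type(key)[0] or "application/octet-stream"
    if key.startswith("assets/"):
        # Hashed filenames change on every build, so they are safe to cache "forever".
        cache = "public, max-age=31536000, immutable"
    else:
        # index.html must be revalidated so new deployments appear immediately.
        cache = "no-cache"
    return {"Content-Type": content_type, "Cache-Control": cache}

print(upload_headers("index.html"))
print(upload_headers("assets/index-abc123.js"))
```

Setting Content-Type explicitly also addresses the blank-page symptom covered in the troubleshooting section.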
The application will be accessible at the provisioned CloudFront URL (e.g., https://dxxxx.cloudfront.net).
Troubleshooting Common Issues
| Issue | Root Cause | Resolution |
|---|---|---|
| Pipeline Status "Pending" | GitHub authorization failure | AWS Console > CodePipeline > Connections > Re-authorize GitHub App |
| CloudFront Returns 404 | Missing s3:ListBucket permission | Update S3 policy to include the action; execute terraform apply |
| Assets Return 403 | "Host" header forwarded | Remove "Host" from CloudFront forwarded_values configuration |
| Blank Application Page | Incorrect MIME types | Configure CodeBuild to set Content-Type during S3 synchronization |
| Lambda Invocation Failure | Insufficient IAM permissions | Attach lambda:InvokeFunction policy to the CodePipeline execution role |
For manual cache invalidation, execute: aws cloudfront create-invalidation --distribution-id <ID> --paths "/*".
Scaling and Cost Management
- Free Tier Benefits: CloudFront's always-free tier includes 1 TB of data transfer out per month, and the S3 free tier covers 5 GB of storage for the first 12 months.
- Production Enhancements: Incorporate Amazon Route 53 with an AWS Certificate Manager certificate for a custom domain, and widen the CloudFront price class to serve all edge locations for lower global latency.
- Monitoring Practices: Establish Amazon CloudWatch alarms for error rates exceeding 1% to enable proactive remediation.
This configuration readily accommodates enterprise-scale workloads.
Conclusion
This solution establishes a solid foundation for deploying React applications on AWS with professional-grade reliability and efficiency. I recommend forking the repository, customizing as needed, and refining based on operational insights. For further inquiries, please open an issue on GitHub.