Part 1 - The Essentials: Shebang, Shell Types, Basic Syntax, Variables & User Input
By Ashish — Learn in Public DevOps Journey (Week 2) 🔗 LinkedIn: https://www.linkedin.com/in/ashish360/ Github: https://github.com/ashish0360/devops-learn-in-public/tree/main/shell-scripting-for-devops
📘 Table of Contents
- Why Shell Scripting Matters in DevOps
- What is a Shell? (bash, sh, dash)
- The Shebang (#!)
- Running a Shell Script
- Basic Syntax: echo, comments, variables
- Reading User Input (read)
- Script Arguments ($0, $1, $2)
- Debugging & Error Handling (set -x, set -e, set -o pipefail)
- Essential Commands You’ll Use Daily
- Summary & What’s Next (Part 2 Preview)
🚀 Why Shell Scripting Matters in DevOps
Shell scripting is the foundation of DevOps automation. Every time you:
- Deploy an application
- Configure a server
- Build a container
- Parse logs
- Create CI/CD pipelines
- Automate infrastructure
- Trigger AWS CLI, Azure CLI, or gcloud
- Write Kubernetes helpers
…you are essentially writing or using shell scripts. If you master shell scripting → you unlock real automation power.
- 🧠 What Is a Shell? (bash vs sh vs dash)
A shell executes your commands. Common shells:
- bash (/bin/bash): default on most Linux servers, the DevOps standard
- sh (/bin/sh): lightweight POSIX shell, usually a link to dash
- dash (/bin/dash): very fast, used in system-level scripts
- zsh/fish (optional): for interactive use
➡️ DevOps uses bash the most.
- 🔥 The Shebang (#!)
The first line of every script defines which shell should run it. Most common:
#!/bin/bash
System-level (mapped to dash on many systems):
#!/bin/sh
Why it matters:
- bash has more features than sh/dash
- Some expressions don’t work in dash
- CI pipelines may break if the wrong shebang is used
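For example, here is a minimal sketch of a bash-only construct breaking under a dash-backed #!/bin/sh (the exact error text varies by system):

#!/bin/sh
# [[ ... ]] is a bash extension; dash reports something like "[[: not found",
# while the same line runs fine under #!/bin/bash.
if [[ "$USER" == "root" ]]; then
  echo "Running as root"
fi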
- ▶️ How to Run a Shell Script
Step 1 — Create a script
touch script.sh
Step 2 — Add a shebang
#!/bin/bash
Step 3 — Give execute permission
chmod +x script.sh
Step 4 — Run
./script.sh
Or:
bash script.sh
sh script.sh   # if POSIX compatible
- 📝 Basic Syntax Every DevOps Engineer Must Know
echo — print output
echo "Hello DevOps"
Comments
# single-line comment
Block comments:
<<comment
This is a block comment
comment
Variables
User-defined:
name="Ashish"
echo "Hello $name"
System variables:
echo $HOME
echo $PATH
echo $USER
- 🔡 Reading User Input (read)
read -p "Enter your name: " username
echo "Welcome $username"
read -p "Enter your age: " age
echo "You entered $age"
Used for:
- interactive scripts
- validation
- server setup tools
- menu-based scripts
- 🧩 Script Arguments ($0, $1, $2 …)
When running:
./deploy.sh prod v2
Inside script:
echo "Script name: $0"
echo "Environment: $1"
echo "Version: $2"
Used for:
- dynamic deployments
- folder creation
- automation scripts
- CI/CD parameter passing
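Here is a minimal sketch of how these pieces fit together in one script (the environment names and usage message are illustrative, not from the original notes):

#!/bin/bash
# Usage: ./deploy.sh <environment> <version>
if [[ $# -lt 2 ]]; then
  echo "Usage: $0 <environment> <version>"
  exit 1
fi

env="$1"       # first argument, e.g. prod
version="$2"   # second argument, e.g. v2

echo "$0 is deploying version $version to $env"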
- 🔍 Debugging & Error Handling Essentials
set -x — debug mode (prints commands)
set -x
set -e — exit when a command fails
set -e
set -o pipefail — stop when any pipe command fails
set -o pipefail
Most common DevOps combo (combines -e, -x, and -o pipefail):
set -exo pipefail
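A minimal sketch of the combo in action (the failing cp is illustrative; with these flags the script prints each command as it runs and stops at the first error):

#!/bin/bash
set -exo pipefail

echo "Starting"
cp /nonexistent/file.conf /tmp/   # fails, so the script exits here
echo "Never reached"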
- 🛠 Essential Shell Commands DevOps Engineers Use Daily
Process management
ps -ef
ps -ef | grep nginx
ps -ef | grep python | wc -l
ps -ef | grep python | awk -F" " '{print $2}'
curl vs wget
curl https://api.com/data        # fetch data
wget https://example.com/file    # download file
find
find / -name app.log
These commands become the backbone of automation scripts.
- 🎯 Part 1 Completed
You now understand:
- What shells DevOps engineers use
- bash vs sh differences
- The importance of the shebang
- How to create and run scripts
- Variables, input, arguments
- Debugging with set flags
- Essential shell commands
This is the foundation needed to automate real systems.
=================================================================
Part 2 - Conditions, Expressions, If/Else, Case, Loops (for/while/until) — The Logic Layer of Automation
By Ashish — Learn in Public DevOps Journey (Week 2) 🔗 LinkedIn: https://www.linkedin.com/in/ashish360/
📘 Table of Contents
- Why Logic Matters in DevOps Automation
- Expressions & Operators (-gt, -eq, -lt…)
- If/Else — The Backbone of Script Decisions
- Nested Conditions
- Case Statement (Menu-Based Scripts)
- For Loops — Iteration for Automation
- While Loops — Event-Based Automation
- Until Loops — Run Until Condition Becomes True
- Practical DevOps Examples
- Summary & Next Part Preview
🚀 Why Logic Matters in DevOps Automation
Every meaningful DevOps script makes decisions. For example:
- “Is Nginx running? If not, restart it.”
- “If the age is >= 18, allow access.”
- “If the directory already exists, exit the script.”
- “Loop through 100 log files and clean them.”
- “Run this until the server becomes healthy.”
Without if, loops, conditions, comparisons, shell scripting is basically just a list of commands.
This section gives you the logic power needed for real automation.
- 🧠 Understanding Expressions & Operators
For numeric comparisons, shell scripts don’t use the usual symbols like > or <. They use test operators inside [[ ]] (where > and < would compare strings, not numbers).
🔹 Common numeric operators:
- -gt : greater than
- -lt : less than
- -eq : equal
- -ne : not equal
- -ge : greater or equal
- -le : less or equal
Example:
if [[ $age -ge 18 ]]; then
  echo "Adult"
fi
- 🧩 If / Else — Decision Making in Scripts
Basic structure:
if [[ condition ]]; then
  # code
elif [[ another_condition ]]; then
  # code
else
  # fallback
fi
Example:
a=10
b=5

if [[ $a -gt $b ]]; then
  echo "a is greater than b"
else
  echo "a is smaller than b"
fi
- 🎯 Practical Example — Voting Eligibility Script
#!/bin/bash

read -p "Enter your age: " age
read -p "Are you Indian? yes/no: " nation

if [[ $age -ge 18 ]]; then
  echo "You can vote because you are $age"
elif [[ $nation == "yes" ]]; then
  echo "You are Indian, you can vote"
else
  echo "You cannot vote because you are $age"
fi
✔ Demonstrates numeric + string evaluations.
- 🔄 Case Statement — Build Menu-Based Tools
Case is perfect for DevOps automation menus.
read -p "Choose option: " choice

case $choice in
  1) echo "Start service" ;;
  2) echo "Stop service" ;;
  3) echo "Check status" ;;
  *) echo "Invalid option" ;;
esac
Used for:
- deployment menus
- CI/CD utility scripts
- service management tools
- 🔁 For Loop — Iterate Through Numbers, Files, Commands
Example 1 — Simple counter:
for ((i=1; i<=10; i++)); do
  echo "Number: $i"
done
Example 2 — Create multiple folders:
for (( num=$1 ; num<=$2 ; num++ )); do
  mkdir "$3$num"
done
Run:
./folder.sh 1 50 project
Creates: project1, project2, ..., project50
Perfect for:
- log directories
- batch processing
- repeating commands over files
- ⏳ While Loop — Run While Condition Is True
Example:
read -p "Enter a number between 1-8: " num

while [[ $(( num%2 )) -eq 0 && $num -le 10 ]]; do
  echo "$num"
  num=$(( num+2 ))
done
Useful for:
- monitors
- waiting loops
- retry logic (see the sketch below)
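A minimal retry-loop sketch (the health URL, sleep interval, and attempt count are illustrative assumptions):

#!/bin/bash
attempts=0
max_attempts=5

# Keep polling until the service answers or we run out of attempts
while ! curl -sf http://localhost:8080/health >/dev/null; do
  attempts=$(( attempts + 1 ))
  if [[ $attempts -ge $max_attempts ]]; then
    echo "Service did not become healthy"
    exit 1
  fi
  echo "Waiting for service... attempt $attempts"
  sleep 5
done
echo "Service is healthy"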
- 🔁 Until Loop — Opposite of While
Runs until the condition becomes true.
Example:
n=1
until [[ $n -gt 5 ]]; do
  echo "n = $n"
  n=$((n+1))
done
🧰 FUNCTIONS (Logic + Reusability)
function vote() {
  read -p "Enter your age: " age
  read -p "Are you Indian? yes/no: " nation

  if [[ $age -ge 18 ]]; then
    echo "You can vote"
  elif [[ $nation == "yes" ]]; then
    echo "You can vote"
  else
    echo "You cannot vote"
  fi
}
vote
Functions are essential in DevOps for:
- deployments
- checks
- re-used scripts
- AWS CLI automation
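A minimal sketch of a function that takes arguments and signals success or failure through its return code (the service name is an illustrative assumption):

#!/bin/bash
check_service() {
  local service="$1"           # first argument passed to the function
  if systemctl is-active --quiet "$service"; then
    echo "$service is running"
    return 0
  else
    echo "$service is NOT running"
    return 1
  fi
}

check_service nginx || echo "Consider restarting nginx"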
- 🛠 Real DevOps Use Case — Create Folder via Script
#!/bin/bash

create_directory() {
  mkdir test
}

if ! create_directory; then
  echo "Directory already exists — exiting."
  exit 1
fi
echo "Directory created successfully"
✔ Demonstrates if-condition ✔ Demonstrates error handling ✔ Demonstrates function use
- 🎯 Real DevOps Logic Example — Process Counting
ps -ef | grep python | wc -l
Add logic:
count=$(ps -ef | grep python | wc -l)
if [[ $count -gt 1 ]]; then
  echo "Python service is running"
else
  echo "Python service is DOWN!"
fi
These checks form the basis of:
- monitoring
- health checks
- auto-restarts
- 📌 Part 2 Complete
You now understand the full logic system in shell scripting:
✔ Numeric & string comparisons ✔ if / else / elif ✔ case (menu-driven automation) ✔ for, while, until loops ✔ functions for reusable automation
You now have the building blocks for real-world DevOps scripts.
=================================================================
Part 3 - Error Handling, Debugging Flags, Exit Codes & Real DevOps Automation Scripts
By Ashish — Learn in Public DevOps Journey (Week 2) 🔗 LinkedIn: https://www.linkedin.com/in/ashish360/
📘 Table of Contents
- Why Error Handling Matters in DevOps
- Exit Codes (0, non-zero, $? usage)
- set -e, set -x, set -o pipefail, set -exo
- Error Handling Patterns
- Debugging Shell Scripts
- Writing Safe & Reliable DevOps Scripts
- Practical DevOps Examples
- Directory creation with error handling
- Process validation
- Automated deployment script
- Django Notes App deployment
- AWS EC2 creation through script
- Summary & Next Part Preview
🚨 Why Error Handling Is Critical in DevOps
DevOps engineers write scripts that automate:
- deployments
- backups
- AWS/GCP/Azure infra creation
- CI/CD steps
- service monitoring
- log processing
A single silent error can:
- break deployments
- corrupt environments
- delete production data
- take servers offline
So DevOps scripts must never fail silently. That’s why error handling and debugging flags matter.
- 🧯 Understanding Exit Codes Every command in Linux returns an exit code.
✔ Exit code meaning:
- 0: success
- 1-255: error / failure
Check the exit code of the last command:
echo $?
Example:
ls /not_found
echo $?   # 2 → means failure
As a DevOps engineer, you must use exit codes to stop scripts when something breaks.
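A minimal sketch of acting on an exit code to stop a script (the backup command and path are illustrative assumptions):

#!/bin/bash
tar -czf /backup/etc.tar.gz /etc
if [[ $? -ne 0 ]]; then
  echo "Backup failed, stopping the script"
  exit 1
fi
echo "Backup succeeded"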
- 🛑 set Flags — Your Safety Net in Shell Scripts
These are critical for production-grade scripts.
3.1 set -e → Exit Immediately on Error
If any command fails, the script stops.
set -e
mkdir test
cp abc.txt xyz.txt   # if this fails → script stops
echo "Will NOT run"
When DevOps uses it:
- CI/CD pipelines
- AWS provisioning
- Database backup scripts
- Deployment scripts
3.2 set -x → Show Commands as They Execute (Debugging)
set -x
echo "Hello"
ls -l
Output looks like:
+ echo Hello
+ ls -l
Each executed command is echoed with a leading +, which helps debug scripts line-by-line.
3.3 set -o pipefail → Fail If Any Command in a Pipeline Fails
Without this:
command1 | command2 | command3
❌ Only command3’s exit code is checked ✔ Earlier failures are ignored silently
With pipefail:
set -o pipefail
Pipeline fails if any command fails — perfect for DevOps pipelines.
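A quick demonstration of the difference (a minimal sketch; the exit codes are noted in the comments):

#!/bin/bash
false | true
echo "Without pipefail: exit code = $?"   # prints 0, the failure of 'false' is hidden

set -o pipefail
false | true
echo "With pipefail: exit code = $?"      # prints 1, the pipeline is reported as failed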
3.4 set -euo pipefail — The DevOps Gold Standard
set -euo pipefail
Meaning:
- -e → Exit on errors
- -u → Undefined variables cause errors
- -o pipefail → Pipelines must succeed
This is the correct way to write most DevOps automation scripts.
- ⚙️ Error Handling Patterns (Must Learn)
Pattern 1 — Use || for Manual Error Catching
mkdir demo || {
  echo "Failed to create directory"
  exit 1
}
Pattern 2 — Use a Function and Catch Its Error
create_directory() {
  mkdir demo
}

if ! create_directory; then
  echo "Directory already exists"
  exit 1
fi
Pattern 3 — Use Combined Flags
set -euo pipefail

deploy() {
  echo "Deploying..."
  cp app.conf /etc/app/
}

deploy || {
  echo "Deployment failed"
  exit 1
}
- 🐞 Debugging Your Script (DevOps Way)
Use set -x during debugging:
set -x
Or wrap only the part you want to debug:
set -x
command1
command2
set +x
Print variables to trace execution:
echo "User: $USER"
echo "Directory: $PWD"
Debugging is a non-negotiable DevOps skill.
- 🔥 Real DevOps Scripts with Error Handling Below are the scripts from this week’s practice, now rewritten cleanly, documented, and production-ready.
✔ Example 1 — Safe Directory Creation Script
#!/bin/bash

create_directory() {
  mkdir test
}

if ! create_directory; then
  echo "Directory already exists — exiting."
  exit 1
fi

echo "Directory created successfully."
Used for:
- log directories
- deployment folders
✔ Example 2 — Count Running Processes
count=$(ps -ef | grep python | wc -l)

if [[ $count -gt 1 ]]; then
  echo "Python service is running"
else
  echo "Python service is DOWN!"
fi
Used in:
- service monitors
- cron jobs
- health checks
✔ Example 3 — Django Notes App Deployment (Full Script) My complete script, now cleaned and formatted:
#!/bin/bash

echo "********** DEPLOYMENT STARTED **********"

code_clone() {
  echo "Cloning Django app..."
  if [ -d "django-notes-app" ]; then
    echo "Directory exists, skipping clone."
  else
    git clone https://github.com/LondheShubham153/django-notes-app.git || {
      echo "Clone failed"; return 1;
    }
  fi
}

install_requirements() {
  echo "Installing dependencies..."
  sudo apt-get update && sudo apt-get install -y docker.io nginx docker-compose || {
    echo "Dependency installation failed"; return 1;
  }
}

required_restarts() {
  echo "Performing restarts..."
  sudo chown "$USER" /var/run/docker.sock || {
    echo "Failed to change docker.sock ownership"; return 1;
  }
}

deploy() {
  echo "Building and deploying the app..."
  docker build -t notes-app . && docker-compose up -d || {
    echo "Deployment failed"; return 1;
  }
}
# Make sure the code is present, then work inside the repo directory
if ! code_clone; then
  echo "Code clone failed, exiting."
  exit 1
fi
cd django-notes-app || exit 1

if ! install_requirements; then exit 1; fi
if ! required_restarts; then exit 1; fi
if ! deploy; then
  echo "Deployment failed — notifying admin..."
  exit 1
fi
echo "********** DEPLOYMENT DONE **********"
This is real DevOps-grade code.
✔ Example 4 — AWS EC2 Creation Script (Professional Version)
My script, polished for publication:
#!/bin/bash
set -euo pipefail
check_awscli() {
  if ! command -v aws &>/dev/null; then
    echo "AWS CLI is not installed. Installing..."
    install_awscli
  fi
}
install_awscli() {
  echo "Installing AWS CLI v2..."
  curl -s "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
  sudo apt-get install -y unzip
  unzip awscliv2.zip
  sudo ./aws/install
  rm -rf aws awscliv2.zip
  aws --version
}
wait_for_instance() {
    local instance_id="$1"
echo "Waiting for EC2 instance..."
while true; do
state=$(aws ec2 describe-instances \
--instance-ids "$instance_id" \
--query 'Reservations[0].Instances[0].State.Name' \
--output text)
[[ "$state" == "running" ]] && break
sleep 10
done
}
create_ec2_instance() {
    instance_id=$(aws ec2 run-instances \
        --image-id "$1" \
        --instance-type "$2" \
        --key-name "$3" \
        --subnet-id "$4" \
        --security-group-ids "$5" \
        --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=$6}]" \
        --query 'Instances[0].InstanceId' \
        --output text)
echo "Created instance: $instance_id"
wait_for_instance "$instance_id"
}
main() {
    check_awscli
AMI_ID=""
INSTANCE_TYPE="t2.micro"
KEY_NAME=""
SUBNET_ID=""
SECURITY_GROUP_IDS=""
INSTANCE_NAME="Shell-Script-EC2-Demo"
create_ec2_instance "$AMI_ID" "$INSTANCE_TYPE" "$KEY_NAME" "$SUBNET_ID" "$SECURITY_GROUP_IDS" "$INSTANCE_NAME"
}
main "$@"
Used in:
- DevOps automation
- Infrastructure creation
- AWS provisioning
- Demo/testing environments
- 🧾 Part 3 Complete You now understand: ✔ Exit codes ✔ Error handling (||, functions, manual checks) ✔ set -e, set -x, set -o pipefail, set -euo pipefail ✔ Debugging shell scripts ✔ Crafting production-grade automation ✔ Real-world scripts for EC2, deployments, directories, processes
This is the level of scripting expected from a DevOps engineer.
=================================================================
Part 4 - Trap, Signals, Cron Jobs & Background Automation (Professional DevOps Guide)
By Ashish — Learn in Public DevOps Journey (Week 2) 🔗 LinkedIn: https://www.linkedin.com/in/ashish360/
📘 Table of Contents
- Understanding Signals in Linux
- trap — The DevOps Lifesaver
- Common Signals Every DevOps Engineer Must Know
- Automating Cleanup with trap
- Running Background Processes
- nohup for long-running jobs
- Scheduling with Cron (Beginner → Advanced)
- Practical DevOps Automation Examples
- Production-Grade Script Templates
- Summary & What’s Coming in Part 5
⚠️ Understanding Linux Signals (Critical DevOps Concept) Linux processes constantly interact with signals — notifications sent by users, other programs, the kernel, or system events.
Most commonly used signals:
- SIGINT (2): interrupt (Ctrl + C)
- SIGTERM (15): request to terminate
- SIGKILL (9): force kill (cannot be trapped)
- SIGHUP (1): hangup (restart daemons)
- SIGQUIT (3): quit & dump core
- SIGUSR1/SIGUSR2: custom signals
As a DevOps engineer, you MUST handle signals safely because:
- Deployments shouldn’t quit mid-way
- Cleanup must always run
- Temporary files must be removed
- Background tasks must be reliable
That’s where… 👇
- 🔗 trap — The Most Underrated DevOps Tool trap executes a command when a signal is received.
Syntax:
trap "commands" SIGNALS
- 🧹 Example 1 — Cleanup Temporary Files on Exit
#!/bin/bash

tmp_file="/tmp/myapp.log"

trap "echo 'Cleaning...'; rm -f $tmp_file" EXIT

echo "Working..."
touch $tmp_file
sleep 5

echo "Done."
What it does:
- Creates temp file
- Removes it even if you press Ctrl+C
- Ensures the script never leaves junk behind
This is production-safe scripting.
- ❗ Example 2 — Trap SIGINT (Ctrl + C)
trap "echo 'Ctrl+C detected. Stopping safely...'; exit" SIGINT

while true; do
  echo "Running..."
  sleep 2
done
Useful for:
- long-running operations
- loops
- automation tasks
- 🛑 Example 3 — Prevent Script from Dying Unexpectedly
trap "echo 'Script terminated unexpectedly'; exit 1" SIGTERM
Used in:
- CI/CD pipelines
- systemd scripts
- deployment hooks
- 🪝 Example 4 — Multiple Signals at Once
trap "cleanup" SIGINT SIGTERM EXIT

cleanup() {
  echo "Cleaning up resources..."
  rm -rf /tmp/deploy
}
- ⚙️ Background Jobs (Essential for DevOps) You learned loops, conditions, debugging… Now let’s add background jobs.
7.1 Run Command in Background command &
Example:
python server.py &
Check jobs: jobs
Resume background job: bg %1
Bring to foreground: fg %1
7.2 nohup — Keep Process Alive After Logout
Used heavily in servers, SSH sessions, and EC2 instances.
nohup python app.py &
You close the terminal → the process still runs. Output saved to: nohup.out
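A common refinement (my addition, not part of the original notes) is to redirect output explicitly instead of relying on nohup.out; app.py and the log path are illustrative:

# Run in the background, survive logout, and keep logs in a known place
nohup python app.py > /var/log/app.log 2>&1 &
echo "Started with PID $!"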
- ⏰ Cron Jobs — Scheduling in Linux (DevOps Mandatory Skill)
Cron = Scheduler for Linux automation.
Open cron:
crontab -e
8.1 Cron Syntax
* * * * * command
| | | | |
| | | | └── day of week (0–6)
| | | └──── month (1–12)
| | └────── day of month (1–31)
| └──────── hour (0–23)
└────────── minute (0–59)
8.2 Examples DevOps Engineers Use Daily
Run backup every night:
0 0 * * * /home/ashish/backup.sh
Run a health check every 5 minutes: */5 * * * * /home/ashish/check_service.sh
Clear logs weekly: 0 0 * * 0 rm -rf /var/log/*.gz
8.3 Example — Cron + Error Handling
#!/bin/bash

set -euo pipefail

logfile="/var/log/health.log"

if ! systemctl is-active --quiet nginx; then
  echo "$(date) — Nginx DOWN" >> "$logfile"
  systemctl restart nginx
fi
Cron entry: */2 * * * * /home/ashish/health.sh
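One optional refinement (my suggestion, not from the original notes) is to capture the script’s own output so failures inside the cron run stay visible; the log path is illustrative:

*/2 * * * * /home/ashish/health.sh >> /var/log/health_cron.log 2>&1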
- ⚡ Real-World DevOps Automation Examples
✔ Example: Kill Hanging Process & Log the Event
#!/bin/bash
trap "echo 'Terminated!' >> /var/log/proc.log" SIGTERM

pid=$(pgrep python)

if [[ -z "$pid" ]]; then
  echo "No python process running"
else
  kill -9 $pid
  echo "Killed python process $pid"
fi
✔ Example: Auto-Restart Docker on Failure
#!/bin/bash

trap "echo 'Docker restarted due to failure'" EXIT

if ! docker ps >/dev/null; then
  systemctl restart docker
fi
✔ Example: Cleanup Docker Resources Automatically
#!/bin/bash

set -e

trap "docker system prune -f" EXIT

echo "Running build..."
docker build -t webapp .
- 📦 Production Template — “Safe DevOps Script” Use this for all future scripts:
#!/bin/bash

set -euo pipefail

trap "echo 'Script exited unexpectedly'; cleanup" EXIT

cleanup() {
  echo "Performing cleanup..."
}

main() {
  echo "Starting task..."
}

main "$@"
This template = best practice.
🎉 Part 4 Complete
You now understand advanced automation fundamentals:
- trap
- Signal handling
- Background tasks
- nohup
- Cron jobs
- Cleanup automation
- Production-safe scripting patterns
This is the professional level expected in DevOps interviews & real jobs.
=================================================================
Part 5 - File Operations, find, grep, awk, sed, Automation Scripts
By Ashish — Learn in Public DevOps Journey (Week 2) 🔗 LinkedIn: https://www.linkedin.com/in/ashish360/
📘 Table of Contents
- File Operations in Shell Scripting (Beginner → Pro)
- Reading/Writing Files in Bash
- Using find like a DevOps Engineer
- DevOps-Grade grep, sed, awk Integration
- File Permissions Automation
- Bulk Folder Creation & Organization
- Building Real DevOps Automation Scripts
- Log Automation, Cleanup Scripts, CI/CD Helpers
- Production Template — File-Handling Script
- Summary & Next Steps
📁 File Operations in Shell Scripting (Practical Guide)
File handling is at the core of DevOps:
- Editing configs
- Cleaning logs
- Managing backups
- Processing monitoring data
- Updating YAML/JSON
- Generating environment files
Here are the must-know techniques.
1.1 Create a File touch filename.txt
Inside script:
#!/bin/bash
touch report.log
1.2 Write to a File
echo "Hello DevOps" > file.txt
This overwrites the file. Append instead:
echo "New entry" >> file.txt
1.3 Read a File Line-by-Line
while read line; do
  echo "Line: $line"
done < file.txt
Use-case: processing logs, configs, CSVs, access logs
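A slightly more robust variant worth noting (my addition, not from the original notes): IFS= and -r keep leading whitespace and backslashes intact in each line.

while IFS= read -r line; do
  echo "Line: $line"
done < file.txt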
1.4 Read a specific line number
sed -n '5p' file.txt
1.5 Replace text in a file
sed -i 's/old/new/g' config.yaml
Used in:
- CI/CD variable changes
- Environment promotion (dev → prod)
- Auto-fixing broken configs
- 🔍 find — The Most Important DevOps File Tool
find is used everywhere:
- troubleshoot disk usage
- locate configs
- find failing scripts
- cleanup logs
- manage permissions
2.1 Find file by name
find / -name "nginx.conf"
2.2 Find directories
find /var -type d -name "log*"
2.3 Find files modified recently
Last 24 hours: find /var/log -mtime -1
2.4 Delete files older than X days
Critical DevOps script: find /var/log -type f -mtime +7 -delete
Use this for: disk pressure issues log rotation
CI runners storing artifacts
2.5 Find files > 100MB find / -type f -size +100M
2.6 Find and execute a command (🔥 powerful)
find /tmp -type f -name "*.log" -exec rm -f {} \;
- 🎯 Integrating grep + sed + awk (DevOps-Level)
These 3 tools together = 90% of DevOps text processing automation.
3.1 grep → find matching lines
Extract errors from logs:
grep -i "error" app.log
3.2 sed → modify lines
Bulk edit config values:
sed -i 's/ENV=dev/ENV=prod/g' .env
3.3 awk → extract fields/reports
Extract IP & status:
awk '{print $1, $9}' access.log
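Putting the three together (a minimal sketch; it assumes the default Nginx/Apache access-log format):

# Count how many times each client IP produced a 5xx error
grep " 50[0-9] " access.log \
  | awk '{print $1}' \
  | sort | uniq -c | sort -nr | head \
  | sed 's/^ *//'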
- 🗄 Automating File Permissions
Fix “permission denied” errors: sudo chown -R $USER:$USER /path
Change permission in script: chmod +x deploy.sh
- 🧬 Bulk Folder & File Creation
5.1 Create 100 folders automatically
for i in {1..100}; do
  mkdir "folder$i"
done
5.2 Create folders using arguments
#!/bin/bash

for (( i=$1 ; i<=$2 ; i++ )); do
  mkdir "$3$i"
done
Run:
./create.sh 1 50 week
5.3 Create multiple files based on list
for f in $(cat list.txt); do
  touch "$f.txt"
done
- ⚡Real DevOps Automation Scripts
These are the scripts DevOps engineers actually use daily.
6.1 Cleanup Logs Older Than X Days
#!/bin/bash

LOG_DIR="/var/log"
DAYS=7

find $LOG_DIR -type f -mtime +$DAYS -delete

echo "Old logs cleaned."
6.2 Disk Pressure Emergency Script
#!/bin/bash

du -sh /* | sort -h | tail
df -h
6.3 Find top 10 IPs hitting your server
awk '{print $1}' access.log | sort | uniq -c | sort -nr | head
6.4 Auto-Restart Service When It Goes Down
#!/bin/bash

if ! systemctl is-active --quiet nginx; then
  echo "Nginx down! Restarting..."
  systemctl restart nginx
fi
6.5 Check all Python Processes
ps -ef | grep python | awk '{print $2}'
6.6 Delete specific type of files safely
find . -name "*.tmp" -exec rm -f {} \;
6.7 Backup a directory tar -cvzf backup.tar.gz /etc
6.8 Parse AWS EC2 instance data
aws ec2 describe-instances | jq '.Reservations[].Instances[].InstanceId'
- 🏗 Production-Grade File Automation Script Template
#!/bin/bash
set -euo pipefail

trap "echo 'Exiting safely'; cleanup" EXIT

cleanup() {
  echo "Performing cleanup..."
}

log() {
  echo "$(date) — $1"
}

process_files() {
  for file in *.log; do
    log "Processing $file"
    grep -i "error" "$file" >> errors.txt
  done
}

main() {
  log "Script started"
  process_files
  log "Script completed"
}

main "$@"
This is the structure companies expect.
🎉 Part 5 Complete - Advanced Automation Unlocked
You now know:
- Full file operations
- find for DevOps scenarios
- Combining grep + awk + sed
- Bulk file & folder creation
- Cleanup, backup, monitoring scripts
- Production-safe file automation templates
This is professional-grade shell scripting, not beginner-level theory.
=================================================================
Part 6 - DevOps Project Scripts, AWS Automation, Error Handling & Deployment
By Ashish — Learn in Public DevOps Journey (Week 2) 🔗 LinkedIn: https://www.linkedin.com/in/ashish360/
📘 Table of Contents
- Why DevOps Engineers Automate Cloud Tasks
- Understanding AWS CLI for Automation
- Installing AWS CLI via Shell Script
- AWS EC2 Creation Script (Deep-Dive)
- AWS Instance Waiters (Production-Grade)
- Error Handling, Logging & Exit Codes
- Automating Docker + Nginx Deployment
- Django App Deployment Script (My Notes Included)
- Real DevOps Cloud Automation Examples
- CI/CD Integration Patterns
- Summary & Next Part
- 🚀 Why DevOps Engineers Automate Cloud Tasks
Modern DevOps is automation-first. A DevOps engineer must be able to:
- spin up servers
- install dependencies
- run deployments
- debug errors
- restart services
- scale infra
- perform health checks
all using shell scripting, not just clicking on the AWS Console.
My Week-2 work (EC2 script, error handling, Django deploy script) reflects exactly what real engineers do.
- ⚙️ Understanding AWS CLI for Automation
To automate AWS from your shell scripts, AWS CLI must be:
- installed
- configured
- authenticated
- tested with simple commands
Check if installed:
aws --version
If output is missing → install inside the script.
- 🧰 Script: Install AWS CLI Automatically
This is the improved, production-level version of my script.
#!/bin/bash

set -euo pipefail

install_aws_cli() {
    echo "Installing AWS CLI v2..."
curl -s "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
sudo apt-get install -y unzip &>/dev/null
unzip -q awscliv2.zip
sudo ./aws/install
aws --version
rm -rf aws awscliv2.zip
}
check_aws() {
    if ! command -v aws &>/dev/null; then
        install_aws_cli
    else
        echo "AWS CLI already installed."
    fi
}
set -euo pipefail is essential in DevOps:
- -e → exit on error
- -u → undefined variables error
- -o pipefail → detect pipeline failures
- 🖥️ EC2 Instance Creation Script (Full Breakdown) This is the exact version DevOps teams use — simple, readable, powerful.
create_ec2_instance() {
    local ami_id="$1"
    local instance_type="$2"
    local key_name="$3"
    local subnet_id="$4"
    local sg_id="$5"
    local name="$6"
instance_id=$(aws ec2 run-instances \
--image-id "$ami_id" \
--instance-type "$instance_type" \
--key-name "$key_name" \
--subnet-id "$subnet_id" \
--security-group-ids "$sg_id" \
--tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=$name}]" \
--query 'Instances[0].InstanceId' \
--output text)
if [[ -z "$instance_id" ]]; then
echo "ERROR: Instance not created!"
exit 1
fi
echo "Instance created: $instance_id"
}
- ⏳ EC2 Waiters — Wait Until Instance Becomes Running
This is pro-level — prevents CI/CD failures due to boot delay.
wait_for_instance() {
    local instance_id="$1"
echo "Waiting for instance $instance_id to enter 'running' state..."
while true; do
state=$(aws ec2 describe-instances \
--instance-ids "$instance_id" \
--query 'Reservations[0].Instances[0].State.Name' --output text)
if [[ "$state" == "running" ]]; then
echo "Instance is now running."
break
fi
sleep 10
done
}
- 🐳 Docker + Django Deployment Script (My Notes → Polished Version)
Here is the polished, production-grade version of the Django deployment script from my notes.
#!/bin/bash

set -euo pipefail

repo="https://github.com/LondheShubham153/django-notes-app.git"

log() {
    echo "$(date) - $1"
}

code_clone() {
    log "Checking code directory..."
    if [[ ! -d django-notes-app ]]; then
        log "Cloning repository..."
        git clone "$repo" || return 1
    else
        log "Repo already exists. Skipping clone."
    fi
}

install_requirements() {
    log "Installing dependencies..."
    sudo apt-get update
    sudo apt-get install -y docker.io docker-compose nginx || return 1
}

configure_docker() {
    log "Configuring Docker..."
    sudo chown "$USER" /var/run/docker.sock || return 1
}

deploy() {
    log "Deploying Django app..."
docker build -t notes-app . || return 1
docker-compose up -d || return 1
log "Application deployed successfully."
}
main() {
    log "Deployment started."
    code_clone
    cd django-notes-app || exit 1
    install_requirements
    configure_docker
    deploy
    log "Deployment finished."
}

main "$@"
This is a solid Week-2 DevOps script.
- 🧨 Error Handling in DevOps Scripts (Industry Level)
Use the pattern:
if ! command; then
  echo "Error occurred"
  exit 1
fi
Example:
if ! mkdir demo; then
  echo "Directory exists! Exiting..."
  exit 1
fi
This prevents:
- partial deployment
- corrupted services
- half-created resources
- CI/CD failures
- 🛠 Real DevOps Cloud Automation Examples
These are practical, real-world scripts DevOps engineers write constantly.
8.1 Backup your EC2 metadata aws ec2 describe-instances > instances.json
8.2 Restart EC2 instance using script
aws ec2 reboot-instances --instance-ids i-123456
8.3 Get EC2 public IP automatically
aws ec2 describe-instances \
  --instance-ids i-123 \
  --query 'Reservations[0].Instances[0].PublicIpAddress' \
  --output text
8.4 Upload files to S3 automatically aws s3 cp backup.tar.gz s3://mybucket/
8.5 Check if AWS credentials are expired
aws sts get-caller-identity || echo "AWS credentials expired!"
- 🧩 CI/CD Integration — Best Practices You’ll use your scripts in:
GitHub Actions Jenkins GitLab CI Bitbucket Pipelines AWS CodePipeline
Checklist:
✔ All scripts must return correct exit codes
✔ Avoid echoing secrets
✔ Never hardcode AWS keys
✔ Use IAM role-based credentials
✔ Scripts must work in non-interactive mode
✔ Include retries & timeouts (see the sketch below)
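A retry helper along these lines is a common pattern (a minimal sketch, not from the original notes; the retried command, attempt count, and sleep are illustrative):

# Retry a flaky command a few times before giving up
retry() {
  local max="$1"; shift
  local attempt=1
  until "$@"; do
    if [[ $attempt -ge $max ]]; then
      echo "Command failed after $max attempts: $*"
      return 1
    fi
    echo "Attempt $attempt failed, retrying..."
    attempt=$(( attempt + 1 ))
    sleep 5
  done
}

retry 3 aws s3 cp backup.tar.gz s3://mybucket/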
🎉 Part 6 Complete — AWS + DevOps Automation Achieved You now know how to: ✔ Create, manage, and automate AWS EC2 ✔ Build error-proof deployment scripts ✔ Integrate Docker + Nginx automation ✔ Apply real-world debugging ✔ Use production-grade error handling ✔ Follow DevOps scripting best practices
This Part brings your skill to junior → mid-level DevOps engineer level.
=================================================================
Part 7 - Real-World DevOps Projects, Interview-Grade Tasks & Advanced Shell Scripting Patterns
By Ashish — Learn in Public DevOps Journey (Week 2) 🔗 LinkedIn: https://www.linkedin.com/in/ashish360/
📘 Table of Contents
- Why Real-World Shell Scripting Matters for DevOps
- Practical DevOps Project Scenarios (Beginner → Advanced)
- File Automation & Log Processing Scripts
- System Health, Monitoring & Alert Automation
- Networking, Ports & Service Debug Scripts
- Infrastructure Automation (AWS, Docker, Nginx)
- CI/CD Pipeline Shell Tasks
- Interview-Grade Shell Scripting Problems
- Optimization, Best Practices & Production Standards
- Final Summary + Week-2 Completion
🚀 Why Real-World Shell Scripting Matters in DevOps Shell scripting is not about writing “scripts”; it’s about automating infrastructure.
In real DevOps jobs, shell scripts are used to automate: ✔ deployments ✔ server provisioning ✔ log parsing ✔ monitoring ✔ backups ✔ docker builds ✔ pipeline steps ✔ cloud tasks ✔ alerts
This chapter focuses on everything interviewers expect AND everything real DevOps teams use daily.
- 🧩 Practical DevOps Project Scenarios
(Beginner → Intermediate → Advanced)
✅ Scenario 1 — Rotate & Archive Logs Daily (Cron + Shell)
#!/bin/bash

log_dir="/var/log/nginx"
backup_dir="/backup/nginx"

timestamp=$(date +%F-%H-%M)
mkdir -p "$backup_dir"

tar -czf "$backup_dir/nginx-$timestamp.tar.gz" $log_dir/*

echo "Log backup completed: $timestamp"
Add cronjob:
0 1 * * * /home/ashish/rotate_logs.sh
✅ Scenario 2 — Clean Old System Logs Automatically
#!/bin/bash
find /var/log -type f -mtime +7 -delete
Interviewers LOVE this one.
✅ Scenario 3 — Find Top CPU/Memory Processes & Alert
#!/bin/bash
cpu_limit=80
ps aux --sort=-%cpu | awk -v limit=$cpu_limit 'NR>1 && $3 > limit {print "HIGH CPU:", $2, $3"%", $11}'
✅ Scenario 4 — Backup Database Daily
#!/bin/bash
mysqldump -u root -pPASSWORD dbname > backup_$(date +%F).sql
With cron: 0 2 * * * /home/ashish/db_backup.sh
✅ Scenario 5 — Restart a Service if It’s Down This script can literally SAVE a production outage:
#!/bin/bash

service="nginx"

if ! systemctl is-active --quiet $service; then
  echo "$(date): $service DOWN — restarting"
  systemctl restart $service
fi
✅ Scenario 6 — Auto-Deploy Build Artifacts (CI/CD Ready)
#!/bin/bash

rsync -avz ./build/ user@server:/var/www/app/
systemctl restart nginx
Perfect for GitHub Actions, Jenkins, GitLab CI.
- 🧪 File Automation & Log Processing Scripts
🔍 Extract all unique IPs from Nginx logs
awk '{print $1}' access.log | sort | uniq
🛠 Count HTTP 500 errors
grep " 500 " access.log | wc -l
📂 Find largest files on server (production debugging) find / -type f -exec du -Sh {} + | sort -rh | head -n 20
🧹 Delete files older than X days find /tmp -type f -mtime +3 -delete
Very common DevOps interview question.
- 🖥 System Health, Monitoring & Alerts
Memory alert script (interview favorite)
#!/bin/bash
threshold=80

used=$(free | awk '/Mem/ {print int($3/$2 * 100)}')

if [[ $used -gt $threshold ]]; then
  echo "WARNING: RAM usage is ${used}%"
fi
Disk space alert
#!/bin/bash

df -h | awk '$5+0 > 85 {print "HIGH DISK USAGE:", $0}'
CPU alert
top -bn1 | grep "Cpu(s)"
- 🌐 Networking, Ports & Service Debug Scripts
Check if a port is open
nc -zv localhost 8080
Find which process is using a port sudo lsof -i :8080
Check DNS resolution in script nslookup google.com
- ☁️ AWS Automation Scripts (Real DevOps Work) 🔸 Create EC2 🔸 Wait until ready 🔸 Deploy Docker app 🔸 Restart on failure
EC2 creation was already covered in Part 6. Here are two more production scripts:
🛢 Get EC2 instance public IP
aws ec2 describe-instances \
  --instance-ids "$1" \
  --query "Reservations[0].Instances[0].PublicIpAddress" \
  --output text
🧹 Clean unused EBS snapshots (AWS cost savings)
aws ec2 describe-snapshots --owner self \
  | jq -r '.Snapshots[] | select(.StartTime < "2024-01-01") | .SnapshotId' \
  | xargs -I {} aws ec2 delete-snapshot --snapshot-id {}
- 🔁 CI/CD Pipeline Shell Scripting Patterns
Almost every CI/CD pipeline uses:
✔ mkdir ✔ cp, rsync ✔ sed for config editing ✔ grep/log parsing ✔ exit codes ✔ if conditions ✔ docker build & run
Example: extract version from package.json
version=$(grep version package.json | cut -d '"' -f4)
echo "Deploying version: $version"
- 🎯 Interview-Grade Shell Scripting Questions (My Week-2 List)
Here are the exact problems companies ask:
- Print Fibonacci series up to N
- Reverse a file content
- Find the largest of 3 numbers
- Monitor a log file for a keyword and alert
- Parse CSV and print a column
- Archive and rotate logs
- Validate if a service is running
- Extract IP address from logs
- Write your own “ls” command using shell
- Automate daily backups
Try solving all 10 yourself; a sample solution to one of them is sketched below.
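For instance, a minimal sketch for #4, monitoring a log file for a keyword and alerting (the log path and keyword are illustrative assumptions):

#!/bin/bash
logfile="/var/log/app.log"
keyword="ERROR"

# Follow the log and print an alert whenever the keyword appears
tail -Fn0 "$logfile" | while read -r line; do
  if echo "$line" | grep -q "$keyword"; then
    echo "$(date): ALERT - found '$keyword' in $logfile: $line"
  fi
done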
- 📝 Optimization, Best Practices & Production Standards ✔ Use set -euo pipefail ✔ Use functions ✔ Use clear naming ✔ Modular scripts ✔ Avoid hardcoding secrets ✔ Validate inputs ✔ Use logging function ✔ Test scripts with shellcheck
Example:
log() {
  echo "$(date) — $1"
}
🎉 Part 7 Complete — You’ve Reached DevOps-Ready Shell Scripting Level
By completing Part 7, you’ve achieved:
✔ Hands-on DevOps automation skills ✔ Cloud scripting knowledge ✔ Error-safe, production-grade syntax ✔ Ability to write monitoring & alerting scripts ✔ Confidence for interviews ✔ CI/CD scripting proficiency ✔ AWS automation capability ✔ Understanding real infrastructure problems
This completes Week 2 of your Learn-in-Public DevOps Journey.