If you've ever spent hours wrestling with dependency conflicts, mismatched library versions, or the dreaded "it works on my machine" problem in robotics development, you're not alone. After countless frustrating setup sessions, I built a containerized development environment that makes PX4 drone development actually enjoyable.
In this article, I'll walk you through creating a production-ready development container that integrates PX4 Autopilot, Gazebo Garden simulator, and ROS2 Humble, all orchestrated through VS Code's Dev Containers. Whether you're building autonomous drones, researching flight control algorithms, or just want a clean development environment, this setup will save you hours of configuration headaches.
Why Docker for Drone Development?
Before we dive into the code, let's talk about why containerization matters for aerospace software development.
The Problem with Traditional Setups
PX4 development traditionally requires:
- Specific Ubuntu versions (usually 20.04 or 22.04)
- Precise ROS2 distributions matched to your PX4 version
- Gazebo with exact plugin versions
- A constellation of Python packages at specific versions
- Build tools that need to play nicely together
Miss one dependency or mismatch one version? You're looking at hours of debugging. Need to switch between projects? Good luck maintaining multiple environments on one machine.
The Docker Solution
Containers give us:
Reproducibility: Your entire development environment is code. Clone the repo, open in VS Code, and you're coding in minutes, not hours.
Isolation: Want to experiment with PX4 main branch without breaking your stable setup? Spin up a new container. No conflicts, no fear.
Portability: Development environment works identically on Linux, macOS, and Windows. Your teammate gets exactly what you have.
Team Consistency: Everyone on your team develops against the same toolchain, eliminating "works on my machine" bugs before they reach production.
Architecture Overview
Our environment consists of three main layers:
┌─────────────────────────────────────────┐
│         VS Code Dev Container           │
│        (Development Interface)          │
├─────────────────────────────────────────┤
│           Docker Container              │
│   ┌──────────┐      ┌──────────┐        │
│   │   PX4    │      │   ROS2   │        │
│   │ Autopilot│      │  Humble  │        │
│   └──────────┘      └──────────┘        │
│   ┌────────────────────────────┐        │
│   │       Gazebo Garden        │        │
│   │        (Simulation)        │        │
│   └────────────────────────────┘        │
│   ┌────────────────────────────┐        │
│   │    Micro-XRCE-DDS Agent    │        │
│   └────────────────────────────┘        │
├─────────────────────────────────────────┤
│             Host System                 │
│       (X11 Display, GPU Access)         │
└─────────────────────────────────────────┘
Key architectural decisions:
1. Base Image: We use osrf/ros:humble-desktop-full as our foundation. This gives us a fully configured ROS2 installation with rviz2, Gazebo Classic dependencies, and all the visualization tools we need.
2. Gazebo Garden: While the base image includes Gazebo Classic, we install Gazebo Garden separately for its superior PX4 integration and modern physics engine.
3. Non-Root User: Running as a non-root user inside the container mirrors real-world development practices and prevents permission issues with mounted volumes.
4. Persistent Workspace: We bind-mount a workspace directory, so your code persists between container rebuilds.
The Dockerfile: Building Our Foundation
Let's build this from the ground up. Here's our Dockerfile with detailed explanations:
# Start with ROS2 Humble Desktop Full - gives us a complete ROS2 setup
FROM osrf/ros:humble-desktop-full
# User configuration - makes our container user match host UID/GID
# This prevents permission issues with mounted volumes
ARG USERNAME=developer
ARG USER_UID=1000
ARG USER_GID=$USER_UID
# Essential environment configuration
ENV DEBIAN_FRONTEND=noninteractive
ENV LANG=C.UTF-8
ENV LC_ALL=C.UTF-8
# Core development tools
# These are the essentials you'll use daily
RUN apt-get update && apt-get install -y \
ca-certificates \
gnupg \
lsb-release \
sudo \
wget \
curl \
git \
vim \
build-essential \
cmake \
python3-pip \
python-is-python3 \
gdb \
valgrind \
&& rm -rf /var/lib/apt/lists/*
Notice we're cleaning up apt lists at the end of each RUN command. This keeps our image size reasonable, critical when you're pulling images over potentially slow networks.
Python Dependencies for PX4
PX4's build system has specific Python requirements. Getting these right is crucial:
# PX4 has strict Python dependencies
# empy 3.3.4 specifically - newer versions break PX4's code generation
RUN pip3 install --no-cache-dir --upgrade pip && \
pip3 install --no-cache-dir \
empy==3.3.4 \
pyros-genmsg \
kconfiglib \
jsonschema \
jinja2 \
pyserial \
pyyaml \
packaging \
toml \
numpy \
pandas
The empy==3.3.4 pin is particularly important. PX4 uses empy for template-based code generation, and later versions changed their API in breaking ways. This is exactly the kind of gotcha that Docker helps us avoid.
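Because a single drifted pin can quietly break code generation, it can be worth asserting the pins before you build. Here is a minimal sketch of such a check; the function name and structure are mine, not part of the Dockerfile above, and the pin list just mirrors the pip install:

```python
from importlib.metadata import version, PackageNotFoundError

# Pins that PX4's code generation is known to be sensitive to.
PINS = {"empy": "3.3.4"}

def check_pins(pins, installed=None):
    """Return human-readable mismatch messages (empty list = all good).

    `installed` lets you inject versions for testing; by default it reads
    the live environment via importlib.metadata.
    """
    problems = []
    for name, wanted in pins.items():
        if installed is not None:
            found = installed.get(name)
        else:
            try:
                found = version(name)
            except PackageNotFoundError:
                found = None
        if found is None:
            problems.append(f"{name} is not installed (want {wanted})")
        elif found != wanted:
            problems.append(f"{name} is {found}, want {wanted}")
    return problems

if __name__ == "__main__":
    for msg in check_pins(PINS):
        print("WARNING:", msg)
```

Dropping a script like this into the post-create step surfaces a bad pin immediately instead of deep inside a PX4 build log.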
Installing Gazebo Garden
Gazebo Garden provides superior simulation capabilities for PX4:
# Add Gazebo package repository and install Garden
RUN wget https://packages.osrfoundation.org/gazebo.gpg -O /usr/share/keyrings/pkgs-osrf-archive-keyring.gpg && \
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/pkgs-osrf-archive-keyring.gpg] http://packages.osrfoundation.org/gazebo/ubuntu-stable $(lsb_release -cs) main" | tee /etc/apt/sources.list.d/gazebo-stable.list > /dev/null && \
apt-get update && \
apt-get install -y gz-garden && \
rm -rf /var/lib/apt/lists/*
Why Garden over Classic? Garden offers better physics accuracy, improved sensor models, and native integration with modern PX4 versions. The performance gains alone make it worth the extra setup.
PX4-Specific Dependencies
PX4 needs a lot of libraries for everything from video streaming to testing:
# PX4 dependencies - comprehensive but necessary
RUN apt-get update && apt-get install -y \
astyle \
lcov \
libgstreamer-plugins-base1.0-dev \
gstreamer1.0-plugins-bad \
gstreamer1.0-plugins-base \
gstreamer1.0-plugins-good \
gstreamer1.0-plugins-ugly \
gstreamer1.0-libav \
ninja-build \
protobuf-compiler \
libeigen3-dev \
libopencv-dev \
rsync \
&& rm -rf /var/lib/apt/lists/*
The Critical Piece: Micro-XRCE-DDS Agent
This is what bridges PX4 and ROS2. Understanding this component is crucial:
# Micro-XRCE-DDS Agent enables PX4 <-> ROS2 communication
# This builds from source to ensure compatibility
RUN cd /tmp && \
git clone https://github.com/eProsima/Micro-XRCE-DDS-Agent.git && \
cd Micro-XRCE-DDS-Agent && \
mkdir build && cd build && \
cmake .. && \
make && \
make install && \
ldconfig /usr/local/lib/ && \
cd / && rm -rf /tmp/Micro-XRCE-DDS-Agent
The Micro-XRCE-DDS Agent is what makes the magic happen. PX4 runs an XRCE-DDS client that sends uORB messages, and this agent translates them into ROS2 topics. Without it, your PX4 simulation and ROS2 nodes can't talk to each other.
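The bridge exposes each uORB topic under a /fmu/ namespace: topics PX4 publishes appear under /fmu/out/, and topics it consumes under /fmu/in/. A tiny helper capturing that convention (the function is illustrative; the naming matches the topics you'll see from ros2 topic list later):

```python
def px4_topic(uorb_name: str, direction: str = "out") -> str:
    """Map a uORB topic name to the ROS2 topic exposed by the XRCE-DDS bridge.

    direction="out": published by PX4 (telemetry, sensor data).
    direction="in":  consumed by PX4 (setpoints, commands).
    """
    if direction not in ("in", "out"):
        raise ValueError("direction must be 'in' or 'out'")
    return f"/fmu/{direction}/{uorb_name}"

# Examples matching what `ros2 topic list` shows once the agent is up:
print(px4_topic("vehicle_status"))         # /fmu/out/vehicle_status
print(px4_topic("vehicle_command", "in"))  # /fmu/in/vehicle_command
```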
Setting Up the Development User
Running as root in containers is tempting but problematic:
# Create a non-root user with matching host UID/GID
# This ensures files created in mounted volumes have correct permissions
RUN groupadd --gid $USER_GID $USERNAME \
&& useradd --uid $USER_UID --gid $USER_GID -m $USERNAME -s /bin/bash \
&& echo "$USERNAME:$USERNAME" | chpasswd \
&& echo "$USERNAME ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/$USERNAME \
&& chmod 0440 /etc/sudoers.d/$USERNAME
# Create workspace directory with correct ownership
RUN mkdir -p /home/$USERNAME/workspace && \
chown -R $USERNAME:$USERNAME /home/$USERNAME
USER $USERNAME
WORKDIR /home/$USERNAME
By matching UIDs between host and container, files you create have the right ownership. This prevents the nightmare scenario where your host can't modify files created by the container.
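If you ever build the image outside VS Code, you can pass the host IDs explicitly as build args. A quick sketch that composes the matching docker build invocation (the image tag px4-dev is made up for illustration):

```python
import os

def docker_build_cmd(image="px4-dev", dockerfile="Dockerfile"):
    """Compose a `docker build` command whose build args mirror the host user."""
    uid, gid = os.getuid(), os.getgid()  # current user's numeric IDs (POSIX)
    return (
        f"docker build -f {dockerfile} "
        f"--build-arg USER_UID={uid} --build-arg USER_GID={gid} "
        f"-t {image} ."
    )

print(docker_build_cmd())
```

The same values go into the build.args section of devcontainer.json shown below, so container-created files stay editable on the host.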
Environment Setup
Finally, we configure the shell environment:
# Configure bash environment with all necessary paths
RUN echo "source /opt/ros/humble/setup.bash" >> ~/.bashrc && \
echo "source /usr/share/gazebo/setup.bash" >> ~/.bashrc && \
echo "export GZ_SIM_RESOURCE_PATH=\$GZ_SIM_RESOURCE_PATH:/home/$USERNAME/workspace/PX4-Autopilot/Tools/simulation/gz/models" >> ~/.bashrc && \
echo "export GZ_SIM_SYSTEM_PLUGIN_PATH=\$GZ_SIM_SYSTEM_PLUGIN_PATH:/home/$USERNAME/workspace/PX4-Autopilot/build/px4_sitl_default/build_gz-garden" >> ~/.bashrc
These environment variables are critical. They tell Gazebo where to find PX4's custom models and plugins. Get these wrong, and your simulations won't load.
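A quick way to catch a bad path before it turns into a confusing Gazebo failure is to check each entry of these colon-separated variables. A minimal sketch (the helper name is mine):

```python
import os

def missing_dirs(path_var_value: str):
    """Return entries of a colon-separated path variable that don't exist on disk."""
    entries = [p for p in path_var_value.split(":") if p]
    return [p for p in entries if not os.path.isdir(p)]

if __name__ == "__main__":
    for var in ("GZ_SIM_RESOURCE_PATH", "GZ_SIM_SYSTEM_PLUGIN_PATH"):
        for bad in missing_dirs(os.environ.get(var, "")):
            print(f"{var}: {bad} does not exist (did you build PX4 yet?)")
```

Note that the plugin path only exists after the first PX4 build, so an empty warning list is only expected once the firmware has been compiled.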
VS Code Dev Container Configuration
Now let's wire this up to VS Code. The devcontainer.json file ties everything together:
{
"name": "PX4 ROS2 Development Environment",
"build": {
"dockerfile": "Dockerfile",
"args": {
"USERNAME": "developer",
"USER_UID": "1000",
"USER_GID": "1000"
}
},
"remoteUser": "developer",
"containerUser": "developer",
"workspaceFolder": "/home/developer/workspace",
"mounts": [
// Persistent workspace for your code
"source=${localWorkspaceFolder}/workspace,target=/home/developer/workspace,type=bind",
// X11 forwarding for Gazebo GUI
"source=/tmp/.X11-unix,target=/tmp/.X11-unix,type=bind"
],
"containerEnv": {
"DISPLAY": "${localEnv:DISPLAY}",
"WORKSPACE_DIR": "/home/developer"
},
"runArgs": [
"--privileged", // Needed for hardware access
"--network=host", // Simplifies ROS2 discovery
"--gpus=all" // GPU acceleration for Gazebo
],
Let's break down these critical settings:
--privileged: Required for USB device access if you're connecting real hardware. Yes, it reduces security isolation, but for development environments, the convenience outweighs the risk.
--network=host: ROS2's DDS discovery is much happier with host networking. It eliminates a whole class of networking issues.
--gpus=all: Gazebo can leverage your GPU for better rendering performance. If you're running multiple simulations, this makes a huge difference.
VS Code Extensions
These extensions make development significantly more productive:
"customizations": {
"vscode": {
"extensions": [
"ms-vscode.cpptools-extension-pack", // C++ IntelliSense
"ms-vscode.cmake-tools", // CMake integration
"ms-python.python", // Python debugging
"ms-azuretools.vscode-docker", // Docker management
"redhat.vscode-yaml" // YAML validation
],
"settings": {
"terminal.integrated.defaultProfile.linux": "bash",
"cmake.configureOnOpen": false, // Don't auto-configure - let us control it
"python.defaultInterpreterPath": "/usr/bin/python3"
}
}
}
}
Post-Creation Automation
The real power comes from the postCreateCommand. This runs after your container is created and sets up your entire workspace:
#!/bin/bash
set -e
echo "Setting up PX4 development environment..."
cd /home/developer/workspace
# Clone PX4 Autopilot if not present
if [ ! -d "PX4-Autopilot" ]; then
echo "Cloning PX4-Autopilot release 1.14..."
git clone -b release/1.14 https://github.com/PX4/PX4-Autopilot.git --recursive
fi
# Create ROS2 workspace structure
mkdir -p ros2_ws/src
cd ros2_ws/src
# Clone px4_msgs - the message definitions PX4 uses
if [ ! -d "px4_msgs" ]; then
git clone -b release/1.14 https://github.com/PX4/px4_msgs.git
fi
# Clone px4_ros_com - examples and utilities for PX4-ROS2 integration
if [ ! -d "px4_ros_com" ]; then
git clone -b release/v1.14 https://github.com/PX4/px4_ros_com.git
fi
Helper Scripts for Common Tasks
Development workflows have repeated commands. Let's script them:
# Create helpful scripts directory
mkdir -p /home/developer/scripts
# Build PX4 firmware (build only; the simulation target below also rebuilds incrementally)
cat > /home/developer/scripts/build_px4.sh << 'EOF'
#!/bin/bash
cd /home/developer/workspace/PX4-Autopilot
make px4_sitl
EOF
chmod +x /home/developer/scripts/build_px4.sh
# Run PX4 SITL with Gazebo
cat > /home/developer/scripts/run_simulation.sh << 'EOF'
#!/bin/bash
cd /home/developer/workspace/PX4-Autopilot
make px4_sitl gz_x500
EOF
chmod +x /home/developer/scripts/run_simulation.sh
# Start the DDS bridge
cat > /home/developer/scripts/run_dds_agent.sh << 'EOF'
#!/bin/bash
MicroXRCEAgent udp4 -p 8888
EOF
chmod +x /home/developer/scripts/run_dds_agent.sh
# Build ROS2 workspace
cat > /home/developer/scripts/build_ros2.sh << 'EOF'
#!/bin/bash
source /opt/ros/humble/setup.bash
cd /home/developer/workspace/ros2_ws
colcon build --symlink-install
source install/setup.bash
EOF
chmod +x /home/developer/scripts/build_ros2.sh
These scripts encapsulate complex commands into simple, memorizable actions. No more hunting through documentation for that one flag you always forget.
Your First Flight: The Complete Workflow
Now that we've built the environment, let's fly a drone (virtually). Here's the complete workflow from start to finish.
Initial Setup (First Time Only)
- Clone your repository with the Dockerfile and devcontainer.json
- Open in VS Code and click "Reopen in Container" when prompted
- Wait for post-create script to complete (5-10 minutes first time)
Building Everything
# Terminal 1: Build PX4
~/scripts/build_px4.sh
# Terminal 2: Build ROS2 workspace
~/scripts/build_ros2.sh
The first build takes time; PX4 is a large codebase. Grab coffee. Future builds will be incremental and much faster.
Running a Simulation
Open three terminals in VS Code (all within the container):
Terminal 1 - PX4 SITL:
~/scripts/run_simulation.sh
You should see Gazebo Garden launch with a quadcopter. The PX4 console will show initialization messages and eventually display a pxh> prompt.
Terminal 2 - DDS Agent:
~/scripts/run_dds_agent.sh
This starts the bridge between PX4 and ROS2. You'll see connection messages when PX4 links up.
Terminal 3 - ROS2:
source ~/workspace/ros2_ws/install/setup.bash
ros2 topic list
You should see PX4 topics like /fmu/in/vehicle_command and /fmu/out/vehicle_status. Success! PX4 is talking to ROS2.
Try listening to sensor data:
ros2 topic echo /fmu/out/sensor_combined
You'll see IMU data streaming from the simulated drone.
Taking Off
In Terminal 1 (the PX4 console), issue a takeoff command at the pxh> prompt:
pxh> commander takeoff
Watch your virtual drone lift off in Gazebo!
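The same takeoff can be commanded from ROS2 by publishing a px4_msgs/VehicleCommand on /fmu/in/vehicle_command. Running that needs an rclpy node (px4_ros_com has examples), but the payload itself is simple enough to sketch as plain data. The command IDs below come from the px4_msgs VehicleCommand definition; treat the exact field set as an assumption to verify against your px4_msgs checkout:

```python
# Command IDs defined in px4_msgs/msg/VehicleCommand (MAVLink-derived constants).
VEHICLE_CMD_COMPONENT_ARM_DISARM = 400
VEHICLE_CMD_NAV_TAKEOFF = 22

def vehicle_command(command: int, param1: float = 0.0, param2: float = 0.0):
    """Build the fields an rclpy node would copy into a VehicleCommand message.

    Target/source IDs of 1 address the single SITL instance; a multi-vehicle
    setup would vary them per drone.
    """
    return {
        "command": command,
        "param1": param1,
        "param2": param2,
        "target_system": 1,
        "target_component": 1,
        "source_system": 1,
        "source_component": 1,
        "from_external": True,
    }

arm = vehicle_command(VEHICLE_CMD_COMPONENT_ARM_DISARM, param1=1.0)  # param1=1.0 means arm
takeoff = vehicle_command(VEHICLE_CMD_NAV_TAKEOFF)
```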
Performance Optimization Tips
Running a full simulation stack can be resource-intensive. Here are optimization strategies I've learned:
For Laptops/Slower Machines
- Headless mode: Run Gazebo without the GUI when you don't need visualization:
HEADLESS=1 make px4_sitl gz_x500
- Speed factor: Run simulations faster than real-time:
export PX4_SIM_SPEED_FACTOR=2 # 2x speed
- Reduce sensor rates: Edit your model's SDF file to lower sensor update frequencies
For Workstations
- Pin CPU cores: Improve real-time performance:
export PX4_CPUAFFINITY=1
- GPU acceleration: Ensure your GPU drivers are properly passed through to Docker
- Increase shared memory: Add to runArgs in devcontainer.json:
"--shm-size=512m"
Troubleshooting Common Issues
"Permission denied" on mounted volumes
Check that your container user UID matches your host UID:
# On host
id -u
# Update USER_UID in devcontainer.json if they don't match
Gazebo won't display
Ensure X11 forwarding is working:
# On host
xhost +local:docker
# Check DISPLAY variable in container
echo $DISPLAY
ROS2 topics not appearing
- Verify the DDS agent is running:
ps aux | grep Micro
- Check PX4 is connected: look for "client connected" in the agent output
- Ensure PX4 SITL started properly: the pxh> prompt should be available
Build failures
Clear build caches:
# PX4
cd ~/workspace/PX4-Autopilot
make clean
make distclean
# ROS2
cd ~/workspace/ros2_ws
rm -rf build install log
Extending the Environment
This setup is a foundation. Here's how I've extended it for different projects:
Adding Custom PX4 Models
- Create your model in ~/workspace/PX4-Autopilot/Tools/simulation/gz/models
- Define the airframe in ~/workspace/PX4-Autopilot/ROMFS/px4fmu_common/init.d-posix
- Rebuild PX4
Integrating Additional ROS2 Packages
cd ~/workspace/ros2_ws/src
git clone <your-ros2-package>
cd ..
colcon build --symlink-install
Adding Computer Vision
Install OpenCV and vision dependencies:
RUN apt-get update && apt-get install -y \
ros-humble-cv-bridge \
ros-humble-image-transport \
ros-humble-vision-msgs
The Business Case for This Setup
If you're wondering whether this is worth the setup time, consider:
- Onboarding: New team members go from zero to productive in 15 minutes vs. days
- CI/CD: The same container runs in your pipeline, so there's no environment drift
- Multi-project: Switch between PX4 versions without conflicts
- Collaboration: "Share my environment" is literally git push
I've seen teams cut onboarding time from 2-3 days to under an hour with containerized development environments. For aerospace projects where regulatory compliance requires reproducible builds, it's not just convenient, it's essential.
Real-World Applications
This environment powers several projects I work on:
- Autonomous inspection drones: Computer vision nodes in ROS2 command PX4 waypoint missions
- Swarm coordination: Multiple PX4 instances with different IDs coordinating through ROS2
- Algorithm testing: Rapidly iterate on flight controllers in simulation before hardware testing
- Training simulations: Create scenarios for pilot training without risking hardware
Conclusion
We've built a professional-grade development environment that would have taken days to configure manually. The Dockerfile approach gives us reproducibility, the VS Code integration gives us productivity, and the automation scripts give us convenience.
This isn't just about making development easier (though it does that). It's about creating a foundation that scales, from solo projects to team collaboration to production deployment.
The complete source code for this setup is available in my repositories. Feel free to adapt it for your projects, and if you build something cool with it, I'd love to hear about it in the comments.
Next Steps
Want to take this further? Here are some directions to explore:
- Hardware Integration: Extend this setup to program physical flight controllers
- Advanced Simulation: Add wind models, GPS failures, and sensor noise
- Mission Planning: Integrate QGroundControl or build custom mission planners
- ML Integration: Add TensorFlow or PyTorch for autonomous flight research
The aerospace industry is undergoing a software revolution. With tools like these, we're not just building drones; we're building the future of autonomous flight.
Have questions or suggestions? Drop a comment below or reach out. Happy flying!