Docker for Robotics: Run ROS1, ROS2, and OpenCV Anywhere Without Dependency Chaos
- Karan Bhakuni
- 23 hours ago
- 4 min read
"It worked on my laptop" hits robotics harder than most fields. One teammate runs Ubuntu 22.04, another is stuck on Ubuntu 20.04, and the robot ships with something older. Then you add ROS1 vs ROS2 conflicts, OpenCV version drift, and GPU drivers that only behave on one machine. After a few late nights, your "simple" demo becomes a dependency crime scene.
Docker fixes the environment part of that problem. It packages the OS userspace, libraries, ROS distro, Python dependencies, and your workspace into a build you can run almost anywhere.
You still need to handle hardware and real-time requirements, but Docker stops the base development environment from constantly changing.
In this guide you’ll learn:
When Docker helps most in robotics development
The difference between Docker images and containers
How to build a portable ROS1, ROS2, and OpenCV development environment
Best practices for hardware access, GPUs, and deployment
What Docker Changes for a Robotics Project (And What It Doesn't)
Docker changes how you ship your development environment.
Instead of every laptop carrying a different mix of apt packages and pip installs, you define one environment once, then run it everywhere.
This brings four real benefits in robotics.
1. Repeatability
If a Docker image builds today, it will likely build next month, provided you pin your base image and dependency versions.
This prevents dependency drift between developers and robots.
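The caveat is that repeatability only holds if you actually pin. A minimal Dockerfile sketch of that discipline (the base-image tag, package list, and lock-file approach are illustrative assumptions, not project specifics):

```dockerfile
# Pin the base image to an exact tag, not a moving alias like "latest".
FROM ros:humble-ros-base-jammy

# Install tooling; clean apt lists to keep the layer small.
RUN apt-get update && apt-get install -y --no-install-recommends \
        python3-pip \
        git \
        build-essential \
    && rm -rf /var/lib/apt/lists/*

# Pin Python dependencies via a requirements file committed to the repo,
# ideally with exact "package==version" entries.
COPY requirements.txt .
RUN pip3 install --no-cache-dir -r requirements.txt
```

With exact versions recorded in the repo, every rebuild resolves to the same layers instead of whatever happens to be newest that day.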
2. Faster Onboarding
Instead of spending hours aligning ROS repositories and dependencies, a new developer just pulls the container.
Example:
docker pull ros:humble
docker run -it ros:humble bash
Within seconds, the developer has a working ROS environment.
3. Cross-Machine Consistency
You can run the exact same stack on:
developer laptops
robotics workstations
robot onboard computers
You can even move images manually between systems.
docker save ros-dev-image > ros-dev.tar
docker load < ros-dev.tar
4. Stable Simulation Environments
Simulation stacks like Gazebo, Isaac Sim, and RViz become easier to manage because their dependencies live inside the container.
For GUI tools like RViz:
xhost +local:docker
docker run -it \
-e DISPLAY=$DISPLAY \
-v /tmp/.X11-unix:/tmp/.X11-unix \
ros:humble rviz2
Docker Basics for Robotics Developers
Before building robotics environments, it's important to understand two core Docker concepts.
Docker Images
A Docker image is a frozen environment containing:
Linux userspace
ROS distribution
OpenCV libraries
application dependencies
Think of it as a blueprint for a development environment.
Docker Containers
A container is a running instance of an image.
Example workflow:
Build an image:
docker build -t my-ros-project .
Run the container:
docker run -it my-ros-project
View running containers:
docker ps
Stop a container:
docker stop <container_id>
Where Docker Helps Most in Robotics
Docker shines in day-to-day robotics development.
Running ROS1 and ROS2 on the Same Machine
Instead of fighting dependency conflicts:
docker run -it ros:noetic bash
docker run -it ros:humble bash
Each container runs its own ROS environment.
Keeping OpenCV Versions Consistent
Vision pipelines frequently break due to mismatched OpenCV versions.
Docker ensures every developer runs the same vision stack.
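One way to enforce that is to pin OpenCV in the image itself. A sketch, assuming a pip-based install (the version number is an illustrative placeholder; use whatever your pipeline was validated against):

```dockerfile
FROM ros:humble

# Pin OpenCV so every developer and robot runs the identical version.
# 4.8.1.78 is a placeholder; substitute your project's tested version.
RUN apt-get update && apt-get install -y --no-install-recommends python3-pip \
    && rm -rf /var/lib/apt/lists/* \
    && pip3 install opencv-python-headless==4.8.1.78
```

The -headless wheel skips GUI dependencies, which keeps the image smaller when the container only does processing and never calls cv2.imshow.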
Reproducing Robot Runtime Environments
You can replicate the robot runtime environment on a development PC, preventing deployment surprises.
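To mirror the robot exactly, one approach is to build the development image from the very base the robot runs, pinned by digest so it cannot silently change underneath you. A sketch (the sha256 value below is a placeholder, not a real digest):

```dockerfile
# Resolve the robot's digest once on the robot with:
#   docker inspect --format='{{index .RepoDigests 0}}' ros:humble
# The sha256 below is a placeholder, not a real digest.
FROM ros@sha256:0000000000000000000000000000000000000000000000000000000000000000

# Dev-only additions go on top; the runtime layers stay byte-identical
# to what the robot actually executes.
RUN apt-get update && apt-get install -y --no-install-recommends gdb vim \
    && rm -rf /var/lib/apt/lists/*
```

A digest-pinned FROM line means a tag push upstream can never change what "the robot's environment" means in your dev builds.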
Building a Portable ROS Development Environment
Start by verifying your Docker installation.
docker --version
docker info
docker run hello-world
Example Dockerfile for ROS Development
Create a simple development container.
FROM ros:humble
RUN apt-get update && apt-get install -y \
    python3-pip \
    git \
    build-essential \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /workspace
Build the image:
docker build -t ros-dev .
Run the container:
docker run -it ros-dev
Now you have a reproducible ROS environment.
Keeping Docker Images Small and Fast
Slow Docker builds usually happen when the Dockerfile copies the entire repository too early, invalidating the layer cache on every source change.
Better pattern:
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY src/ ./src
Docker can now cache the dependency layer, so editing source code no longer triggers a full reinstall.
Check image sizes:
docker images
Clean unused resources:
docker system prune
Using Multi-Stage Builds for Robotics
Multi-stage builds help keep containers smaller.
Example:
FROM ros:humble AS builder
RUN apt-get update && apt-get install -y build-essential
WORKDIR /workspace
COPY src/ src/
RUN . /opt/ros/humble/setup.sh && colcon build

FROM ros:humble
COPY --from=builder /workspace/install /workspace/install
This removes heavy build tools from the final runtime container.
Handling Hardware in Docker Containers
Robotics systems interact with real hardware.
Docker containers require explicit device mapping.
Serial Devices (Microcontrollers, LiDAR)
docker run -it \
--device=/dev/ttyUSB0 \
ros:humble
Cameras
docker run -it \
--device=/dev/video0 \
ros:humble
ROS Networking
ROS discovery, especially DDS discovery in ROS 2, often works best with host networking.
docker run -it --network host ros:humble
GPU Support for Simulation and Vision
If using NVIDIA GPUs (this requires the NVIDIA Container Toolkit on the host):
docker run --gpus all -it ros:humble
Verify GPU availability from inside the container:
nvidia-smi
Fix Docker Permission Issues
If docker commands fail with permission errors, add your user to the docker group:
sudo usermod -aG docker $USER
Log out and back in for the group change to take effect. Note that serial-device access can fail for a different reason: the host user may also need membership in a group such as dialout.
Deploying Robotics Software with Docker
Once your container works locally, Docker becomes your deployment unit.
Tag the image:
docker tag ros-dev username/ros-dev:v1.0
Push it to a registry:
docker push username/ros-dev:v1.0
Pull it anywhere:
docker pull username/ros-dev:v1.0
This allows robot fleets to run identical software environments.
Running Multi-Container Robot Systems with Docker Compose
Robotics systems rarely run as a single process.
You might have:
perception nodes
localization nodes
navigation stacks
user interfaces
hardware drivers
Docker Compose runs these services together.
Example docker-compose.yml:
services:
  perception:
    image: ros:humble
    command: ros2 run perception_pkg perception_node
  navigation:
    image: ros:humble
    command: ros2 run nav_pkg nav_node
Start the system:
docker compose up
Stop it:
docker compose down
This makes robot bring-up reproducible across machines.
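Hardware-facing services can carry their device and network settings in the same file, so bring-up does not depend on anyone remembering the right flags. A sketch, using a hypothetical LiDAR driver service (the package and node names are illustrative):

```yaml
services:
  lidar_driver:
    image: ros:humble
    # Map the sensor's serial port into the container.
    devices:
      - /dev/ttyUSB0:/dev/ttyUSB0
    # Host networking often simplifies ROS 2 DDS discovery.
    network_mode: host
    command: ros2 run lidar_pkg lidar_driver_node
```

Encoding devices and networking in Compose turns "how do we start the robot" from tribal knowledge into a file under version control.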
Best Practices for Production Robotics Systems
Build Versioned Images
docker build -t robot-stack:v1.2.0 .
Avoid relying on the latest tag.
Avoid Running Containers as Root
Inside your Dockerfile:
RUN useradd -m robotuser
USER robotuser
Scan Containers for Security Issues
The older docker scan command has been removed; use Docker Scout instead:
docker scout cves my-image
Never Store Secrets in Images
Instead use environment variables:
docker run -e API_KEY=secret_key robot-stack
Conclusion
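Passing secrets on the command line still leaks them into shell history, so a common next step is an env file kept out of version control. A sketch (the file name and variable are illustrative assumptions):

```yaml
# docker-compose.yml
services:
  robot:
    image: robot-stack
    # .env.robot holds lines like API_KEY=..., and is listed in .gitignore
    # so the secret never lands in the image or the repo.
    env_file:
      - .env.robot
```

The image stays generic; each robot or environment supplies its own env file at deploy time.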
Docker will not write your control algorithms, but it stops your environment from sabotaging your robotics software.
With pinned images, your ROS1, ROS2, and OpenCV dependencies can coexist without conflicts, even across different Ubuntu versions.
Start small.
Run a basic ROS container:
docker run -it ros:humble
Then containerize one workspace or package.
Once that works, add:
hardware access
GPU support
multi-container robotics stacks
The goal is simple:
Your robot software should behave the same on every machine you touch.