RRT* for a DIY 5-DOF Robot Arm: From 2D Path Planning to Real Servo Motion
- Karan Bhakuni
If you've only seen RRT* in diagrams or simulation, it can feel like a tidy math problem. Then you try it on real hardware, with real servos, and you learn what matters fast: sampling-based planning can create a path, but your robot still needs a full pipeline to move safely and smoothly.
This post walks through a hands-on test of RRT* (Rapidly-exploring Random Tree Star) on a DIY 5-degree-of-freedom robot manipulator. It starts with a simple 2D "toy" example so you can see what the algorithm is doing, then it moves into a C++ implementation that generates joint-space paths and executes them on an actual servo-driven arm. Finally, it shows the engineering reality: without collision checking, planning gets risky.
The DIY 5-DOF manipulator setup (and what makes it a good RRT* test)
The robot in this test is a DIY 5-degree-of-freedom manipulator driven by five servo motors. That choice matters because servo-based arms are common in hobby and prototype robotics, but they also expose a big gap between "a planner found a path" and "the arm moved correctly."
In a simple mobile robot demo, you can often treat the plan like a sequence of waypoints and drive through them. With a manipulator, each waypoint is a joint configuration (a full set of angles), and the arm's links sweep through space while moving between configurations. As a result, what looks safe at the endpoints can still cause a collision during the motion.
The test also uses a small custom UI to send target joint angles and set a move duration. In the video, example targets include values like 85 degrees for one joint and about 45 degrees for another, followed by experiments that change the requested time (including a fast 1-second move). That's a helpful way to validate more than just the planner, because timing affects how "aggressive" the motion feels on real motors.
RRT* in a 2D toy problem: how the tree grows and improves the path

Before moving the real robot, the video shows RRT* in an XY plane "toy problem." The goal is simple: move from point A to point B while avoiding obstacles. Even though it's only 2D, it's a great mental model because you can literally watch the tree expand and see why the final route looks the way it does.
The core RRT loop (random sampling, nearest node, step, collision test)
At its heart, RRT is a random sampling method that builds a tree of reachable states. The process described in the video follows the classic rhythm:
Sample a random point in the workspace (bounded by a chosen width and height).
Find the nearest existing node in the current tree to that random sample.
Step from the nearest node toward the sample using a fixed step size (so growth is incremental).
Check for collision at the proposed new node (and sometimes along the edge, depending on implementation).
If it's collision-free, add the new node to the set of nodes and connect it into the tree.
That's the "explore quickly" part. The tree spreads out like roots searching for water. It doesn't need a grid, and it doesn't need to enumerate every possibility. Instead, it keeps tossing samples into space and connecting what it can.
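The loop above can be sketched in a few lines of C++. This is a minimal 2D illustration under my own naming (`Node`, `nearest`, `steer` are illustrative, not the video's actual code), showing the nearest-node lookup and the fixed-step "steer" toward a random sample:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Minimal 2D RRT building blocks (illustrative sketch, not the video's exact code).
struct Node {
    double x, y;
    int parent;  // index of the parent node in the tree, -1 for the root
};

double dist(const Node& a, const Node& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Find the index of the tree node closest to a random sample.
int nearest(const std::vector<Node>& tree, const Node& sample) {
    int best = 0;
    for (int i = 1; i < (int)tree.size(); ++i)
        if (dist(tree[i], sample) < dist(tree[best], sample)) best = i;
    return best;
}

// Step from `from` toward `to` by at most `stepSize`, so growth stays incremental.
Node steer(const Node& from, const Node& to, double stepSize) {
    double d = dist(from, to);
    if (d <= stepSize) return {to.x, to.y, -1};
    double t = stepSize / d;
    return {from.x + t * (to.x - from.x), from.y + t * (to.y - from.y), -1};
}
```

The collision test would run on the node returned by `steer` (and ideally along the edge) before it is appended to the tree with its parent index set.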
If you want an additional explanation with diagrams and variations, this RRT and RRT* path planning overview is a solid companion reference.
What changes in RRT* (best parent selection and rewiring)
RRT* keeps the same sampling behavior, but it adds a key improvement: it tries to reduce path cost over time. In the video's explanation, two ideas define that upgrade:
Choose the best parent for the new node: before locking in the connection, the algorithm checks nearby nodes (within a neighbor radius) and picks the parent that gives the lowest cost-to-come to the new node.
Rewire neighbors through the new node when it helps: after adding the new node, RRT* looks at nearby nodes again and asks, "Would your cost be lower if you used this new node as your parent?" If yes, it rewires them.
That rewiring is the "star" behavior you can see in the 2D visualization. The tree still explores, but it also cleans up its own mess. Early routes can be awkward, then later samples create shortcuts that tighten the path.
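The two upgrades can be sketched as a pair of functions. To keep the geometry trivial, this sketch uses 1D node positions; the logic is identical for joint vectors. All names (`StarNode`, `chooseBestParent`, `rewire`) are my own illustration, not the video's code:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of the two RRT* additions: best-parent selection and rewiring.
struct StarNode {
    double x;
    int parent;   // index of the parent node, -1 for the root
    double cost;  // cost-to-come from the root
};

double edgeCost(const StarNode& a, const StarNode& b) { return std::fabs(a.x - b.x); }

// Among existing nodes within `radius`, pick the parent giving the new node
// the lowest cost-to-come.
void chooseBestParent(std::vector<StarNode>& tree, StarNode& fresh, double radius) {
    for (int i = 0; i < (int)tree.size(); ++i) {
        if (edgeCost(tree[i], fresh) > radius) continue;
        double c = tree[i].cost + edgeCost(tree[i], fresh);
        if (c < fresh.cost) { fresh.cost = c; fresh.parent = i; }
    }
}

// After inserting the node at `freshIdx`, reroute any neighbor whose
// cost-to-come drops by going through it.
void rewire(std::vector<StarNode>& tree, int freshIdx, double radius) {
    const StarNode fresh = tree[freshIdx];
    for (int i = 0; i < (int)tree.size(); ++i) {
        if (i == freshIdx || edgeCost(tree[i], fresh) > radius) continue;
        double c = fresh.cost + edgeCost(fresh, tree[i]);
        if (c < tree[i].cost) { tree[i].cost = c; tree[i].parent = freshIdx; }
    }
}
```

In a full implementation both steps would also run the collision check on each candidate edge before accepting it.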
A practical writeup that mirrors this intuition is RRT, RRT-connect, and RRT* in path planning, especially if you like seeing algorithm variants side-by-side.
Why a MuJoCo digital twin helps before touching hardware
A big part of this workflow is the "digital twin" built in MuJoCo. The idea is straightforward: model the same arm in simulation so you can visualize motion and test planner behavior before sending anything risky to the servos.
That simulation-first step pays off in a few ways:
You get fast feedback about whether the planner reaches the goal reliably.
You can see the behavior, not just print numbers.
You reduce wear on hardware while still iterating on logic.
In the video, the MuJoCo model is used as a visualization tool alongside the real arm. It's essentially a sanity check layer. Even when you end up running on real motors, it helps to have a virtual mirror that confirms what "should" happen.
If you're getting started with MuJoCo in robotics projects, the open-source MuJoCo tutorials repository is a practical place to browse examples.
Joint-space planning vs Cartesian targets (what the planner actually receives)
One important clarification in the video is that this implementation runs RRT* in joint space, not Cartesian space.
That means the planner's state is a vector of joint angles (one value per joint), and the "obstacles" it could handle (once implemented) are constraints in that joint configuration space. If you instead give a target in Cartesian space (like "put the gripper at X, Y, Z"), you'd normally need inverse kinematics to convert it into one or more joint targets before planning.
In this test, the UI directly provides joint angles, so RRT* can search in the same space the motors understand. That's often simpler for early prototypes because it avoids ambiguity (many arms have multiple joint solutions for the same end-effector position).
Still, this choice comes with a tradeoff: joint-space paths can look odd in Cartesian space unless you add smoothing and careful trajectory generation, which leads to the next piece of the pipeline.
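Concretely, a joint-space planner state is just one angle per joint, and "nearest" is measured directly in that space. A minimal sketch for the 5-DOF case (types and names are my own assumption, not the project's code):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// A planner state in joint space: one angle per joint of the 5-DOF arm.
constexpr int kNumJoints = 5;
using JointConfig = std::array<double, kNumJoints>;  // angles in degrees

// Euclidean distance in joint space: the metric the planner's
// nearest-neighbor search would use.
double jointDistance(const JointConfig& a, const JointConfig& b) {
    double sum = 0.0;
    for (int i = 0; i < kNumJoints; ++i) {
        double d = a[i] - b[i];
        sum += d * d;
    }
    return std::sqrt(sum);
}
```

Because the UI already supplies joint angles, targets drop straight into this representation with no inverse kinematics step.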
The real robotics pipeline: planning is only the first step

The video makes a point that trips up a lot of first-time motion planning projects: RRT* can output a path, but you still can't just "send the path to the robot" and expect clean motion.
A servo-driven manipulator needs a chain of steps that turns discrete nodes into a continuous, timed motion.
From RRT* nodes to a continuous path (interpolation and smoothing)
RRT* produces a sequence of joint configurations. On their own, they're like dots on paper. Real motors need something closer to a continuous curve.
In the described pipeline, the planned nodes are followed by a path construction step, such as:
Straight-line interpolation between joint configurations (piecewise linear in joint space).
Cubic or quintic interpolation to create smoother transitions (often preferred because it reduces jerk).
Even if you keep it simple early on, interpolation is the difference between "it reaches the goal" and "it moves like a robot arm instead of a jittery plotter."
Adding time: turning a path into a trajectory
After you have a geometric path, you still need to decide how fast to move along it. That's trajectory generation or time parameterization.
In the video, example trajectory profiles include:
Trapezoidal velocity profiles (common, simple, and predictable).
Quintic time scaling (smoother acceleration behavior).
This is also where the UI "time" setting becomes meaningful. Asking for a 1-second move forces the same path to be executed with higher speeds and accelerations, which can stress small servos and make control errors more obvious.
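Quintic time scaling maps elapsed time t in [0, T] to a path fraction s with zero velocity and acceleration at both ends, which is exactly why shrinking T (the UI's "time" setting) scales peak speeds by 1/T and accelerations by 1/T². A sketch of the standard quintic polynomial (parameter names are mine):

```cpp
#include <cassert>
#include <cmath>

// Quintic time scaling: s(t) = 10*tau^3 - 15*tau^4 + 6*tau^5, tau = t/T.
// Velocity and acceleration are zero at t = 0 and t = T, which is why it
// feels smoother than a trapezoidal profile on small servos.
double quinticTimeScaling(double t, double T) {
    double tau = t / T;
    return 10.0 * std::pow(tau, 3) - 15.0 * std::pow(tau, 4) + 6.0 * std::pow(tau, 5);
}
```

Feeding `s` into the path interpolation from the previous step turns the geometric path into a timed trajectory.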
Low-level motor control (P control at the bottom)
At the lowest level, the system uses P control (proportional control) to drive the servo motors toward the commanded joint angles over time.
That's a sensible approach for an early build because it's easy to implement and tune compared to full dynamic control. It also makes failures easier to interpret: if the motion looks wrong, you can usually tell whether the problem came from planning, interpolation, time scaling, or the controller.
Put another way, RRT* is the route on the map. The interpolation is the road surface. The trajectory is your speed plan. The controller is your hands on the steering wheel.
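A proportional controller is a one-liner per cycle: move the joint by a gain times the remaining error. This is a generic sketch (the gain, units, and function name are illustrative, not the project's tuning):

```cpp
#include <cassert>
#include <cmath>

// One P-control update: advance the joint toward the target by kp * error.
// Repeated each control cycle, the error decays geometrically toward zero.
double pControlStep(double current, double target, double kp) {
    double error = target - current;
    return current + kp * error;
}
```

Run in a loop, this converges to the commanded angle without ever modeling the arm's dynamics, which is exactly the simplicity-for-accuracy trade an early build wants.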
The crash (and the missing piece): collision checking in RRT*
The video's most useful moment is also the most honest one: the robot collides with an obstacle because this version does not include full collision checking.
The test adds an obstacle near the robot, then commands a motion that includes a large negative angle (for example, around minus 70 degrees on a joint). The arm moves into the obstacle instead of planning around it.
A planner that ignores collisions can still produce a "valid" path in math, but it's not valid for your workbench.
In a complete RRT* setup for manipulators, collision handling is not a single checkbox. It usually includes:
Link vs obstacle collision: each link should be tested against objects in the scene.
Self-collision: links should not intersect other links or the base.
Edge validation: it's not enough to test only the nodes; you also need to check the motion between them, because collisions can happen between endpoints.
That last point is easy to miss. Two joint configurations might be collision-free, yet the straight interpolation between them might swing a link through an obstacle.
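The fix for that last point is to sample intermediate configurations along each edge and test every one. A 1D sketch with a pluggable collision predicate (names and the predicate are my own illustration):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// Edge validation sketch: check intermediate configurations along the straight
// joint-space segment from qa to qb, not just the endpoints. The collision
// test is passed in as a predicate; a real version would query the scene model.
bool edgeIsCollisionFree(double qa, double qb,
                         const std::function<bool(double)>& inCollision,
                         int numSamples) {
    for (int i = 0; i <= numSamples; ++i) {
        double s = (double)i / numSamples;
        double q = qa + s * (qb - qa);  // linear interpolation along the edge
        if (inCollision(q)) return false;
    }
    return true;
}
```

With `numSamples = 1` this degenerates to an endpoint-only check, which is precisely how a link can swing through an obstacle that sits between two "safe" configurations.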
For a deeper academic view of joint-space self-collision avoidance, this paper on real-time self-collision avoidance in joint space shows how serious the topic gets as DOF increases.
What's coming next: obstacles in joint space, then Cartesian space comparisons
The next steps teased in the video follow a natural progression:
Implement proper collision checking in joint space, including obstacle collisions, self-collisions, and edge checks.
Re-test RRT* so the arm can reach targets without crashing.
Explain how a Cartesian obstacle maps into joint-space constraints.
Compare RRT* in joint space vs Cartesian space, and discuss why joint space is often preferred for manipulators.
That roadmap matters because it reflects how real robot motion stacks up. It's rarely "planner only." Instead, it's planner plus collision models plus trajectory generation plus low-level control, all under real timing limits.
(Also, for the video's soundtrack credits, the music comes from Bensound's royalty-free music library.)
Conclusion: RRT* works, but the robot needs the whole system
RRT* successfully plans and executes motion on a DIY 5-DOF servo arm when you treat it as part of a full pipeline: joint-space planning, interpolation, time scaling, then P control to the motors. The same test also proves a harder truth: without collision checking, even a good planner can drive your arm straight into an obstacle.
If you're following along, the most interesting next milestone is adding collision checks and validating edges between nodes, then watching how the planned motion changes. That's when RRT* starts behaving like something you can trust, not just something that sometimes reaches the goal.
