How to Create Your First C++ MoveIt 2 Project – ROS 2 Jazzy

In this tutorial, we will create a basic “Hello World” project to get your feet wet with C++ and MoveIt 2. I will show you how to move the end of the robotic arm (where the gripper is located) to a desired position and orientation. 

The official instructions are on this page, but we will walk through everything together, step by step.

By the end of this tutorial, you will be able to create this:

moveit-cpp-first-program-ros2

Prerequisites

All my code for this project is located here on GitHub.

Create a Package

Let’s start by creating a new package for our project. Open a terminal and navigate to the mycobot_ros2 directory in your ROS 2 workspace:

cd ~/ros2_ws/src/mycobot_ros2/

Now, let’s create a new package using the ROS 2 command line tools:

ros2 pkg create --build-type ament_cmake \
--dependencies moveit_ros_planning_interface rclcpp \
--license BSD-3-Clause \
--maintainer-name ubuntu \
--maintainer-email automaticaddison@todo.com \
--node-name hello_moveit \
mycobot_moveit_demos

This command creates a new package named “mycobot_moveit_demos” with the necessary dependencies and a node named “hello_moveit”. The package will be created inside the mycobot_ros2 directory.

Next, let’s move into our newly created package:

cd mycobot_moveit_demos

Create a ROS Node

Now that we have our package in place, let’s set up our C++ file to work with MoveIt 2. We’ll start by editing the main source file.

Navigate into the source folder of your new package:

cd src

Open hello_moveit.cpp, delete the auto-generated contents, and add this code:

/**
 * @file hello_moveit.cpp
 * @brief A basic ROS 2 and MoveIt 2 program to control a robot arm
 *
 * This program demonstrates how to use ROS 2 and MoveIt 2 to control a robot arm.
 * It sets up a node, creates a MoveGroupInterface for the arm, sets a target pose for the gripper_base,
 * plans a trajectory, and executes the planned motion.
 *
 * @author Addison Sears-Collins
 * @date December 15, 2024
 */

#include <geometry_msgs/msg/pose_stamped.hpp>
#include <memory>
#include <rclcpp/rclcpp.hpp>
#include <moveit/move_group_interface/move_group_interface.h>

/**
 * @brief The main function that starts our program.
 *
 * This function sets up our ROS 2 environment and prepares it for robot control.
 *
 * @param argc The number of input arguments our program receives.
 * @param argv The list of input arguments our program receives.
 * @return int A number indicating if our program finished successfully (0) or not.
 */
int main(int argc, char * argv[])
{
  // Start up ROS 2
  rclcpp::init(argc, argv);

  // Creates a node named "hello_moveit". The node is set up to automatically
  // handle any settings (parameters) we might want to change later without editing the code.
  auto const node = std::make_shared<rclcpp::Node>(
    "hello_moveit",
    rclcpp::NodeOptions().automatically_declare_parameters_from_overrides(true)
  );

  // Creates a "logger" that we can use to print out information or error messages
  // as our program runs.
  auto const logger = rclcpp::get_logger("hello_moveit");

  // Create the MoveIt MoveGroup Interfaces
  // These interfaces are used to plan and execute movements, set target poses,
  // and perform other motion-related tasks for each respective part of the robot.
  // The use of auto allows the compiler to automatically deduce the type of variable.
  // Source: https://github.com/moveit/moveit2/blob/main/moveit_ros/planning_interface/move_group_interface/include/moveit/move_group_interface/move_group_interface.h
  using moveit::planning_interface::MoveGroupInterface;
  auto arm_group_interface = MoveGroupInterface(node, "arm");

  // Specify a planning pipeline to be used for further planning
  arm_group_interface.setPlanningPipelineId("ompl");

  // Specify a planner to be used for further planning
  arm_group_interface.setPlannerId("RRTConnectkConfigDefault");

  // Specify the maximum amount of time in seconds to use when planning
  arm_group_interface.setPlanningTime(1.0);

  // Set a scaling factor for optionally reducing the maximum joint velocity. Allowed values are in (0,1].
  arm_group_interface.setMaxVelocityScalingFactor(1.0);

  //  Set a scaling factor for optionally reducing the maximum joint acceleration. Allowed values are in (0,1].
  arm_group_interface.setMaxAccelerationScalingFactor(1.0);

  // Display helpful logging messages on the terminal
  RCLCPP_INFO(logger, "Planning pipeline: %s", arm_group_interface.getPlanningPipelineId().c_str());
  RCLCPP_INFO(logger, "Planner ID: %s", arm_group_interface.getPlannerId().c_str());
  RCLCPP_INFO(logger, "Planning time: %.2f", arm_group_interface.getPlanningTime());

  // Set a target pose for the end effector of the arm
  auto const arm_target_pose = [&node]{
    geometry_msgs::msg::PoseStamped msg;
    msg.header.frame_id = "base_link";
    msg.header.stamp = node->now();
    msg.pose.position.x = 0.061;
    msg.pose.position.y = -0.176;
    msg.pose.position.z = 0.168;
    msg.pose.orientation.x = 1.0;
    msg.pose.orientation.y = 0.0;
    msg.pose.orientation.z = 0.0;
    msg.pose.orientation.w = 0.0;
    return msg;
  }();
  arm_group_interface.setPoseTarget(arm_target_pose);

  // Create a plan to that target pose
  // This will give us two things:
  // 1. Whether the planning was successful (stored in 'success')
  // 2. The actual motion plan (stored in 'plan')
  auto const [success, plan] = [&arm_group_interface] {
    moveit::planning_interface::MoveGroupInterface::Plan msg;
    auto const ok = static_cast<bool>(arm_group_interface.plan(msg));
    return std::make_pair(ok, msg);
  }();

  // Try to execute the movement plan if it was created successfully.
  // If the plan wasn't successful, report an error.
  if (success)
  {
    arm_group_interface.execute(plan);
  }
  else
  {
    RCLCPP_ERROR(logger, "Planning failed!");
  }

  // Shut down ROS 2 cleanly when we're done
  rclcpp::shutdown();
  return 0;
}

This code does the following:

  • Initializes ROS 2, creates a node, and creates a MoveGroupInterface for the “arm” planning group.
  • Sets a target pose for the end effector (gripper_base) of the robotic arm.
  • Plans a path to reach that target pose.
  • Executes the plan if it was successful, or logs an error if planning failed.

Let’s compile our code to make sure everything is set up correctly. Go back to your workspace directory:

cd ~/ros2_ws

To build the package, we’ll use the following command:

colcon build --packages-select mycobot_moveit_demos

This command builds only our mycobot_moveit_demos package.

After the build completes successfully, open a new terminal and source your workspace:

source ~/.bashrc

Plan and Execute Using MoveGroupInterface

Now let’s run the code we just built and watch the arm move.

Launch the controllers, Gazebo, and MoveGroup with RViz.

bash ~/ros2_ws/src/mycobot_ros2/mycobot_bringup/scripts/mycobot_280_gazebo_and_moveit.sh

In another terminal, run your program to move the robotic arm from one place to another:

ros2 run mycobot_moveit_demos hello_moveit
1-run-the-program

You can safely ignore this warning:

[WARN] [1734267864.788892796] [moveit_2886993382.moveit.ros.robot_model_loader]: No kinematics plugins defined. Fill and load kinematics.yaml!

If everything is set up correctly, you should see the robot move in RViz to the specified pose.

2-move-in-rviz-to-the-pose

You can confirm the gripper is in the correct pose by typing the following command and comparing it to the pose you set in your hello_moveit.cpp file.

ros2 run tf2_ros tf2_echo base_link gripper_base

Note: If you run the hello_moveit node without first launching the demo launch file, it will wait for 10 seconds and then exit with an error message. This is because the MoveGroupInterface looks for a node publishing the robot description topic, which is provided by the demo launch file.

Let’s break down the key components of our hello_moveit.cpp file to understand how it works with MoveIt 2 for controlling the arm.

MoveGroupInterface Creation:

This part sets up a control interface for the robot’s arm. It’s like creating a special remote control that lets us send commands to move the arm.
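
Here is the relevant excerpt from hello_moveit.cpp. The string “arm” must match the name of a planning group defined in your robot’s SRDF:

  using moveit::planning_interface::MoveGroupInterface;
  auto arm_group_interface = MoveGroupInterface(node, "arm");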

Setting the Target Pose:

Here, we’re telling the arm where we want it to go. 
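
Here is the corresponding excerpt. It builds a PoseStamped message in the base_link frame and hands it to the MoveGroupInterface as the goal for the end effector:

  auto const arm_target_pose = [&node]{
    geometry_msgs::msg::PoseStamped msg;
    msg.header.frame_id = "base_link";
    msg.header.stamp = node->now();
    msg.pose.position.x = 0.061;
    msg.pose.position.y = -0.176;
    msg.pose.position.z = 0.168;
    msg.pose.orientation.x = 1.0;
    msg.pose.orientation.y = 0.0;
    msg.pose.orientation.z = 0.0;
    msg.pose.orientation.w = 0.0;
    return msg;
  }();
  arm_group_interface.setPoseTarget(arm_target_pose);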

Planning the Motion:

This is where the robot figures out how to move from its current position to the target we set. It’s like planning a route on a map – the robot calculates all the steps needed to get from point A to point B safely.
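
Here is the corresponding excerpt. A small lambda wraps the call to plan() so we get both the success flag and the plan back in one shot:

  auto const [success, plan] = [&arm_group_interface] {
    moveit::planning_interface::MoveGroupInterface::Plan msg;
    auto const ok = static_cast<bool>(arm_group_interface.plan(msg));
    return std::make_pair(ok, msg);
  }();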

Executing the Plan:

Finally, if the planning was successful, we tell the robot to actually perform the movement we planned. If the planning failed for some reason, we just report an error instead of trying to move.
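
Here is the corresponding excerpt:

  if (success)
  {
    arm_group_interface.execute(plan);
  }
  else
  {
    RCLCPP_ERROR(logger, "Planning failed!");
  }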

These steps work together to allow us to control the robot arm in a safe and precise manner, moving it to exactly where we want it to go.

That’s it! Keep building.

Configure MoveIt 2 for a Simulated Robot Arm – ROS 2 Jazzy

In the previous tutorial, you learned how to leverage ROS 2 Control to move the robotic arm manually by sending raw joint commands to the robot. This exercise is great for learning, but it is not how things are done in a real-world robotics project.

In this tutorial, I will show you how to set up a ROS 2 software called MoveIt 2 to send those commands. MoveIt 2 is a powerful and flexible motion planning framework for robotic manipulators.

Using MoveIt 2 instead of sending manual commands through a terminal window allows for more sophisticated, efficient, and safer motion planning, taking into account obstacles, joint limits, and other constraints automatically.

By the end of this tutorial, you will be able to create this:

move-group-ros2

Prerequisites

All my code for this project is located here on GitHub.

How MoveIt 2 Works with ROS 2 Control

Before we dive into the tutorial, let me explain how MoveIt 2 works with ROS 2 Control by walking through a common real-world use case.

Suppose you have a mobile manipulator, which consists of a wheeled robot base with a robotic arm attached to it.

You want to develop an application that performs a pick and place task. 

Your cool app starts by telling MoveIt 2, “Pick up the object at location A and place it at location B.”

MoveIt 2, with help from Nav 2 (the navigation software stack for ROS 2), plans out the detailed movements needed – reaching for the object, grasping it, lifting it, moving to the new location, and placing it down.

These plans are sent to ROS 2 Control’s Controller Management system, which has specific controllers for the arm, gripper, and base.

The arm controller (e.g. JointTrajectoryController), for example, breaks down the plan into specific joint movements and sends these to the Resource Management system.

The Resource Management system communicates these commands through hardware interfaces to the actual robot parts (or their simulated versions).

As the robot executes the pick and place task, sensors in its joints and gripper send back information about position, force, and status.

This feedback flows back up through the system, allowing your app and MoveIt 2 to monitor progress and make adjustments if needed.

This setup allows you to focus on defining high-level tasks while MoveIt 2 and ROS 2 Control handle the complex details of robot control and movement coordination.

Now let’s look at another diagram which shows the specific controller and topic names. This gives us a more detailed view of how MoveIt 2, ROS 2 Control, and the robot components interact:

ros2_control_mobile_manipulator_control_arch_independent_hardware

Your cool application initiates the process, defining tasks like pick and place operations.

MoveIt 2 contains a path planning component that generates trajectory commands. These commands are sent to the Joint Trajectory Controller, which manages the arm’s movements.

For mobile robots, Nav 2 can send twist commands to the Diff Drive Controller, controlling the base’s motion.

The system includes several specialized controllers:

  • Joint Trajectory Controller for arm movements
  • Forward Velocity Controller for speed control (optional…only needed if you need direct velocity control for high-speed operations)
  • Gripper Controller for end-effector actions (e.g. open and close gripper)
  • Diff Drive Controller for mobile base navigation

These controllers communicate with Hardware Interfaces for the arm, gripper, mobile base, and the Gazebo simulation.

As the robot operates, sensor data flows back through these Hardware Interfaces. This information includes position, velocity, force, and status updates.

The Joint State Broadcaster and Force Torque State Broadcaster collect this feedback data.

This state information (e.g. joint angles) is then forwarded to RViz for visualization and to MoveIt 2 for real-time monitoring and adjustment.

Update package.xml

Now that you understand how all the pieces work together on a high level, let’s start setting up our robot with MoveIt 2.

Create a new package called mycobot_moveit_config (a plain ament_cmake package), and navigate into it:

cd ~/ros2_ws/src/mycobot_ros2/mycobot_moveit_config

Your package.xml should look like this:

<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
  <name>mycobot_moveit_config</name>
  <version>0.0.0</version>
  <description>MoveIt configuration, launch files, and SRDF files for myCobot robots</description>
  <maintainer email="automaticaddison@todo.com">ubuntu</maintainer>
  <license>BSD-3-Clause</license>

  <buildtool_depend>ament_cmake</buildtool_depend>

  <depend>moveit_ros_planning_interface</depend>
  <depend>moveit_ros_move_group</depend>
  <depend>moveit_kinematics</depend>
  <depend>moveit_planners_ompl</depend>
  <depend>moveit_ros_visualization</depend>
  <depend>rviz2</depend>
  <depend>rviz_visual_tools</depend>

  <exec_depend>moveit_simple_controller_manager</exec_depend>
  <exec_depend>moveit_configs_utils</exec_depend>
  <exec_depend>pilz_industrial_motion_planner</exec_depend>

  <test_depend>ament_lint_auto</test_depend>
  <test_depend>ament_lint_common</test_depend>

  <export>
    <build_type>ament_cmake</build_type>
  </export>
</package>

Save the file, and close it.

cd ~/ros2_ws/
rosdep install --from-paths src --ignore-src -r -y

Type in your password, and install any missing dependencies.

Now build.

colcon build && source ~/.bashrc

In a real robotics project, the naming convention for this package would be:

<robot_name>_moveit_config

Create the Configuration Files

Let’s create configuration files that will enable us to tailor our robot for MoveIt 2. We’ll do this process manually (recommended), but you are welcome to use the MoveIt Setup Assistant instead.

You can see some official examples of configuration files for other robots on the MoveIt resources ROS 2 GitHub.

Add the Configuration Files to the Config Folder

MoveIt 2 requires configuration files to go into the config folder of the package.

Open a terminal window, create the folder if it does not exist yet, and move into it:

mkdir -p ~/ros2_ws/src/mycobot_ros2/mycobot_moveit_config/config/mycobot_280
cd ~/ros2_ws/src/mycobot_ros2/mycobot_moveit_config/config/mycobot_280

Add SRDF

The first thing we need to do is set up the SRDF File. 

Semantic Robot Description Format (SRDF) is a file that tells MoveIt how your robot prefers to move and interact with its environment. It is a supplement to the URDF file.

The SRDF adds semantic (meaning-based) information to the robot’s description that isn’t included in the URDF (Unified Robot Description Format). This file includes things like:

  • Naming groups of joints or links (e.g., “arm”, “gripper”)
  • Defining default robot poses (e.g., “home”, “closed”, etc.)
  • Which parts of the robot should be checked for collisions with each other during movement.
    • We need to do this step because checking every possible collision between all parts of the robot for each planned movement can be time-consuming, especially for complex robots.

Create the SRDF file:

touch mycobot_280.srdf

Add this code.

Save the file, and close it.

Let’s go through each section briefly:

  1. <group name="arm">: This section defines a group named “arm” that includes a list of joints. The joints listed in this group represent the robot arm.
  2. <group name="gripper">: This section defines a group named “gripper” that includes a list of joints related to the robot’s gripper or end-effector.
  3. <group_state name="home" group="arm">: This section defines a named state called “home” for the “arm” group. It specifies the joint values for the arm joints when the robot is in its home position.
  4. <group_state name="open" group="gripper">: This section defines a named state called “open” for the “gripper” group, specifying the joint value for the gripper when it is in its open position.
  5. <disable_collisions>: This section is used to disable collision checking between specific pairs of links. Each <disable_collisions> entry specifies two links and the reason for disabling collision checking between them. The reasons can be “Adjacent” (for adjacent links in the robotic arm), “Never” (for links that will never collide), or “Default” (for links that are not expected to collide in most configurations).
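
To see why the named group states are handy, here is a short C++ sketch (reusing the MoveGroupInterface pattern from the hello_moveit tutorial) that sends the arm to the “home” state and the gripper to the “open” state defined in the SRDF. Treat it as a hedged example: the group and state names come from the file above, and the node setup is assumed to be the same as in hello_moveit.cpp.

// Drive the robot to named states defined in the SRDF.
using moveit::planning_interface::MoveGroupInterface;

auto arm = MoveGroupInterface(node, "arm");
arm.setNamedTarget("home");      // "home" group_state from the SRDF
arm.move();                      // plan and execute in one call

auto gripper = MoveGroupInterface(node, "gripper");
gripper.setNamedTarget("open");  // "open" group_state from the SRDF
gripper.move();

This is essentially what happens when you select a named Goal State in the RViz MotionPlanning panel later in this tutorial.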

Add Initial Positions

Now let’s create a configuration file for the initial positions.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_moveit_config/config/mycobot_280
touch initial_positions.yaml

Add this code.

# Default initial position for each movable joint
initial_positions:
  link1_to_link2: 0.0
  link2_to_link3: 0.0
  link3_to_link4: 0.0
  link4_to_link5: 0.0
  link5_to_link6: 0.0
  link6_to_link6_flange: 0.0
  gripper_controller: 0.0

Save the file. This code sets the default initial positions for each movable joint. 

The initial_positions.yaml file in MoveIt specifies the starting joint positions for motion planning, overriding any default positions set in the URDF. It allows you to set a specific initial configuration for your robot in MoveIt without modifying the URDF, providing flexibility in defining the starting point for motion planning tasks.

Add Joint Limits

touch joint_limits.yaml

Add this code.

default_velocity_scaling_factor: 1.0
default_acceleration_scaling_factor: 1.0

joint_limits:
  link1_to_link2:
    has_velocity_limits: true
    max_velocity: 2.792527
    has_acceleration_limits: true
    max_acceleration: 5.0
    has_deceleration_limits: true
    max_deceleration: -5.0
    has_jerk_limits: false
    max_jerk: 6.0
    has_effort_limits: true
    max_effort: 56.0
  link2_to_link3:
    has_velocity_limits: true
    max_velocity: 2.792527
    has_acceleration_limits: true
    max_acceleration: 5.0
    has_deceleration_limits: true
    max_deceleration: -5.0
    has_jerk_limits: false
    max_jerk: 6.0
    has_effort_limits: true
    max_effort: 56.0
  link3_to_link4:
    has_velocity_limits: true
    max_velocity: 2.792527
    has_acceleration_limits: true
    max_acceleration: 5.0
    has_deceleration_limits: true
    max_deceleration: -5.0
    has_jerk_limits: false
    max_jerk: 6.0
    has_effort_limits: true
    max_effort: 56.0
  link4_to_link5:
    has_velocity_limits: true
    max_velocity: 2.792527
    has_acceleration_limits: true
    max_acceleration: 5.0
    has_deceleration_limits: true
    max_deceleration: -5.0
    has_jerk_limits: false
    max_jerk: 6.0
    has_effort_limits: true
    max_effort: 56.0
  link5_to_link6:
    has_velocity_limits: true
    max_velocity: 2.792527
    has_acceleration_limits: true
    max_acceleration: 5.0
    has_deceleration_limits: true
    max_deceleration: -5.0
    has_jerk_limits: false
    max_jerk: 6.0
    has_effort_limits: true
    max_effort: 56.0
  link6_to_link6_flange:
    has_velocity_limits: true
    max_velocity: 2.792527
    has_acceleration_limits: true
    max_acceleration: 5.0
    has_deceleration_limits: true
    max_deceleration: -5.0
    has_jerk_limits: false
    max_jerk: 6.0
    has_effort_limits: true
    max_effort: 56.0
  gripper_controller:
    has_velocity_limits: true
    max_velocity: 2.792527
    has_acceleration_limits: true
    max_acceleration: 5.0
    has_deceleration_limits: true
    max_deceleration: -5.0
    has_jerk_limits: false
    max_jerk: 300.0
    has_effort_limits: true
    max_effort: 56.0

Save the file.

This code sets the velocity and acceleration limits for each joint in our robotic arm, which MoveIt 2 uses to control the robot’s movements.

Even though we have joint limits for velocity already defined in the URDF, this YAML file allows for overriding. Defining joint limits specific to MoveIt 2 is useful if you want to impose stricter or different limits for particular applications without modifying the URDF. It provides flexibility in adjusting the robot’s behavior for different tasks.
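
For example, link1_to_link2 is allowed up to 2.792527 rad/s above. If you lowered its max_velocity here to 1.0, MoveIt 2 would time-parameterize its trajectories so that joint never exceeds 1.0 rad/s, and setting default_velocity_scaling_factor to 0.5 would halve that again to an effective 0.5 rad/s, all without touching the URDF.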

Add Kinematics

touch kinematics.yaml

Add this code, and save the file. 

# Configures how a robot calculates possible positions and movements of its joints
# to reach a desired point. It specifies which mathematical methods (solvers) to use,
# how long to try, and how precise these calculations should be.
arm: # Name of the joint group from the SRDF file
  kinematics_solver: kdl_kinematics_plugin/KDLKinematicsPlugin # Plugin used for solving kinematics
  kinematics_solver_search_resolution: 0.005 #  Granularity (in radians for revolute joints) used by the solver when searching for a solution.
  kinematics_solver_timeout: 0.05 # Maximum time the solver will spend trying to find a solution for each planning attempt
  position_only_ik: false # If true, the solver will only consider the position of the end effector, not its orientation
  kinematics_solver_attempts: 5 # How many times the solver will try to find a solution

# Typically we don't need a complex kinematics solver for the gripper. This is for demonstration purposes only.
gripper:
  kinematics_solver: kdl_kinematics_plugin/KDLKinematicsPlugin
  kinematics_solver_search_resolution: 0.005
  kinematics_solver_timeout: 0.05
  position_only_ik: false
  kinematics_solver_attempts: 3

arm_with_gripper:
  kinematics_solver: kdl_kinematics_plugin/KDLKinematicsPlugin
  kinematics_solver_search_resolution: 0.005
  kinematics_solver_timeout: 0.05
  position_only_ik: false
  kinematics_solver_attempts: 5

This file configures how MoveIt 2 calculates possible positions and movements of the robot’s arm to reach a desired point in space. It specifies the mathematical methods, precision, and time limits for these calculations, which MoveIt 2 uses to plan the robot’s movements.
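
If you are curious what this looks like from code, here is a hedged C++ sketch that asks the configured KDL solver for an inverse kinematics solution directly. The pose values are reused from the hello_moveit tutorial, and the 0.05-second timeout mirrors kinematics_solver_timeout above. Note that this only works on a node that has been given the robot description and kinematics parameters, so treat it as an illustration rather than a drop-in snippet.

#include <moveit/robot_model_loader/robot_model_loader.h>
#include <moveit/robot_state/robot_state.h>
#include <geometry_msgs/msg/pose.hpp>

// ... inside a node, after rclcpp::init() ...
robot_model_loader::RobotModelLoader loader(node, "robot_description");
const moveit::core::RobotModelPtr robot_model = loader.getModel();

moveit::core::RobotState state(robot_model);
state.setToDefaultValues();
const moveit::core::JointModelGroup * arm_group = robot_model->getJointModelGroup("arm");

geometry_msgs::msg::Pose target;
target.position.x = 0.061;
target.position.y = -0.176;
target.position.z = 0.168;
target.orientation.x = 1.0;  // gripper pointing straight down, as in hello_moveit.cpp

// setFromIK() calls the kinematics_solver configured in kinematics.yaml for the "arm" group.
const bool found_ik = state.setFromIK(arm_group, target, 0.05);
RCLCPP_INFO(node->get_logger(), "IK solution found: %s", found_ik ? "yes" : "no");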

Add Controllers

touch moveit_controllers.yaml

Add this code, and save the file. 

# This file configures the controller management for MoveIt, specifying which controllers are available,
# the joints each controller handles, and the type of interface (e.g. FollowJointTrajectory).
moveit_controller_manager: moveit_simple_controller_manager/MoveItSimpleControllerManager  # Specifies the controller manager to use.

# Define the controllers configured using the ROS 2 Control framework.
moveit_simple_controller_manager:
  controller_names:
    - arm_controller   # Name of the controller for the robotic arm.
    - gripper_action_controller # Name of the controller for the gripper

  # Configuration for the arm_controller.
  arm_controller:
    action_ns: follow_joint_trajectory   # ROS action namespace to which MoveIt sends trajectory commands.
    type: FollowJointTrajectory          # Type of the controller interface; follows a joint trajectory.
    default: true                        # This controller is the default for the joints listed below.
    joints:                              # List of joints that the arm_controller manages.
      - link1_to_link2
      - link2_to_link3
      - link3_to_link4
      - link4_to_link5
      - link5_to_link6
      - link6_to_link6_flange

  # Configuration for the gripper_action_controller.
  gripper_action_controller:
    action_ns: gripper_cmd
    type: GripperCommand
    default: true
    joints:
      - gripper_controller


This yaml file tells MoveIt 2 how to convert its motion plans into instructions that the ROS 2 controllers can understand and execute on the real (or simulated) robot. Without this, MoveIt 2 wouldn’t know how to make the robot actually move using the ROS 2 control system.

MoveIt 2 sends desired joint positions or trajectories to ROS 2 controllers, which then manage the low-level control of the robot’s motors. This integration allows MoveIt 2 to focus on high-level motion planning while relying on ROS 2 control for the actual execution on the robot hardware.

We see that two ROS 2 controllers are available:

  • arm_controller: This is a Joint Trajectory Controller for the robot arm
  • gripper_action_controller: This is a Gripper Action Controller for the gripper

We also see a list of joints associated with each controller.

The arm controller uses the “FollowJointTrajectory” action interface. The FollowJointTrajectory action interface is a standardized interface in ROS (Robot Operating System) for controlling the joint positions of a robot over time. It allows MoveIt 2 to send a trajectory, which is a sequence of joint positions and their corresponding timestamps, to the robot’s joint controllers.

How it works:

  1. MoveIt 2 plans a movement, which results in a series of joint positions over time.
  2. This plan is packaged into a “trajectory” – think of it as a list of goal positions (in radians) for each joint.
  3. MoveIt 2 sends this trajectory to the appropriate ROS 2 controller using the FollowJointTrajectory action interface. This interface is like a protocol MoveIt 2 uses to communicate with the controllers.
  4. The ROS 2 controller receives this trajectory, processes it, and relays it to the hardware interface.
  5. The hardware interface sends the commands to the microcontroller (e.g. Arduino) to move the robot’s joints accordingly.
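
To make step 3 concrete, here is a hedged C++ sketch of roughly what MoveIt 2 does on your behalf: it opens a FollowJointTrajectory action client on the arm_controller’s action namespace (arm_controller/follow_joint_trajectory, based on the controller_names and action_ns values above) and sends a one-point trajectory. The joint angles themselves are made up purely for illustration.

#include <rclcpp/rclcpp.hpp>
#include <rclcpp_action/rclcpp_action.hpp>
#include <control_msgs/action/follow_joint_trajectory.hpp>
#include <trajectory_msgs/msg/joint_trajectory_point.hpp>

using FollowJointTrajectory = control_msgs::action::FollowJointTrajectory;

// ... inside main(), after creating `node` ...
auto client = rclcpp_action::create_client<FollowJointTrajectory>(
  node, "/arm_controller/follow_joint_trajectory");
client->wait_for_action_server();

FollowJointTrajectory::Goal goal;
goal.trajectory.joint_names = {
  "link1_to_link2", "link2_to_link3", "link3_to_link4",
  "link4_to_link5", "link5_to_link6", "link6_to_link6_flange"};

trajectory_msgs::msg::JointTrajectoryPoint point;
point.positions = {0.0, -0.5, 0.5, 0.0, 0.0, 0.0};      // example joint angles in radians
point.time_from_start = rclcpp::Duration::from_seconds(2.0);
goal.trajectory.points.push_back(point);

client->async_send_goal(goal);  // the controller interpolates and executes this trajectory

Normally you never need to write this yourself; MoveIt 2 builds a much denser, time-parameterized trajectory and sends it through this same interface.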

Add Cartesian Speed and Acceleration Limits

For motion planning for our robotic arm, we need to add the Pilz industrial motion planner.

Pilz focuses on simple, predictable movements like straight lines, circles, and direct point-to-point paths. It works by calculating a clear, predefined route for each part of the robot to follow, ensuring smooth and safe motion from start to finish.

Think of Pilz as a careful driver that always takes known, reliable routes rather than trying to find creative shortcuts, making it perfect for precise tasks where you want consistent, controlled movements.

While default planners like OMPL (Open Motion Planning Library) work best for dynamic environments where you can have unexpected obstacles (people moving around, objects left on tabletops and counters, etc.), the structured and predictable nature of Pilz is advantageous in a scenario where the environment is well-structured and controlled.
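
In your own C++ code, switching to Pilz is just a matter of selecting its pipeline and one of its planner IDs (PTP for point-to-point, LIN for straight lines, CIRC for arcs). Here is a hedged sketch that mirrors the hello_moveit setup; the pipeline name is the standard one registered by the Pilz plugin, and everything else is assumed to match your existing node.

  using moveit::planning_interface::MoveGroupInterface;
  auto arm_group_interface = MoveGroupInterface(node, "arm");

  // Use the Pilz pipeline with its straight-line (LIN) planner
  arm_group_interface.setPlanningPipelineId("pilz_industrial_motion_planner");
  arm_group_interface.setPlannerId("LIN");

  arm_group_interface.setPoseTarget(arm_target_pose);  // same PoseStamped as in hello_moveit.cpp
  // ... then plan and execute exactly as before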

Open a new file where we will define the Cartesian limits:

touch pilz_cartesian_limits.yaml

Add this code.
# Defines the Cartesian movement limits for a robotic arm using the Pilz Planner.
# The values are typically determined based on a combination of the robot manufacturer's
# specifications, practical testing, and considerations for the specific tasks the robot will perform.
cartesian_limits:
  max_trans_vel: 1.0  # Maximum translational velocity in meters per second. Specifies how fast the end effector can move in space.
  max_trans_acc: 2.25 # Maximum translational acceleration in meters per second squared. Determines how quickly the end effector can increase its speed.
  max_trans_dec: -5.0 # Maximum translational deceleration in meters per second squared. Specifies how quickly the end effector can decrease its speed.
  max_rot_vel: 1.57   # Maximum rotational velocity in radians per second. Controls the speed at which the end effector can rotate.

Save the file, and close it.

This file sets speed and acceleration limits for a robotic arm’s movement in space. It defines how fast and quickly the robot’s end effector can move and rotate in Cartesian space (X, Y, Z direction) to ensure safe and efficient operation within the robot’s workspace.

Why do we need this file if we have already set velocity limits inside the URDF?

The URDF and the MoveIt configuration files serve different purposes when it comes to controlling the robot’s motion. The URDF contains velocity limits for each individual joint, which determine how fast each joint can move. These limits are important for controlling the speed of the individual joints. For example:

<joint name="joint_name" type="revolute">
  <limit lower="-2.879793" upper="2.879793" effort="5.0" velocity="2.792527"/>
</joint>

On the other hand, the cartesian_limits in the MoveIt configuration are used to control the speed and motion of the robot’s gripper in 3D space. These limits define how fast the gripper can move and rotate in the world coordinate frame (X, Y, Z axes). The cartesian_limits are essential for planning the overall motion of the robot and ensuring that it moves safely and efficiently when performing tasks.

So, while the URDF velocity limits are important for controlling individual joints, the MoveIt cartesian_limits are still necessary for proper motion planning and control of the gripper’s movement in 3D space.

By the way, in the context of the file name “pilz_cartesian_limits.yaml”, “pilz” refers to Pilz GmbH & Co. KG, a German automation technology company that specializes in safe automation solutions, including robotics.

Add a Planning Pipeline File for the Pilz Industrial Motion Planner

To make all this work, we need to add a file called pilz_industrial_motion_planner_planning.yaml.

Here is an example file from MoveIt’s GitHub.

This file preprocesses the robot’s motion planning request to make sure it meets certain safety and feasibility criteria, like ensuring the robot’s starting position is safe and within allowed limits before planning its movement.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_moveit_config/config/
touch pilz_industrial_motion_planner_planning.yaml

Add this code.

planning_plugins:
  - pilz_industrial_motion_planner/CommandPlanner
# The order of the elements in the adapter corresponds to the order they are processed by the motion planning pipeline.
request_adapters:
  - default_planning_request_adapters/ValidateWorkspaceBounds
  - default_planning_request_adapters/CheckStartStateBounds
  - default_planning_request_adapters/CheckStartStateCollision

default_planner_config: PTP
capabilities: >-
    pilz_industrial_motion_planner/MoveGroupSequenceAction
    pilz_industrial_motion_planner/MoveGroupSequenceService

Save the file, and close it.

Add a Planning Pipeline File for the OMPL Planner

Now add a pipeline file for another planner called OMPL (Open Motion Planning Library).

OMPL is a sampling-based motion planning framework integrated into MoveIt 2 that provides various probabilistic algorithms like RRT, PRM, and their variants to find collision-free paths for robots. OMPL handles the complex task of planning feasible trajectories while avoiding obstacles, though it doesn’t inherently optimize for smoothness or dynamics.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_moveit_config/config/
touch ompl_planning.yaml

Add this code.

Save the file, and close it.

Add a Planning Pipeline File for the STOMP Planner

Now add a pipeline file for another planner called STOMP.

STOMP (Stochastic Trajectory Optimization for Motion Planning) is a method of planning robot motion that uses random adjustments to improve paths. Think of it like constantly fine-tuning a rough sketch until you get a perfect drawing, but for robot movements.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_moveit_config/config/
touch stomp_planning.yaml

Add this code.

Save the file, and close it.

Add an RViz Configuration File

Let’s create a file that sets the parameters for visualizing all this in RViz.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_moveit_config/
mkdir rviz && cd rviz
touch move_group.rviz

Add this code.

Save the file, and close it.

Create a Launch File

Let’s create a launch file.

Open a terminal window, and type:

cd ~/ros2_ws/src/mycobot_ros2/mycobot_moveit_config/launch/

Create the following file:

touch move_group.launch.py

Add this code.

Save the file, and close it.

This launch file launches MoveIt and RViz.

Looking at the source code for the moveit_configs_builder helped me to complete this launch file.

Create a Setup Assistant File

The .setup_assistant file is typically created automatically when you use the MoveIt Setup Assistant to configure your robot. We are going to create a basic one since MoveIt expects this file to be inside your package (it will spit out warnings if it is not in the root of the package).

Create a new file named .setup_assistant in the root directory of your MoveIt configuration package (e.g., mycobot_moveit_config).

cd ~/ros2_ws/src/mycobot_ros2/mycobot_moveit_config/
touch .setup_assistant

Add this code:

moveit_setup_assistant_config:
  URDF:
    package: mycobot_description
    relative_path: urdf/robots/mycobot_280.urdf.xacro
  SRDF:
    relative_path: config/mycobot_280/mycobot_280.srdf
  CONFIG:
    author_name: AutomaticAddison
    author_email: addison@todo.com
    generated_timestamp: 1734048000

Save the file, and close it.

Remember to update this file if you use a robot other than the myCobot 280.

Set file permissions.

sudo chmod 644 .setup_assistant

Add an RGBD Camera

Let’s add an RGBD camera to our robot so that we can eventually run a pick and place task.

Let’s start by creating a URDF file for an RGBD camera. An RGBD camera captures both color (Red Green Blue) images and depth information, allowing it to see the world in 3D like human eyes do.

Gazebo has many sensors available, including one specifically for an RGBD camera.

We will add to the URDF we currently have.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_description/meshes/
mkdir -p d435/visual/

Add this d435.stl file.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_description/urdf/sensors/
touch intel_rgbd_cam_d435.urdf.xacro

Add this code in that file.

Save the file, and close it.

Edit the main robot state publisher file with the use_camera parameter.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_description/launch/

Here is what the robot_state_publisher.launch.py should look like.

Now add the ROS 2 bridge parameter file.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_gazebo/
mkdir config

Add this file to the config folder:

touch ros_gz_bridge.yaml

Update CMakeLists.txt to add the config folder to the install path.

# Copy necessary files to designated locations in the project
install (
  DIRECTORY config launch models worlds
  DESTINATION share/${PROJECT_NAME}
)

Save the file, and close it.

Now edit the main launch file to bridge the 3D point cloud topic over from Gazebo to ROS 2.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_gazebo/launch

Open the mycobot.gazebo.launch.py file, and make it look like this.

Now edit your launch script:

cd ~/ros2_ws/src/mycobot_ros2/mycobot_bringup/scripts

Open the mycobot_280_gazebo.sh and the mycobot_280_gazebo_and_moveit.sh files, and add the use_camera parameter to the launch call. Set it to true.

We also need to add a little delay on the loading of the joint state broadcaster.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_moveit_config/launch

Open load_ros2_controllers.launch.py.

Make it look like this file.

Save the file, and close it.

Edit package.xml

Let’s edit the package.xml file with the dependencies.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_moveit_config/

Open the package.xml file.

Add this code.

Save the file, and close it.

Edit CMakeLists.txt

Update CMakeLists.txt with the new folders we have created.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_moveit_config/

Open CMakeLists.txt.

Add this code.

cmake_minimum_required(VERSION 3.8)
project(mycobot_moveit_config)

if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
  add_compile_options(-Wall -Wextra -Wpedantic)
endif()

# find dependencies
find_package(ament_cmake REQUIRED)
find_package(moveit_ros_planning_interface REQUIRED)
find_package(moveit_ros_move_group REQUIRED)
find_package(moveit_kinematics REQUIRED)
find_package(moveit_planners_ompl REQUIRED)
find_package(moveit_ros_visualization REQUIRED)
find_package(rviz2 REQUIRED)
find_package(rviz_visual_tools REQUIRED)

# Copy necessary files to designated locations in the project
install (
  DIRECTORY config launch rviz
  DESTINATION share/${PROJECT_NAME}
)

# Install the .setup_assistant file
install(
  FILES .setup_assistant
  DESTINATION share/${PROJECT_NAME}
)


if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  # the following line skips the linter which checks for copyrights
  # comment the line when a copyright and license is added to all source files
  set(ament_cmake_copyright_FOUND TRUE)
  # the following line skips cpplint (only works in a git repo)
  # comment the line when this package is in a git repo and when
  # a copyright and license is added to all source files
  set(ament_cmake_cpplint_FOUND TRUE)
  ament_lint_auto_find_test_dependencies()
endif()

ament_package()

Save the file, and close it.

Create a Launch Script

Let’s bring it all together by creating a launch script.

cd ~/ros2_ws/src/mycobot_ros2/mycobot_bringup/scripts
touch mycobot_280_gazebo_and_moveit.sh

Add this code.

Save the file, and close it.

I set my desired Gazebo camera pose in the script. When you launch everything, you can see your current camera pose by typing this command:

gz topic -t /gui/camera/pose -e

Build the Package

Now let’s build the package.

cd ~/ros2_ws/
colcon build
source ~/.bashrc

Launch the Demo and Configure the Plugin

Begin by launching the robot in Gazebo along with all the controllers (i.e., arm_controller and gripper_action_controller).

cd ~/ros2_ws/src/mycobot_ros2/mycobot_bringup/scripts
sudo chmod +x mycobot_280_gazebo_and_moveit.sh
bash ~/ros2_ws/src/mycobot_ros2/mycobot_bringup/scripts/mycobot_280_gazebo_and_moveit.sh

This will:

  • Start Gazebo
  • Spawn the robot
  • Start the robot_state_publisher
  • Load and start the necessary controllers (joint_state_broadcaster, arm_controller, gripper_action_controller)

The script will then:

  • Start the move_group node
  • Launch RViz with the MoveIt configuration
  • Use the simulation time from Gazebo

Here is what your screen should look like when everything is loaded properly. You will notice that on the right-hand side of Gazebo I closed the two panels. You can close them by right-clicking and hitting “Close”.

1-look-like-this

Let’s check out the active topics:

ros2 topic list
2-ros2-topic-list

The active nodes are:

ros2 node list
3-ros2-node-list

Here are our available action servers:

ros2 action list
4-ros2-action-list

Here are our services. You will see we have a lot of services. This page at the ROS 2 tutorials shows you how we can call services from the command line.

ros2 service list

Let’s check out the parameters for the different nodes by launching a GUI tool called rqt_reconfigure:

rqt --standalone rqt_reconfigure
5-rqt-reconfigure

Click “Refresh”.

Click on any of the nodes to see the parameters for that node. For example, let’s take a look at the moveit_simple_controller_manager node.

This tool enables you to see if the parameters you set in the config file of your MoveIt package were set correctly. 

You can close the window.

Explore MoveIt in RViz

We will now roughly follow this official MoveIt Quickstart in RViz tutorial which walks you through the main features provided by the MoveIt RViz plugins.

First, click on the Displays panel in the upper left.

Under Global Options, make sure the Fixed Frame is set to “base_link”

Scroll down to the MotionPlanning plugin.

Make sure the Robot Description field is set to robot_description.

6-robot-description

Make sure the Planning Scene Topic field is set to /monitored_planning_scene. 

Make sure the Trajectory Topic under Planned Path is set to /display_planned_path.

7-display-planned-path

In the Planning Request section,  make sure the Planning Group is set to arm. You can also see this in the MotionPlanning panel in the bottom left.

8-planning-group-arm

Play With the Robotic Arm

RViz has four different visualizations for MoveIt. These visualizations overlap:

  1. The robot’s configuration in the /monitored_planning_scene planning environment:
    • Planning Scene -> Scene Robot -> Show Robot Visual checkbox
  2. The planned path for the robot:
    • Motion Planning -> Planned Path -> Show Robot Visual checkbox
  3. Green: The start state for motion planning:
    • MotionPlanning -> Planning Request -> Query Start State checkbox
  4. Orange: The goal state for motion planning:
    • MotionPlanning -> Planning Request -> Query Goal State checkbox

You can toggle each of these visualizations on and off using the checkboxes in the Displays panel.

Interact With the Robotic Arm

Now let’s configure our RViz display so that all we see are the planned path robot and the goal state.

  1. Check the Motion Planning -> Planned Path -> Show Robot Visual checkbox
  2. Uncheck the Planning Scene -> Scene Robot -> Show Robot Visual checkbox
  3. Uncheck the MotionPlanning -> Planning Request -> Query Start State checkbox
  4. Check the MotionPlanning -> Planning Request -> Query Goal State checkbox

The orange colored robot represents the desired Goal State.

14-orange-colored-goal-state

Find the “Joints” tab in the bottom MotionPlanning panel. This lets you control individual joints of the robot.

Try moving the sliders for the joints, and watch how the robot arm changes its pose. 

Uncheck the MotionPlanning -> Planning Request -> Query Goal State checkbox.

Check the MotionPlanning -> Planning Request -> Query Start State checkbox.

The green colored robot is used to set the Start State for motion planning.

11-query-start-state

Use Motion Planning

Now it’s time to plan a movement for your robotic arm using RViz and MoveIt 2.

In the Displays panel in RViz, do the following…

Uncheck the MotionPlanning -> Planning Request -> Query Goal State checkbox

Check the MotionPlanning -> Planning Request -> Query Start State checkbox.

You can see if any links are in collision by going to the MotionPlanning window at the bottom and moving to the “Status” tab.

Move the green (Start State) to where you want the movement to begin.

Rather than using the interactive marker, I find it easier to go to the “Joints” tab in the Motion Planning window at the bottom. This lets you control individual joints of the robot to get your desired Start State.

16-joints-panel

Uncheck the MotionPlanning -> Planning Request -> Query Start State checkbox.

Check the MotionPlanning -> Planning Request -> Query Goal State checkbox.

Move the orange (Goal State) to where you want the movement to end.

Check the MotionPlanning -> Planning Request -> Query Start State checkbox.

12-check-both

Now let’s plan a motion. Look for the “MotionPlanning” window at the bottom-left part of the screen.

Click the Context tab.

Select ompl.

Now click the Planning tab.

13-select-ompl

Click the Plan button to generate a motion plan between the start and goal state.

The system will calculate a path, and you should see a preview of the planned motion in the 3D visualization window.

Review the planned path carefully to ensure it’s appropriate and collision-free.

If you’re not satisfied with the plan, you can adjust the start and goal states in the 3D view and plan again.

Once you’re happy with the plan, click the Plan & Execute button to make the robot carry out the planned motion.

You should see your simulated robotic arm moving in both Gazebo and RViz.

If you want to speed things up, you can tweak the Velocity Scaling figures on the Planning tab. You can make them all 1.0 for example.

Then click Plan & Execute again.

Keep setting new Goal States and moving your arm to different poses. 

Be careful not to move your robot into a Goal State where the robot appears red. Red links on your robot mean the robot is colliding with itself.

19-red-colliding-links

If you want to open and close the gripper, the steps are the same.

Go to the MotionPlanning window at the bottom-left.

Go to the Planning tab.

14-planning-tab

Change the Planning Group to gripper.

Select closed as the Goal State. Remember this is one of the states we configured in the SRDF file.

Click Plan & Execute. No need to click Plan first and then Execute since this is a simple motion.

You should see the gripper close in Gazebo.

15-gripper-close-gazebo

You can also make motion plans for the combined arm and gripper group. Have fun with it, and play around to get a feel for MoveIt 2 in RViz.

You can try planners other than OMPL if you like.

I have set up Pilz and STOMP as some alternative motion planning libraries. Just use the dropdown menu on the Context tab.

Introspecting Trajectory Waypoints

Understanding how your robot moves in detail is important for ensuring smooth and safe operations. Let’s explore a tool that allows us to do just that:

Add a new tool to RViz.

  • Click on the “Panels” menu
  • Choose “Trajectory – Trajectory Slider”
  • You’ll see a new slider appear in RViz
21-panels-trajectory-slider

Plan a movement:

  • Set a new goal pose for the robotic arm.
  • Click the “Plan” button

Use the new Trajectory Slider:

  • Move the slider to see each step of the robot’s planned movement
  • Press the “Play” button to watch the whole movement smoothly
22-trajectory-slider

Remember: If you change where you want the robot to go, always click “Plan” again before using the slider or “Play” button. This makes sure you’re looking at the newest plan, not an old one.

This tool helps you understand exactly how your robot will move from start to finish.

Plan Cartesian Motions

Sometimes, you want your robot to move in a straight line. Here’s how to do that.

Go to the MotionPlanning panel at the bottom-left of the screen.

Click the Planning tab.

Click the Use Cartesian Path checkbox.

What does this do?

  • When checked, the robot will try to move its end-effector in a straight line.
  • This is useful for tasks that need precise, direct movements, like drawing a straight line or moving directly towards an object.

Why is this important?

  • Normal planning might move the robot’s arm in a curved or complex path.
  • Cartesian planning ensures a more predictable, straight-line movement.

Try planning some movements with this option on and off to see the difference in how the robot moves. You will need to try different joint goal locations to get a plan that works (many will fail).
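
The same idea is available from C++ through MoveGroupInterface::computeCartesianPath(). Below is a hedged sketch, written as if it were dropped into hello_moveit.cpp after the MoveGroupInterface has been created: it asks for a straight-line move 10 cm above the tutorial’s target pose. The 1 cm interpolation step and the 99% feasibility threshold are arbitrary values you may want to tune.

#include <moveit_msgs/msg/robot_trajectory.hpp>
#include <vector>

// ... after creating arm_group_interface and logger ...
std::vector<geometry_msgs::msg::Pose> waypoints;

geometry_msgs::msg::Pose target;
target.position.x = 0.061;           // same pose used in hello_moveit.cpp ...
target.position.y = -0.176;
target.position.z = 0.168 + 0.10;    // ... but 10 cm higher
target.orientation.x = 1.0;
waypoints.push_back(target);

moveit_msgs::msg::RobotTrajectory trajectory;
const double eef_step = 0.01;        // interpolate the straight line in 1 cm steps
const double jump_threshold = 0.0;   // 0.0 disables the joint-space jump check
const double fraction =
  arm_group_interface.computeCartesianPath(waypoints, eef_step, jump_threshold, trajectory);

RCLCPP_INFO(logger, "Cartesian path: %.0f%% of the requested line is feasible", fraction * 100.0);
if (fraction > 0.99)
{
  arm_group_interface.execute(trajectory);
}

A fraction below 1.0 means the straight line could not be completed (for example, it would require a joint to exceed its limits), which matches what you will see in RViz: many Cartesian goals simply fail.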

That’s it! Keep building!

Autonomous Docking with AprilTags Using Nav2 – ROS 2 Jazzy

Precise docking is a key capability for autonomous mobile robots. While Nav2 excels at general path planning and obstacle avoidance, docking often requires more accurate position information than standard navigation provides. This is where AprilTags come in – they’re visual markers that help robots determine their exact position and orientation relative to a docking station.

In this tutorial, we’ll implement automated docking by combining Nav2’s docking server with AprilTag detection. We’ll mount an AprilTag near our docking station and configure our robot to use it as a visual reference point. The Nav2 docking server will handle the approach and final alignment, using the AprilTag’s position data to guide the robot into the dock. Here is what you will be able to build by the end of this tutorial:

docking-at-april-tag-speedup-small

We’ll build this system in clear steps:

  • Set up AprilTag detection to process our robot’s camera images
  • Configure the Nav2 docking server to use AprilTag data
  • Test our docking system in simulation

In this tutorial, we will implement AprilTag detection using ROS 2, giving you hands-on experience with an important technology in robot vision. We will use the CPU version of AprilTag detection from ROS 2. You can find a GPU-accelerated drop-in replacement for this CPU version here on the NVIDIA Isaac website.

Real-World Applications

AprilTags are used in many real-world situations. Here are some examples:

  • Robotic Arms:
    • Helping robotic arms in factories pick up parts from exact locations
    • Guiding robotic arms in fast food restaurants, like those made by Miso Robotics, to assist with cooking tasks
  • Mobile Robots:
    • Helping delivery robots navigate through office buildings or hospitals
    • Assisting mobile robots to dock precisely at battery charging stations
  • Drones:
    • Enabling drones to land accurately on specific locations marked with AprilTags
    • Guiding drones for indoor inspections where GPS doesn’t work well

Prerequisites

All my code for this project is located here on GitHub.

What Are AprilTags?

AprilTags are visual markers designed specifically for robotics and computer vision applications. These tags appear as small black and white squares with unique patterns inside. Their primary purpose is to help robots and cameras quickly determine their position and orientation in space.

When a robot’s camera spots an AprilTag, it can instantly calculate how far away the tag is, what angle it’s viewing the tag from, and which specific tag it’s looking at, as each tag has a unique identifier.

AprilTags offer several advantages: 

  • They’re inexpensive to produce and can be quickly printed on a plain sheet of white paper
  • They perform well in various lighting conditions
  • They allow for fast detection – an important feature for real-time applications.

Why Use AprilTags Instead of ArUco Markers?

You may have heard of another type of visual tag used in robotics called ArUco markers. ArUco markers are similar to AprilTags. They look like small black and white squares with patterns inside, much like AprilTags. However, AprilTags have some advantages:

  • Better in Difficult Situations: AprilTags work better when lighting is not perfect or when part of the tag is hidden.
  • More Accurate: AprilTags give robots a more precise idea of where they are, especially when far away or viewed at odd angles.
  • Faster to Identify: Robots can usually find and read AprilTags more quickly than with ArUco markers, which is important when things need to happen fast.
  • Fewer Mistakes: AprilTags are less likely to be confused with other objects or misread.

What Do We Need to Do?

Remember Automatic Addison’s 10 Ps of robotics: Prior Proper Planning Prevents Poor Performance. Poor Performance Promotes Pain.

Let’s begin with the end in mind and think through all the steps we need to take to get to our end goal, which is to have our robot dock to a docking station using the Nav2 Docking Server.

First, we need to add an AprilTag to our cafe world environment in Gazebo. This tag will serve as the visual marker for our robot’s docking station. The tag needs to be placed at a specific position and orientation that makes sense for docking.

Now open a new terminal window, and type the following command to launch the robot. Wait until Gazebo and RViz come fully up, which can take up to 30 seconds:

nav

or

bash ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_bringup/scripts/rosmaster_x3_navigation.sh 

Type this command in the terminal window: 

ros2 topic list | grep cam_1

Our robot’s depth camera currently publishes raw images to /cam_1/color/image_raw as well as camera information to /cam_1/depth/camera_info.

To see the image on /cam_1/color/image_raw, type:

sudo apt-get install ros-$ROS_DISTRO-image-view

Then run the image viewer:

ros2 run image_view image_view --ros-args -r image:=cam_1/color/image_raw

You can move your robot around the room.

For these raw camera images to be useful for AprilTag detection, we need to process them through a ROS 2 package called image_proc. We will use the rectify node to achieve this. This node:

Subscribes to:

  • /cam_1/color/image_raw (sensor_msgs/msg/Image)
  • /cam_1/color/camera_info (sensor_msgs/msg/CameraInfo)

Publishes:

  • /cam_1/color/image_rect (sensor_msgs/msg/Image)

Why did we have to do this step? For AprilTag detection, we need rectified images, which remove camera lens distortion. The camera’s lens naturally bends the image a bit (like looking through a curved glass), which can make the square tags look slightly warped – rectification fixes this so the tags are easier to detect. 

With our rectified images ready, we will use the CPU-based apriltag_ros package for detection. This node:

Subscribes to:

  • /cam_1/color/image_rect (sensor_msgs/msg/Image)
  • /cam_1/color/camera_info (sensor_msgs/msg/CameraInfo)

Publishes:

  • /cam_1/detections (apriltag_msgs/msg/AprilTagDetectionArray)
  • The transform from the camera’s optical frame to each detected tag’s frame (on /tf)

Alternatively, if you have an NVIDIA GPU, you could use Isaac ROS AprilTag. It would subscribe to the same topics (/cam_1/color/image_rect and /cam_1/color/camera_info) and publish equivalent detections and transforms.

The Nav2 Docking Server uses these AprilTag detections through its SimpleChargingDock plugin. Looking at the SimpleChargingDock source code, this plugin specifically subscribes to:

  • detected_dock_pose (geometry_msgs/msg/PoseStamped)

The complete pipeline works like this:

  • apriltag_ros (or isaac_ros_apriltag) detects our tag (ID 0 from family 36h11) and publishes the transform:
    • Parent frame: cam_1_depth_optical_frame (our camera’s optical frame)
    • Child frame: tag36h11:0 (our AprilTag’s frame)
  • We’ll need to create a node to add to our yahboom_rosmaster_navigation package that:
    • Listens for this transform from the camera’s optical frame to the tag
    • Publishes this as a PoseStamped message to the detected_dock_pose topic with the “odom” frame as the default frame_id in the header.
    • We will follow the reference code here.
  • The SimpleChargingDock plugin then:
    • Handles any needed frame transformations internally using its tf2 buffer
    • Applies configured offsets to determine the actual docking pose (since our tag will be high on the wall above the docking station)
    • Computes a staging pose offset from the docking pose
    • Uses either battery state or distance thresholds to determine when we’re successfully docked

Once we have all our components set up, we can test the docking system in two ways:

  1. Through RViz:
  2. Through a demo script by using the Simple Commander API.

In the next sections, we’ll implement each of these components step by step, starting with setting up a new package.

Create a New Package

Open a new terminal window and type:

cd ~/ros2_ws/src/yahboom_rosmaster/
ros2 pkg create --build-type ament_cmake \
                --license BSD-3-Clause \
                --maintainer-name ubuntu \
                --maintainer-email automaticaddison@todo.com \
                yahboom_rosmaster_docking

Edit the package.xml file in this new package to add these dependencies:

  <depend>rclcpp</depend>
  <depend>rclpy</depend>
  <depend>apriltag_ros</depend>
  <depend>apriltag_msgs</depend>
  <depend>image_proc</depend>
  <depend>image_view</depend>
  <depend>geometry_msgs</depend>
  <depend>nav2_simple_commander</depend>
  <depend>tf2_ros</depend>

You can also edit CMakeLists.txt to add these package dependencies.

# find dependencies
find_package(ament_cmake REQUIRED)
find_package(ament_cmake_python REQUIRED)
find_package(geometry_msgs REQUIRED)
find_package(rclcpp REQUIRED)
find_package(rclpy REQUIRED)
find_package(apriltag_ros REQUIRED)
find_package(apriltag_msgs REQUIRED)
find_package(image_proc REQUIRED)
find_package(tf2_ros REQUIRED)

Update the package.xml file inside the metapackage yahboom_rosmaster.

 <exec_depend>yahboom_rosmaster_bringup</exec_depend>
 <exec_depend>yahboom_rosmaster_description</exec_depend>
 <exec_depend>yahboom_rosmaster_docking</exec_depend>
 <exec_depend>yahboom_rosmaster_gazebo</exec_depend>
 <exec_depend>yahboom_rosmaster_localization</exec_depend>
 <exec_depend>yahboom_rosmaster_navigation</exec_depend>
 <exec_depend>yahboom_rosmaster_system_tests</exec_depend>

Now build your workspace.

cd ~/ros2_ws/
rosdep install --from-paths src --ignore-src -r -y
colcon build && source ~/.bashrc

Add an AprilTag To Your Gazebo World

In this section, we’ll add an AprilTag to our cafe world.

The AprilTag is in Simulation Description Format (SDF), the default model format for Gazebo.

Open a terminal window, and type this:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_gazebo/models/

Type:

dir

You can see we have an AprilTag that represents the number 0.

Apriltag36_11_00000

In another terminal, navigate to this file:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_gazebo/worlds/

Open cafe.world.

We’ll be adding the AprilTag model to the wall.

Add this code.

    <include>
      <name>Apriltag36_11_00000</name>
      <pose>-4.96 1.5 0.55 1.5708 0 1.5708</pose>
      <static>true</static>
      <uri>model://Apriltag36_11_00000</uri>
    </include>

Save the changes, and close the file.

To see your cafe.world file in action, type the following command to let Gazebo know how to find the models:

export GZ_SIM_RESOURCE_PATH=/home/ubuntu/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_gazebo/models

Now launch your Gazebo world:

gz sim cafe.world

Your Gazebo world should now include the AprilTag.

Remember to adjust the pose of the AprilTag as needed to ensure it is visible and correctly placed in your specific world layout.

Convert the Raw Camera Images into Rectified Images

Let’s create a launch file that will use the image_proc package’s rectify node to do the following:

Subscribes to:

  • /cam_1/color/image_raw (sensor_msgs/msg/Image)
  • /cam_1/color/camera_info (sensor_msgs/msg/CameraInfo)

Publishes:

  • /cam_1/color/image_rect (sensor_msgs/msg/Image)

Open a terminal window, and type:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_docking/
mkdir launch

Add this file to the launch folder:

apriltag_dock_pose_publisher.launch.py

Save the file, and close it.

Edit CMakeLists.txt.

Add these lines:

# Copy necessary files to designated locations in the project
install (
  DIRECTORY launch
  DESTINATION share/${PROJECT_NAME}
)

Save the file, and close it.

Build your workspace.

cd ~/ros2_ws/
colcon build && source ~/.bashrc

Launch the robot.

nav

Once everything is up, type:

ros2 launch yahboom_rosmaster_docking apriltag_dock_pose_publisher.launch.py

To see the rectified image:

ros2 run image_view image_view --ros-args -r image:=/cam_1/color/image_rect

In another terminal, run the image viewer to see the raw input image for comparison:

ros2 run image_view image_view --ros-args -r image:=/cam_1/color/image_raw

You should see a light grey square in both images.

3-image-raw-image-rect

Type this command in the terminal window to list the camera topics:

ros2 topic list | grep cam_1
1-grep-cam-1

You can safely ignore the synchronization warnings. These warnings happen from time to time due to minor timing misalignments with Gazebo. 

4-ignore-synching-warnings

The important thing is that you can see the rectified image properly. Feel free to move the robot around.

Publish the AprilTag Pose Using the apriltag_ros Package

We will now use the apriltag_ros package to publish the pose of the AprilTag with respect to the camera’s optical frame. 

Specifically, we will detect our tag (ID 0 from family 36h11) in the rectified camera image and then publish the following coordinate transformation:

  • Parent frame: cam_1_depth_optical_frame (our camera’s optical frame)
  • Child frame: tag36h11:0 (our AprilTag’s frame)

Let’s start by creating a configuration file.

Open a terminal window, and type:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_docking/
mkdir config

Add this file to the config folder:

apriltags_36h11.yaml

Save the file, and close it.

Edit CMakeLists.txt.

Add these lines:

# Copy necessary files to designated locations in the project
install (
  DIRECTORY config launch
  DESTINATION share/${PROJECT_NAME}
)

Save the file, and close it.

Build your workspace.

cd ~/ros2_ws/
colcon build && source ~/.bashrc

Let’s test our AprilTag detection pipeline. First, make sure your robot is running:

nav

Once your robot is up and running, open a new terminal and launch our AprilTag detection nodes:

ros2 launch yahboom_rosmaster_docking apriltag_dock_pose_publisher.launch.py

Navigate your robot to where it can see the AprilTag in its front camera.

Now let’s verify that our AprilTag detection pipeline is working correctly. Open a new terminal and check if we’re receiving AprilTag detections:

ros2 topic echo /cam_1/detections

You should see detection messages whenever your robot’s camera sees the AprilTag. These messages include information about the tag’s ID, position, and detection confidence.

5-ros2-topic-echo-cam-detections

To verify the coordinate transformations are being published correctly, use the tf2_tools package:

ros2 run tf2_tools view_frames

This command will generate a PDF file showing the complete transform tree. Look for a transform from cam_1_depth_optical_frame to tag36h11:0.

6-view-april-tag-frames
7-april-tag-transform

To see these transforms in real-time, open another terminal and type:

ros2 run tf2_ros tf2_echo cam_1_depth_optical_frame tag36h11:0

This will display the position and orientation of the tag relative to your robot’s camera optical frame, updating in real-time.

8-transform-values-camera-optical-to-tag

You can also visualize what your robot’s camera is seeing. To view the rectified image (which is what the AprilTag detector uses):

ros2 run image_view image_view --ros-args -r image:=/cam_1/color/image_rect
9-ground_truth

Try moving your robot around to different positions where it can see the AprilTag. You should observe:

  • The detection messages updating in the topic echo window
  • The transform values changing as the relative position between camera and tag changes
  • The AprilTag visible in the image viewer windows

Let me explain the transform from the camera’s optical frame to the AprilTag frame. Remember that we’re working with rectified images, which means the image has been processed to remove lens distortion and appear as if taken by an ideal camera.

In the camera’s optical frame (assume we are in front of the camera looking directly at the camera lens):

  • Positive X points to the left 
  • Positive Y points down
  • Positive Z points outward from the camera lens towards us

For the AprilTag (assume we are looking directly at the tag above the docking station, which is against the wall):

  • Positive Z points out from the tag towards us (away from the wall and into the scene)
  • Positive X points to the right of the tag
  • Positive Y points up along the tag towards the ceiling

The translation [-0.088, 0.364, 0.805] tells us the tag’s position relative to the camera optical frame (assume we are looking through the camera lens at the AprilTag):

  • -0.088m in X: Tag is slightly to the left of the camera’s center
  • +0.364m in Y: Tag appears in the lower portion of the image
  • +0.805m in Z: Tag is about 0.8 meters in front of the camera

The rotation, shown in degrees [156.319, 4.207, -1.394], represents:

  • Roll (156.319°): This large value indicates the tag is mounted vertically on the wall. Rotating the camera optical frame (parent) about its x-axis by roughly 156 degrees counterclockwise aligns it with the tag36h11:0 (child) frame (remember the right-hand rule of robotics).
  • Pitch (4.207°): Small positive pitch shows the tag is almost directly facing the camera
  • Yaw (-1.394°): Minimal yaw means the tag is mounted level on the wall with almost no rotation to the left or right

The transform matrix combines both the rotation and translation into a single 4×4 matrix, useful for computing the exact pose of the tag relative to the camera. This information will be essential for our docking system to determine how to approach the dock.
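If you would like to see that 4×4 matrix yourself, here is a small, standalone C++ sketch (not part of the project packages) that rebuilds it from the tf2_echo values above using tf2’s math types. Note that tf2_echo reports the rotation in degrees, so we convert to radians first.

// transform_matrix_demo.cpp: rebuild the camera-to-tag 4x4 homogeneous transform
// from the translation and RPY values reported by tf2_echo (illustrative only).
#include <cmath>
#include <cstdio>

#include <tf2/LinearMath/Matrix3x3.h>
#include <tf2/LinearMath/Quaternion.h>
#include <tf2/LinearMath/Transform.h>
#include <tf2/LinearMath/Vector3.h>

int main()
{
  const double deg = M_PI / 180.0;  // degrees-to-radians conversion factor

  // Rotation reported by tf2_echo, converted from degrees to radians.
  tf2::Quaternion q;
  q.setRPY(156.319 * deg, 4.207 * deg, -1.394 * deg);

  // Translation reported by tf2_echo, in meters.
  tf2::Transform camera_to_tag(q, tf2::Vector3(-0.088, 0.364, 0.805));

  // Print the 4x4 homogeneous matrix: the 3x3 rotation in the top-left block,
  // the translation in the right-hand column, and a fixed bottom row [0 0 0 1].
  const tf2::Matrix3x3 & rot = camera_to_tag.getBasis();
  const tf2::Vector3 & trans = camera_to_tag.getOrigin();
  const tf2::Vector3 r0 = rot.getRow(0), r1 = rot.getRow(1), r2 = rot.getRow(2);
  std::printf("%8.3f %8.3f %8.3f %8.3f\n", r0.x(), r0.y(), r0.z(), trans.x());
  std::printf("%8.3f %8.3f %8.3f %8.3f\n", r1.x(), r1.y(), r1.z(), trans.y());
  std::printf("%8.3f %8.3f %8.3f %8.3f\n", r2.x(), r2.y(), r2.z(), trans.z());
  std::printf("%8.3f %8.3f %8.3f %8.3f\n", 0.0, 0.0, 0.0, 1.0);
  return 0;
}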

If you’re seeing the AprilTag detections and transforms being published, congratulations! Your AprilTag detection pipeline is working correctly. 

I will now add the apriltag_dock_pose_publisher.launch.py launch file to the main navigation launch file rosmaster_x3_navigation.launch.py, which is inside the yahboom_rosmaster_bringup/launch folder.

Create a Detected Dock Pose Publisher

To make our docking system work with AprilTags, we need to create a node that listens to the /tf topic for the transform between cam_1_depth_optical_frame (parent frame) and tag36h11:0 (child frame). It then publishes that pose as a geometry_msgs/PoseStamped message on a topic named detected_dock_pose. This publisher is important because it converts the AprilTag pose data from the TF tree into a format the docking system can directly use to locate and approach the dock accurately.

The Nav2 Docking Server uses the information on the detected_dock_pose topic through its SimpleChargingDock plugin. Looking at the SimpleChargingDock source code, this plugin specifically subscribes to:

  • detected_dock_pose (geometry_msgs/msg/PoseStamped), the externally detected dock pose
  • battery_state (sensor_msgs/msg/BatteryState), used when use_battery_status is true

Let’s create this node inside our yahboom_rosmaster_docking package.

Open a terminal window, and type:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_docking/src
touch detected_dock_pose_publisher.cpp

Add this code.
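The actual source file for this node is in the GitHub repository for this project. To give you a feel for what it does, here is a minimal, hypothetical sketch (not the repository version): it looks up the latest cam_1_depth_optical_frame to tag36h11:0 transform from a tf2 buffer on a timer and republishes it as a geometry_msgs/PoseStamped on detected_dock_pose. The 10 Hz timer and the parent_frame and child_frame parameter names are illustration choices, not values taken from the repository.

// detected_dock_pose_publisher.cpp (illustrative sketch, not the repository version)
#include <chrono>
#include <functional>
#include <memory>
#include <string>

#include <geometry_msgs/msg/pose_stamped.hpp>
#include <geometry_msgs/msg/transform_stamped.hpp>
#include <rclcpp/rclcpp.hpp>
#include <tf2/exceptions.h>
#include <tf2_ros/buffer.h>
#include <tf2_ros/transform_listener.h>

class DetectedDockPosePublisher : public rclcpp::Node
{
public:
  DetectedDockPosePublisher()
  : Node("detected_dock_pose_publisher"),
    tf_buffer_(this->get_clock()),
    tf_listener_(tf_buffer_)
  {
    // Frame names are parameters so they can be changed without editing code.
    parent_frame_ = this->declare_parameter<std::string>(
      "parent_frame", "cam_1_depth_optical_frame");
    child_frame_ = this->declare_parameter<std::string>("child_frame", "tag36h11:0");

    pose_pub_ = this->create_publisher<geometry_msgs::msg::PoseStamped>(
      "detected_dock_pose", 10);

    // Check for a fresh camera-to-tag transform at 10 Hz.
    timer_ = this->create_wall_timer(
      std::chrono::milliseconds(100),
      std::bind(&DetectedDockPosePublisher::publish_pose, this));
  }

private:
  void publish_pose()
  {
    geometry_msgs::msg::TransformStamped tf_msg;
    try {
      // Latest available transform between the camera optical frame and the tag.
      tf_msg = tf_buffer_.lookupTransform(parent_frame_, child_frame_, tf2::TimePointZero);
    } catch (const tf2::TransformException &) {
      // The tag is not in view yet, so there is nothing to publish this cycle.
      return;
    }

    // Repackage the transform as a PoseStamped expressed in the camera optical frame.
    geometry_msgs::msg::PoseStamped pose;
    pose.header = tf_msg.header;
    pose.pose.position.x = tf_msg.transform.translation.x;
    pose.pose.position.y = tf_msg.transform.translation.y;
    pose.pose.position.z = tf_msg.transform.translation.z;
    pose.pose.orientation = tf_msg.transform.rotation;
    pose_pub_->publish(pose);
  }

  std::string parent_frame_;
  std::string child_frame_;
  tf2_ros::Buffer tf_buffer_;
  tf2_ros::TransformListener tf_listener_;
  rclcpp::Publisher<geometry_msgs::msg::PoseStamped>::SharedPtr pose_pub_;
  rclcpp::TimerBase::SharedPtr timer_;
};

int main(int argc, char * argv[])
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<DetectedDockPosePublisher>());
  rclcpp::shutdown();
  return 0;
}

Because the PoseStamped keeps the camera optical frame in its header, the SimpleChargingDock plugin can transform it into its fixed frame using its own tf2 buffer.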

Save the file, and close it.

Update your CMakeLists.txt so it builds and installs this new node.

Now add this node to your apriltag_dock_pose_publisher.launch.py launch file in the yahboom_rosmaster_docking package.

To test that everything is set up properly, launch the robot:

nav

Now run the following command:

ros2 topic info /detected_dock_pose -v

You should see one publisher (detected_dock_pose_publisher) and one subscriber (docking_server).

Navigate the robot over to the AprilTag so that the tag is in the camera’s field of view. Then type:

ros2 topic echo /detected_dock_pose 

My output is this:

10-field-of-view

The data should match the output from this command:

ros2 run tf2_ros tf2_echo cam_1_depth_optical_frame tag36h11:0

To see what the camera sees, type:

ros2 run image_view image_view --ros-args -r image:=/cam_1/color/image_rect

Overview of the Docking Action

Now that we have the data pipeline set up, it is time to configure the docking server for Nav2.

The docking action in Nav2 guides your robot from its current position to a successful connection with a charging dock. Here’s a high-level overview of how it works:

  1. The action server receives a request to dock, either specifying a dock ID from a known database or providing explicit dock pose information.
  2. If the robot isn’t already near the dock, it uses Nav2 to navigate to a predefined staging pose near the charging station.
  3. Once at the staging pose, the robot begins to search for and detect the AprilTag associated with the dock.
  4. Using visual feedback from the AprilTag, the robot enters a precision control loop, carefully maneuvering to align itself with the dock.
  5. The robot continues its approach until it detects contact with the dock or confirms that charging has begun.
  6. If the docking attempt fails, the system can retry the process a configurable number of times.
  7. Throughout the process, the action server provides feedback on the current state and elapsed time.

This structured approach allows for reliable docking across various robot types and environments, with the flexibility to handle different dock designs and detection methods.

Overview of the Undocking Action

The undocking action in Nav2 guides your robot from a docked position to a safe staging pose away from the charging station. Here’s a high-level overview of how it works:

  1. The action server receives a request to undock, optionally including the dock type if not already known from a previous docking operation.
  2. The system verifies the dock type and current docking status, ensuring the robot is actually docked (for charging docks).
  3. For charging docks, the system first attempts to disable charging before any physical movement begins.
  4. Once charging is disabled, the robot calculates its staging pose based on its current position relative to the dock.
  5. Using a precision control loop, the robot carefully maneuvers to the staging pose while maintaining specified linear and angular tolerances.
  6. The system monitors both position and charging status, only considering the undocking successful when the robot has reached the staging pose and charging has completely stopped.
  7. Throughout the process, the system enforces timeout limits and can respond to cancellation requests for safety.

This structured approach allows for reliable undocking that works across different robot platforms and dock types, while maintaining the safety and integrity of both the robot and charging equipment.

Create the Dock Database File

To use the Nav2 docking server, you’ll need to create a database of known docking stations in your environment. This database tells your robot where docks are located and what type they are. Here’s how to set it up. We will store our files in the yahboom_rosmaster_docking package.

You can find a guide on how to create the database file here on GitHub.

Open a terminal window, and type:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_docking/config
touch dock_database.yaml 

Add this code.

Save the file, and close it.

In this file, we define a dock with a unique identifier, its type, and its pose in the map. The pose is given as [x, y, theta] coordinates.

Configure the Docking Server

We now need to configure the docking_server.

First, let’s navigate to the folder where the Nav2 parameters are located.

Open a terminal and move to the config folder.

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/config/

Now, let’s open the YAML file using a text editor: rosmaster_x3_nav2_default_params.yaml

Add these parameters.

docking_server:
  ros__parameters:
    controller_frequency: 10.0
    initial_perception_timeout: 20.0  # Default 5.0 
    wait_charge_timeout: 5.0
    dock_approach_timeout: 30.0
    undock_linear_tolerance: 0.05 
    undock_angular_tolerance: 0.05
    max_retries: 3
    base_frame: "base_link"
    fixed_frame: "odom"
    dock_backwards: false
    dock_prestaging_tolerance: 0.5

    # Types of docks
    dock_plugins: ['rosmaster_x3_dock']
    rosmaster_x3_dock:
      plugin: 'opennav_docking::SimpleChargingDock'
      docking_threshold: 0.02
      staging_x_offset: 0.75
      staging_yaw_offset: 3.14
      use_external_detection_pose: true
      use_battery_status: false
      use_stall_detection: false
      stall_velocity_threshold: 1.0
      stall_effort_threshold: 1.0
      charging_threshold: 0.5

      external_detection_timeout: 1.0
      external_detection_translation_x: -0.18
      external_detection_translation_y: 0.0
      external_detection_rotation_roll: -1.57
      external_detection_rotation_pitch: 1.57
      external_detection_rotation_yaw: 0.0
      filter_coef: 0.1

    # Dock instances
    dock_database: $(find-pkg-share yahboom_rosmaster_docking)/config/dock_database.yaml

    controller:
      k_phi: 3.0
      k_delta: 2.0
      beta: 0.4
      lambda: 2.0
      v_linear_min: 0.1
      v_linear_max: 0.15
      v_angular_max: 0.75
      slowdown_radius: 0.25
      use_collision_detection: true
      costmap_topic: "/local_costmap/costmap_raw"
      footprint_topic: "/local_costmap/published_footprint"
      transform_tolerance: 0.1
      projection_time: 5.0
      simulation_step: 0.1
      dock_collision_threshold: 0.3

Save the file, and close it.

You can find a detailed list and description of each of these parameters here.

In this scenario, the docking camera is on the front of the robot. Remember ROS conventions:

For the AprilTag (assume we are looking directly at the tag on the docking station):

  • Z-axis points out from the tag (away from the docking station, towards us)
  • X-axis points to the right of the tag
  • Y-axis points up along the tag towards the ceiling

For the robot’s base_link:

  • X-axis points forward
  • Y-axis points left
  • Z-axis points up

The docking pose of the robot is the AprilTag’s pose with certain translations and rotations applied.

The external_detection_* values in the docking parameters define offsets relative to the AprilTag frame to specify where the robot should dock.

We use the right hand rule here to figure out the values for the external_detection translations and rotations.

To properly express the docking pose relative to the AprilTag’s pose, we need to figure out what combination of rotations and translations of the parent AprilTag axes will match up to the desired docking pose.

You can see in the source code that the rotations are applied in this order: pitch (rotation around the y-axis by 90 degrees counterclockwise…1.57 radians), roll (rotation around the x-axis 90 degrees clockwise…-1.57 radians), then yaw (no rotation around the z-axis).

When you use your right hand to perform these rotations, you will see that your hand will align with the expected orientation of the robot when it is parked headfirst into the docking station.

Since the x-axis of the AprilTag points to the right (as if we are looking directly at the tag), we need that axis to point forward so that it is aligned with the robot base link forward direction x-axis.

To do this, we first rotate the AprilTag pose around its y axis (our middle finger). The z-x plane rotates counterclockwise around the positive y-axis 90 degrees. Counterclockwise rotations in robotics are positive by convention, so we have 1.5708 for external_detection_rotation_pitch. Remember pitch is another way of saying rotation around the y-axis.

Now we need to get the z-axis of the AprilTag pointing towards the sky. To do that, we need to rotate clockwise around the positive x-axis 90 degrees (imagine we are looking down at the arrow of the positive x-axis and watching the y-z plane rotate…rotate around your index finger). Clockwise rotations in robotics are negative by convention, so we have -1.5708 for external_detection_rotation_roll. Remember roll is another way of saying rotation around the x-axis.

Now our AprilTag orientation is aligned with the axes of the robot. Your thumb should be pointing towards the ceiling. 

We then move 0.18 meters down the negative x-axis (move your forearm backwards in the direction of your elbow) to get to the desired docking position of the robot. That is why external_detection_translation_x = -0.18.

In robotics and computer graphics, translations and rotations are generally non-commutative, meaning the order in which you apply them produces different results. Remember you need to:

  1. First correct the orientation of the parent frame to match your expected child frame orientation.
  2. Then apply any translational offsets based on that corrected orientation (see the sketch after this list).
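To make the order of operations concrete, here is a small, standalone C++ sketch that uses tf2’s math types to apply the rotation offsets (pitch 1.5708, roll -1.5708) and then the -0.18 m translation to a tag pose. It illustrates the geometry described above under those assumptions; it is not a copy of the SimpleChargingDock implementation.

// offset_order_demo.cpp: show why the rotation offsets are applied before the
// translation offset when deriving the docking pose from the AprilTag pose.
#include <cstdio>

#include <tf2/LinearMath/Quaternion.h>
#include <tf2/LinearMath/Transform.h>
#include <tf2/LinearMath/Vector3.h>

int main()
{
  // Treat the detected tag pose as the identity so the result is easy to read.
  tf2::Transform tag_pose = tf2::Transform::getIdentity();

  // Step 1: rotate the tag frame so its axes line up with the robot's base_link axes
  // (roll = -1.5708 about x, pitch = 1.5708 about y, yaw = 0.0 about z).
  tf2::Quaternion q;
  q.setRPY(-1.5708, 1.5708, 0.0);
  tf2::Transform rotate(q, tf2::Vector3(0.0, 0.0, 0.0));

  // Step 2: translate 0.18 m backwards along the corrected x-axis.
  tf2::Transform translate(tf2::Quaternion::getIdentity(), tf2::Vector3(-0.18, 0.0, 0.0));

  // Rotate first, then translate along the rotated axes.
  tf2::Vector3 dock = (tag_pose * rotate * translate).getOrigin();
  std::printf("Rotate then translate: [%.3f, %.3f, %.3f]\n", dock.x(), dock.y(), dock.z());

  // Swapping the order gives a different point, which is why the order matters.
  tf2::Vector3 swapped = (tag_pose * translate * rotate).getOrigin();
  std::printf("Translate then rotate: [%.3f, %.3f, %.3f]\n", swapped.x(), swapped.y(), swapped.z());

  return 0;
}

With the rotation applied first, the dock pose ends up 0.18 meters out along the tag’s z-axis, which is away from the wall and into the room, exactly where we want the robot to sit.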

Test Docking and Undocking

Now let’s send the robot to the dock using RViz.

Build your workspace.

cd ~/ros2_ws/
colcon build && source ~/.bashrc

Launch navigation.

nav

Once everything has come up, send the robot to the docking station by using the panel on the right side of RViz.

In the Dock id field, type dock0. This is the id of the docking station.

Next to Dock type, select rosmaster_x3_dock.

Click Dock robot.

11-dock-robot

You could also type this in the terminal:

ros2 action send_goal /dock_robot nav2_msgs/action/DockRobot "{use_dock_id: true, dock_id: 'dock0', dock_type: 'rosmaster_x3_dock', max_staging_time: 1000.0, navigate_to_staging_pose: true}"
11b-dock-robot
12-docked
15-robot-at-charging-dock

To undock the robot, either click the Undock robot button in RViz, or type:

ros2 action send_goal /undock_robot nav2_msgs/action/UndockRobot "{dock_type: 'rosmaster_x3_dock', max_undocking_time: 30.0}"

The nav2_docking server doesn’t allow a lot of control over the undocking process. I would prefer to just back out straight to the staging pose. I had best results by shortening the max_undocking_time to 2.0 seconds, letting the action abort, and then sending a regular navigation goal after that. This package is fairly new as of the time of this writing, and definitely needs a little more work to be ready for prime time.

ros2 action send_goal /undock_robot nav2_msgs/action/UndockRobot "{dock_type: 'rosmaster_x3_dock', max_undocking_time: 2.0}"

That’s it.

If you want to take the work we did here to the next level, you can send docking and undocking goals by creating action clients that connect to the /dock_robot and /undock_robot action servers. You can write these clients using the Simple Commander API if you want to use Python, or you can use C++.
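As a starting point, here is a minimal, hypothetical C++ client (not part of the project packages) that sends the same goal as the ros2 action send_goal command above. An undocking client would look almost identical, using nav2_msgs/action/UndockRobot and the /undock_robot action name.

// dock_robot_client.cpp: minimal sketch of a C++ action client for /dock_robot.
#include <chrono>
#include <memory>

#include <nav2_msgs/action/dock_robot.hpp>
#include <rclcpp/rclcpp.hpp>
#include <rclcpp_action/rclcpp_action.hpp>

int main(int argc, char * argv[])
{
  rclcpp::init(argc, argv);
  auto node = rclcpp::Node::make_shared("dock_robot_client");

  using DockRobot = nav2_msgs::action::DockRobot;
  auto client = rclcpp_action::create_client<DockRobot>(node, "dock_robot");

  if (!client->wait_for_action_server(std::chrono::seconds(10))) {
    RCLCPP_ERROR(node->get_logger(), "dock_robot action server not available");
    rclcpp::shutdown();
    return 1;
  }

  // Same fields we passed on the command line with ros2 action send_goal.
  DockRobot::Goal goal;
  goal.use_dock_id = true;
  goal.dock_id = "dock0";
  goal.dock_type = "rosmaster_x3_dock";
  goal.max_staging_time = 1000.0;
  goal.navigate_to_staging_pose = true;

  // Send the goal and wait for the server to accept it.
  auto goal_handle_future = client->async_send_goal(goal);
  if (rclcpp::spin_until_future_complete(node, goal_handle_future) !=
    rclcpp::FutureReturnCode::SUCCESS || !goal_handle_future.get())
  {
    RCLCPP_ERROR(node->get_logger(), "Docking goal was rejected or could not be sent");
    rclcpp::shutdown();
    return 1;
  }

  // Wait for the docking action to finish.
  auto result_future = client->async_get_result(goal_handle_future.get());
  rclcpp::spin_until_future_complete(node, result_future);
  RCLCPP_INFO(node->get_logger(), "Docking action finished");

  rclcpp::shutdown();
  return 0;
}

Whatever package builds a client like this would need rclcpp_action and nav2_msgs added as dependencies.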

Keep building!