Autonomous Navigation for a Mobile Robot Using ROS 2 Jazzy

Introduction

In the previous tutorial, we learned how to use SLAM (Simultaneous Localization and Mapping) to create a map of the environment while keeping track of the robot’s location. Now that we have a map, it’s time to take things to the next level and make our robot navigate autonomously. 

The official autonomous navigation tutorial is here on the Nav2 website.

By the end of this tutorial, you’ll have a robot that can independently navigate from point A to point B while avoiding obstacles. Here is what you will make happen in this tutorial:

go-to-goal-gazebo-nav2-small
go-to-goal-cafe-world-rviz-small

Real-World Applications

Autonomous navigation has numerous applications in the real world. Here are a few examples of how the concepts you’ll learn in this tutorial can be applied:

  • Home service robots: Imagine a robot that can navigate your home to perform tasks like delivering snacks, collecting laundry, or even reminding you to take your medicine. Autonomous navigation enables robots to move around homes safely and efficiently.
  • Warehouse automation: In large warehouses, autonomous mobile robots can be used to transport goods from one location to another. They can navigate through the warehouse aisles, avoid obstacles, and deliver items to the correct storage locations or shipping stations.
  • Agriculture: Autonomous navigation can be used in agricultural robots that perform tasks such as harvesting, planting, or soil analysis. These robots can navigate through fields, greenhouses, or orchards without human intervention, increasing efficiency and reducing labor costs.
  • Hospital and healthcare: In hospitals, autonomous mobile robots can be used to transport medical supplies, medication, or even patients. They can navigate through the hospital corridors and elevators, ensuring timely and safe delivery of essential items.
  • Search and rescue: In emergency situations, autonomous robots can be deployed to search for and rescue people in hazardous environments. These robots can navigate through rubble, collapsed buildings, or other challenging terrains to locate and assist victims.

By mastering autonomous navigation, you’ll be opening doors to a wide range of exciting applications that can benefit various industries and improve people’s lives.

Prerequisites

All my code for this project is located here on GitHub.

Understanding the Configuration File

First, let’s navigate to the folder where the YAML configuration file is located.

Open a terminal and move to the config folder:

cd ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/config/

Now, let’s open the YAML file using a text editor. 

gedit rosmaster_x3_nav2_default_params.yaml

With the file open, we can now explore and understand the different sections and parameters within the configuration file. 

Let’s break down the parts that are most important for autonomous navigation.

amcl (Adaptive Monte Carlo Localization)

AMCL helps the robot determine its location within the map. It uses a particle filter approach, where each particle represents a possible pose (position and orientation) of the robot. 

The parameters in the amcl section control how AMCL updates and resamples these particles based on sensor data and the robot’s motion. 

Key parameters include the number of particles (max_particles), the minimum motion required before a filter update (update_min_d for translation and update_min_a for rotation), and the models for the robot’s motion and the LIDAR (robot_model_type, laser_model_type).

bt_navigator (Behavior Tree Navigator)

The Behavior Tree Navigator is responsible for high-level decision making during navigation. It uses a behavior tree structure to define the logic for tasks like following a path, avoiding obstacles, and recovering from stuck situations. 

The bt_navigator section specifies the behavior tree and the plugins available to it, covering tasks such as following a path (FollowPath), avoiding obstacles, and running recovery behaviors.

controller_server

The controller server handles the execution of the robot’s motion commands. It receives the path from the planner and generates velocity commands to follow that path. 

The parameters in the controller_server section configure the controller plugins, such as the path tracking algorithm (FollowPath), the goal tolerance (xy_goal_tolerance, yaw_goal_tolerance), and the velocity thresholds (min_x_velocity_threshold, min_theta_velocity_threshold). 

It also includes parameters for the progress checker (progress_checker_plugin), which monitors the robot’s progress along the path.

You will notice that we are using the Model Predictive Path Integral (MPPI) controller. I chose this controller because it works well for mecanum-wheeled robots. 

You can find a high-level description of the other controllers on this page.

velocity_smoother

The velocity smoother takes the velocity commands from the controller and smooths them to produce stable robot motion. It reduces sudden changes in velocity and acceleration. 

The velocity_smoother section controls the smoothing algorithm, such as the smoothing frequency (smoothing_frequency), velocity limits (max_velocity, min_velocity), and acceleration limits (max_accel, max_decel).

planner_server

The planner server is responsible for generating paths from the robot’s current position to the goal location. It uses the global costmap to find the optimal path while avoiding obstacles. The planner_server section specifies the planner plugin (GridBased) and its associated parameters, such as the tolerance for the path search (tolerance) and whether to use the A* algorithm (use_astar).
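If you want to query the planner directly, for example to inspect a path before the robot follows it, the nav2_simple_commander Python package exposes the planner through its BasicNavigator class. Here is a minimal sketch, assuming the navigation stack from this tutorial is already running; the start and goal coordinates are arbitrary example values:

#!/usr/bin/env python3
"""Ask the Nav2 planner server for a path without executing it (sketch)."""
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator


def make_pose(navigator, x, y):
    """Build a PoseStamped in the map frame at (x, y), facing +x."""
    pose = PoseStamped()
    pose.header.frame_id = 'map'
    pose.header.stamp = navigator.get_clock().now().to_msg()
    pose.pose.position.x = x
    pose.pose.position.y = y
    pose.pose.orientation.w = 1.0
    return pose


def main():
    rclpy.init()
    navigator = BasicNavigator()
    navigator.waitUntilNav2Active()  # Blocks until Nav2 is up and localized

    start = make_pose(navigator, 0.0, 0.0)  # Example start pose
    goal = make_pose(navigator, 2.0, 1.0)   # Example goal pose

    # Query the planner_server (GridBased plugin) directly
    path = navigator.getPath(start, goal)
    if path is not None:
        print(f'Planned path has {len(path.poses)} poses')

    rclpy.shutdown()


if __name__ == '__main__':
    main()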

smoother_server

The smoother server is responsible for optimizing and smoothing the global path generated by the planner before it’s sent to the controller. This helps create more natural and efficient robot trajectories.

behavior_server

The behavior server handles recovery behaviors when the robot gets stuck or encounters an error during navigation. It includes plugins for actions like spinning in place (Spin), backing up (BackUp), or waiting (Wait). 

collision_monitor

The collision monitor is a safety system that continuously checks for potential collisions and can trigger emergency behaviors when obstacles are detected.

These sections work together to provide a comprehensive configuration for the robot’s autonomous navigation system. By adjusting these parameters in the YAML file, you can fine-tune the robot’s behavior to suit your specific requirements and environment.

Launch Autonomous Navigation

Let’s start navigating. Open a terminal window, and use this command to launch the robot:

nav 

or

bash ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_bringup/scripts/rosmaster_x3_navigation.sh 

In the bash script, change every occurrence of cafe to house if you want to autonomously navigate your robot in the house world instead.

#!/bin/bash
# Single script to launch the Yahboom ROSMASTERX3 with Gazebo, Nav2 and ROS 2 Controllers

cleanup() {
    echo "Cleaning up..."
    sleep 5.0
    pkill -9 -f "ros2|gazebo|gz|nav2|amcl|bt_navigator|nav_to_pose|rviz2|assisted_teleop|cmd_vel_relay|robot_state_publisher|joint_state_publisher|move_to_free|mqtt|autodock|cliff_detection|moveit|move_group|basic_navigator"
}

# Set up cleanup trap
trap 'cleanup' SIGINT SIGTERM

# Check if SLAM argument is provided
if [ "$1" = "slam" ]; then
    SLAM_ARG="slam:=True"
else
    SLAM_ARG="slam:=False"
fi

# For cafe.world -> z:=0.20
# For house.world -> z:=0.05
# To change Gazebo camera pose: gz service -s /gui/move_to/pose --reqtype gz.msgs.GUICamera --reptype gz.msgs.Boolean --timeout 2000 --req "pose: {position: {x: 0.0, y: -2.0, z: 2.0} orientation: {x: -0.2706, y: 0.2706, z: 0.6533, w: 0.6533}}"

echo "Launching Gazebo simulation with Nav2..."
ros2 launch yahboom_rosmaster_bringup rosmaster_x3_navigation_launch.py \
   enable_odom_tf:=false \
   headless:=False \
   load_controllers:=true \
   world_file:=cafe.world \
   use_rviz:=true \
   use_robot_state_pub:=true \
   use_sim_time:=true \
   x:=0.0 \
   y:=0.0 \
   z:=0.20 \
   roll:=0.0 \
   pitch:=0.0 \
   yaw:=0.0 \
   "$SLAM_ARG" \
   map:=/home/ubuntu/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/maps/cafe_world_map.yaml &

echo "Waiting 25 seconds for simulation to initialize..."
sleep 25
echo "Adjusting camera position..."
gz service -s /gui/move_to/pose --reqtype gz.msgs.GUICamera --reptype gz.msgs.Boolean --timeout 2000 --req "pose: {position: {x: 0.0, y: -2.0, z: 2.0} orientation: {x: -0.2706, y: 0.2706, z: 0.6533, w: 0.6533}}"

# Keep the script running until Ctrl+C
wait

Running the bash script will launch the Gazebo simulator with the Yahboom ROSMASTER X3 robot and the necessary navigation nodes.

You should see Gazebo and RViz windows open up. In Gazebo, you will see the simulated environment with the robot. RViz will display the robot’s sensor data, the map, and the navigation-related visualizations.

Wait for all the nodes to initialize and for the robot to be spawned in the Gazebo environment. You can check the terminal output for any error messages or warnings.

Once everything is up and running, you will see the robot in the simulated environment, ready to navigate autonomously.

1-starting-point

Initialize the Location of the Robot Using the 2D Pose Estimate Button in RViz

Before the robot can start navigating autonomously, it needs to know its initial position and orientation (“initial pose”) within the map. This process is known as localization. In this section, we will learn how to set the initial pose of the robot using the “2D Pose Estimate” button in RViz.

1. In the RViz window, locate the “2D Pose Estimate” button in the toolbar at the top.

1-2d-pose-estimate

2. Click on the “2D Pose Estimate” button to activate the pose estimation tool.

3. Move your mouse cursor to the location on the map where you want to set the initial pose of the robot. This should be the robot’s actual starting position in the simulated environment.

4. Click and hold the left mouse button at the desired location on the map.

5. While holding the left mouse button, drag the mouse in the direction that represents the robot’s initial orientation. The arrow will follow your mouse movement, indicating the direction the robot is facing.

6. Release the left mouse button to set the initial pose of the robot.

7. The robot’s localization system (AMCL) will now use this initial pose as a starting point and continuously update its estimated position and orientation based on sensor data and movement commands.

Setting the initial pose is important because it gives the robot a reference point to start localizing itself within the map. Without an accurate initial pose, the robot may have difficulty determining its precise location and orientation, which can lead to navigation issues.

Remember to set the initial pose whenever you restart the autonomous navigation system or if you manually relocate the robot in the simulated environment.
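Under the hood, the “2D Pose Estimate” button publishes a geometry_msgs/PoseWithCovarianceStamped message on the /initialpose topic, which AMCL subscribes to. If you ever want to set the initial pose from code rather than from RViz, here is a minimal sketch; the pose values are example numbers for this world:

#!/usr/bin/env python3
"""Publish an initial pose estimate to AMCL, mimicking the RViz button."""
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseWithCovarianceStamped


class InitialPosePublisher(Node):
    def __init__(self):
        super().__init__('initial_pose_publisher')
        self.publisher_ = self.create_publisher(
            PoseWithCovarianceStamped, '/initialpose', 10)

    def publish_pose(self, x, y):
        msg = PoseWithCovarianceStamped()
        msg.header.frame_id = 'map'
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.pose.pose.position.x = x
        msg.pose.pose.position.y = y
        # Identity quaternion -> robot facing along the map's +x axis
        msg.pose.pose.orientation.w = 1.0
        self.publisher_.publish(msg)
        self.get_logger().info(f'Published initial pose ({x}, {y})')


def main():
    rclpy.init()
    node = InitialPosePublisher()
    node.publish_pose(0.0, 0.0)  # Example: robot starts at the map origin
    rclpy.spin_once(node, timeout_sec=1.0)  # Give the message time to go out
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()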

By the way, for a real-world application, the initial pose can be set automatically using the NVIDIA Isaac ROS Map Localization package. This package uses LIDAR scans and deep learning to automatically estimate the robot’s pose within a pre-built map. It provides a more automated and accurate way of initializing the robot’s location compared to manually setting the pose in RViz.

Send a Goal Pose

Once the robot’s initial pose is set, you can command it to navigate autonomously to a specific goal location on the map. RViz provides an intuitive way to send navigation goals using the “Nav2 Goal” button. Follow these steps to send a goal pose to the robot:

1. In the RViz window, locate the “Nav2 Goal” button in the toolbar at the top.

2-nav2-goal

2. Click on the “Nav2 Goal” button to activate the goal setting tool.

3. Move your mouse cursor to the location on the map where you want the robot to navigate. This will be the goal position.

4. Click and hold the left mouse button at the desired goal location on the map.

5. While holding the left mouse button, drag the mouse in the direction that represents the desired orientation of the robot at the goal position. An arrow will appear, indicating the goal pose.

6. Release the left mouse button to set the goal pose.

7. The robot will now plan a path from its current position to the goal pose, taking into account the obstacles in the map and the configured navigation parameters.

8. Once the path is planned, the robot will start navigating towards the goal pose, following the planned trajectory.

9. As the robot moves, you will see its position and orientation updating in real-time on the map in RViz. 

10. The robot will continue navigating until it reaches the goal pose or until it encounters an obstacle that prevents it from reaching the goal.

You can send multiple goal poses to the robot by repeating the above steps. Each time you set a new goal pose, the robot will replan its path and navigate towards the new goal.

autonomous-navigation-1

Keep in mind that the robot’s ability to reach the goal pose depends on various factors, such as the accuracy of the map, the presence of obstacles, and the configuration of the navigation stack. If the robot is unable to reach the goal pose, it may attempt to replan or abort the navigation task based on the configured behavior.

During autonomous navigation, you can monitor the robot’s progress, path planning, and other relevant information through the RViz visualizations. The navigation stack provides feedback on the robot’s status, including any errors or warnings.

By sending goal poses, you can test the robot’s autonomous navigation capabilities and observe how it handles different scenarios in the simulated environment.
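You can also send goals programmatically. The nav2_simple_commander package wraps the same NavigateToPose action that RViz uses under the hood. A minimal sketch, assuming the simulation from this tutorial is running and using an arbitrary example goal:

#!/usr/bin/env python3
"""Send a single navigation goal to Nav2 and wait for the result (sketch)."""
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult


def main():
    rclpy.init()
    navigator = BasicNavigator()
    navigator.waitUntilNav2Active()  # Waits for Nav2 and a valid initial pose

    goal = PoseStamped()
    goal.header.frame_id = 'map'
    goal.header.stamp = navigator.get_clock().now().to_msg()
    goal.pose.position.x = 2.0   # Example goal, meters in the map frame
    goal.pose.position.y = 1.0
    goal.pose.orientation.w = 1.0

    navigator.goToPose(goal)
    while not navigator.isTaskComplete():
        feedback = navigator.getFeedback()
        if feedback:
            print(f'Distance remaining: {feedback.distance_remaining:.2f} m')

    if navigator.getResult() == TaskResult.SUCCEEDED:
        print('Goal reached!')
    rclpy.shutdown()


if __name__ == '__main__':
    main()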

Send Waypoints

In addition to sending a single goal pose, you can also command the robot to navigate through a sequence of waypoints. Waypoints are intermediate goal positions that the robot should pass through before reaching its final destination. This is useful when you want the robot to follow a specific path or perform tasks at different locations.

Here’s how to do it…

Set the initial pose of the robot by clicking the “2D Pose Estimate” button at the top of the RViz2 screen. 

Then click on the map at the position where the robot actually is in Gazebo, dragging in the direction the robot is facing.

Now click the “Waypoint/Nav Through Poses” mode button in the bottom left corner of RViz. Clicking this button puts the system in waypoint follower mode.

3-waypoint-nav-through-poses-rviz

Click the “Nav2 Goal” button, and click on the map where you would like your robot to go, setting your first waypoint. Click the button again to set another waypoint. Select as many waypoints as you want. 

4-as-many-waypoints-as-you-want

Each waypoint is labeled wp_#, where # is the number of the waypoint.

4-each-waypoint-is-labeled

When you’re ready for the robot to follow the waypoints, click the Start Waypoint Following button.

5-start-waypoint-following

If you want the robot to visit each location without stopping, click the Start Nav Through Poses button.

You should see your robot autonomously navigate to all the waypoints. 
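The same sequence can be scripted. The nav2_simple_commander package exposes both behaviors shown above: followWaypoints (stops at each waypoint) and goThroughPoses (visits the poses without stopping). A minimal sketch with made-up waypoint coordinates; substitute points from your own map:

#!/usr/bin/env python3
"""Send a list of waypoints to the Nav2 waypoint follower (sketch)."""
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator


def make_pose(navigator, x, y):
    """Build a PoseStamped in the map frame at (x, y), facing +x."""
    pose = PoseStamped()
    pose.header.frame_id = 'map'
    pose.header.stamp = navigator.get_clock().now().to_msg()
    pose.pose.position.x = x
    pose.pose.position.y = y
    pose.pose.orientation.w = 1.0
    return pose


def main():
    rclpy.init()
    navigator = BasicNavigator()
    navigator.waitUntilNav2Active()

    # Example waypoints in the map frame; use coordinates from your own map
    waypoints = [make_pose(navigator, 1.0, 0.0),
                 make_pose(navigator, 2.0, 1.5),
                 make_pose(navigator, 0.0, 2.0)]

    # Swap in navigator.goThroughPoses(waypoints) to visit without stopping
    navigator.followWaypoints(waypoints)
    while not navigator.isTaskComplete():
        feedback = navigator.getFeedback()
        if feedback:
            print(f'Executing waypoint {feedback.current_waypoint + 1}')

    print('Finished following waypoints')
    rclpy.shutdown()


if __name__ == '__main__':
    main()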

Check the CPU and Memory Usage

I often like to check the CPU and memory usage. In general, I want to see usage under 50%. Once you get above 80%, you start to run into real performance issues.

sudo apt install htop
htop

My CPU usage isn’t great. You can see it is quite high.

6-htop-not-great

Let’s close the RViz window, and see what we get:

7-much-better

Looks much better.

You can see how RViz really is a resource hog. For this reason, for the robots I develop professionally, I do not run RViz in a production environment. I only use it during development and for debugging.

That’s it! Keep building.

Building a Map of the Environment Using SLAM – ROS 2 Jazzy

In this tutorial, we will explore SLAM (Simultaneous Localization and Mapping), a fundamental concept in robotics. SLAM allows a robot to create a map of its environment while simultaneously keeping track of its own location within that map. It’s like giving a robot the ability to explore an unknown area, remember where it has been, and use that information to navigate effectively. SLAM combines data from the robot’s sensors, such as cameras, LIDAR, and odometry, to incrementally build a consistent map and estimate the robot’s trajectory.

By the end of this tutorial, you will be able to build this:

mapping-slam-ros2-rviz

Real-World Applications

Real-world applications of what you will learn in this tutorial include:

  • Autonomous navigation for robots in various environments, such as homes, offices, or outdoor settings
  • Mapping and exploration of unknown or dynamic environments
  • Search and rescue operations where a robot needs to navigate and map an area

Prerequisites

All my code for this project is located here on GitHub.

Create Folders to Store Maps and Configuration Files

Open a new terminal window, and type:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/

Add two new folders.

mkdir -p config maps

Create a Configuration File for Nav2

Create the File

Now let’s create a file that stores the values for the parameters defined for the Nav2 nodes. These parameters control how the robot builds a map, localizes itself, and navigates.

cd config
touch rosmaster_x3_nav2_default_params.yaml

Add this code.

Save the file.

This file has a lot of values. Don’t be intimidated. Over time you will get to know the meaning of each of these parameters.

If you want a detailed understanding of these parameters, go to this page.

Let’s discuss the main nodes that are responsible for mapping, localization, and obstacle detection.

Understanding the slam_toolbox

The slam_toolbox package helps the robot build a map of its environment as it moves around. SLAM stands for Simultaneous Localization and Mapping. As the robot explores, it uses its sensors, such as the LIDAR, to detect obstacles. It localizes itself relative to those obstacles to figure out where it is on the map. At the same time, it adds the obstacle information to the map to build a complete picture of the environment.

The slam_toolbox is responsible for publishing the coordinate transformation between the map frame and the odometry (odom) frame. This transformation allows the robot to understand its position and orientation within the map as it moves.
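Because slam_toolbox publishes map -> odom and the odometry source publishes odom -> base_link, you can ask TF2 for the robot’s full pose in the map frame at any time. Here is a minimal rclpy sketch:

#!/usr/bin/env python3
"""Print the robot's pose in the map frame using TF2 (sketch)."""
import rclpy
from rclpy.node import Node
from rclpy.time import Time
from tf2_ros import Buffer, TransformException, TransformListener


class PoseEcho(Node):
    def __init__(self):
        super().__init__('pose_echo')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.timer = self.create_timer(1.0, self.on_timer)

    def on_timer(self):
        try:
            # map -> base_link chains map -> odom (from slam_toolbox)
            # with odom -> base_link (from the odometry source)
            t = self.tf_buffer.lookup_transform('map', 'base_link', Time())
            p = t.transform.translation
            self.get_logger().info(f'Robot at x={p.x:.2f}, y={p.y:.2f}')
        except TransformException as ex:
            self.get_logger().warn(f'Transform not available yet: {ex}')


def main():
    rclpy.init()
    rclpy.spin(PoseEcho())
    rclpy.shutdown()


if __name__ == '__main__':
    main()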

Understanding the amcl

The amcl (Adaptive Monte Carlo Localization) package helps the robot figure out where it is on a known map. This package is important once you have a complete map and want the robot to navigate autonomously.

amcl takes the map and LIDAR data as inputs. It then estimates the robot’s position and orientation by comparing the LIDAR scans to the expected readings at each possible position on the map. It uses a particle filter approach, maintaining a set of “particles” that represent guesses of the robot’s state.

Understanding the global_costmap

A costmap is a special version of the map used for navigation. In the costmap, each cell has a value representing how difficult or dangerous it is to traverse. A value of 0 indicates completely free space, while 254 marks lethal obstacles. The special value 253 marks inflated areas around obstacles, and 255 indicates unknown space. Values from 1 to 252 represent increasing levels of navigation difficulty.

The navigation algorithms use these cost values to plan paths that steer the robot away from obstacles. You can picture it like hills on a landscape – the robot will naturally take the “valleys” that have low cost while avoiding the “peaks” of high-cost areas.
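As a concrete illustration, here is a small Python sketch that interprets a single cell value using the conventions described above:

# Interpret a single costmap cell value, following the nav2_costmap_2d
# conventions described above.
def describe_cost(cost: int) -> str:
    """Map a costmap cell value (0-255) to a human-readable category."""
    if cost == 0:
        return 'free space'
    if 1 <= cost <= 252:
        return f'traversable, increasing difficulty (cost {cost})'
    if cost == 253:
        return 'inflated (inscribed) obstacle area'
    if cost == 254:
        return 'lethal obstacle'
    return 'unknown space'  # 255


for value in (0, 100, 253, 254, 255):
    print(value, '->', describe_cost(value))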

The global_costmap is used for long-term planning over the entire known map. It has several plugins or “layers” that track different kinds of data:

  • The static_layer holds a copy of the map from the map server.
  • The obstacle_layer tracks obstacles detected by the robot’s sensors (typically LIDAR).
  • The voxel_layer tracks 3D data if available (typically from an RGB-D camera).
  • The inflation_layer adds extra cost around obstacles so the robot keeps a safer distance.

Understanding the local_costmap

The local_costmap is similar to the global one, but it’s used for short-term planning in a small area right around the robot.

The Difference Between the Global and the Local Costmap

The key differences between the global and local costmap are as follows:

  • The global costmap covers the entire known environment, while the local costmap only covers a small area around the robot.
  • The global costmap is used for big-picture planning of the best route given known obstacles (e.g. walls, sofas, etc.), while the local costmap is used for immediate decisions about avoiding both known (walls, sofas, etc.) and unknown obstacles (e.g. people walking through the environment).

Let’s use an analogy to make this clearer:

  • Global Costmap: Imagine you are planning a road trip across the country. You use a large map to plot the best route from your starting point to your destination, considering major highways and cities. This is like the global costmap, helping you plan the overall journey.
  • Local Costmap: Now, imagine you are driving through your neighborhood. You pay close attention to the streets and obstacles immediately around you, like parked cars or pedestrians, and make quick decisions to navigate through them. This is like the local costmap, helping you with immediate navigation.

So bottom line…the global costmap handles long-term path planning over a wide area, while the local costmap deals with real-time, short-term obstacle avoidance around the robot. By working together, these costmaps enable robots to navigate their environments efficiently and safely.

Create a Velocity Relay Node

We need to create a node that subscribes to the /cmd_vel topic (geometry_msgs/msg/Twist) and republishes each message to the /mecanum_drive_controller/cmd_vel topic (geometry_msgs/msg/TwistStamped).

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/src
touch cmd_vel_relay_node.cpp

Add this code:

/**
 * @file cmd_vel_relay_node.cpp
 * @brief Relay node that converts Twist messages to TwistStamped messages
 *
 * This program subscribes to velocity commands published as Twist messages and
 * republishes them as TwistStamped messages. This is useful when interfacing
 * between different ROS2 nodes that expect different message types for velocity
 * commands.
 *
 * Subscription Topics:
 *     /cmd_vel (geometry_msgs/Twist): Raw velocity commands
 *
 * Publishing Topics:
 *     /mecanum_drive_controller/cmd_vel (geometry_msgs/TwistStamped):
 *     Timestamped velocity commands
 *
 * @author Addison Sears-Collins
 * @date November 30, 2024
 */

// Include necessary header files
#include <memory>  // For smart pointers
#include "rclcpp/rclcpp.hpp"  // Main ROS2 C++ library
#include "geometry_msgs/msg/twist.hpp"  // For Twist messages
#include "geometry_msgs/msg/twist_stamped.hpp"  // For TwistStamped messages

/**
 * @brief A ROS2 node that relays velocity commands between different message types
 *
 * This class creates a node that subscribes to Twist messages and republishes
 * them as TwistStamped messages, adding a timestamp and frame ID.
 */
class CmdVelRelay : public rclcpp::Node {
public:
    /**
     * @brief Constructor for the CmdVelRelay node
     *
     * Initializes the node, creates a subscription to the input topic,
     * and sets up a publisher for the output topic.
     */
    CmdVelRelay() : Node("cmd_vel_relay") {
        // Create subscription to /cmd_vel topic
        // The '10' represents the queue size - how many messages to store if we can't process them fast enough
        subscription_ = this->create_subscription<geometry_msgs::msg::Twist>(
            "/cmd_vel", 10,
            std::bind(&CmdVelRelay::cmd_vel_callback, this, std::placeholders::_1));

        // Create publisher for /mecanum_drive_controller/cmd_vel topic
        // Again, '10' is the queue size for outgoing messages
        publisher_ = this->create_publisher<geometry_msgs::msg::TwistStamped>(
            "/mecanum_drive_controller/cmd_vel", 10);

        // Log a message indicating the node has started
        RCLCPP_INFO(this->get_logger(), "Velocity relay node started");
    }

private:
    /**
     * @brief Callback function that handles incoming Twist messages
     *
     * This function is called whenever a new message arrives on the /cmd_vel topic.
     * It creates a new TwistStamped message, copies the velocity data, adds
     * a timestamp and frame ID, and publishes the result.
     *
     * @param msg Shared pointer to the incoming Twist message
     */
    void cmd_vel_callback(const geometry_msgs::msg::Twist::SharedPtr msg) {
        // Create a new TwistStamped message
        auto stamped_msg = std::make_unique<geometry_msgs::msg::TwistStamped>();

        // Set the timestamp to current time
        stamped_msg->header.stamp = this->now();
        // Set the frame ID (coordinate frame this velocity is expressed in)
        stamped_msg->header.frame_id = "base_link";

        // Copy the twist message content (linear and angular velocities)
        stamped_msg->twist = *msg;

        // Publish the stamped message
        // std::move is used for efficient transfer of the message
        publisher_->publish(std::move(stamped_msg));
    }

    // Declare class member variables for the subscriber and publisher
    rclcpp::Subscription<geometry_msgs::msg::Twist>::SharedPtr subscription_;
    rclcpp::Publisher<geometry_msgs::msg::TwistStamped>::SharedPtr publisher_;
};

/**
 * @brief Main function that initializes and runs the node
 *
 * @param argc Number of command line arguments
 * @param argv Array of command line arguments
 * @return int Exit status
 */
int main(int argc, char* argv[]) {
    // Initialize ROS2
    rclcpp::init(argc, argv);

    // Create an instance of our node
    auto node = std::make_shared<CmdVelRelay>();

    // Spin the node, making it respond to incoming messages
    rclcpp::spin(node);

    // Clean up ROS2 resources
    rclcpp::shutdown();

    return 0;
}

Save the file, and close it. 

cd ..

Add these lines to CMakeLists.txt:

add_executable(cmd_vel_relay src/cmd_vel_relay_node.cpp)
ament_target_dependencies(cmd_vel_relay
  rclcpp
  geometry_msgs
)

install(TARGETS
  cmd_vel_relay
  DESTINATION lib/${PROJECT_NAME}
)

Add these lines to package.xml:

<depend>rclcpp</depend>
<depend>geometry_msgs</depend>

Edit the Launch File

To be able to map our environment, we need to edit our launch file.

Open up a new terminal window, and type:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_bringup/launch

Open rosmaster_x3_navigation_launch.py.

Add this code.

#!/usr/bin/env python3
"""
Launch Nav2 for the Yahboom ROSMASTER X3 robot in Gazebo.

This launch file sets up a complete ROS 2 navigation environment.

:author: Addison Sears-Collins
:date: November 30, 2024
"""

import os
from launch import LaunchDescription
from launch.actions import DeclareLaunchArgument, IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node
from launch_ros.substitutions import FindPackageShare


def generate_launch_description():
    """
    Generate a launch description.

    Returns:
        LaunchDescription: A complete launch description for the robot.
    """
    # Constants for paths to different packages
    package_name_gazebo = 'yahboom_rosmaster_gazebo'
    package_name_localization = 'yahboom_rosmaster_localization'
    package_name_navigation = 'yahboom_rosmaster_navigation'

    # Launch and config file paths
    gazebo_launch_file_path = 'launch/yahboom_rosmaster.gazebo.launch.py'
    ekf_launch_file_path = 'launch/ekf_gazebo.launch.py'
    ekf_config_file_path = 'config/ekf.yaml'
    map_file_path = 'maps/cafe_world_map.yaml'
    nav2_params_path = 'config/rosmaster_x3_nav2_default_params.yaml'
    rviz_config_file_path = 'rviz/nav2_default_view.rviz'

    # Set the path to different packages
    pkg_share_gazebo = FindPackageShare(package=package_name_gazebo).find(package_name_gazebo)
    pkg_share_localization = FindPackageShare(
        package=package_name_localization).find(package_name_localization)
    pkg_share_navigation = FindPackageShare(
        package=package_name_navigation).find(package_name_navigation)

    # Set default paths
    default_gazebo_launch_path = os.path.join(pkg_share_gazebo, gazebo_launch_file_path)
    default_ekf_launch_path = os.path.join(pkg_share_localization, ekf_launch_file_path)
    default_ekf_config_path = os.path.join(pkg_share_localization, ekf_config_file_path)
    default_rviz_config_path = os.path.join(pkg_share_navigation, rviz_config_file_path)

    nav2_dir = FindPackageShare(package='nav2_bringup').find('nav2_bringup')
    nav2_launch_dir = os.path.join(nav2_dir, 'launch')
    nav2_params_path = os.path.join(pkg_share_navigation, nav2_params_path)
    static_map_path = os.path.join(pkg_share_navigation, map_file_path)

    # Launch configuration variables
    # Config and launch files
    autostart = LaunchConfiguration('autostart')
    enable_odom_tf = LaunchConfiguration('enable_odom_tf')
    ekf_config_file = LaunchConfiguration('ekf_config_file')
    ekf_launch_file = LaunchConfiguration('ekf_launch_file')
    gazebo_launch_file = LaunchConfiguration('gazebo_launch_file')
    map_yaml_file = LaunchConfiguration('map')
    namespace = LaunchConfiguration('namespace')
    nav2_params_file = LaunchConfiguration('nav2_params_file')
    rviz_config_file = LaunchConfiguration('rviz_config_file')
    slam = LaunchConfiguration('slam')
    use_composition = LaunchConfiguration('use_composition')
    use_namespace = LaunchConfiguration('use_namespace')
    use_respawn = LaunchConfiguration('use_respawn')

    # Robot configuration
    robot_name = LaunchConfiguration('robot_name')
    world_file = LaunchConfiguration('world_file')

    # Position and orientation
    x = LaunchConfiguration('x')
    y = LaunchConfiguration('y')
    z = LaunchConfiguration('z')
    roll = LaunchConfiguration('roll')
    pitch = LaunchConfiguration('pitch')
    yaw = LaunchConfiguration('yaw')

    # Feature flags related to Gazebo and the robot description
    headless = LaunchConfiguration('headless')
    jsp_gui = LaunchConfiguration('jsp_gui')
    load_controllers = LaunchConfiguration('load_controllers')
    use_gazebo = LaunchConfiguration('use_gazebo')
    use_robot_state_pub = LaunchConfiguration('use_robot_state_pub')
    use_rviz = LaunchConfiguration('use_rviz')
    use_sim_time = LaunchConfiguration('use_sim_time')

    # Declare all launch arguments
    # Config and launch files
    declare_autostart_cmd = DeclareLaunchArgument(
        name='autostart',
        default_value='true',
        description='Automatically startup the Nav2 stack')

    declare_ekf_config_file_cmd = DeclareLaunchArgument(
        name='ekf_config_file',
        default_value=default_ekf_config_path,
        description='Full path to the EKF configuration YAML file')

    declare_ekf_launch_file_cmd = DeclareLaunchArgument(
        name='ekf_launch_file',
        default_value=default_ekf_launch_path,
        description='Full path to the EKF launch file to use')

    declare_enable_odom_tf_cmd = DeclareLaunchArgument(
        name='enable_odom_tf',
        default_value='true',
        choices=['true', 'false'],
        description='Whether to enable odometry transform broadcasting via ROS 2 Control')

    declare_gazebo_launch_file_cmd = DeclareLaunchArgument(
        name='gazebo_launch_file',
        default_value=default_gazebo_launch_path,
        description='Full path to the Gazebo launch file to use')

    declare_map_yaml_cmd = DeclareLaunchArgument(
        name='map',
        default_value=static_map_path,
        description='Full path to map file to load')

    declare_namespace_cmd = DeclareLaunchArgument(
        name='namespace',
        default_value='',
        description='Top-level namespace')

    declare_nav2_params_file_cmd = DeclareLaunchArgument(
        name='nav2_params_file',
        default_value=nav2_params_path,
        description='Full path to the ROS2 parameters file to use for navigation nodes')

    declare_rviz_config_file_cmd = DeclareLaunchArgument(
        name='rviz_config_file',
        default_value=default_rviz_config_path,
        description='Full path to the RVIZ config file to use')

    # Robot configuration
    declare_robot_name_cmd = DeclareLaunchArgument(
        name='robot_name',
        default_value='rosmaster_x3',
        description='The name for the robot')

    declare_slam_cmd = DeclareLaunchArgument(
        name='slam',
        default_value='False',
        description='Whether to run SLAM')

    declare_use_namespace_cmd = DeclareLaunchArgument(
        name='use_namespace',
        default_value='false',
        description='Whether to apply a namespace to the navigation stack')

    declare_world_cmd = DeclareLaunchArgument(
        name='world_file',
        default_value='cafe.world',
        description='World file name (e.g., empty.world, house.world)')

    # Position arguments
    declare_x_cmd = DeclareLaunchArgument(
        name='x',
        default_value='0.0',
        description='x component of initial position, meters')

    declare_y_cmd = DeclareLaunchArgument(
        name='y',
        default_value='0.0',
        description='y component of initial position, meters')

    declare_z_cmd = DeclareLaunchArgument(
        name='z',
        default_value='0.20',
        description='z component of initial position, meters')

    # Orientation arguments
    declare_roll_cmd = DeclareLaunchArgument(
        name='roll',
        default_value='0.0',
        description='roll angle of initial orientation, radians')

    declare_pitch_cmd = DeclareLaunchArgument(
        name='pitch',
        default_value='0.0',
        description='pitch angle of initial orientation, radians')

    declare_yaw_cmd = DeclareLaunchArgument(
        name='yaw',
        default_value='0.0',
        description='yaw angle of initial orientation, radians')

    # Feature flags
    declare_headless_cmd = DeclareLaunchArgument(
        name='headless',
        default_value='False',
        description='Whether to execute gzclient (visualization)')

    declare_jsp_gui_cmd = DeclareLaunchArgument(
        name='jsp_gui',
        default_value='false',
        description='Flag to enable joint_state_publisher_gui')

    declare_load_controllers_cmd = DeclareLaunchArgument(
        name='load_controllers',
        default_value='true',
        description='Flag to enable loading of ROS 2 controllers')

    declare_use_composition_cmd = DeclareLaunchArgument(
        name='use_composition',
        default_value='True',
        description='Whether to use composed bringup')

    declare_use_respawn_cmd = DeclareLaunchArgument(
        name='use_respawn',
        default_value='False',
        description='Whether to respawn if a node crashes. Applied when composition is disabled.')

    declare_use_gazebo_cmd = DeclareLaunchArgument(
        name='use_gazebo',
        default_value='true',
        description='Flag to enable Gazebo')

    declare_use_robot_state_pub_cmd = DeclareLaunchArgument(
        name='use_robot_state_pub',
        default_value='true',
        description='Flag to enable robot state publisher')

    declare_use_rviz_cmd = DeclareLaunchArgument(
        name='use_rviz',
        default_value='true',
        description='Flag to enable RViz')

    declare_use_sim_time_cmd = DeclareLaunchArgument(
        name='use_sim_time',
        default_value='true',
        description='Use simulation (Gazebo) clock if true')

    # Specify the actions

    # Start the node that relays /cmd_vel to /mecanum_drive_controller/cmd_vel
    start_cmd_vel_relay_cmd = Node(
        package='yahboom_rosmaster_navigation',
        executable='cmd_vel_relay',
        name='cmd_vel_relay',
        output='screen',
        parameters=[{'use_sim_time': use_sim_time}]
    )

    # Start Extended Kalman Filter node from the robot_localization ROS 2 package
    start_ekf_cmd = IncludeLaunchDescription(
        PythonLaunchDescriptionSource([ekf_launch_file]),
        launch_arguments={
            'ekf_config_file': ekf_config_file,
            'use_sim_time': use_sim_time
        }.items()
    )

    # Start Gazebo
    start_gazebo_cmd = IncludeLaunchDescription(
        PythonLaunchDescriptionSource([gazebo_launch_file]),
        launch_arguments={
            'enable_odom_tf': enable_odom_tf,
            'headless': headless,
            'jsp_gui': jsp_gui,
            'load_controllers': load_controllers,
            'robot_name': robot_name,
            'rviz_config_file': rviz_config_file,
            'use_rviz': use_rviz,
            'use_gazebo': use_gazebo,
            'use_robot_state_pub': use_robot_state_pub,
            'use_sim_time': use_sim_time,
            'world_file': world_file,
            'x': x,
            'y': y,
            'z': z,
            'roll': roll,
            'pitch': pitch,
            'yaw': yaw
        }.items()
    )

    # Launch the ROS 2 Navigation Stack
    start_ros2_navigation_cmd = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(os.path.join(nav2_launch_dir, 'bringup_launch.py')),
        launch_arguments={
            'namespace': namespace,
            'use_namespace': use_namespace,
            'slam': slam,
            'map': map_yaml_file,
            'use_sim_time': use_sim_time,
            'params_file': nav2_params_file,
            'autostart': autostart,
            'use_composition': use_composition,
            'use_respawn': use_respawn,
        }.items()
    )

    # Create the launch description and populate
    ld = LaunchDescription()

    # Add all launch arguments
    # Config and launch files
    ld.add_action(declare_autostart_cmd)
    ld.add_action(declare_enable_odom_tf_cmd)
    ld.add_action(declare_ekf_config_file_cmd)
    ld.add_action(declare_ekf_launch_file_cmd)
    ld.add_action(declare_gazebo_launch_file_cmd)
    ld.add_action(declare_map_yaml_cmd)
    ld.add_action(declare_namespace_cmd)
    ld.add_action(declare_nav2_params_file_cmd)
    ld.add_action(declare_rviz_config_file_cmd)
    ld.add_action(declare_slam_cmd)
    ld.add_action(declare_use_composition_cmd)
    ld.add_action(declare_use_namespace_cmd)
    ld.add_action(declare_use_respawn_cmd)

    # Robot configuration
    ld.add_action(declare_robot_name_cmd)
    ld.add_action(declare_world_cmd)

    # Position declarations
    ld.add_action(declare_x_cmd)
    ld.add_action(declare_y_cmd)
    ld.add_action(declare_z_cmd)

    # Orientation declarations
    ld.add_action(declare_roll_cmd)
    ld.add_action(declare_pitch_cmd)
    ld.add_action(declare_yaw_cmd)

    # Feature flags
    ld.add_action(declare_headless_cmd)
    ld.add_action(declare_jsp_gui_cmd)
    ld.add_action(declare_load_controllers_cmd)
    ld.add_action(declare_use_gazebo_cmd)
    ld.add_action(declare_use_robot_state_pub_cmd)
    ld.add_action(declare_use_rviz_cmd)
    ld.add_action(declare_use_sim_time_cmd)

    # Add any actions
    ld.add_action(start_cmd_vel_relay_cmd)
    ld.add_action(start_ekf_cmd)
    ld.add_action(start_gazebo_cmd)
    ld.add_action(start_ros2_navigation_cmd)

    return ld

Save the file, and close it.

Edit CMakeLists.txt

Edit the CMakeLists.txt file for the yahboom_rosmaster_navigation package.

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/

Make sure to add these lines:

…

install(
  DIRECTORY config maps rviz
  DESTINATION share/${PROJECT_NAME}
)

if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  # the following line skips the linter which checks for copyrights
  # comment the line when a copyright and license is added to all source files
  set(ament_cmake_copyright_FOUND TRUE)
  # the following line skips cpplint (only works in a git repo)
  # comment the line when this package is in a git repo and when
  # a copyright and license is added to all source files
  set(ament_cmake_cpplint_FOUND TRUE)
  ament_lint_auto_find_test_dependencies()
endif()

ament_package()

Save the file, and close it.

Edit the Launch Script

Open up a new terminal window, and type:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_bringup/scripts/

Edit rosmaster_x3_navigation.sh so we can pass a boolean variable called slam to it.

#!/bin/bash
# Single script to launch the Yahboom ROSMASTERX3 with Gazebo, Nav2 and ROS 2 Controllers

cleanup() {
    echo "Cleaning up..."
    sleep 5.0
    pkill -9 -f "ros2|gazebo|gz|nav2|amcl|bt_navigator|nav_to_pose|rviz2|assisted_teleop|cmd_vel_relay|robot_state_publisher|joint_state_publisher|move_to_free|mqtt|autodock|cliff_detection|moveit|move_group|basic_navigator"
}

# Set up cleanup trap
trap 'cleanup' SIGINT SIGTERM

# Check if SLAM argument is provided
if [ "$1" = "slam" ]; then
    SLAM_ARG="slam:=True"
else
    SLAM_ARG="slam:=False"
fi

# For cafe.world -> z:=0.20
# For house.world -> z:=0.05
# To change Gazebo camera pose: gz service -s /gui/move_to/pose --reqtype gz.msgs.GUICamera --reptype gz.msgs.Boolean --timeout 2000 --req "pose: {position: {x: 0.0, y: -2.0, z: 2.0} orientation: {x: -0.2706, y: 0.2706, z: 0.6533, w: 0.6533}}"

echo "Launching Gazebo simulation with Nav2..."
ros2 launch yahboom_rosmaster_bringup rosmaster_x3_navigation_launch.py \
   enable_odom_tf:=false \
   headless:=False \
   load_controllers:=true \
   world_file:=cafe.world \
   use_rviz:=true \
   use_robot_state_pub:=true \
   use_sim_time:=true \
   x:=0.0 \
   y:=0.0 \
   z:=0.20 \
   roll:=0.0 \
   pitch:=0.0 \
   yaw:=0.0 \
   "$SLAM_ARG" \
   map:=/home/ubuntu/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/maps/cafe_world_map.yaml &

echo "Waiting 25 seconds for simulation to initialize..."
sleep 25
echo "Adjusting camera position..."
gz service -s /gui/move_to/pose --reqtype gz.msgs.GUICamera --reptype gz.msgs.Boolean --timeout 2000 --req "pose: {position: {x: 0.0, y: -2.0, z: 2.0} orientation: {x: -0.2706, y: 0.2706, z: 0.6533, w: 0.6533}}"

# Keep the script running until Ctrl+C
wait

Save the file, and close it.

Build the Package

Now let’s build our new package:

cd ~/ros2_ws
colcon build 
source ~/.bashrc

Launch the Robot and Mapping

Let’s start mapping and bring our robot to life. Open a terminal window, and use this command to launch the robot:

nav slam

or

bash ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_bringup/scripts/rosmaster_x3_navigation.sh slam

This command launches the Gazebo simulation with the Yahboom ROSMASTER X3 robot, and sets the slam argument to True. This tells the launch file to start the slam_toolbox node, which will perform SLAM as the robot moves around.

creating-a-map-slam-ros2-jazzy
creating-a-map-slam-ros2-jazzy-2
creating-a-map-slam-ros2-jazzy-4

Let’s check out the velocity data. We have many topics, which you can see if you type this command:

ros2 topic list

Let’s go through each velocity topic to understand the publishers and subscribers of each topic.

ros2 topic info /cmd_vel_nav -v

/cmd_vel_nav (geometry_msgs/msg/Twist)

  • Publishers: controller_server and behavior_server
  • Subscribers: velocity_smoother

ros2 topic info /cmd_vel_smoothed -v

/cmd_vel_smoothed (geometry_msgs/msg/Twist)

  • Publishers: velocity_smoother
  • Subscribers: collision_monitor

ros2 topic info /cmd_vel -v

/cmd_vel (geometry_msgs/msg/Twist)

  • Publishers: collision_monitor and docking_server
  • Subscribers: cmd_vel_relay

ros2 topic info /mecanum_drive_controller/cmd_vel -v

/mecanum_drive_controller/cmd_vel (geometry_msgs/msg/TwistStamped)

  • Publishers: cmd_vel_relay
  • Subscribers: ros_gz_bridge (Gazebo) and mecanum_drive_controller (ROS 2 Control)

ros2 topic info /cmd_vel_teleop -v

/cmd_vel_teleop (geometry_msgs/msg/Twist)

  • Publishers: None
  • Subscribers: behavior_server

In RViz, you’ll notice a visualization of the robot in an empty grey space. This grey space will fill in with the map as we move the robot around. Let’s do that now.

You can now either drive the robot around manually by moving the virtual joystick:

rqt_robot_steering

Set the topic to /cmd_vel.
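As an alternative to rqt_robot_steering, you can publish the velocity commands yourself. Here is a minimal rclpy sketch that creeps the robot forward; the relay node we wrote earlier converts these messages for the mecanum drive controller. Press Ctrl+C to stop:

#!/usr/bin/env python3
"""Drive the robot forward slowly by publishing to /cmd_vel (sketch)."""
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist


class SlowDriver(Node):
    def __init__(self):
        super().__init__('slow_driver')
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.tick)  # Publish at 10 Hz

    def tick(self):
        msg = Twist()
        msg.linear.x = 0.05  # m/s; slow speeds give cleaner maps
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(SlowDriver())
    rclpy.shutdown()


if __name__ == '__main__':
    main()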

Or you can send the robot to goals at the edge of the white (i.e. free space). 

Explore all areas of the simulated environment to ensure good map coverage.

As you move the robot, you will see the map update in real-time in RViz:

  • Grey areas represent unexplored space
  • White areas represent free space that the robot has mapped with its laser scanner
  • Black areas represent obstacles or walls that the robot has detected

The map will gradually fill in as the robot covers more ground. Drive the robot around until you have a complete map of the environment.

Some tips for mapping for real-world robots:

  • Drive slowly and smoothly to minimize odometry errors and ensure clear laser scans
  • Overlap your paths to give the SLAM algorithm plenty of opportunities to “close the loop” and correct for accumulated errors
  • If the map starts to look distorted or misaligned, you may need to adjust the SLAM parameters or start over in a new area

Once you’re satisfied with your map, you can save it using the map_saver utility, which we’ll cover in the next section.

Save the Map

When you see that you have a good map that you want to save, do this:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/maps/
ros2 run nav2_map_server map_saver_cli -f cafe_world_map

The syntax is:

ros2 run nav2_map_server map_saver_cli -f <map_name>

Your cafe_world_map.pgm and cafe_world_map.yaml files will automatically save to the current directory.

Let’s go through the fields in the YAML file. This file provides all the necessary information to interpret the cafe_world_map.pgm image correctly, including how to scale it to real-world dimensions, where to place it in the world, and how to differentiate between free space, obstacles, and unknown areas:

image: cafe_world_map.pgm

This specifies the filename of the PGM (Portable Gray Map) image file that represents the actual occupancy grid map.

mode: trinary

The mode parameter sets the interpretation of the map data in the image. “trinary” means each cell in the resulting map takes one of three values: free, occupied, or unknown. With the default thresholds, white pixels (254) become free space, black pixels (0) become occupied space, and gray pixels (205) become unknown space.

resolution: 0.05

Resolution is the real-world size in meters of each pixel in the map image. Here, each pixel represents a 5cm x 5cm area.

origin: [-5.03, -11.1, 0] (for example)

This parameter sets the pose of the map image in the real world. The first two numbers are the x and y coordinates of the bottom-left corner of the map image: with this example value, the corner sits at -5.03 meters on the x-axis and -11.1 meters on the y-axis from the map frame’s [x=0, y=0] location. The third number is the orientation (rotation around the z-axis) in radians; here it is 0, meaning no rotation.

negate: 0

This indicates whether the colors in the image need to be inverted. A value of 0 means no inversion is needed. It’s like saying the colors in the picture are already correct and don’t need to be flipped.

occupied_thresh: 0.65

This sets the threshold for determining which cells represent occupied space (e.g., walls, obstacles). If a pixel’s occupancy probability (derived from how dark it is) is above 0.65, the cell is considered occupied.

free_thresh: 0.25

This sets the threshold for determining which cells represent free space (e.g., navigable areas). If a pixel’s occupancy probability is below 0.25, the cell is considered free.
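To see how these fields fit together, here is a short Python sketch that converts a pixel coordinate in the map image into map-frame coordinates. The resolution and origin match the example values above; the image height is a made-up example value. Remember that image rows count down from the top, while the map’s y-axis points up from the bottom-left origin:

# Convert a map image pixel (row, col) to map-frame coordinates (x, y),
# using the resolution and origin fields from the map YAML file.
RESOLUTION = 0.05                    # meters per pixel
ORIGIN_X, ORIGIN_Y = -5.03, -11.1    # bottom-left corner of the image, meters
IMAGE_HEIGHT = 400                   # image height in pixels (example value)


def pixel_to_world(row: int, col: int) -> tuple[float, float]:
    """Return the (x, y) map coordinates of the center of a pixel."""
    x = ORIGIN_X + (col + 0.5) * RESOLUTION
    # Row 0 is the TOP of the image, so flip it relative to the origin
    y = ORIGIN_Y + (IMAGE_HEIGHT - row - 0.5) * RESOLUTION
    return x, y


# The bottom-left pixel lands half a cell in from the origin
print(pixel_to_world(IMAGE_HEIGHT - 1, 0))   # ~(-5.005, -11.075)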

View and Edit the Map

To view and edit the map, I recommend using a program like GIMP.

Install GIMP.

sudo apt-get update
sudo apt-get install gimp
cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/maps/

Run GIMP.

gimp cafe_world_map.pgm

Once the map is loaded in GIMP, you can edit it. Here are some tips:

  • Use the Pencil or Brush tool to draw on the map. Drawing with black (RGB 0,0,0) will add obstacles/walls. Drawing with white (RGB 255,255,255) will erase obstacles and create free space. I like to draw with white to clean out salt and pepper noise from the LIDAR.
  • Use the Eraser tool to remove obstacles and create free space.
  • Focus on cleaning up major errors like incorrect walls or obstacles. Don’t get too caught up in tiny details.
  • Always make a backup of your original map files before editing, in case you make a mistake or want to revert changes.

1-edit-the-map-using-gimp

After making your edits, save the file with File -> Export. Make sure to export it in the .pgm format with the same name as the original map file.

With your map cleaned up and saved, your robot will have a more accurate representation of its environment for navigation. 

That’s it. Keep building!

creating-map-house-world

Exploring the Common Sensors for Nav2

In this tutorial, we will explore the various sensors commonly used in mobile robotics for navigation and mapping with Nav2.

Mobile robots rely on a wide array of sensors to perceive and understand their surroundings, enabling them to:

  • Build accurate maps
  • Localize themselves within those maps
  • Detect obstacles for safe navigation through dynamic environments

Prerequisites

All my code for this project is located here on GitHub.

Common Sensors

Some of the most frequently used sensors in mobile robotics include:

  • LIDAR
  • IMUs
  • Depth cameras
  • RGB cameras
  • GPS

The mobile robot I use for the examples in this tutorial is equipped with simulated LIDAR, IMU, and a depth camera.

An RGB camera is nice to have but not critical for navigation. 

GPS is not relevant here because our robot is designed for indoor navigation.

For a real robot, instead of using simulated sensors, we would use real hardware. Fortunately, most of the popular LIDARs, IMUs, and depth cameras on the market have ROS 2 publishers you can find on GitHub that will read the data from the sensor and publish that data in a ROS 2-compatible format. These ROS 2 publishers are also known as “drivers.”

How ROS 2 Helps Sensors Work Together

The beauty of ROS 2 is that it makes it easy for different types of sensors to work together, even if they’re made by different companies. It does this by providing packages that define standard message formats for sensor communication.

The sensor_msgs package creates a common language for a variety of sensors. This means that you can use any sensor as long as it speaks this common language. 

Some of the most important messages for navigation include:

  • sensor_msgs/LaserScan: Used for LIDARs
  • sensor_msgs/PointCloud2: Used for depth cameras
  • sensor_msgs/Range: Used for distance sensors, both infrared and ultrasonic
  • sensor_msgs/Image: Used for cameras

In addition to the sensor_msgs package, we have two other packages that you should know about if you come across these during your robotics career:

radar_msgs

The radar_msgs package is specifically for radar sensors and defines how they should communicate.

vision_msgs

The vision_msgs package is for sensors used in computer vision, like cameras that can detect and recognize objects or people. Some of the messages it supports include:

  • vision_msgs/Classification2D and vision_msgs/Classification3D: Used for identifying objects in 2D or 3D
  • vision_msgs/Detection2D and vision_msgs/Detection3D: Used for detecting objects in 2D or 3D

As mentioned earlier, most real robots come with special programs called ROS drivers that allow their sensors to talk using the common language defined in these packages. This makes it easier to use sensors from different manufacturers because general software packages like Nav2 can understand this common language and do their job no matter what specific sensor hardware is being used.

In simulated environments, like what we will use in this tutorial, special plugins, which we defined in the URDF file, are used to make the simulated sensors communicate using the sensor_msgs package format.

Visualizing Sensor Data in RViz

Let’s take a look at the sensor data we have to work with on our simulated robot. Open a terminal window, and type this command:

nav

or

bash ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_bringup/scripts/rosmaster_x3_navigation.sh

In RViz, set the “Fixed Frame” field to odom.

Let’s look at the sensor readings.

Click the “Add” button at the bottom of the RViz window.

Click the “By topic” tab.

Select the “/scan” topic so we can view the sensor_msgs/LaserScan messages.

On the left panel, set the Reliability Policy in RViz to “Best Effort”. Setting the Reliability Policy to “Best Effort” for a LIDAR sensor means that RViz will display the most recently received data from the sensor, even if some data packets have been lost or delayed along the way.

Now set the size to 0.1 so it is easier to see the scans. You can see the scans match up pretty well with the detected objects around the robot in the Gazebo environment.

1-lidar-scan-data
2-laser-scan-data
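The same consideration applies in your own nodes: the simulated LIDAR publishes with Best Effort reliability, so a subscriber that demands Reliable delivery will never receive data. Here is a minimal rclpy sketch using the standard sensor-data QoS profile:

#!/usr/bin/env python3
"""Subscribe to /scan with a Best Effort (sensor data) QoS profile."""
import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from sensor_msgs.msg import LaserScan


class ScanListener(Node):
    def __init__(self):
        super().__init__('scan_listener')
        # qos_profile_sensor_data uses Best Effort reliability, matching
        # the QoS the simulated LIDAR publishes with
        self.subscription = self.create_subscription(
            LaserScan, '/scan', self.scan_callback, qos_profile_sensor_data)

    def scan_callback(self, msg):
        valid = [r for r in msg.ranges if msg.range_min < r < msg.range_max]
        if valid:
            self.get_logger().info(f'Closest obstacle: {min(valid):.2f} m')


def main():
    rclpy.init()
    rclpy.spin(ScanListener())
    rclpy.shutdown()


if __name__ == '__main__':
    main()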

Let’s look at the IMU information.

Click the “Add” button at the bottom of the RViz window.

Click the “By topic” tab.

Select the “/imu/data” topic so we can view the sensor_msgs/Imu messages.

You will see the IMU orientation and acceleration visualized using the IMU plugin.

3-imu-information

Observe how the visualizations change in real time as your robot moves and interacts with the environment. For example, the following command drives the robot forward slowly at 0.05 m/s:

ros2 topic pub /mecanum_drive_controller/cmd_vel geometry_msgs/msg/TwistStamped "{
  header: {
    stamp: {sec: `date +%s`, nanosec: 0},
    frame_id: ''
  },
  twist: {
    linear: {
      x: 0.05,
      y: 0.0,
      z: 0.0
    },
    angular: {
      x: 0.0,
      y: 0.0,
      z: 0.0
    }
  }
}"

That’s it! Keep building!