Getting Started With the Simple Commander API – ROS 2 Jazzy

In this tutorial, we will explore the Nav2 Simple Commander API, a powerful tool that simplifies robot navigation programming in Python. We’ll learn how to use this API to control a robot’s movement, plan paths, and execute various navigation tasks without getting bogged down in complex ROS 2 details. This guide will help you get started with the most important features of the Simple Commander API.

By the end of this tutorial, you will be able to build this:

[Animation: robot navigating to a goal pose using the Simple Commander API]

Real-World Applications

You can use the Nav2 Simple Commander API in many real-world situations. Here are some examples:

  1. Home Help: You can create robots that clean your house or bring you things from other rooms.
  2. Warehouse Work: You can build robots that check items on shelves or pick up and move things around the warehouse.
  3. Farming: You can develop vehicles that plant, check, or pick crops.
  4. Security: You can make robots that patrol areas or quickly go to places where there might be trouble.
  5. Healthcare: You can create robots that bring medicine to hospital rooms or clean areas to keep them safe.

Prerequisites

All my code for this project is located here on GitHub.

Create Folders

To get started with the Nav2 Simple Commander API, you’ll need to set up your workspace. Follow these steps to create the necessary folders:

Open a terminal window, and type the following command:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation

Create a folder for your Python scripts:

mkdir scripts

Create a folder for the helper files you will create:

mkdir yahboom_rosmaster_navigation
cd yahboom_rosmaster_navigation
touch __init__.py

Create a Goal Pose Message Generator

Now we will create a goal pose message generator. This generator is important for our navigation tasks because:

  1. In ROS 2, robot positions and orientations are often represented using the PoseStamped message type.
  2. This message type includes not just the position and orientation, but also timing and reference frame information, which are essential for accurate navigation.
  3. Creating a function to generate these messages will simplify our code when we need to specify goal positions for our robot.

Open a terminal window:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/yahboom_rosmaster_navigation/
touch posestamped_msg_generator.py

Type this code.

#!/usr/bin/env python3
"""
Create a PoseStamped message for ROS 2 navigation.

This script creates a geometry_msgs/PoseStamped message which is commonly used
for robot navigation in ROS 2. It shows how to properly construct a PoseStamped
message with header information and pose data.

:author: Addison Sears-Collins
:date: December 5, 2024
"""

from rclpy.node import Node
from builtin_interfaces.msg import Time
from geometry_msgs.msg import PoseStamped, Pose, Point, Quaternion
from std_msgs.msg import Header


class PoseStampedGenerator(Node):
    """A ROS 2 node that generates PoseStamped messages."""

    def __init__(self, node_name='pose_stamped_generator'):
        """
        Initialize the ROS 2 node.

        :param node_name: Name of the node
        :type node_name: str
        """
        super().__init__(node_name)

    def create_pose_stamped(self, x=0.0, y=0.0, z=0.0,
                            qx=0.0, qy=0.0, qz=0.0, qw=1.0,
                            frame_id='map'):
        """
        Create a PoseStamped message with the specified parameters.

        :param x: x position coordinate
        :type x: float
        :param y: y position coordinate
        :type y: float
        :param z: z position coordinate
        :type z: float
        :param qx: x quaternion component
        :type qx: float
        :param qy: y quaternion component
        :type qy: float
        :param qz: z quaternion component
        :type qz: float
        :param qw: w quaternion component
        :type qw: float
        :param frame_id: reference frame
        :type frame_id: str
        :return: pose stamped message
        :rtype: PoseStamped
        """
        # Create the PoseStamped message object
        pose_stamped = PoseStamped()

        # Create and fill the header with frame_id and timestamp
        header = Header()
        header.frame_id = frame_id

        # Get current ROS time and convert to Time message
        now = self.get_clock().now()
        header.stamp = Time(
            sec=now.seconds_nanoseconds()[0],
            nanosec=now.seconds_nanoseconds()[1]
        )

        # Create and fill the position message with x, y, z coordinates
        position = Point()
        position.x = float(x)
        position.y = float(y)
        position.z = float(z)

        # Create and fill the orientation message with quaternion values
        orientation = Quaternion()
        orientation.x = float(qx)
        orientation.y = float(qy)
        orientation.z = float(qz)
        orientation.w = float(qw)

        # Create Pose message and combine position and orientation
        pose = Pose()
        pose.position = position
        pose.orientation = orientation

        # Combine header and pose into the final PoseStamped message
        pose_stamped.header = header
        pose_stamped.pose = pose

        return pose_stamped

Save the file, and close it.

This script defines a ROS 2 node class, PoseStampedGenerator, that creates geometry_msgs/PoseStamped messages. Its create_pose_stamped method takes:

  • Position coordinates (x, y, z)
  • Orientation as a quaternion (qx, qy, qz, qw)
  • Reference frame (defaults to ‘map’)
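Here is a quick sketch of how you might use this class from another script once the package is built. It assumes rclpy has not been initialized yet and that the module is importable under the package name (we set that up with ament_python_install_package later in this tutorial):

import rclpy
from yahboom_rosmaster_navigation.posestamped_msg_generator import PoseStampedGenerator

rclpy.init()
generator = PoseStampedGenerator()

# A goal 2.0 m forward and 1.0 m to the left of the map origin,
# keeping the default orientation (facing along the x-axis)
goal = generator.create_pose_stamped(x=2.0, y=1.0)
print(goal.header.frame_id, goal.pose.position.x, goal.pose.position.y)

generator.destroy_node()
rclpy.shutdown()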

Create a Go to Goal Pose Script

Now I will show you how to create a script that sends a robot to a single goal location (i.e., a “goal pose”). A barebones official example for sending a single goal is here on the Nav2 GitHub. However, that code is missing some important functionality you will need for a real-world production mobile robot navigating a dynamic environment with people moving around and objects of all shapes and sizes. For this reason, we are going to use that code as a basis and add functionality on top of it.

Our navigation script will assume the initial pose of the robot has already been set either via RViz, the command line, or some other method.

To create this script, open a terminal, and navigate to the scripts directory:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/scripts
touch nav_to_pose.py

Add this code.

#!/usr/bin/env python3
"""
ROS 2 node for navigating to a goal pose and publishing status updates.

This script creates a ROS 2 node that:
    - Subscribes to a goal pose
    - Navigates the robot to the goal pose
    - Publishes the estimated time of arrival
    - Publishes the goal status
    - Cancels the goal if a stop signal is received

Subscription Topics:
    /goal_pose/goal (geometry_msgs/PoseStamped): The desired goal pose
    /stop/navigation/go_to_goal_pose (std_msgs/Bool): Signal to stop navigation
    /cmd_vel (geometry_msgs/Twist): Velocity command

Publishing Topics:
    /goal_pose/eta (std_msgs/String): Estimated time of arrival in seconds
    /goal_pose/status (std_msgs/String): Goal pose status

:author: Addison Sears-Collins
:date: December 5, 2024
"""

import time
import rclpy
from rclpy.duration import Duration
from rclpy.executors import MultiThreadedExecutor
from rclpy.node import Node
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult
from std_msgs.msg import Bool, String
from geometry_msgs.msg import Twist, PoseStamped

# Global flags for navigation state
STOP_NAVIGATION_NOW = False
NAV_IN_PROGRESS = False
MOVING_FORWARD = True

COSTMAP_CLEARING_PERIOD = 0.5


class GoToGoalPose(Node):
    """This class subscribes to the goal pose and publishes the ETA
       and goal status information to ROS 2.
    """

    def __init__(self):
        """Constructor."""
        # Initialize the class using the constructor
        super().__init__('go_to_goal_pose')

        # Create the ROS 2 publishers
        self.publisher_eta = self.create_publisher(String, '/goal_pose/eta', 10)
        self.publisher_status = self.create_publisher(String, '/goal_pose/status', 10)

        # Create a subscriber
        # This node subscribes to messages of type geometry_msgs/PoseStamped
        self.subscription_go_to_goal_pose = self.create_subscription(
            PoseStamped,
            '/goal_pose/goal',
            self.go_to_goal_pose,
            10)

        # Keep track of time for clearing the costmaps
        self.current_time = self.get_clock().now().nanoseconds
        self.last_time = self.current_time
        self.dt = (self.current_time - self.last_time) * 1e-9

        # Clear the costmap every X.X seconds when the robot is not making forward progress
        self.costmap_clearing_period = COSTMAP_CLEARING_PERIOD

        # Launch the ROS 2 Navigation Stack
        self.navigator = BasicNavigator()

        # Wait for navigation to fully activate
        self.navigator.waitUntilNav2Active()

    def go_to_goal_pose(self, msg):
        """Go to goal pose."""
        global NAV_IN_PROGRESS, STOP_NAVIGATION_NOW  # pylint: disable=global-statement

        # Clear all costmaps before sending to a goal
        self.navigator.clearAllCostmaps()

        # Go to the goal pose
        self.navigator.goToPose(msg)

        # As long as the robot is moving to the goal pose
        while rclpy.ok() and not self.navigator.isTaskComplete():
            # Get the feedback message
            feedback = self.navigator.getFeedback()

            if feedback:
                # Publish the estimated time of arrival in seconds
                eta_seconds = Duration.from_msg(
                    feedback.estimated_time_remaining).nanoseconds / 1e9
                msg_eta = String()
                msg_eta.data = f"{eta_seconds:.0f}"
                self.publisher_eta.publish(msg_eta)

                # Publish the goal status
                msg_status = String()
                msg_status.data = "IN_PROGRESS"
                self.publisher_status.publish(msg_status)
                NAV_IN_PROGRESS = True

                # Clear the costmap at the desired frequency
                # Get the current time
                self.current_time = self.get_clock().now().nanoseconds

                # How much time has passed in seconds since the last costmap clearing
                self.dt = (self.current_time - self.last_time) * 1e-9

                # If we are making no forward progress after X seconds, clear all costmaps
                if not MOVING_FORWARD and self.dt > self.costmap_clearing_period:
                    self.navigator.clearAllCostmaps()
                    self.last_time = self.current_time

            # Stop the robot if necessary
            if STOP_NAVIGATION_NOW:
                self.navigator.cancelTask()
                self.get_logger().info('Navigation cancellation request fulfilled...')

            time.sleep(0.1)

        # Reset the variable values
        STOP_NAVIGATION_NOW = False
        NAV_IN_PROGRESS = False

        # Publish the final goal status
        result = self.navigator.getResult()
        msg_status = String()

        if result == TaskResult.SUCCEEDED:
            self.get_logger().info('Successfully reached the goal!')
            msg_status.data = "SUCCEEDED"
            self.publisher_status.publish(msg_status)

        elif result == TaskResult.CANCELED:
            self.get_logger().info('Goal was canceled!')
            msg_status.data = "CANCELED"
            self.publisher_status.publish(msg_status)

        elif result == TaskResult.FAILED:
            self.get_logger().info('Goal failed!')
            msg_status.data = "FAILED"
            self.publisher_status.publish(msg_status)

        else:
            self.get_logger().info('Goal has an invalid return status!')
            msg_status.data = "INVALID"
            self.publisher_status.publish(msg_status)


class GetStopNavigationSignal(Node):
    """This class subscribes to a Boolean flag that tells the
       robot to stop navigation.
    """

    def __init__(self):
        """Constructor."""
        # Initialize the class using the constructor
        super().__init__('get_stop_navigation_signal')

        # Create a subscriber
        # This node subscribes to messages of type std_msgs/Bool
        self.subscription_stop_navigation = self.create_subscription(
            Bool,
            '/stop/navigation/go_to_goal_pose',
            self.set_stop_navigation,
            10)

    def set_stop_navigation(self, msg):
        """Determine if the robot needs to stop, and adjust the variable value accordingly."""
        global STOP_NAVIGATION_NOW  # pylint: disable=global-statement

        if NAV_IN_PROGRESS and msg.data:
            STOP_NAVIGATION_NOW = msg.data
            self.get_logger().info('Navigation cancellation request received by ROS 2...')


class GetCurrentVelocity(Node):
    """This class subscribes to the current velocity and determines if the robot
       is making forward progress.
    """

    def __init__(self):
        """Constructor."""
        # Initialize the class using the constructor
        super().__init__('get_current_velocity')

        # Create a subscriber
        # This node subscribes to messages of type geometry_msgs/Twist
        self.subscription_current_velocity = self.create_subscription(
            Twist,
            '/cmd_vel',
            self.get_current_velocity,
            1)

    def get_current_velocity(self, msg):
        """Get the current velocity, and determine if the robot is making forward progress."""
        global MOVING_FORWARD  # pylint: disable=global-statement

        linear_x_value = msg.linear.x

        if linear_x_value <= 0.0:
            MOVING_FORWARD = False
        else:
            MOVING_FORWARD = True


def main(args=None):
    """Main function to initialize and run the ROS 2 nodes."""
    # Initialize the rclpy library
    rclpy.init(args=args)

    try:
        # Create the nodes
        go_to_goal_pose = GoToGoalPose()
        get_stop_navigation_signal = GetStopNavigationSignal()
        get_current_velocity = GetCurrentVelocity()

        # Set up multithreading
        executor = MultiThreadedExecutor()
        executor.add_node(go_to_goal_pose)
        executor.add_node(get_stop_navigation_signal)
        executor.add_node(get_current_velocity)

        try:
            # Spin the nodes to execute the callbacks
            executor.spin()
        finally:
            # Shutdown the nodes
            executor.shutdown()
            go_to_goal_pose.destroy_node()
            get_stop_navigation_signal.destroy_node()
            get_current_velocity.destroy_node()

    finally:
        # Shutdown the ROS client library for Python
        rclpy.shutdown()


if __name__ == '__main__':
    main()

Save the code, and close it.

Our navigation script, nav_to_pose.py, works as follows:

It sets up three ROS 2 nodes.

The GoToGoalPose node:

  • Subscribes to goal poses and initiates navigation.
  • Publishes estimated time of arrival (ETA) and navigation status.
  • Handles navigation feedback and result processing.
  • Implements costmap clearing to handle dynamic obstacles.

The GetStopNavigationSignal node allows external cancellation of navigation tasks.

The GetCurrentVelocity node monitors if the robot is moving forward, which is used for costmap clearing decisions.

The script uses a multithreaded executor to run all nodes concurrently.

It implements error handling and proper shutdown procedures.

Key features include:

  • Dynamic obstacle handling through costmap clearing.
  • Real-time status and ETA updates.
  • Ability to cancel navigation tasks.
  • Robust error handling and graceful shutdown.

Once we are ready to test the navigation, we will send a goal pose from the command line:

ros2 topic pub /goal_pose/goal geometry_msgs/PoseStamped "{header: {stamp: {sec: 0, nanosec: 0}, frame_id: 'map'}, pose: {position: {x: 2.0, y: 2.0, z: 0.0}, orientation: {x: 0.0, y: 0.0, z: 0.7071068, w: 0.7071068}}}" --once
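The quaternion in that command (qz = qw = 0.7071068) represents a 90-degree rotation about the z-axis, so the robot will face 90 degrees counterclockwise from the map’s x-axis when it arrives. For a ground robot with zero roll and pitch, you can compute the z and w components for any heading yourself; a minimal sketch:

import math

def yaw_to_quaternion(yaw):
    """Convert a yaw angle in radians to the (qz, qw) quaternion components
    (roll and pitch are zero for a ground robot)."""
    return math.sin(yaw / 2.0), math.cos(yaw / 2.0)

qz, qw = yaw_to_quaternion(math.pi / 2)  # 90 degrees
print(f"qz={qz:.7f}, qw={qw:.7f}")  # qz=0.7071068, qw=0.7071068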

Alternatively, we can use a script to send the goal. Let’s create a script called test_nav_to_pose.py:

In the same directory, create and open the file:

touch test_nav_to_pose.py

Add this code.

#!/usr/bin/env python3
"""
ROS 2 node for sending a robot to different tables based on user input.

This script creates a ROS 2 node that continuously prompts the user to select a table
number (1-5) in the cafe world and publishes the corresponding goal pose for the
robot to navigate to. The script will keep running until the user enters 'q' or presses Ctrl+C.

Publishing Topics:
    /goal_pose/goal (geometry_msgs/PoseStamped): The desired goal pose for the robot

:author: Addison Sears-Collins
:date: December 5, 2024
"""

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped
from yahboom_rosmaster_navigation.posestamped_msg_generator import PoseStampedGenerator


class GoalPublisher(Node):
    """
    ROS 2 node for publishing goal poses to different tables.

    This node allows users to input table numbers and publishes the corresponding
    goal poses for robot navigation.

    Attributes:
        publisher: Publisher object for goal poses
        locations (dict): Dictionary containing x,y coordinates for each table
        pose_generator: Generator for PoseStamped messages
    """

    def __init__(self):
        """
        Initialize the GoalPublisher node.

        Sets up the publisher for goal poses and defines the table locations.
        """
        super().__init__('goal_publisher')

        # Create a publisher that will send goal poses to the robot
        self.publisher = self.create_publisher(PoseStamped, '/goal_pose/goal', 10)
        self.get_logger().info('Goal Publisher node has been initialized')

        # Initialize the PoseStamped message generator
        self.pose_generator = PoseStampedGenerator('pose_generator')

        # Dictionary storing the x,y coordinates for each table
        self.locations = {
            '1': {'x': -0.96, 'y': -0.92},   # Table 1 location
            '2': {'x': 1.16, 'y': -4.23},    # Table 2 location
            '3': {'x': 0.792, 'y': -8.27},   # Table 3 location
            '4': {'x': -3.12, 'y': -7.495},  # Table 4 location
            '5': {'x': -2.45, 'y': -3.55}    # Table 5 location
        }

    def publish_goal(self, table_number):
        """
        Publish a goal pose for the selected table.

        Creates and publishes a PoseStamped message with the coordinates
        corresponding to the selected table number.

        Args:
            table_number (str): The selected table number ('1' through '5')
        """
        # Get the coordinates for the selected table
        x = self.locations[table_number]['x']
        y = self.locations[table_number]['y']

        # Create the PoseStamped message using the generator
        # Using default orientation (facing forward) and z=0.0
        pose_msg = self.pose_generator.create_pose_stamped(
            x=x,
            y=y,
            z=0.0,
            qx=0.0,
            qy=0.0,
            qz=0.0,
            qw=1.0,
            frame_id='map'
        )

        # Publish the goal pose
        self.publisher.publish(pose_msg)
        self.get_logger().info(f'Goal pose published for table {table_number}')

    def run_interface(self):
        """
        Run the user interface for table selection.

        Continuously prompts the user to select a table number and publishes
        the corresponding goal pose. Exits when user enters 'q' or Ctrl+C.
        """
        while True:
            # Display available tables
            print("\nAVAILABLE TABLES:")
            for table in self.locations:
                print(f"Table {table}")

            # Get user input
            user_input = input('\nEnter table number (1-5) or "q" to quit: ').strip()

            # Check if user wants to quit
            if user_input.lower() == 'q':
                break

            # Validate and process user input
            if user_input in self.locations:
                self.publish_goal(user_input)
            else:
                print("Invalid table number! Please enter a number between 1 and 5.")


def main(args=None):
    """
    Main function to initialize and run the ROS 2 node.

    Args:
        args: Command-line arguments (default: None)
    """
    # Initialize ROS 2
    rclpy.init(args=args)

    # Create and run the node
    goal_publisher = GoalPublisher()

    try:
        goal_publisher.run_interface()
    except KeyboardInterrupt:
        print("\nShutting down...")
    finally:
        # Clean up
        goal_publisher.destroy_node()
        rclpy.shutdown()


if __name__ == '__main__':
    main()

Save the code, and close it.

This Python script creates a ROS 2 node that lets you send a robot to different predefined table locations in a cafe environment by entering table numbers (1-5) through a command-line interface. When you select a table number, the code generates and publishes a PoseStamped message containing the x,y coordinates for that table location, which tells the robot where to go. The program runs in a continuous loop asking for your table selections until you quit by entering ‘q’ or pressing Ctrl+C, allowing you to send the robot to multiple locations in sequence.

How to Determine Coordinates for Goals

To determine coordinates for goal locations, open a fresh terminal window, and type:

ros2 topic echo /clicked_point

At the top of RViz, click the “Publish Point” button.

Click near the first table, and record the coordinates of that point, which will appear in the terminal window.

  • table_1: (x=-0.96, y=-0.92)

Now do the same for the other four tables. I will go in a clockwise direction:

  • table_2: (x=1.16, y=-4.23)
  • table_3: (x=0.792, y=-8.27)
  • table_4: (x=-3.12, y=-7.495)
  • table_5: (x=-2.45, y=-3.55)

Press CTRL + C when you are done determining the goal locations.
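If you would rather capture these points programmatically than copy them from the terminal, here is a small sketch of a listener for the /clicked_point topic, which RViz publishes as geometry_msgs/PointStamped messages:

#!/usr/bin/env python3
"""Log points published by RViz's 'Publish Point' tool."""
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PointStamped


class ClickedPointLogger(Node):
    """Subscribe to /clicked_point and print each clicked coordinate."""

    def __init__(self):
        super().__init__('clicked_point_logger')
        self.create_subscription(PointStamped, '/clicked_point', self.callback, 10)

    def callback(self, msg):
        self.get_logger().info(f'x={msg.point.x:.3f}, y={msg.point.y:.3f}')


def main():
    rclpy.init()
    node = ClickedPointLogger()
    try:
        rclpy.spin(node)
    except KeyboardInterrupt:
        pass
    finally:
        node.destroy_node()
        rclpy.try_shutdown()


if __name__ == '__main__':
    main()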

Create an Assisted Teleoperation Script

In this section, we will create an Assisted Teleoperation script for our robot. Assisted Teleoperation allows manual control of the robot while automatically stopping it when obstacles are detected. This feature helps prevent collisions when you are manually sending velocity commands to steer the robot.

You can find a bare-bones example of assisted teleoperation here on the Nav2 GitHub, but we will go beyond that script to make it useful for a real-world environment.

We’ll begin by creating a Python script that implements this functionality:

Open a terminal and navigate to the scripts folder:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/scripts

Create a new file for assisted teleoperation:

touch assisted_teleoperation.py

Add the provided code to this file.

#!/usr/bin/env python3
"""
Assisted Teleop Node for ROS 2 Navigation

This script implements an Assisted Teleop node that interfaces with the Nav2 stack.
It manages the lifecycle of the AssistedTeleop action, handles cancellation requests,
and periodically clears costmaps to remove temporary obstacles.

Subscription Topics:
    /cmd_vel_teleop (geometry_msgs/Twist): Velocity commands for assisted teleop
    /cancel_assisted_teleop (std_msgs/Bool): Cancellation requests for assisted teleop

Parameters:
    ~/costmap_clear_frequency (double): Frequency in Hz for costmap clearing. Default: 2.0

:author: Addison Sears-Collins
:date: December 5, 2024
"""

import rclpy
from rclpy.node import Node
from rclpy.exceptions import ROSInterruptException
from geometry_msgs.msg import Twist
from std_msgs.msg import Bool
from nav2_simple_commander.robot_navigator import BasicNavigator
from rcl_interfaces.msg import ParameterDescriptor


class AssistedTeleopNode(Node):
    """
    A ROS 2 node for managing Assisted Teleop functionality.
    """

    def __init__(self):
        """Initialize the AssistedTeleopNode."""
        super().__init__('assisted_teleop_node')

        # Declare and get parameters
        self.declare_parameter(
            'costmap_clear_frequency',
            2.0,
            ParameterDescriptor(description='Frequency in Hz for costmap clearing')
        )
        clear_frequency = self.get_parameter('costmap_clear_frequency').value

        # Initialize the BasicNavigator for interfacing with Nav2
        self.navigator = BasicNavigator('assisted_teleop_navigator')

        # Create subscribers for velocity commands and cancellation requests
        self.cmd_vel_sub = self.create_subscription(
            Twist, '/cmd_vel_teleop', self.cmd_vel_callback, 10)
        self.cancel_sub = self.create_subscription(
            Bool, '/cancel_assisted_teleop', self.cancel_callback, 10)

        # Initialize state variables
        self.assisted_teleop_active = False
        self.cancellation_requested = False

        # Create a timer for periodic costmap clearing with configurable frequency
        period = 1.0 / clear_frequency
        self.clear_costmaps_timer = self.create_timer(period, self.clear_costmaps_callback)

        self.get_logger().info(
            f'Assisted Teleop Node initialized with costmap clearing '
            f'frequency: {clear_frequency} Hz')

        # Wait for navigation to fully activate.
        self.navigator.waitUntilNav2Active()

    def cmd_vel_callback(self, twist_msg: Twist) -> None:
        """Process incoming velocity commands and activate assisted teleop if needed."""
        # Reset cancellation flag when new velocity commands are received
        if (abs(twist_msg.linear.x) > 0.0 or
            abs(twist_msg.linear.y) > 0.0 or
                abs(twist_msg.angular.z) > 0.0):
            self.cancellation_requested = False

        if not self.assisted_teleop_active and not self.cancellation_requested:
            if (abs(twist_msg.linear.x) > 0.0 or
                abs(twist_msg.linear.y) > 0.0 or
                    abs(twist_msg.angular.z) > 0.0):
                self.start_assisted_teleop()

    def start_assisted_teleop(self) -> None:
        """Start the Assisted Teleop action with indefinite duration."""
        self.assisted_teleop_active = True
        self.cancellation_requested = False
        self.navigator.assistedTeleop(time_allowance=0)  # 0 means indefinite duration
        self.get_logger().info('AssistedTeleop activated with indefinite duration')

    def cancel_callback(self, msg: Bool) -> None:
        """Handle cancellation requests for assisted teleop."""
        if msg.data and self.assisted_teleop_active and not self.cancellation_requested:
            self.cancel_assisted_teleop()

    def cancel_assisted_teleop(self) -> None:
        """Cancel the currently running Assisted Teleop action."""
        if self.assisted_teleop_active:
            self.navigator.cancelTask()
            self.assisted_teleop_active = False
            self.cancellation_requested = True
            self.get_logger().info('AssistedTeleop cancelled')

    def clear_costmaps_callback(self) -> None:
        """Periodically clear all costmaps to remove temporary obstacles."""
        if not self.assisted_teleop_active:
            return

        self.navigator.clearAllCostmaps()
        self.get_logger().debug('Costmaps cleared')


def main():
    """Initialize and run the AssistedTeleopNode."""
    rclpy.init()
    node = None

    try:
        node = AssistedTeleopNode()
        rclpy.spin(node)

    except KeyboardInterrupt:
        if node:
            node.get_logger().info('Node shutting down due to keyboard interrupt')
    except ROSInterruptException:
        if node:
            node.get_logger().info('Node shutting down due to ROS interrupt')
    finally:
        if node:
            node.cancel_assisted_teleop()
            node.navigator.lifecycleShutdown()
        rclpy.shutdown()


if __name__ == '__main__':
    main()

This script:

  • Subscribes to the /cmd_vel_teleop topic to receive velocity commands from the user.
  • Manages the Assisted Teleoperation action in the Nav2 stack.
  • Handles cancellation requests for Assisted Teleoperation.
  • Periodically clears costmaps to remove temporary obstacles.
  • The Nav2 stack processes these commands and publishes the resulting safe velocity commands to the /cmd_vel topic.

It’s important to note that /cmd_vel_teleop is the input topic for user commands, while /cmd_vel is the output topic that sends commands to the robot after processing for obstacle avoidance.

Save and close the file.

Add the Go to Goal Pose and Assisted Teleop Scripts to the Bringup Launch File

Open a terminal window, and type:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_bringup/launch

Open rosmaster_x3_navigation.launch.py.

Add our two new nodes:

    start_nav_to_pose_cmd = Node(
        package='yahboom_rosmaster_navigation',
        executable='nav_to_pose.py',
        output='screen',
        parameters=[{'use_sim_time': use_sim_time}]
    )

    start_assisted_teleop_cmd = Node(
        package='yahboom_rosmaster_navigation',
        executable='assisted_teleoperation.py',
        output='screen',
        parameters=[{'use_sim_time': use_sim_time}]
    )

….

    ld.add_action(start_assisted_teleop_cmd)
    ld.add_action(start_nav_to_pose_cmd)
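For context, here is a minimal sketch of how these pieces sit inside a launch description. This is not the full rosmaster_x3_navigation.launch.py, which contains many more actions; it just shows the structure the snippets above assume (a use_sim_time launch configuration and a LaunchDescription named ld):

from launch import LaunchDescription
from launch.substitutions import LaunchConfiguration
from launch_ros.actions import Node


def generate_launch_description():
    # Use simulation time by default (matches the Gazebo-based setup)
    use_sim_time = LaunchConfiguration('use_sim_time', default='true')

    start_nav_to_pose_cmd = Node(
        package='yahboom_rosmaster_navigation',
        executable='nav_to_pose.py',
        output='screen',
        parameters=[{'use_sim_time': use_sim_time}]
    )

    start_assisted_teleop_cmd = Node(
        package='yahboom_rosmaster_navigation',
        executable='assisted_teleoperation.py',
        output='screen',
        parameters=[{'use_sim_time': use_sim_time}]
    )

    ld = LaunchDescription()
    ld.add_action(start_assisted_teleop_cmd)
    ld.add_action(start_nav_to_pose_cmd)
    return ld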

Edit package.xml

To ensure our package has all the necessary dependencies, we need to edit the package.xml file:

Navigate to the package directory:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/

Make sure your package.xml looks like this:

<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
  <name>yahboom_rosmaster_navigation</name>
  <version>0.0.0</version>
  <description>Navigation package for ROSMASTER series robots by Yahboom</description>
  <maintainer email="automaticaddison@todo.com">ubuntu</maintainer>
  <license>BSD-3-Clause</license>

  <buildtool_depend>ament_cmake</buildtool_depend>
  <buildtool_depend>ament_cmake_python</buildtool_depend>

  <depend>rclcpp</depend>
  <depend>rclpy</depend>
  <depend>builtin_interfaces</depend>
  <depend>geometry_msgs</depend>
  <depend>navigation2</depend>
  <depend>nav2_bringup</depend>
  <depend>nav2_simple_commander</depend>
  <depend>slam_toolbox</depend>
  <depend>std_msgs</depend>
  <depend>tf2_ros</depend>
  <depend>tf_transformations</depend>


  <test_depend>ament_lint_auto</test_depend>
  <test_depend>ament_lint_common</test_depend>

  <export>
    <build_type>ament_cmake</build_type>
  </export>
</package>

These dependencies are required for the Assisted Teleoperation functionality to work properly with the Nav2 stack.

Save and close the file.

Edit CMakeLists.txt

To properly build and install all our Python scripts, including the new Assisted Teleoperation script, we need to update the CMakeLists.txt file:

Navigate to the package directory if you’re not already there:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/

Open the CMakeLists.txt file, and make sure it looks like this:

cmake_minimum_required(VERSION 3.8)
project(yahboom_rosmaster_navigation)

if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang")
  add_compile_options(-Wall -Wextra -Wpedantic)
endif()

# find dependencies
find_package(ament_cmake REQUIRED)
find_package(ament_cmake_python REQUIRED)
find_package(geometry_msgs REQUIRED)
find_package(navigation2 REQUIRED)
find_package(nav2_bringup REQUIRED)
find_package(rclcpp REQUIRED)
find_package(rclpy REQUIRED)
find_package(slam_toolbox REQUIRED)
find_package(std_msgs REQUIRED)
find_package(tf2_ros REQUIRED)

add_executable(cmd_vel_relay src/cmd_vel_relay_node.cpp)
ament_target_dependencies(cmd_vel_relay
  rclcpp
  geometry_msgs
)

install(TARGETS
  cmd_vel_relay
  DESTINATION lib/${PROJECT_NAME}
)

install(
  DIRECTORY config maps rviz
  DESTINATION share/${PROJECT_NAME}
)

# Install Python modules
ament_python_install_package(${PROJECT_NAME})

# Install Python executables
install(PROGRAMS
  scripts/assisted_teleoperation.py
  scripts/nav_to_pose.py
  scripts/test_nav_to_pose.py
  DESTINATION lib/${PROJECT_NAME}
)

if(BUILD_TESTING)
  find_package(ament_lint_auto REQUIRED)
  # the following line skips the linter which checks for copyrights
  # comment the line when a copyright and license is added to all source files
  set(ament_cmake_copyright_FOUND TRUE)
  # the following line skips cpplint (only works in a git repo)
  # comment the line when this package is in a git repo and when
  # a copyright and license is added to all source files
  set(ament_cmake_cpplint_FOUND TRUE)
  ament_lint_auto_find_test_dependencies()
endif()

ament_package()

These changes to CMakeLists.txt will ensure that all our scripts, including the new Assisted Teleoperation script, are properly included when building and installing the package.

Build the Package

Now that we’ve created our scripts and updated the necessary configuration files, it’s time to build our package to incorporate all these changes. This process compiles our code and makes it ready for use in the ROS 2 environment.

Open a terminal window, and type:

cd ~/ros2_ws
rosdep install --from-paths src --ignore-src -y -r
colcon build 
source ~/.bashrc

Launch the Robot 

Open a new terminal window, and type the following command to launch the robot. Wait until Gazebo and RViz come fully up, which can take up to 30 seconds:

nav

or

bash ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_bringup/scripts/rosmaster_x3_navigation.sh 

Run Each Script

Let’s test each of the navigation scripts we’ve created. I recommend rerunning the navigation launch file for each script you test below.

You might run into this error from time to time:

[RTPS_TRANSPORT_SHM Error] Failed init_port fastrtps_port7000: open_and_lock_file failed -> Function open_port_internal

This error typically occurs when nodes didn’t close cleanly after you pressed CTRL + C. To fix this, I have an aggressive cleanup command that you can run:

sudo pkill -9 -f "ros2|gazebo|gz|nav2|amcl|bt_navigator|nav_to_pose|rviz2|python3|assisted_teleop|cmd_vel_relay|robot_state_publisher|joint_state_publisher|move_to_free|mqtt|autodock|cliff_detection|moveit|move_group|basic_navigator" && sudo ipcrm -a && sudo rm -rf /dev/shm/fastrtps_* /dev/shm/sem.fastrtps_* && sudo rm -rf /tmp/fastrtps* /tmp/ros* /tmp/snap.ros* /tmp/*.sem /var/run/ros* && sleep 0.2 && ros2 daemon stop && sleep 0.2 && ros2 daemon start

You can also run it without sudo:

pkill -9 -f "ros2|gazebo|gz|nav2|amcl|bt_navigator|nav_to_pose|rviz2|python3|assisted_teleop|cmd_vel_relay|robot_state_publisher|joint_state_publisher|move_to_free|mqtt|autodock|cliff_detection|moveit|move_group|basic_navigator" && sleep 0.2 && ipcrm -a && sleep 0.2 && rm -rf /dev/shm/fastrtps_* /dev/shm/sem.fastrtps_* && sleep 0.2 && rm -rf /tmp/fastrtps* /tmp/ros* /tmp/snap.ros* /tmp/*.sem /var/run/ros* && sleep 0.2 && ros2 daemon stop && sleep 0.2 && ros2 daemon start

If that doesn’t work, this command is guaranteed to terminate all zombie nodes and processes:

sudo reboot

Go to Goal Pose

After the robot and its controllers come fully up, run these commands in separate terminal windows:

ros2 topic echo /goal_pose/eta
ros2 topic echo /goal_pose/status

In another terminal, publish a goal pose:

ros2 topic pub /goal_pose/goal geometry_msgs/PoseStamped "{header: {stamp: {sec: 0, nanosec: 0}, frame_id: 'map'}, pose: {position: {x: 2.0, y: 2.0, z: 0.0}, orientation: {x: 0.0, y: 0.0, z: 0.7071068, w: 0.7071068}}}" 

Press CTRL + C when the robot starts moving.

You will see the robot move to the (x=2.0, y=2.0) position.

Alternatively, you can send the robot to different tables in the cafe world using our test script:

ros2 run yahboom_rosmaster_navigation test_nav_to_pose.py --ros-args -p use_sim_time:=true
[Animation: robot going from Table 1 to Table 2]

Depending on your CPU, you may have to send commands more than once. I had some issues where some messages simply didn’t get sent the first time I sent a table number.

If you want to cancel an ongoing goal, just type:

ros2 topic pub /stop/navigation/go_to_goal_pose std_msgs/msg/Bool "data: true"

Then press CTRL + C.

Press CTRL + C to close these nodes when you are done.

Assisted Teleoperation

Launch teleoperation in another terminal window.

rqt_robot_steering

In the rqt_robot_steering window, set the topic to /cmd_vel_teleop. Use the sliders to control the robot’s movement. Watch what happens when you teleoperate the robot towards an obstacle like a wall. You will notice the robot stops just before the obstacle.
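If you prefer to skip the GUI, you can stream a test command straight to the input topic from another terminal (press CTRL + C to stop publishing):

ros2 topic pub -r 10 /cmd_vel_teleop geometry_msgs/msg/Twist "{linear: {x: 0.2, y: 0.0, z: 0.0}, angular: {x: 0.0, y: 0.0, z: 0.0}}"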

[Animation: assisted teleoperation stopping the robot before a wall]

Stop the robot by clicking the Stop button.

Close the rqt_robot_steering tool by pressing CTRL + C.

Cancel Assisted Teleoperation

To cancel assisted teleoperation, publish a cancellation message:

ros2 topic pub /cancel_assisted_teleop std_msgs/Bool "data: true" --once

To restart assisted teleoperation, launch the rqt_robot_steering tool again, and move the sliders:

rqt_robot_steering

Press CTRL + C to close these nodes when you are done.

In this tutorial, we explored the Nav2 Simple Commander API and implemented various navigation functionalities for our robot, including basic goal navigation, predefined locations, and assisted teleoperation. You can find other code examples here at the Nav2 GitHub.

These implementations demonstrate the versatility of Nav2, combining autonomous navigation with user input for flexible control in various scenarios. The skills you’ve gained form a strong foundation for developing more complex robotic systems, applicable to real-world robots with minimal modifications.

ROS 2 Navigation Tuning Guide – Nav2

In this tutorial, we will explore the process of fine-tuning the parameters for Nav2 (the ROS 2 Navigation stack), which is essential for achieving optimal navigation performance in your mobile robot. With hundreds of parameters in the yaml file for Nav2, the configuration process can be quite confusing and time-consuming. However, this guide is designed to save you time and effort by leveraging my years of experience in tuning parameters for ROS-based mobile robots.

If you are working with ROS 1, I encourage you to check out this ROS Navigation Tuning Guide by Kaiyu Zheng. Otherwise, if you are working with ROS 2, you have come to the right place.

Prerequisites

All my code for this project is located here on GitHub. The file we will be going through is located here.

For the purposes of comparison, you can also take a look at these parameters here from the Nav2 GitHub for some other default values.

Introduction

It is important to understand that the tuning process is more of an art than a science. It involves a significant amount of trial and error and experimentation to find the right balance of parameters that work best for your specific robot and its operating environment. Don’t be discouraged if it takes some time to get the desired results.

In this guide, I will share my insights on which parameters are best left at their default values and which ones are worth investing time in tuning, regardless of the type of mobile robot you are working with. I will also provide my recommended values that have proven to work well for most of my projects.

By following the advice in this guide, you can streamline your Nav2 parameter tuning process and avoid the common pitfalls that many roboticists face when optimizing their navigation stack. Whether you’re a beginner or an experienced roboticist, this guide will help you understand the key concepts and reasoning behind each parameter, enabling you to make informed decisions and achieve the best possible navigation performance for your mobile robot.

amcl

Description

Here is the official configuration guide for AMCL.

The AMCL (Adaptive Monte Carlo Localization) package helps the robot understand its location on a map. It uses a particle filter to estimate the robot’s position by matching laser scans from a LIDAR sensor with the existing map. This process allows the robot to navigate more accurately in known environments.

Imagine you’re blindfolded in your house. How would you figure out exactly where you are? You would probably reach out to touch walls and furniture, comparing what you feel to your mental map of your home. This is similar to how AMCL helps a robot understand its position in a known environment.

Let’s break down how AMCL works…

First, think about the name itself. “Adaptive” means it can adjust itself based on circumstances. “Monte Carlo” refers to using random sampling to solve problems (named after the famous casino!), and “Localization” simply means finding out where something is.

AMCL starts by spreading out thousands of virtual “guesses” (we call them particles) across the map. Each particle represents a possible position and direction the robot might be in. Think of it like dropping thousands of pins on a map, with each pin being a “maybe I’m here” guess.

As the robot moves and uses its LIDAR, it does the following:

  • It looks at what the LIDAR is seeing – walls, corners, furniture, etc.
  • It compares these readings to what it would expect to see at each of those guessed positions
  • It gives higher “scores” to guesses that match well with what the sensors are actually seeing
  • It gradually eliminates unlikely guesses and creates new ones around the more promising positions

The “Adaptive” part is particularly smart: when the robot is uncertain about its position (like when you first turn it on), it uses more particles to cast a wider net. Once it is pretty sure where it is, it reduces the number of particles to save computing power while maintaining accuracy.

Why is this so important? Think of it this way: if you’re using a GPS navigation app but it shows you on the wrong street, all its directions will be useless! Similarly, a robot needs to know exactly where it is before it can effectively plan paths or navigate to goals.

The AMCL server in Nav2 handles all of this complex math and probability behind the scenes. It:

  • Can start with either a rough or precise initial guess of the robot’s position
  • Constantly updates its position estimate as the robot moves
  • Handles situations where the environment has changed somewhat (30% or less in my experience) from the original map
  • Shares its position information with other parts of the navigation system

One of the most fascinating aspects of AMCL is how it mirrors human navigation behaviors. Just as you become more confident about your position when you recognize more landmarks, AMCL becomes more certain as it matches more sensor readings to known map features.
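To make the particle idea concrete, here is a toy, self-contained Python sketch of the weight-and-resample loop on a one-dimensional “map.” This is only an illustration of the concept, not how Nav2 implements AMCL:

import math
import random

# Toy 1-D particle filter illustrating the AMCL idea (not Nav2 code).
# The robot lives on a line; a noisy sensor measures the distance to a
# wall at x = 10. Watch the estimate converge to the true position.
WALL_X = 10.0
NUM_PARTICLES = 1000

# 1. Spread thousands of guesses (particles) across the map.
particles = [random.uniform(0.0, 10.0) for _ in range(NUM_PARTICLES)]

true_x = 2.0
for step in range(5):
    # 2. The robot moves 1 m; move every particle too, with motion noise.
    true_x += 1.0
    particles = [p + 1.0 + random.gauss(0.0, 0.05) for p in particles]

    # 3. Take a noisy sensor reading and score each particle by how well
    #    its predicted reading matches the actual one.
    reading = (WALL_X - true_x) + random.gauss(0.0, 0.1)
    weights = [math.exp(-(((WALL_X - p) - reading) ** 2) / (2 * 0.1 ** 2))
               for p in particles]

    # 4. Resample: keep more copies of the high-scoring particles.
    particles = random.choices(particles, weights=weights, k=NUM_PARTICLES)

    estimate = sum(particles) / NUM_PARTICLES
    print(f"step {step}: true x = {true_x:.2f}, estimate = {estimate:.2f}")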

Parameters

alpha1: (unitless)

  • Default: 0.2
  • My value: 0.2
  • Matches the default value in the official guide. This is a reasonable default for the expected process noise in odometry’s rotation estimate from rotation.

alpha1 controls how much rotation affects the robot’s rotation estimate. Increasing this value makes the system more cautious about rotational movements, assuming more uncertainty when the robot turns. Decreasing it makes the system more confident about its rotation estimates, which can be risky if your robot’s sensors aren’t very accurate.

alpha2: (unitless)

  • Default: 0.2
  • My value: 0.2
  • Matches the default value. A suitable default for expected process noise in rotation estimate from translation.

alpha2 handles how forward motion affects rotation estimates. If you increase this value, the robot becomes more uncertain about its rotation when moving forward, leading to more conservative behavior. Decreasing it means the robot will be more confident that moving forward won’t affect its rotational accuracy, which might be too optimistic for many robots.

alpha3: (unitless)

  • Default: 0.2
  • My value: 0.2
  • Matches the default. Appropriate for expected process noise in translation estimate from translation.

alpha3 determines how forward motion affects position estimates. Increasing this makes the robot more uncertain about its position when moving forward, causing it to rely more on sensor data. Decreasing it means the robot will trust its forward motion more, which can be good for robots with very accurate wheel encoders but risky for others.

alpha4: (unitless)

  • Default: 0.2
  • My value: 0.2
  • Same as default. Works well for expected process noise in translation estimate from rotation in most cases.

alpha4 manages how rotation affects position estimates. When increased, the system assumes more position uncertainty during turns, making it more reliant on sensor data to confirm its location. Decreasing it means the robot will be more confident about maintaining position accuracy during turns, which might not be realistic for many robots.

alpha5: (unitless)

  • Default: 0.2
  • My value: 0.2
  • Aligns with default. Good value for translation noise in omni models.

Specific to omnidirectional robots, this controls translation noise. Increasing this value makes the robot more cautious about its position estimates during omnidirectional movement. Decreasing it makes the robot more confident in its position during sideways motion, which should only be done if you have very accurate encoders.

base_frame_id: (string)

  • Default: “base_footprint”
  • My value: “base_footprint”
  • Matches the typical default base frame for many robots.

beam_skip_distance: (meters)

  • Default: 0.5
  • My value: 0.5
  • Same as default. 0.5m is a reasonable max distance to consider skipping beams that don’t match the map.

This parameter tells AMCL how different a laser reading can be from the expected map reading before being considered for skipping. For example, if a laser beam expects to hit a wall at 3 meters according to the map, but actually hits something at 2.3 meters, the difference is 0.7 meters. If this difference is larger than beam_skip_distance (i.e. 0.5 meters), this reading becomes a candidate for being skipped. 

Increasing the beam_skip_distance means more readings that don’t match the map will be considered for skipping, which helps when your environment changes often but might make localization less precise. Decreasing it means only readings that are very different from expectations will be considered for skipping, making localization more precise but potentially slower in changing environments.

beam_skip_error_threshold: (unitless)

  • Default: 0.9
  • My value: 0.9
  • Matches default. 0.9 is a good percentage of beams to skip before forcing a full update due to bad convergence.

Percentage of beams that can be skipped before forcing a full update. Increasing this allows more beams to be skipped before requiring a full update, which can help in noisy environments. Decreasing it makes the system more conservative about skipping beams, potentially improving accuracy but increasing computational load.

beam_skip_threshold: (unitless)

  • Default: 0.3
  • My value: 0.3
  • Aligns with default. 0.3 works well as the percentage of beams required to skip.

Minimum percentage of beams that must disagree with the map before beam skipping is applied at all. Increasing this means skipping only activates when a large share of readings mismatch the map. Decreasing it lets skipping kick in with fewer mismatched beams, which helps in changing environments but can hide useful sensor information.

do_beamskip: (bool)

  • Default: False
  • My value: false
  • Same as default. Beam skipping is not necessary in the likelihood field model.

Whether to skip beams that don’t match the map (using the thresholds above) when computing the likelihood field. Enabling it can help when the environment has changed since the map was made, at the cost of ignoring some sensor data. Keeping it false means every beam is used.

global_frame_id: (string)

  • Default: “map”
  • My value: “map”
  • Matches the standard global frame name.

lambda_short: (unitless)

  • Default: 0.1
  • My value: 0.1
  • Same as default. A good exponential decay parameter for the z_short part of the model.

Increasing this value makes the system more sensitive to unexpected short readings, which can help detect obstacles. Decreasing it makes the system more tolerant of short readings, which might help in environments with lots of small objects.

laser_likelihood_max_dist: (meters)

  • Default: 2.0
  • My value: 2.0
  • Matches default. 2.0m is a reasonable maximum distance for obstacle inflation in the likelihood field model.

Increasing this value makes the system consider obstacles further away, which can improve accuracy but increases computation time. Decreasing it makes the system focus on closer obstacles, which can be faster but might miss important features.

laser_max_range: (meters)

  • Default: 100.0
  • My value: 100.0
  • Aligns with default. 100m is far enough to cover most laser scanners. -1.0 would use the laser’s reported max range instead.

Increasing this lets the robot use more distant readings but can slow down processing and might include more noise. Decreasing it means ignoring distant readings, which can be good in cluttered environments but might make localization harder in large open spaces.

laser_min_range: (meters)

  • Default: -1.0
  • My value: -1.0
  • Same as default. -1.0 will use the laser’s reported minimum range.

Increasing this ignores closer readings, which can help if your LIDAR gets a lot of false close readings. Decreasing it (or keeping at -1.0) uses the sensor’s built-in minimum range, which is usually fine.

laser_model_type: (string)

  • Default: “likelihood_field”
  • My value: “likelihood_field”
Matches the recommended default, which works well for most use cases. The likelihood field model also supports the beam-skip feature if enabled.

There are three main options for laser_model_type:

  1. “likelihood_field” (default)
    • This is the recommended default
    • Most computationally efficient
    • Works well for most scenarios
    • Creates a pre-computed likelihood field of expected laser readings
  2. “beam”
    • Also known as the ray-cast model
    • More computationally intensive
    • Can be more accurate in some situations
    • Directly simulates individual laser beams
    • Better at handling dynamic environments but slower
  3. “likelihood_field_prob”
    • A probabilistic variant of the likelihood field model
    • Adds probability calculations to the standard likelihood field
    • Can be more accurate than basic likelihood field but more computationally intensive
    • Good for environments where you need more precise probability estimates

max_beams: (unitless)

  • Default: 60
  • My value: 60
  • Matches default. 60 is a good number of evenly-spaced beams to use from each scan.

Increasing this can improve accuracy but slows down processing significantly. Decreasing it speeds up processing but might make localization less accurate.

max_particles: (unitless)

  • Default: 2000
  • My value: 2000
  • Same as default. 2000 is the recommended maximum number of particles for the particle filter.

Increasing this can improve accuracy but uses more computational power. Decreasing it saves processing power but might make localization less reliable, especially in complex environments.

min_particles: (unitless)

  • Default: 500
  • My value: 500
  • Matches default. 500 is a good minimum to ensure a sufficient particle distribution.

Increasing this value ensures more thorough position checking but uses more processing power. Decreasing it saves computation but might make the system less reliable at finding the robot’s position.

odom_frame_id: (string)

  • Default: “odom”
  • My value: “odom”
  • Same as default. “odom” is the standard frame for odometry data.

pf_err: (unitless)

  • Default: 0.05
  • My value: 0.05
  • Matches default. 0.05 is a reasonable value for the particle filter population error.

Increasing this value makes the system reduce particles more aggressively, saving computation but potentially reducing accuracy. Decreasing it maintains more particles, using more computation but potentially improving accuracy.

pf_z: (unitless)

  • Default: 0.99
  • My value: 0.99
  • Same as default. 0.99 works well for the particle filter population density.

Increasing this value maintains more particles longer, using more computation but potentially improving accuracy. Decreasing it reduces particles more quickly, saving computation but possibly reducing accuracy.

recovery_alpha_fast: (unitless)

  • Default: 0.0
  • My value: 0.0
  • Matches default. 0.0 disables the fast particle filter recovery.

When set above 0.0, it enables a fast recovery mode that quickly spreads particles to find the robot’s position. Increasing this makes recovery more aggressive but potentially less stable. Keeping it at 0.0 means relying on normal localization methods.

recovery_alpha_slow: (unitless)

  • Default: 0.0
  • My value: 0.0
  • Same as default. 0.0 disables the slow particle filter recovery.

When set above 0.0, it enables a slower, more methodical recovery mode. Increasing this value makes recovery more thorough but slower. Keeping it at 0.0 means relying on normal localization methods.

resample_interval: (unitless)

  • Default: 1
  • My value: 1
  • Matches default. Resampling after each filter update is usually appropriate.

A value of 1 means the filter resamples after every update. Increasing this number means less frequent resampling, which can help maintain diverse position estimates but might make the robot slower to adapt. Decreasing it below 1 isn’t recommended.

robot_model_type: (string)

  • Default: “nav2_amcl::DifferentialMotionModel”
  • My value: “nav2_amcl::OmniMotionModel”
  • In my navigation tutorials, I am using an omnidirectional mecanum wheeled robot. If you are using a differential drive robot, use the nav2_amcl::DifferentialMotionModel. Differential drive robots are the most common mobile robots in industry.

save_pose_rate: (Hz)

  • Default: 0.5
  • My value: 0.5
  • Same as default. 0.5Hz is a good rate for saving the estimated pose to the parameter server.

A value of 0.5 Hz means it saves the pose once every two seconds. Increasing this saves positions more frequently but uses more system resources. Decreasing it saves less often but is more efficient.

sigma_hit: (meters)

  • Default: 0.2
  • My value: 0.2
  • Matches default. 0.2 is a reasonable standard deviation for the Gaussian model in the z_hit part.

Increasing this value makes the robot more tolerant of small sensor inaccuracies but might reduce precision. Decreasing it makes the robot more strict about matching sensor readings to the map.

tf_broadcast: (bool)

  • Default: True
  • My value: true
Same as the default example. Broadcasting the map->odom transform is why we use AMCL.

This parameter should almost always be true. The main job of AMCL is to figure out how to connect the map to the robot’s odometry, and this parameter enables that connection.

transform_tolerance: (seconds)

  • Default: 1.0
  • My value: 1.0
  • Matches default. 1.0s is a good duration for how long the published transform should be considered valid.

If set to 1.0, the robot will trust its position data for 1 second. Increasing this helps handle communication delays but might use outdated positions. Decreasing it ensures more current data but might cause jittery behavior.

update_min_a: (radians)

  • Default: 0.2
  • My value: 0.05
  • Different from the default of 0.2, but 0.05 requires less rotational movement of the robot before performing a filter update.

The default 0.2 is about 11.5 degrees. Increasing this value means fewer updates, saving computation. Decreasing it (like to 0.05) makes the robot update more often, which is better for slow or precise movements.

update_min_d: (meters)

  • Default: 0.25
  • My value: 0.05
  • Smaller than the default of 0.25, requiring less translational movement before updating. Better for slow-moving robots.

z_hit: (unitless)

  • Default: 0.5
  • My value: 0.5
  • Same as default. A good mixture weight for the z_hit part of the likelihood field model.

How much to trust sensor readings that match the map perfectly. A weight of 0.5 means 50% importance. Increasing this makes the robot trust perfect matches more. Decreasing it makes the robot more forgiving of small mismatches.

z_max: (unitless)

  • Default: 0.05
  • My value: 0.05
  • Matches default. A reasonable mixture weight for the z_max part.

How much to trust maximum-range readings (when the sensor doesn’t see anything in range). A small value of 0.05 means these readings aren’t very important. Increasing this makes the robot trust empty space more. Decreasing it makes the robot mostly ignore maximum-range readings.

z_rand: (unitless)

  • Default: 0.5
  • My value: 0.5
  • Same as default. Works well as the mixture weight for the z_rand part.

How much to expect random, unexplainable sensor readings. A value of 0.5 means expecting quite a few random readings. Increasing this helps in busy, unpredictable environments. Decreasing it assumes sensor readings should mostly match the map.

z_short: (unitless)

  • Default: 0.005
  • My value: 0.05
  • Different from the default of 0.005. A higher value of 0.05 gives more weight to the z_short part of the model, which accounts for unexpected short readings. This may be beneficial if your robot frequently encounters objects that cause short readings, such as small obstacles or reflective surfaces.

scan_topic: (string)

  • Default: scan
  • My value: scan
  • Same as default. “scan” is the typical ROS 2 topic for laser scan data.

This parameter must match exactly what your robot’s laser scanner is using. Changing this is only needed if your robot uses a different topic name for its laser data.
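Putting the values above together, here is a minimal sketch of how this AMCL configuration might look in a Nav2 parameters YAML file. Only the parameters discussed above are shown; treat this as a starting point and check it against your own file:

amcl:
  ros__parameters:
    recovery_alpha_slow: 0.0    # 0.0 disables the slow recovery mode
    resample_interval: 1
    robot_model_type: "nav2_amcl::OmniMotionModel"  # use DifferentialMotionModel for a diff drive robot
    save_pose_rate: 0.5
    sigma_hit: 0.2
    tf_broadcast: true
    transform_tolerance: 1.0
    update_min_a: 0.05
    update_min_d: 0.05
    z_hit: 0.5
    z_max: 0.05
    z_rand: 0.5
    z_short: 0.05
    scan_topic: scan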

bt_navigator

Description

Here is the official configuration guide for bt_navigator.

The bt_navigator (Behavior Tree Navigator) package acts as the central decision-maker in Nav2, coordinating the robot’s navigation tasks through behavior trees. 

Think of it like giving directions to someone in a complex building: “Go down the hall, if you see a closed door, either wait or try another route. If you get lost, go back to the last place you were sure about.” This is how bt_navigator helps robots make navigation decisions.

The bt_navigator implements two main types of navigation:

  1. NavigateToPose: Gets the robot to a single destination
  2. NavigateThroughPoses: Guides the robot through multiple waypoints

When navigating, the bt_navigator coordinates the entire process. It takes your goal position, works with planners to create and follow paths, monitors for problems, triggers recovery behaviors when needed, and provides status updates. The behavior tree structure allows the robot to handle real-world complexity by making dynamic decisions, like finding new paths or initiating recovery behaviors when obstacles appear.

Parameters

global_frame: (unitless)

  • Default: “map”
  • My value: “map”
  • Matches default. “map” is the conventional name for the global reference frame.

You’ll almost never change this from “map” unless you have a very specific multi-map setup.

robot_base_frame: (unitless)

  • Default: “base_link”
  • My value: “base_link”
  • Matches default. “base_link” is the conventional name for the robot’s base frame.

You’ll typically only change this value if your robot uses a different naming convention. Using the wrong name will prevent the robot from navigating properly.

odom_topic: (unitless)

  • Default: “odom”
  • My value: “/odometry/filtered”
  • Different from the default. Using “/odometry/filtered” means I am using a filtered odometry source, which can provide better localization estimates.

bt_loop_duration: (milliseconds)

  • Default: 10
  • My value: 10
  • Matches default. 10 ms is a reasonable value for the behavior tree execution loop duration.

How often the robot’s “brain” (behavior tree) makes decisions. At 10ms, it makes 100 decisions per second. Increasing this gives the CPU more breathing room but makes the robot react slower. Decreasing it makes the robot more responsive but uses more CPU power.

default_server_timeout: (milliseconds)

  • Default: 20
  • My value: 20
  • Matches default. 20 ms is a good timeout value for the behavior tree action node to wait for an action server response.

How long to wait for navigation components to respond. At 20ms, it’s a brief timeout that catches quick failures. Increasing this makes the system more tolerant of slow responses but could delay error detection. Decreasing it might cause premature timeouts if components are slow.

wait_for_service_timeout: (milliseconds)

  • Default: 1000
  • My value: 1000
  • Matches default. This is the timeout value for BT nodes waiting for acknowledgement from a service or action server during initialization. It can be overwritten for individual BT nodes if needed.

How long to wait for navigation services to start up. At 1 second (1000ms), it’s usually enough for normal startup. Increasing this helps with slow-starting systems but delays overall startup. Decreasing it might cause failures if services start slowly.

action_server_result_timeout: (seconds)

  • Default: 900.0
  • My value: 900.0
  • Matches default. This is the timeout value for action servers to discard a goal handle if a result hasn’t been produced. The high value of 900 seconds (15 minutes) allows for long-running navigation tasks.

Increasing this parameter allows longer missions but might keep failed tasks running too long. Decreasing it might interrupt valid long-running tasks.

navigators: (vector<string>)

  • Default: [‘navigate_to_pose’, ‘navigate_through_poses’]
  • My value: [“navigate_to_pose”, “navigate_through_poses”]
  • Matches default. These are the plugins for navigator types implementing the nav2_core::BehaviorTreeNavigator interface. They define custom action servers with custom interface definitions for behavior tree navigation requests.

The types of navigation commands your robot understands:

  • ‘navigate_to_pose’: Go to a single location
  • ‘navigate_through_poses’: Follow a series of waypoints 

Adding custom navigators lets you create new types of movement commands. Removing options limits what navigation commands your robot can accept.

error_code_names: (vector<string>)

  • Default: [“compute_path_error_code”, “follow_path_error_code”]
  • My value: [“compute_path_error_code”, “follow_path_error_code”]
  • Matches default. This is a list of error codes that we want to keep track of during navigation tasks. Other error codes you can add include: smoother_error_code, navigate_to_pose_error_code, navigate_through_poses_error_code, etc.

The defaults track:

  • Problems finding a path
  • Problems following a path

Adding more codes (like “smoother_error_code”) lets you track more specific issues. Removing codes means you’ll get less detailed error reporting.

transform_tolerance: (seconds)

  • Default: 0.1
  • My value: 0.1
  • Matches default. This specifies the TF transform tolerance, which is the time window in which transforms are considered valid.

How old position information can be before it’s considered outdated. At 0.1 seconds, it’s fairly strict. Increasing this helps with laggy systems but might use outdated position data. Decreasing it ensures more current data but might cause jitter with slight delays.
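Here is a minimal sketch of the bt_navigator section with these values. The navigator plugin class strings are my assumption based on the standard Nav2 names; verify them against your Nav2 version:

bt_navigator:
  ros__parameters:
    global_frame: map
    robot_base_frame: base_link
    odom_topic: /odometry/filtered
    bt_loop_duration: 10
    default_server_timeout: 20
    wait_for_service_timeout: 1000
    action_server_result_timeout: 900.0
    transform_tolerance: 0.1
    navigators: ["navigate_to_pose", "navigate_through_poses"]
    navigate_to_pose:
      plugin: "nav2_bt_navigator::NavigateToPoseNavigator"  # check the exact plugin string for your version
    navigate_through_poses:
      plugin: "nav2_bt_navigator::NavigateThroughPosesNavigator"
    error_code_names: ["compute_path_error_code", "follow_path_error_code"]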

controller_server

Description

Here is the official configuration guide for the controller_server.

The controller_server takes in planned routes and figures out the exact velocity commands needed to move the robot along these paths smoothly and accurately. It ensures the robot follows the path correctly, avoids obstacles using the local map around it, and uses various plugins to check its progress and confirm when the robot reaches its goal.

Parameters

controller_frequency: (Hz)

  • Default: 20.0
  • My value: 5.0  
  • Different from default. This parameter depends on what your CPU can handle. I often use a lower frequency of 5 Hz to reduce computational load, but you are free to keep it as 20.0 Hz if you wish. 

costmap_update_timeout: (seconds)

  • Default: 0.30
  • My value: 0.30  
  • How long to wait for the costmap to update before giving up. If the costmap hasn’t updated within 0.3 seconds, the controller assumes something’s wrong. Increasing this helps with slow computers but might let the robot use outdated obstacle information. Decreasing it makes the system more responsive to obstacles but might cause more timeouts.

min_x_velocity_threshold: (m/s)

  • Default: 0.0001
  • My value: 0.001
  • Similar to default. 0.001 m/s is a reasonable minimum velocity threshold to filter odometry noise in the x direction.

At 0.001 m/s, tiny movements are ignored. Increasing this value means the robot must move faster to be considered “moving”. Decreasing it makes the system more sensitive to tiny movements.

min_y_velocity_threshold: (m/s)

  • Default: 0.0001 
  • My value: 0.001
  • Different from default. 0.001 m/s is a suitable minimum velocity threshold for filtering odometry noise in the y direction for robots that can move in the y-direction like a mecanum wheel omnidirectional robot.

Differential drive robots can’t move sideways, so this threshold only matters for holonomic robots. Increasing this ignores small sideways movements. Decreasing it makes the robot more responsive to sideways commands.

min_theta_velocity_threshold: (rad/s)

  • Default: 0.0001
  • My value: 0.001
  • Similar to default. 0.001 rad/s works well as a minimum angular velocity threshold to filter odometry noise.

At 0.001 rad/s, very slow rotations are still tracked. Increasing this value ignores slower rotations. Decreasing it makes the system track even tinier rotational movements.

failure_tolerance: (seconds)

  • Default: 0.0
  • My value: 0.3
  • Different from default. Allowing the controller to fail for up to 0.3 seconds before the FollowPath action fails provides some tolerance for temporary failures.

Setting this parameter to -1.0 makes it never give up. 0.0 disables failure tolerance completely. Increasing this value makes the system more tolerant of temporary failures, while decreasing it makes it give up more quickly on problematic paths.

progress_checker_plugin: (string)

  • Default: “progress_checker”
  • My value: “progress_checker” 
  • Matches default. Using the SimpleProgressChecker plugin is suitable for monitoring the robot’s progress.

progress_checker.required_movement_radius: (meters)

  • Default: 0.5
  • My value: 0.5
  • A movement radius of 0.5 meters is a good threshold for determining if the robot has made sufficient progress.

Increasing this value requires more movement to prove progress, which might cause false “stuck” detections. Decreasing it makes the system more lenient but might miss actual stuck conditions.

progress_checker.movement_time_allowance: (seconds)

  • Default: 10.0
  • My value: 10.0
  • Allowing 10 seconds for the robot to move the required distance is a reasonable time limit before considering it stuck.

If the robot doesn’t move the required radius within this time, it’s considered stuck. Increasing this gives more time for slow maneuvers but delays stuck detection. Decreasing it catches stuck conditions faster but might interrupt valid slow movements.

goal_checker_plugins: (vector<string>)

  • Default: [“goal_checker”]
  • My value: [“general_goal_checker”]
  • Different from default. The general_goal_checker namespace is used instead of goal_checker, so the goal checker parameters below are set under that name.

general_goal_checker.xy_goal_tolerance: (meters)

  • Default: 0.25
  • My value: 0.35
  • An xy goal tolerance of 0.35 meters is suitable for considering the goal reached in terms of position. You can set a smaller value if you want the robot to stop closer to your goal.

A larger value of 0.35m makes it easier for the robot to consider goals achieved. Increasing this value makes navigation more reliable but less precise. Decreasing it ensures more precise positioning but might cause the robot to spend more time trying to reach exact positions (i.e. dancing and twirling around the goal).

general_goal_checker.yaw_goal_tolerance: (radians)

  • Default: 0.25
  • My value: 0.50
  • A yaw goal tolerance of 0.50 radians (about 28.6 degrees) is appropriate for considering the goal orientation reached.

You can set this at 0.25 if you want; however, setting it too low can cause the robot to “dance” when it reaches the goal as it struggles to meet the goal tolerance.

controller_plugins: (vector<string>)

  • Default: [‘FollowPath’]
  • My value: [“FollowPath”]
  • Matches default. Using the FollowPath controller plugin is suitable for following paths.

FollowPath.plugin: (string)

Option 1 (MPPI): Model Predictive Path Integral controller simulates many possible trajectories and picks the best one. It’s computationally intensive but very smooth and handles dynamic constraints well. 

Option 2 (Rotation Shim): First rotates the robot to face the path, then uses a simpler controller for following it. This can make navigation more predictable but might be slower due to the initial rotation.

FollowPath.primary_controller: (string)

This parameter is only used with the Rotation Shim controller to specify which controller handles path following after rotation.

While you can use the DWBLocalPlanner if you want, I’ve had more success with the Regulated Pure Pursuit controller. 

  • The Regulated Pure Pursuit controller steers the robot towards a point ahead on the path, resulting in smoother and more natural-looking motion.
  • The Regulated Pure Pursuit controller also makes better turns into doorways. I have found that DWB can come close to scraping the wall.
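Here is a sketch of how the controller_server section might look when the Rotation Shim controller wraps Regulated Pure Pursuit (Option 2 above). The plugin class names (nav2_controller::SimpleProgressChecker, nav2_controller::SimpleGoalChecker, nav2_rotation_shim_controller::RotationShimController, nav2_regulated_pure_pursuit_controller::RegulatedPurePursuitController) are the standard Nav2 names, but verify them against your Nav2 version:

controller_server:
  ros__parameters:
    controller_frequency: 5.0
    costmap_update_timeout: 0.30
    min_x_velocity_threshold: 0.001
    min_y_velocity_threshold: 0.001
    min_theta_velocity_threshold: 0.001
    failure_tolerance: 0.3
    progress_checker_plugin: "progress_checker"
    goal_checker_plugins: ["general_goal_checker"]
    controller_plugins: ["FollowPath"]
    progress_checker:
      plugin: "nav2_controller::SimpleProgressChecker"
      required_movement_radius: 0.5
      movement_time_allowance: 10.0
    general_goal_checker:
      plugin: "nav2_controller::SimpleGoalChecker"
      xy_goal_tolerance: 0.35
      yaw_goal_tolerance: 0.50
    FollowPath:
      # Option 2: rotate to face the path first, then hand off to the primary controller
      plugin: "nav2_rotation_shim_controller::RotationShimController"
      primary_controller: "nav2_regulated_pure_pursuit_controller::RegulatedPurePursuitController"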

Model Predictive Path Integral Controller Parameters

time_steps: (integer)

  • Default: 56
  • My value: 15
  • Different from default. Using fewer timesteps reduces computational load while maintaining adequate planning horizon.

Increasing this value allows the controller to plan further ahead but significantly increases computational load. Each timestep represents a future point where trajectories are evaluated.

model_dt: (seconds)

  • Default: 0.05
  • My value: 0.2
  • Different from default. Larger timestep matches our control frequency and reduces computational overhead.

This is how far apart each prediction step is in time. Increasing it means bigger jumps between predictions, saving computation but potentially missing obstacles. Decreasing it gives finer predictions but increases computational load substantially.

batch_size: (integer)

  • Default: 1000
  • My value: 10000
  • Different from default. Larger batch size provides better sampling of possible trajectories.

This is how many potential trajectories are simulated in parallel. Increasing it gives better chances of finding optimal paths but uses more memory and CPU. Decreasing it saves resources but might miss better path options.

vx_std: (m/s)

  • Default: 0.2
  • My value: 0.2
  • Matches default. This standard deviation provides good sampling for linear velocity.

Increasing this value makes the controller try more diverse speeds but might lead to erratic behavior. Decreasing it makes speed choices more conservative but might miss useful options.

vy_std: (m/s)

  • Default: 0.2
  • My value: 0.2
  • Matches default. Appropriate for holonomic robots.

Controls how varied the sideways velocity samples are. Increasing this value allows more diverse lateral movements but might cause unstable behavior. Decreasing it makes lateral motion more predictable but might limit maneuverability.

wz_std: (rad/s)

  • Default: 0.2
  • My value: 0.4
  • Different from default. Higher angular velocity standard deviation allows for more varied rotational sampling.

Controls how much the rotation speeds vary in samples. My higher value of 0.4 tries more diverse turning speeds, useful for finding better rotational movements. Increasing it further might lead to erratic rotation behavior.

vx_max: (m/s)

  • Default: 0.5
  • My value: 0.5
  • Matches default. Maximum forward velocity suitable for most indoor robots.

Increasing this allows faster motion but might make the robot harder to control. Decreasing it makes the robot move more cautiously but takes longer to reach goals.

vx_min: (m/s)

  • Default: -0.35
  • My value: 0.0
  • Different from default. Set to 0.0 to prevent reverse motion.

vy_max: (m/s)

  • Default: 0.5
  • My value: 0.5
  • Matches default. Maximum lateral velocity for holonomic robots.

wz_max: (rad/s)

  • Default: 1.9
  • My value: 1.9
  • Matches default. Suitable maximum angular velocity for controlled turns.

ax_max: (m/s²)

  • Default: 3.0
  • My value: 3.0
  • Matches default. Maximum forward acceleration.

ax_min: (m/s²)

  • Default: -3.0
  • My value: -3.0
  • Matches default. Maximum deceleration.

ay_max: (m/s²)

  • Default: 3.0
  • My value: 3.0
  • Matches default. Maximum lateral acceleration.

az_max: (rad/s²)

  • Default: 3.5
  • My value: 3.5
  • Matches default. Maximum angular acceleration.

iteration_count: (integer)

  • Default: 1
  • My value: 1
  • Matches default. Single iteration is recommended with larger batch sizes.

Multiple iterations can improve path quality but are rarely needed with large batch sizes. Keeping it at 1 is efficient when using many samples.

temperature: (double)

  • Default: 0.3
  • My value: 0.3
  • Matches default. Good balance for trajectory selection based on costs.

Controls how strongly the controller favors lower-cost trajectories. Higher values make selection more random, potentially finding creative solutions. Lower values make it stick to the most obvious low-cost paths.

gamma: (double)

  • Default: 0.015
  • My value: 0.015
  • Matches default. Appropriate trade-off between smoothness and low energy.

gamma controls how the controller weighs trajectory costs in its decision-making. This parameter determines how “picky” the controller is about choosing trajectories based on their costs. Think of it as a cost sensitivity setting. 

With a higher gamma (like 0.03), the controller will strongly prefer the very best trajectories and mostly ignore mediocre ones. With a lower gamma (like 0.005), it will be more willing to consider trajectories even if they’re not optimal, which can help in complex situations where the perfect path isn’t obvious. 

The default of 0.015 provides a good balance between being selective enough to find good paths while still being flexible enough to handle various situations.

ConstraintCritic (robot limits):

  • Default: enabled (true), cost_power (1), cost_weight (4.0)
  • My values: enabled (true), cost_power (1), cost_weight (4.0)
  • Matches default. This parameter makes sure the robot doesn’t try to move in impossible ways. Think of this like a speed limit enforcer. It makes sure the robot doesn’t try to turn too sharply or move too fast – essentially keeping it within what it can physically do. Increasing the weight makes it more strict about these limits, while decreasing makes it more relaxed but might push the robot too hard.

CostCritic (obstacle avoidance):

  • Default: enabled (true), cost_power (1), cost_weight (3.81), critical_cost (300.0), consider_footprint (false), collision_cost (1000000.0)
  • My values: enabled (true), cost_power (1), cost_weight (3.81), critical_cost (300.0), consider_footprint (true), collision_cost (1000000.0), near_goal_distance (1.0), trajectory_point_step (2)
  • Different from default in consider_footprint; I also explicitly set near_goal_distance and trajectory_point_step.

This parameter helps the robot understand its size and stay safe from obstacles. Think of this like a bubble around the robot to keep it safe. The critical_cost is like a warning zone, telling the robot “this is getting dangerous” when it’s too close to obstacles. 

The collision_cost is like a big red stop sign – the robot will do almost anything to avoid an actual collision. By setting consider_footprint to true (different from default), we make the robot more aware of its actual shape instead of just treating it like a point, which is safer but takes more computer power.

The near_goal_distance of 1.0 meter means the robot can get a bit closer to obstacles when it’s trying to reach its final destination.

GoalCritic (getting to destination):

  • Default: enabled (true), cost_power (1), cost_weight (5.0), threshold_to_consider (1.4)
  • My values: enabled (true), cost_power (1), cost_weight (5.0), threshold_to_consider (1.4)
  • Matches default. Appropriate weight for goal-oriented behavior.

This parameter helps the robot focus on getting to its destination point. Think of this like a magnet pulling the robot toward its goal. 

The weight of 5.0 determines how strong this “pull” is. The threshold_to_consider of 1.4 meters means the robot starts really focusing on getting to the exact goal point when it’s within this distance. Before that, it’s more focused on following the path. Increasing the weight makes the robot more aggressive about getting to the goal, while decreasing it makes it more relaxed about reaching the exact spot.

GoalAngleCritic (final rotation):

  • Default: enabled (true), cost_power (1), cost_weight (3.0), threshold_to_consider (0.5)
  • My values: enabled (true), cost_power (1), cost_weight (3.0), threshold_to_consider (0.5)
  • Matches default. Good balance for goal orientation alignment.

This parameter helps the robot turn to face the right direction at the goal…like making sure you’re facing the right way when you park a car. 

When the robot gets within 0.5 meters (threshold_to_consider) of its goal, it starts focusing on turning to the requested final direction. The weight of 3.0 determines how much the robot cares about getting this final rotation right. Increasing it makes the robot more picky about its final orientation, while decreasing it means it won’t try as hard to get the exact angle right.

PathAlignCritic: (staying on path)

  • Default: enabled (true), cost_power (1), cost_weight (10.0), threshold_to_consider (0.5)
  • My value: enabled (true), cost_power (1), cost_weight (14.0), threshold_to_consider (0.5)
  • Different from default. 

I use a higher weight to make the robot stick to the path more strictly. Think of this like trying to walk on a line drawn on the ground. My higher weight (14.0 vs 10.0) means the robot tries harder to stay exactly on the planned path. The threshold of 0.5 meters means it stops worrying about strict path following when it’s very close to the goal. Increasing the weight makes the robot follow the path more precisely, while decreasing it allows more deviation from the path.

PathAngleCritic: (path direction)

  • Default: enabled (true), cost_power (1), cost_weight (2.2), threshold_to_consider (0.5)
  • My value: enabled (true), cost_power (1), cost_weight (2.0), threshold_to_consider (0.5)
  • Different from default. I use a slightly lower weight to allow more flexible turning. 

My slightly lower weight (2.0 vs 2.2) means the robot is a bit more relaxed about matching the exact path direction. Within 0.5 meters of the goal, it stops caring about path direction and focuses on final positioning. Increasing this weight makes the robot more strictly follow the path’s direction, while decreasing it allows more freedom in orientation.

PathFollowCritic: (forward progress)

  • Default: enabled (true), cost_power (1), cost_weight (5.0), threshold_to_consider (1.4)
  • My value: enabled (true), cost_power (1), cost_weight (5.0), threshold_to_consider (1.4)
  • Matches default. This encourages the robot to make progress along the path. 

Think of this parameter like a gentle push forward along the path. The weight of 5.0 determines how strongly the robot is urged to move forward. When within 1.4 meters of the goal, it switches from path following to focusing on the final approach. Increasing the weight makes the robot more aggressive about moving forward, while decreasing it makes it more willing to slow down or adjust position.

PreferForwardCritic: (forward motion)

  • Default: enabled (true), cost_power (1), cost_weight (5.0), threshold_to_consider (0.5)
  • My value: enabled (true), cost_power (1), cost_weight (5.0), threshold_to_consider (0.5)
  • Matches default. This encourages the robot to drive forward instead of backward, like preferring to walk forward instead of backward.

The weight of 5.0 determines how much the robot prefers forward motion. Within 0.5 meters of the goal, it stops caring about forward motion to allow any adjustments needed for final positioning. Increasing this value makes the robot more strongly prefer forward motion, while decreasing it makes it more willing to drive backward.

TwirlingCritic (smooth rotation):

  • Default: enabled (true), cost_power (1), cost_weight (10.0)
  • My values: enabled (true), cost_power (1), cost_weight (10.0)
  • Matches default. Good weight for preventing unnecessary rotation for omnidirectional robots.

This parameter prevents unnecessary dancing and spinning. The weight of 10.0 determines how much the robot tries to avoid extra rotation. Increasing this value makes the robot more resistant to changing its rotation, while decreasing it allows more rotational movement.
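If you go with Option 1 (MPPI), the FollowPath block inside controller_server might look like the following sketch, collecting the values discussed above. Only the critics whose values differ from the defaults are spelled out, and the motion_model line is my assumption for an omnidirectional mecanum robot (use "DiffDrive" for a differential drive robot):

    FollowPath:
      plugin: "nav2_mppi_controller::MPPIController"
      motion_model: "Omni"  # assumption: mecanum robot
      time_steps: 15
      model_dt: 0.2
      batch_size: 10000
      vx_std: 0.2
      vy_std: 0.2
      wz_std: 0.4
      vx_max: 0.5
      vx_min: 0.0
      vy_max: 0.5
      wz_max: 1.9
      ax_max: 3.0
      ax_min: -3.0
      ay_max: 3.0
      az_max: 3.5
      iteration_count: 1
      temperature: 0.3
      gamma: 0.015
      critics: ["ConstraintCritic", "CostCritic", "GoalCritic", "GoalAngleCritic", "PathAlignCritic", "PathAngleCritic", "PathFollowCritic", "PreferForwardCritic", "TwirlingCritic"]
      CostCritic:
        enabled: true
        cost_power: 1
        cost_weight: 3.81
        critical_cost: 300.0
        consider_footprint: true
        collision_cost: 1000000.0
        near_goal_distance: 1.0
        trajectory_point_step: 2
      PathAlignCritic:
        enabled: true
        cost_power: 1
        cost_weight: 14.0
        threshold_to_consider: 0.5
      PathAngleCritic:
        enabled: true
        cost_power: 1
        cost_weight: 2.0
        threshold_to_consider: 0.5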

Regulated Pure Pursuit 

Here’s the analysis of the Regulated Pure Pursuit controller parameters, comparing the defaults with my values:

desired_linear_vel: (m/s)

  • Default: 0.5
  • My value: 0.4
  • Different from default. I use a slightly lower cruising speed for smoother motion.

Think of this as the robot’s preferred cruising speed. Increasing this makes the robot move faster overall, decreasing it makes it move slower but more carefully. Choose a speed that makes sense for your environment and robot’s capabilities.

lookahead_dist: (m)

  • Default: 0.6
  • My value: 0.7
  • Different from default. A lookahead distance of 0.7 m is used, which is slightly higher than the default. This allows the robot to anticipate and react to the path just a touch further ahead.

Increasing this makes the robot take smoother, wider turns but might cut corners more. Decreasing it makes the robot follow the path more precisely but might make motion more jerky.

min_lookahead_dist: (m)

  • Default: 0.3
  • My value: 0.5
  • Different from default. The minimum lookahead distance is set to 0.5 m, higher than the default. This ensures a minimum level of path anticipation, even at lower speeds.

Increasing this value makes turns smoother but might make tight maneuvers harder. Decreasing it allows for tighter maneuvers but might make motion less smooth.

max_lookahead_dist: (m)

  • Default: 0.9
  • My value: 0.7
  • Different from default. The maximum lookahead distance is limited to 0.7 m, lower than the default. This prevents the robot from looking too far ahead, which can be beneficial in environments with frequent turns or obstacles.

lookahead_time: (s)

  • Default: 1.5
  • My value: 1.5
  • Matches default. The lookahead time of 1.5 seconds is used, which is a reasonable value for maintaining a balance between path anticipation and responsiveness.

The 1.5 second preview allows the robot to anticipate and smoothly adjust to upcoming path changes. Increasing this makes the robot more forward-thinking but might react slower to sudden changes, while decreasing it makes it more reactive but potentially less smooth.

rotate_to_heading_angular_vel: (rad/s)

  • Default: 1.8
  • My value: 0.375
  • Different from default. A significantly lower angular velocity of 0.375 rad/s is used for rotating to the desired heading. This slower rotation speed allows for more controlled and stable heading adjustments. You can use whatever value makes sense for your robot.

transform_tolerance: (s)

  • Default: 0.1
  • My value: 0.1
  • Matches default. The transform tolerance of 0.1 seconds is used, which is a reasonable value for allowing small time differences between transforms.

The 0.1 second tolerance means position info must be very fresh. Increasing this helps with slower computers but might use outdated information, while decreasing it demands more up-to-date data but might cause more errors.

use_velocity_scaled_lookahead_dist: (bool)

  • Default: false
  • My value: true
  • Different from default. Velocity scaling of the lookahead distance is enabled. This means that the lookahead distance is adjusted based on the robot’s velocity, providing better path following behavior.

Setting this to true means the robot automatically adjusts how far it looks ahead based on its speed. This generally gives better path following behavior than a fixed distance.

min_approach_linear_velocity: (m/s)

  • Default: 0.05
  • My value: 0.05
  • Matches default. The minimum approach linear velocity of 0.05 m/s is used, which is a suitable value for the final approach to the goal.

approach_velocity_scaling_dist: (m)

  • Default: 1.0
  • My value: 0.6
  • Different from default. The approach velocity scaling distance is set to 0.6 m, which is smaller than the default. This means that the robot starts slowing down earlier when approaching the goal.

use_cost_regulated_linear_velocity_scaling: (bool)

  • Default: false
  • My value: true
  • Different from default. Cost-regulated linear velocity scaling is enabled. If there are obstacles nearby, the robot will slow down.

Like slowing down when driving close to parked cars, setting this to true makes the robot automatically slow down when near obstacles. This makes navigation safer but might make the robot move slower in tight spaces.

regulated_linear_scaling_min_radius: (m)

  • Default: 0.9
  • My value: 0.85
  • Different from default. The minimum radius for regulated linear velocity scaling is set to 0.85 m, slightly lower than the default. This allows for velocity scaling in tighter spaces.

Think of this as your comfort zone around obstacles. At 0.85m, it’s slightly more tolerant of tight spaces than default. Increasing this makes the robot slow down further from obstacles, while decreasing it allows closer approaches at higher speeds.

regulated_linear_scaling_min_speed: (m/s)

  • Default: 0.25
  • My value: 0.25
  • Matches default. The minimum speed for regulated linear velocity scaling is set to 0.25 m/s, which is a reasonable value for maintaining a minimum speed while scaling.

Increasing this value makes the robot maintain higher speeds near obstacles, while decreasing it allows slower, more cautious movement in tight spaces.

use_fixed_curvature_lookahead: (bool)

  • Default: false
  • My value: false
  • Matches default. Fixed curvature lookahead is not used, allowing for dynamic lookahead distance based on the path curvature.

Think of this like looking closer when going around corners and further on straight paths. Keeping this false lets the robot automatically adjust how far it looks ahead based on the path. Setting it to true would force a fixed lookahead distance.

curvature_lookahead_dist: (m)

  • Default: 1.0
  • My value: 0.6
  • Different from default. The curvature lookahead distance is set to 0.6 m, which is shorter than the default. This allows the robot to be more responsive and precise in following curved paths, at the potential cost of not planning as far ahead. 

use_rotate_to_heading: (bool)

  • Default: true
  • My value: true
  • Matches default. The rotate to heading behavior is enabled, allowing the robot to rotate in place to align with the path heading when necessary.

rotate_to_heading_min_angle: (rad)

  • Default: 0.785
  • My value: 0.785
  • Matches default. The minimum angle for rotating to heading is set to 0.785 radians (approximately 45 degrees), which is a reasonable threshold for triggering the rotate to heading behavior.

max_angular_accel: (rad/s^2)

  • Default: 3.2
  • My value: 3.2
  • Matches default. The maximum angular acceleration is set to 3.2 rad/s^2, which is a suitable value for limiting the angular acceleration of the robot.

At 3.2 rad/s², this value provides smooth transitions when starting or stopping turns. Increasing this allows faster turn adjustments but might make motion jerky, while decreasing it makes turns smoother but less responsive.

interpolate_curvature_after_goal: (bool)

  • Default: false
  • My value: false
  • Matches default. Enabling this parameter can lead to smoother path following near the goal by mitigating oscillations. This feature requires another parameter, use_fixed_curvature_lookahead, to be set to true. The use_fixed_curvature_lookahead parameter, when enabled, ensures that a fixed lookahead distance is used for curvature calculation.

use_cancel_deceleration: (bool)

  • Default: false
  • My value: false
  • Matches default. If set to true, the robot will decelerate gracefully by the cancel_deceleration value (see below) when a goal is canceled. You can set this to true if you want.

Keeping this false means the robot stops normally when a goal is canceled. Setting it true would make it slow down more gradually using the cancel_deceleration value.

cancel_deceleration: (m/s^2)

  • Default: 3.2
  • My value: 3.2
  • Matches default. You can lower this value if you want a more gradual deceleration when a goal is canceled.

max_robot_pose_search_dist: (m)

  • Default: 10.0
  • My value: 10.0
  • Matches default. The maximum distance to search for the robot’s pose during path following is set to 10.0 m. 

At 10 meters, it gives plenty of room to find where the robot is on the path. Increasing this value allows recovery from larger position errors but takes more computation, while decreasing it might make the robot get lost more easily but uses less computation.

use_collision_detection: (bool)

  • Default: true
  • My value: true
  • Matches default. Collision detection is enabled, allowing the controller to consider potential collisions while following the path.

max_allowed_time_to_collision_up_to_carrot: (s)

  • Default: 1.0
  • My value: 1.5
  • Different from default. The maximum allowed time to collision up to the carrot (lookahead point) is set to 1.5 seconds, giving the robot more time to react to potential collisions.

My higher value (1.5s vs 1.0s) gives more time to react to potential collisions. Increasing this makes navigation safer but more conservative, while decreasing it allows more aggressive movement but might catch obstacles later.
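Collecting the Regulated Pure Pursuit values discussed above, here is a sketch of the parameters as they might appear in the FollowPath namespace (alongside the Rotation Shim plugin and primary_controller lines shown earlier):

    FollowPath:
      # Regulated Pure Pursuit parameters
      desired_linear_vel: 0.4
      lookahead_dist: 0.7
      min_lookahead_dist: 0.5
      max_lookahead_dist: 0.7
      lookahead_time: 1.5
      rotate_to_heading_angular_vel: 0.375
      transform_tolerance: 0.1
      use_velocity_scaled_lookahead_dist: true
      min_approach_linear_velocity: 0.05
      approach_velocity_scaling_dist: 0.6
      use_cost_regulated_linear_velocity_scaling: true
      regulated_linear_scaling_min_radius: 0.85
      regulated_linear_scaling_min_speed: 0.25
      use_fixed_curvature_lookahead: false
      curvature_lookahead_dist: 0.6
      use_rotate_to_heading: true
      rotate_to_heading_min_angle: 0.785
      max_angular_accel: 3.2
      interpolate_curvature_after_goal: false
      use_cancel_deceleration: false
      cancel_deceleration: 3.2
      max_robot_pose_search_dist: 10.0
      use_collision_detection: true
      max_allowed_time_to_collision_up_to_carrot: 1.5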

local_costmap

Description

Here is the official configuration guide for the local_costmap.

The local costmap acts like the robot’s immediate awareness of its surroundings, similar to how you use your eyes while walking or driving. Just as you constantly watch for obstacles in your path, the local costmap creates a detailed map of the area immediately around the robot using sensors like laser scanners, depth cameras, or sonars.

Think of it as a moving “safety bubble” that travels with the robot. Inside this bubble, the robot creates a grid where each cell is marked based on sensor data. When sensors detect an obstacle – like a wall, furniture, or a person walking by – those areas get marked as unsafe in the grid. This constantly updating grid helps the robot make smart decisions about how to move safely through its immediate space.

The local costmap works hand-in-hand with the robot’s controller server, which decides the robot’s actual movements. As the robot moves, the local costmap feeds information about nearby obstacles to the controller, allowing it to adjust the robot’s speed and direction to avoid collisions – much like how you might slow down or change direction when you see something in your path.

This immediate awareness complements the global costmap, which provides the bigger picture needed for overall route planning. While the global costmap helps plot the general path (like a map showing your entire route), the local costmap handles the moment-to-moment navigation decisions (like watching where you’re going while walking).

Parameters

update_frequency: (Hz)

  • Default: 5.0
  • My value: 5.0
  • Same as default. 5 Hz provides a good balance between keeping the costmap updated and computational load.

publish_frequency: (Hz)

  • Default: 5.0 
  • My value: 5.0
  • Matches default. Publishing the costmap at 5 Hz is sufficient for most use cases.

global_frame: (string)

  • Default: “map”
  • My value: “odom”
  • Different from default. Using the “odom” frame helps keep the local costmap consistent with the robot’s movement, making it easier for the robot to navigate and avoid obstacles in its immediate surroundings, regardless of any changes or updates to the global map.

robot_base_frame: (string)

  • Default: “base_link”
  • My value: “base_link” 
  • Same as default. The “base_link” frame represents the robot’s center of rotation.

rolling_window: (bool)

  • Default: false
  • My value: true
  • Different from default. Enabling the rolling window allows the costmap to move with the robot, providing a local view around the robot.

width: (meters)

  • Default: 5.0
  • My value: 5.0
  • Same as default. A width of 5 meters provides a sufficiently large area around the robot for local planning.

height: (meters)

  • Default: 5.0
  • My value: 5.0
  • Matches default. A height of 5 meters is suitable for covering the local area around the robot.

resolution: (meters/cell)

  • Default: 0.1
  • My value: 0.05
  • Different from default. A higher resolution of 0.05 m/cell provides more detailed obstacle representation for precise navigation.

robot_radius: (meters)

  • Default: 0.1
  • My value: 0.15
  • Slightly different from default. You need to set this value to the radius of your robot base.

plugins: (list of strings)

  • Default: [“static_layer”, “obstacle_layer”, “inflation_layer”]
  • Common values for real-world robots: [“obstacle_layer”,  “voxel_layer”, “range_sensor_layer”, “denoise_layer”, “inflation_layer”] (order matters here)
  • Different from default.
    • Added “voxel_layer” for 3D obstacle representation using a depth camera.
    • Added “range_sensor_layer” for handling range sensor data from an ultrasonic sensor (if you have one). 
    • Added “denoise_layer” for removing salt and pepper noise from the sensors.
    • Removed “static_layer” since it’s not needed for the local costmap.
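Here is a sketch of the top of the local_costmap section with these values. The doubled local_costmap key is the standard Nav2 YAML nesting, and Nav2 expects width and height as integers; each layer’s own parameters are covered below:

local_costmap:
  local_costmap:
    ros__parameters:
      update_frequency: 5.0
      publish_frequency: 5.0
      global_frame: odom
      robot_base_frame: base_link
      rolling_window: true
      width: 5       # meters
      height: 5      # meters
      resolution: 0.05
      robot_radius: 0.15
      plugins: ["obstacle_layer", "voxel_layer", "range_sensor_layer", "denoise_layer", "inflation_layer"]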

obstacle_layer

The obstacle layer creates a grid-like map of the robot’s surroundings using data from sensors like LIDAR. It marks cells in the grid as either occupied by obstacles or free space, helping the robot understand where it can and cannot safely move. As the robot navigates, the obstacle layer continuously updates this map based on the latest sensor readings.

Here is the official configuration guide for the obstacle layer.

enabled: (bool)

  • Default: True
  • My value: True
  • Matches default. The obstacle layer is enabled.

observation_sources: (vector<string>)

  • Default: {“”}
  • My value: scan
  • Different from default. The “scan” observation source is specified, which corresponds to the LIDAR.

scan.topic: (string)

  • Default: “”
  • My value: /scan
  • Different from default. The topic “/scan” is specified as the source of laser scan data.

scan.raytrace_min_range: (meters)

  • Default: 0.0
  • My value: 0.20
  • Different from default. The minimum range for raytracing is set to 0.20 meters, which means that the obstacle layer will start clearing obstacles from this distance. I use this distance because part of the robot’s body is within 0.20 meters of the LIDAR.

scan.obstacle_min_range: (meters)

  • Default: 0.0
  • My value: 0.20
  • Different from default. The minimum range for marking obstacles is set to 0.20 meters, which means that obstacles closer than this distance will not be marked in the costmap.

scan.max_obstacle_height: (meters)

  • Default: 0.0
  • My value: 2.0
  • Different from default. The maximum height of obstacles to be marked in the costmap is set to 2.0 meters.

scan.clearing: (bool)

  • Default: False
  • My value: True
  • Different from default. Clearing is enabled, allowing the obstacle layer to clear free space in the costmap based on laser data.

scan.marking: (bool)

  • Default: True
  • My value: True
  • Same as default. Marking is enabled, allowing the obstacle layer to mark obstacles in the costmap.

scan.data_type: (string)

  • Default: “LaserScan”
  • My value: “LaserScan”
  • Matches default. The data type is correctly set to “LaserScan” for laser scanner data.
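Inside the local_costmap section, the obstacle layer block might look like this sketch. The plugin class name nav2_costmap_2d::ObstacleLayer is the standard Nav2 plugin, but check it against your own YAML file:

      obstacle_layer:
        plugin: "nav2_costmap_2d::ObstacleLayer"
        enabled: True
        observation_sources: scan
        scan:
          topic: /scan
          raytrace_min_range: 0.20
          obstacle_min_range: 0.20
          max_obstacle_height: 2.0
          clearing: True
          marking: True
          data_type: "LaserScan"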

voxel_layer

The voxel layer is like a 3D version of the obstacle layer. Instead of creating a 2D grid map, it divides the space around the robot into small 3D cubes called voxels. This allows the robot to understand the environment in three dimensions, detecting obstacles not only on the ground but also at different heights, like tabletops or overhanging objects. The data source for this layer is typically a depth camera like the Intel RealSense.

Here is the official configuration guide for the voxel layer.

Here is the tuning guide for the voxel layer parameters based on my experience messing around with this on various mobile robots:

enabled: (bool)

  • Default: True
  • My value: True
  • Same as default. Enabling the voxel layer allows the costmap to incorporate 3D obstacle information from depth sensors.

footprint_clearing_enabled: (bool)

  • Default: True
  • My value: true
  • Matches default. Clearing the robot’s footprint in the 3D costmap helps prevent the robot from being stuck by obstacles detected in its own footprint.

max_obstacle_height: (meters)

  • Default: 2.0
  • My value: 2.0
  • Same as default. A maximum obstacle height of 2 meters is suitable for most indoor environments.

z_voxels: (int)

  • Default: 10
  • My value: 16
  • Different from default. Using 16 voxels in the height dimension provides a higher resolution representation of the 3D space, up to the maximum allowed value.

origin_z: (meters)

  • Default: 0.0
  • My value: 0.0
  • Matches default. The origin of the voxel grid in the z-axis is set to the ground level.

z_resolution: (meters)

  • Default: 0.2
  • My value: 0.2
  • Same as default. A z-resolution of 0.2 meters provides a good balance between detail and computational efficiency.

unknown_threshold: (int)

  • Default: 15
  • My value: 15
  • Matches default. A minimum of 15 empty voxels in a column is required to mark that cell as unknown in the 2D occupancy grid.

mark_threshold: (int)

  • Default: 0
  • My value: 0
  • Same as default. Any occupied voxel in a column will result in marking that cell as occupied in the 2D occupancy grid.

combination_method: (int)

  • Default: 1
  • My value: 1
  • Matches default. The voxel layer will update the master costmap by taking the maximum value between the existing costmap value and the voxel layer’s value for each cell.

publish_voxel_map: (bool)

  • Default: False
  • My value: False
  • Same as default. Publishing the 3D voxel grid is disabled to reduce computational overhead.

observation_sources: (vector<string>)

  • Default: {“”}
  • My value: realsense1
  • Different from default. The “realsense1” observation source is specified, which corresponds to the 3D depth sensor.

realsense1.topic: (string)

  • Default: “”
  • My value: /cam_1/depth/color/points
  • Different from default. The topic must be your source of 3D pointcloud data.

realsense1.max_obstacle_height: (meters)

  • Default: 0.0
  • My value: 2.0
  • Different from default. The maximum height of obstacles to be marked in the costmap is set to 2.0 meters.

realsense1.min_obstacle_height: (meters)

  • Default: 0.0
  • My value: 0.0
  • Same as default. The minimum height of obstacles to be marked in the costmap is set to ground level.

realsense1.obstacle_max_range: (meters)

  • Default: 2.5
  • My value: 1.25
  • Different from default. The maximum range for marking obstacles is set to 1.25 meters, a lower value than the default of 2.5 meters, providing a more focused view of nearby obstacles. 
  • This value is largely based on my experience with the Intel RealSense D435 depth camera. 

realsense1.obstacle_min_range: (meters)

  • Default: 0.0
  • My value: 0.05
  • Different from default. The minimum range for marking obstacles is set to 0.05 meters, filtering out noise very close to the sensor.

realsense1.raytrace_max_range: (meters)

  • Default: 3.0
  • My value: 3.0
  • Same as default. The maximum range for raytracing to clear obstacles is set to 3.0 meters.

realsense1.raytrace_min_range: (meters)

  • Default: 0.0
  • My value: 0.05
  • Different from default. The minimum range for raytracing to clear obstacles is set to 0.05 meters, filtering out noise very close to the sensor.

realsense1.clearing: (bool)

  • Default: False
  • My value: False
  • Matches default. Clearing of obstacles using raytracing is disabled.

realsense1.marking: (bool)

  • Default: True
  • My value: True
  • Same as default. Marking of obstacles in the costmap is enabled.

realsense1.data_type: (string)

  • Default: “LaserScan”
  • My value: “PointCloud2”
  • Different from default. The data type is set to “PointCloud2” to match the 3D depth camera data format.
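Here is a sketch of the voxel layer block using the values above, assuming a single RealSense depth camera publishing on /cam_1/depth/color/points (the plugin class name nav2_costmap_2d::VoxelLayer is the standard Nav2 plugin):

      voxel_layer:
        plugin: "nav2_costmap_2d::VoxelLayer"
        enabled: True
        footprint_clearing_enabled: true
        max_obstacle_height: 2.0
        z_voxels: 16
        origin_z: 0.0
        z_resolution: 0.2
        unknown_threshold: 15
        mark_threshold: 0
        combination_method: 1
        publish_voxel_map: False
        observation_sources: realsense1
        realsense1:
          topic: /cam_1/depth/color/points
          max_obstacle_height: 2.0
          min_obstacle_height: 0.0
          obstacle_max_range: 1.25
          obstacle_min_range: 0.05
          raytrace_max_range: 3.0
          raytrace_min_range: 0.05
          clearing: False
          marking: True
          data_type: "PointCloud2"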

range_sensor_layer 

The range sensor layer is similar to the obstacle layer, but it uses data from range sensors like ultrasonic sensors or infrared sensors instead of laser scanners. It helps the robot detect obstacles at closer ranges and in areas where laser scanners might not be effective. The range sensor layer adds an extra level of safety by ensuring the robot is aware of nearby obstacles.

Here is the official configuration guide for the range sensor layer.

Here is the tuning guide for the range sensor layer parameters based on my experience messing around with this on various mobile robots:

enabled: (bool)

  • Default: True
  • My value: False
  • Different from default. The range sensor layer is disabled, meaning that it will not be used to update the costmap.

topics: (vector<string>)

  • Default: [“”]
  • My value: [“/ultrasonic1”]
  • Different from default. I have specified the topic “/ultrasonic1” as the input source for the range sensor layer. This assumes that you have a range sensor (e.g., ultrasonic sensor) publishing data on this topic.

phi: (double)

  • Default: 1.2
  • My value: 1.2
  • Same as default. The phi value determines the width of the sensor’s field of view. A value of 1.2 means that the sensor’s coverage area will be 1.2 radians wide.

inflate_cone: (double)

  • Default: 1.0
  • My value: 1.0
  • Same as default. The inflate_cone parameter determines how much the triangular area covered by the sensor is inflated. A value of 1.0 means no inflation.

no_readings_timeout: (double)

  • Default: 0.0
  • My value: 0.0
  • Same as default. If the layer does not receive sensor data for this amount of time (in seconds), it will warn the user and mark the layer as not current. A value of 0.0 disables this timeout.

clear_threshold: (double)

  • Default: 0.2
  • My value: 0.2
  • Same as default. Cells with a probability below this threshold are marked as free in the costmap.

mark_threshold: (double)

  • Default: 0.8
  • My value: 0.8
  • Same as default. Cells with a probability above this threshold are marked as occupied in the costmap.

clear_on_max_reading: (bool)

  • Default: False
  • My value: True
  • Different from default. When set to True, the sensor readings will be cleared when the maximum range is reached. This can help to remove false positives and stale readings.

input_sensor_type: (string)

  • Default: ALL
  • My value: ALL
  • Same as default. The input sensor type is set to “ALL”, which means the layer will automatically select the appropriate type based on the sensor’s minimum and maximum range values.
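Here is a sketch of the range sensor layer block with these values, assuming an ultrasonic sensor publishing on /ultrasonic1 (the plugin class name nav2_costmap_2d::RangeSensorLayer is the standard Nav2 plugin). Note that the layer is disabled here; flip enabled to True if you actually have a range sensor:

      range_sensor_layer:
        plugin: "nav2_costmap_2d::RangeSensorLayer"
        enabled: False
        topics: ["/ultrasonic1"]
        phi: 1.2
        inflate_cone: 1.0
        no_readings_timeout: 0.0
        clear_threshold: 0.2
        mark_threshold: 0.8
        clear_on_max_reading: True
        input_sensor_type: ALL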

denoise_layer

The denoise layer acts like a digital filter for your robot’s map. It helps clean up false obstacles that might appear due to sensor errors, especially from LIDARs.

Imagine a LIDAR sometimes sees a speck of dust and thinks it’s a wall. This can confuse the robot, making it think there are obstacles where there aren’t any. The denoise layer works like a filter, removing these “ghost” obstacles.

How it works:

  • It looks at the map data and identifies small groups of cells marked as obstacles.
  • By default, it removes single isolated obstacle points.
  • You can adjust its settings to remove larger groups of false obstacles if needed.

Key points:

  • The layer processes only obstacle information in the costmap.
  • Cells identified as noise are replaced with free space.
  • It’s typically placed before the inflation layer to prevent inflating noise-induced obstacles.

Here is the official configuration guide for the denoise layer.

enabled: (bool)

  • Default: True
  • My value: true
  • This parameter turns the denoise layer on or off. I’ve kept it on to help clean up my map.

minimal_group_size: (int)

  • Default: 2
  • My value: 2
  • This parameter sets how big a group of obstacle points needs to be to stay on the map. With 2, single dots get erased, but anything bigger stays. It’s like telling the robot, “If you see just one obstacle point all alone, ignore it. It’s probably not real.”

group_connectivity_type: (int)

  • Default: 8
  • My value: 8
  • This parameter decides how obstacle points connect to form groups. 8 means points touching corners count as connected, like a checkers board. 4 would only count direct side-to-side connections.

Important Note:

  • While this layer can really help clean up your map, it might slow things down, especially if you’re using a long-range LIDAR (like 20+ meters) or have a big map. It’s like running a spell-check on a huge document – helpful, but it takes time. 
  • For my indoor robots, the benefits outweigh the small speed loss, but you might need to test it out in your specific setup.

Remember, the goal is to help your robot navigate better by removing false obstacles, but without erasing real ones. These settings have worked well for me, but feel free to adjust based on how your robot behaves in its environment.
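For reference, the denoise layer block is short. Here is a sketch with the values above (the plugin class name nav2_costmap_2d::DenoiseLayer is the standard Nav2 plugin, but verify it for your Nav2 version):

      denoise_layer:
        plugin: "nav2_costmap_2d::DenoiseLayer"
        enabled: true
        minimal_group_size: 2   # erase isolated single obstacle points
        group_connectivity_type: 8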

inflation_layer

The inflation layer adds a safety buffer around obstacles in the costmap. It expands the size of the obstacles by a certain amount, making the robot keep a distance from them. This helps the robot navigate more safely and avoid getting too close to walls, furniture, or other objects.

Here is the official configuration guide for the inflation layer.

enabled: (bool)

  • Default: True
  • My value: true
  • Same as default. The inflation layer is enabled to create a cost decay around obstacles.

inflation_radius: (meters)

  • Default: 0.55
  • My value: 1.75
  • Different from default. The inflation radius is set to 1.75 meters, which is larger than the default value. This means that the cost will decay over a larger distance around obstacles, making the robot maintain a greater clearance from them. 
  • Credit to this ROS Tuning Guide for finding this magic number which has worked really well on my own mobile robots for indoor environments.

cost_scaling_factor: (unitless)

  • Default: 1.0
  • My value: 2.58
  • Different from default. The cost scaling factor determines the rate at which the cost decays exponentially across the inflation radius. A value of 2.58 means that the cost will decay more quickly compared to the default value of 1.0. 
  • Credit to this ROS Tuning Guide for finding this magic number which has worked really well on my own mobile robots for indoor environments.

inflate_unknown: (bool)

  • Default: False
  • My value: Not specified
  • I stick with the default value, so unknown cells will not be inflated as if they were lethal obstacles.

inflate_around_unknown: (bool)

  • Default: False
  • My value: Not specified
  • Assuming default value of False. The inflation layer will not inflate costs around unknown cells.
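Here is a sketch of the inflation layer block with the two tuned values above (the plugin class name nav2_costmap_2d::InflationLayer is the standard Nav2 plugin):

      inflation_layer:
        plugin: "nav2_costmap_2d::InflationLayer"
        enabled: true
        inflation_radius: 1.75
        cost_scaling_factor: 2.58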

global_costmap

Description

Here is the official configuration guide for the global_costmap.

The global_costmap creates a global occupancy grid map that represents the entire environment in which the robot operates. It combines data from the robot’s localization system, static map, and sensor observations to build and update the global map. The global_costmap is used to generate high-level paths from the robot’s current position to its goal.

Here is an analogy…

The global costmap is like a full building map that helps the robot plan its overall routes. Imagine having a detailed floor plan where every area is color-coded based on whether the robot can safely travel there or not.

The global costmap creates this safety map by combining three key sources of information:

  1. The static map – like a basic building floor plan showing permanent features like walls and doorways
  2. The robot’s location tracking system – so it knows where it is within this map
  3. Sensor data – to detect changes in the environment over time

This complete map helps the robot plan efficient routes from its current location to any goal, much like how you might look at a building map to plan your route from the entrance to a specific room. The robot can see all possible paths and choose the best one while avoiding known obstacles.

The global costmap is particularly important because it gives the robot the “big picture” view it needs for smart path planning. While the local costmap handles immediate navigation (like watching where you step), the global costmap ensures the robot can efficiently reach its final destination (like knowing which hallways and rooms to use to reach your goal).

Parameters

Below are the parameters I often use for the global costmap along with my explanation.

update_frequency: (Hz)

  • Default: 5.0
  • My value: 5.0
  • Same as default. The global costmap is updated at a frequency of 5 Hz.

publish_frequency: (Hz)

  • Default: 1.0
  • My value: 5.0
  • Different from default. The global costmap is published at a frequency of 5 Hz, which is higher than the default value of 1 Hz. This means that the costmap will be sent to other nodes more frequently.

global_frame: (string)

  • Default: “map”
  • My value: “map”
  • Same as default. The global frame is set to “map”, which is typically the fixed frame of the environment.

robot_base_frame: (string)

  • Default: “base_link”
  • My value: “base_link”
  • Same as default. The robot’s base frame is set to “base_link”.

robot_radius: (meters)

  • Default: 0.1
  • My value: 0.15
  • Slightly different from default. You need to set this value to the radius of your robot base. 

resolution: (meters/cell)

  • Default: 0.1
  • My value: 0.05
  • Different from default. The costmap resolution is set to 0.05 meters per cell, providing a higher resolution than the default value of 0.1 meters per cell.

track_unknown_space: (bool)

  • Default: False
  • My value: true
  • Different from default. Unknown space is tracked in the costmap, meaning that the costmap will distinguish between free, occupied, and unknown space.

plugins: (list)

  • Default: [“static_layer”, “obstacle_layer”, “inflation_layer”]
  • My value: [“static_layer”, “obstacle_layer”, “voxel_layer”, “range_sensor_layer”, “inflation_layer”]
  • Different from default. Additional plugins are used in the global costmap, including the “voxel_layer” for 3D obstacle detection and the “range_sensor_layer” for incorporating range sensor data.

static_layer.map_subscribe_transient_local: (bool)

  • Default: True
  • My value: True
  • Same as default. The static layer subscribes to the map topic using the “transient local” durability.
  • “Transient local” durability means the publisher keeps the last message it sent, so late subscribers still receive it. Even if the map was published before the static layer subscribed, the layer still gets a copy of the latest map as soon as it starts listening.
  • This is useful because the map server typically publishes the map only once at startup. The static layer can start at any time and still receive the current map without waiting for it to be republished.
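Here is a sketch of the top of the global_costmap section with these values. The obstacle_layer, voxel_layer, and range_sensor_layer blocks are configured the same way as shown for the local costmap above, so only the static layer is spelled out (nav2_costmap_2d::StaticLayer is the standard Nav2 plugin name):

global_costmap:
  global_costmap:
    ros__parameters:
      update_frequency: 5.0
      publish_frequency: 5.0
      global_frame: map
      robot_base_frame: base_link
      robot_radius: 0.15
      resolution: 0.05
      track_unknown_space: true
      plugins: ["static_layer", "obstacle_layer", "voxel_layer", "range_sensor_layer", "inflation_layer"]
      static_layer:
        plugin: "nav2_costmap_2d::StaticLayer"
        map_subscribe_transient_local: True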

obstacle_layer

The obstacle layer in the global costmap is responsible for adding obstacle information detected by sensors, such as laser scanners, to the costmap. It marks cells in the costmap as occupied if obstacles are detected within a certain range and height. The obstacle layer helps the robot navigate by providing a representation of the obstacles in the environment, allowing it to plan paths that avoid collisions.

Here is the official configuration guide for the obstacle layer.

enabled: (bool)

  • Default: True
  • My value: True
  • Same as default. The obstacle layer is enabled, allowing the costmap to incorporate obstacle information from sensor data.

observation_sources: (string)

  • Default: “”
  • My value: scan
  • Different from default. The obstacle layer is configured to use the “scan” observation source, which corresponds to a laser scanner sensor.

scan.topic: (string)

  • Default: “”
  • My value: /scan
  • Different from default. The obstacle layer subscribes to the “/scan” topic to receive laser scan data.

scan.raytrace_min_range: (meters)

  • Default: 0.0
  • My value: 0.20
  • Different from default. The minimum range for raytracing is set to 0.20 meters. This means that the obstacle layer will start clearing obstacles from this distance.
  • I set this value to 0.20 because part of the robot’s body is next to the LIDAR.

scan.obstacle_min_range: (meters)

  • Default: 0.0
  • My value: 0.20
  • Different from default. The minimum range for adding obstacles to the costmap is set to 0.20 meters. Obstacles closer than this distance will not be added.
  • I set this value to 0.20 because part of the robot’s body is next to the LIDAR.

scan.max_obstacle_height: (meters)

  • Default: 0.0
  • My value: 2.0
  • Different from default. The maximum height of obstacles to be added to the costmap is set to 2.0 meters.

scan.clearing: (bool)

  • Default: False
  • My value: True
  • Different from default. Clearing is enabled, allowing the obstacle layer to clear free space in the costmap based on laser scan data.

scan.marking: (bool)

  • Default: True
  • My value: True
  • Same as default. Marking is enabled, allowing the obstacle layer to mark obstacles in the costmap based on laser scan data.

scan.data_type: (string)

  • Default: “LaserScan”
  • My value: “LaserScan”
  • Same as default. The data type is set to “LaserScan”, indicating that the obstacle layer expects laser scan data.
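
To make these settings concrete, here is a minimal sketch of how this scan observation source might look inside the global costmap section of the Nav2 params YAML. The nesting follows the standard Nav2 layout; treat it as illustrative rather than a drop-in copy of my full file on GitHub.

global_costmap:
  global_costmap:
    ros__parameters:
      obstacle_layer:
        plugin: "nav2_costmap_2d::ObstacleLayer"
        enabled: True
        observation_sources: scan
        scan:
          topic: /scan
          raytrace_min_range: 0.20    # start clearing 0.20 m out (the LIDAR sits next to the robot body)
          obstacle_min_range: 0.20    # ignore returns closer than 0.20 m when marking obstacles
          max_obstacle_height: 2.0
          clearing: True
          marking: True
          data_type: "LaserScan"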

range_sensor_layer

The range sensor layer in the global costmap is similar to the obstacle layer but uses data from range sensors like ultrasonic or infrared sensors to detect obstacles. It helps to incorporate obstacle information from sensors that have different characteristics than laser scanners. 

Here is the official configuration guide for the range sensor layer.


enabled: (bool)

  • Default: True
  • My value: False
  • Different from default. The range sensor layer is disabled in my configuration, meaning that it will not contribute to the global costmap.

topics: (vector<string>)

  • Default: [“”]
  • My value: [“/ultrasonic1”]
  • Different from default. The range sensor layer is configured to subscribe to the “/ultrasonic1” topic, which is expected to provide range sensor data.

phi: (double)

  • Default: 1.2
  • My value: 1.2
  • Same as default. The phi parameter determines the width of the sensor’s field of view. A value of 1.2 means that the sensor’s coverage area will be 1.2 radians wide.

inflate_cone: (double)

  • Default: 1.0
  • My value: 1.0
  • Same as default. The inflate_cone parameter determines how much the triangular area covered by the sensor is inflated. A value of 1.0 means no inflation.

no_readings_timeout: (double)

  • Default: 0.0
  • My value: 0.0
  • Same as default. If the range sensor layer does not receive any sensor readings for this duration (in seconds), it will mark the layer as not current. A value of 0.0 disables this timeout.

clear_threshold: (double)

  • Default: 0.2
  • My value: 0.2
  • Same as default. The clear_threshold parameter determines the probability below which cells are marked as free in the costmap.

mark_threshold: (double)

  • Default: 0.8
  • My value: 0.8
  • Same as default. The mark_threshold parameter determines the probability above which cells are marked as occupied in the costmap.

clear_on_max_reading: (bool)

  • Default: False
  • My value: True
  • Different from default. When clear_on_max_reading is set to True, the range sensor layer will clear obstacles from the costmap when the sensor reports its maximum range reading.

input_sensor_type: (string)

  • Default: ALL
  • My value: ALL
  • Same as default. The input_sensor_type parameter is set to “ALL”, meaning that the range sensor layer will automatically detect the sensor type based on the minimum and maximum range values reported by the sensor.
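
Assembled into YAML, the range sensor layer block might look like the sketch below, continuing the same global_costmap nesting as the earlier sketch. It is illustrative only; since the layer is disabled in my file, these values stay inert until you flip enabled to True.

      range_sensor_layer:
        plugin: "nav2_costmap_2d::RangeSensorLayer"
        enabled: False
        topics: ["/ultrasonic1"]
        phi: 1.2                     # field-of-view width in radians
        inflate_cone: 1.0            # 1.0 = no inflation of the sensor cone
        no_readings_timeout: 0.0     # 0.0 disables the staleness timeout
        clear_threshold: 0.2         # below this probability, mark cells as free
        mark_threshold: 0.8          # above this probability, mark cells as occupied
        clear_on_max_reading: True
        input_sensor_type: ALL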

voxel_layer

The voxel layer in the global costmap is responsible for adding 3D obstacle information from depth sensors, such as RGB-D cameras, to the costmap. It divides the space into 3D voxels and marks them as occupied or free based on the depth sensor data. This allows the global costmap to represent obstacles not only on the ground plane but also at different heights, providing a more comprehensive understanding of the environment for navigation.

Here is the official configuration guide for the voxel layer.

enabled: (bool)

  • Default: True
  • My value: True
  • Same as default. The voxel layer is enabled, allowing the global costmap to incorporate 3D obstacle information from depth sensors.

publish_voxel_map: (bool)

  • Default: False
  • My value: False
  • Same as default. The voxel layer will not publish the 3D voxel map, which can save computational resources.

origin_z: (double)

  • Default: 0.0
  • My value: 0.0
  • Same as default. The origin_z parameter sets the height of the first voxel layer relative to the robot’s base frame.

z_resolution: (double)

  • Default: 0.2
  • My value: 0.2
  • Same as default. The z_resolution parameter determines the height resolution of each voxel layer.

z_voxels: (int)

  • Default: 10
  • My value: 10
  • Same as default. The z_voxels parameter sets the number of voxel layers in the voxel grid.

min_obstacle_height: (double)

  • Default: 0.0
  • My value: 0.0
  • Same as default. The min_obstacle_height parameter defines the minimum height of obstacles to be considered in the voxel layer.

max_obstacle_height: (double)

  • Default: 2.0
  • My value: 2.0
  • Same as default. The max_obstacle_height parameter defines the maximum height of obstacles to be considered in the voxel layer.

mark_threshold: (int)

  • Default: 0
  • My value: 0
  • Same as default. The mark_threshold parameter determines the minimum number of voxels required to mark a cell as occupied in the 2D costmap.

observation_sources: (string)

  • Default: “”
  • My value: robot_depth_camera
  • Different from default. The observation_sources parameter specifies the depth sensor source for the voxel layer. In my case, it is set to “robot_depth_camera”.

robot_depth_camera.topic: (string)

  • Default: “”
  • My value: /rgbd_camera
  • Different from default. The topic parameter specifies the ROS topic where the depth sensor data is published.

robot_depth_camera.raytrace_min_range: (double)

  • Default: 0.0
  • My value: 0.05
  • Different from default. The raytrace_min_range parameter sets the minimum range for raytracing in the voxel layer.
  • My value is based on my experiences with the Intel RealSense D435.

robot_depth_camera.raytrace_max_range: (double)

  • Default: 3.0
  • My value: 1.25
  • Different from default. The raytrace_max_range parameter sets the maximum range for raytracing in the voxel layer.
  • My value is based on my experiences with the Intel RealSense D435.

robot_depth_camera.obstacle_min_range: (double)

  • Default: 0.0
  • My value: 0.05
  • Different from default. The obstacle_min_range parameter sets the minimum range for considering obstacles in the voxel layer.

robot_depth_camera.obstacle_max_range: (double)

  • Default: 2.5
  • My value: 1.25
  • Different from default. The obstacle_max_range parameter sets the maximum range for considering obstacles in the voxel layer.

robot_depth_camera.min_obstacle_height: (double)

  • Default: 0.0
  • My value: 0.0
  • Same as default. The min_obstacle_height parameter sets the minimum height of obstacles to be considered in the voxel layer.

robot_depth_camera.max_obstacle_height: (double)

  • Default: 2.0
  • My value: 2.0
  • Same as default. The max_obstacle_height parameter sets the maximum height of obstacles to be considered in the voxel layer.

robot_depth_camera.clearing: (bool)

  • Default: False
  • My value: False
  • Same as default. The clearing parameter determines whether the voxel layer should clear free space in the costmap based on the depth sensor data.

robot_depth_camera.marking: (bool)

  • Default: True
  • My value: True
  • Same as default. The marking parameter determines whether the voxel layer should mark obstacles in the costmap based on the depth sensor data.

robot_depth_camera.data_type: (string)

  • Default: “PointCloud2”
  • My value: “PointCloud2”
  • Same as default. The data type is set to “PointCloud2”, indicating that the voxel layer expects point cloud data from the depth camera.
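
Here is how the voxel layer parameters above fit together in YAML form, continuing the same global_costmap block. This is a hedged sketch following the standard Nav2 layout, not a verbatim excerpt from my file:

      voxel_layer:
        plugin: "nav2_costmap_2d::VoxelLayer"
        enabled: True
        publish_voxel_map: False
        origin_z: 0.0
        z_resolution: 0.2
        z_voxels: 10                  # 10 layers x 0.2 m = 2.0 m of vertical coverage
        min_obstacle_height: 0.0
        max_obstacle_height: 2.0
        mark_threshold: 0
        observation_sources: robot_depth_camera
        robot_depth_camera:
          topic: /rgbd_camera
          raytrace_min_range: 0.05    # tuned for the Intel RealSense D435
          raytrace_max_range: 1.25
          obstacle_min_range: 0.05
          obstacle_max_range: 1.25
          clearing: False
          marking: True
          data_type: "PointCloud2"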

inflation_layer

The inflation layer in the global costmap adds a safety buffer around obstacles by gradually increasing the cost of the cells near the obstacles. This encourages the robot to maintain a safe distance from obstacles when planning paths. The inflation radius and cost scaling factor determine the size of the safety buffer and how quickly the cost increases as the robot gets closer to obstacles.

Here is the official configuration guide for the inflation layer.

plugin: (string)

  • Default: “nav2_costmap_2d::InflationLayer”
  • My value: “nav2_costmap_2d::InflationLayer”
  • Same as default. The plugin parameter specifies the plugin type for the inflation layer, which is “nav2_costmap_2d::InflationLayer”.

cost_scaling_factor: (double)

  • Default: 1.0
  • My value: 2.58
  • Different from default. The cost_scaling_factor parameter determines the rate at which the cost values decrease with distance from obstacles. 
  • Credit to this ROS Tuning Guide for this magic number, which has worked really well on my own mobile robots in indoor environments.

inflation_radius: (double)

  • Default: 0.55
  • My value: 1.75
  • Different from default. The inflation_radius parameter sets the maximum distance from an obstacle at which the cost will be inflated. A higher value means that the cost will be inflated over a larger area around obstacles.
  • The inflation radius is set to 1.75 meters, which is larger than the default value. This means that the cost will decay over a larger distance around obstacles, making the robot maintain a greater clearance from them. 
  • Credit to this ROS Tuning Guide for this magic number, which has worked really well on my own mobile robots in indoor environments.
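
In YAML, the inflation layer needs only a few lines, continuing the same global_costmap block as the sketches above. As a rule of thumb, the cost of a free cell outside the robot’s inscribed radius decays roughly as exp(-cost_scaling_factor × (distance − inscribed_radius)), so a larger factor makes costs fall off faster:

      inflation_layer:
        plugin: "nav2_costmap_2d::InflationLayer"
        cost_scaling_factor: 2.58    # higher = steeper cost decay away from obstacles
        inflation_radius: 1.75       # inflate costs up to 1.75 m from obstacles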

map_saver

Description

Here is the official configuration guide for the map_saver.

The map_saver package allows the robot to save its current map of the environment to a file. This can be useful for creating a map of a new environment or updating an existing map with new data. It provides an easy way to preserve the robot’s understanding of its surroundings.

When you use the map_saver, it creates two essential files:

  1. A .pgm file – This is like a black and white photograph of the map, showing which areas are open space, unknown, or contain obstacles
  2. A .yaml file – This contains important details about the map, such as its scale (resolution), origin point, and how to interpret the black and white values

The map_saver takes the occupancy grid data that the robot has built up through its mapping process and converts it into these permanent files. The occupancy data uses three main values:

  • Free space (marked as white in the .pgm)
  • Occupied space like walls (marked as black in the .pgm)
  • Unknown areas (marked as gray in the .pgm)

This saved map becomes important for autonomous navigation, as it provides the foundation for the robot to understand where it can and cannot go. Just as you might save a building’s floor plan to use for future visits, the robot uses this saved map to navigate effectively when it returns to the same space later.
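
You can generate these two files at any time with the command ros2 run nav2_map_server map_saver_cli -f cafe_world_map. For reference, the companion .yaml typically looks like the sketch below; the file name and origin are illustrative values, not copied from my map:

image: cafe_world_map.pgm        # the grayscale occupancy image
mode: trinary                    # cells are free, occupied, or unknown
resolution: 0.05                 # meters per pixel
origin: [-10.0, -10.0, 0.0]      # pose of the lower-left pixel in the map frame
negate: 0
occupied_thresh: 0.65            # values above this are obstacles
free_thresh: 0.25                # values below this are free space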

Parameters

save_map_timeout: (seconds)

  • Default: 2.0
  • My value: 5.0
  • Different from default.

I use a longer timeout to ensure large maps save completely. Think of this like giving yourself extra time to save a large file. At 5.0 seconds (vs 2.0), it’s less likely to timeout when saving bigger or more detailed maps. Increasing this means more patience when saving but more reliable results, while decreasing it makes saving faster but might fail with large maps.

free_thresh_default: (probability)

  • Default: 0.25
  • My value: 0.25
  • Matches default. 

This parameter decides what counts as “empty space” in your map…like deciding how sure you need to be that a space is empty. 

At 0.25, if the robot is 25% or less sure something is there, it marks it as empty space. Increasing this means being more strict about what counts as empty space, while decreasing it means being more lenient about calling spaces empty.

occupied_thresh_default: (probability)

  • Default: 0.65 
  • My value: 0.65
  • Same as default. 0.65 is a good threshold for marking a cell as occupied in the occupancy grid.

This parameter decides what counts as an “obstacle” in your map. At 0.65, if the robot is 65% or more sure something is there, it marks that cell as occupied. Increasing this means being more strict about what counts as an obstacle, while decreasing it means marking cells as occupied on weaker evidence.
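
These three parameters map directly onto a short YAML block for the map_saver node. A minimal sketch, assuming the standard node name:

map_saver:
  ros__parameters:
    save_map_timeout: 5.0           # extra time for large maps
    free_thresh_default: 0.25
    occupied_thresh_default: 0.65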

planner_server

Description

Here is the official configuration guide for the planner_server.

The planner_server package is responsible for computing the optimal path for the robot to reach its goal. This package ensures that the robot can navigate through complex environments by following a well-thought-out path.

When given a goal location, the Planner Server studies the global costmap (the complete map with all known obstacles and safe areas) to calculate an optimal path. It considers important factors like:

  • Finding the shortest reasonable route
  • Maintaining safe distances from obstacles
  • Avoiding dead ends or impossible paths
  • Considering the robot’s size and movement capabilities

The Planner Server works through plugins, which are like different strategies for path planning. The default plugin (NavfnPlanner) uses a navigation function to find efficient paths, but you can also use other plugins like SMAC planner or Theta Star planner depending on your robot’s needs. Each plugin might be better suited for different situations – just as you might use different strategies to navigate through an open warehouse versus a crowded office.

Parameters

expected_planner_frequency: (Hz)

  • Default: 20.0
  • My value: 20.0
  • Matches default. 20 Hz is a reasonable expected frequency for the planner to operate at.

planner_plugins: (list of strings)

  • Default: [‘GridBased’]
  • My value: [“GridBased”]
  • Same as default. The GridBased plugin using the NavfnPlanner is a good default choice.

GridBased.plugin: (string)

  • Default: “nav2_navfn_planner/NavfnPlanner”
  • My value: “nav2_navfn_planner::NavfnPlanner”
  • Slightly different syntax, but refers to the same NavfnPlanner plugin. The “::” namespace separator is preferred in newer ROS 2 versions over “/”.

GridBased.tolerance: (meters)

  • Default: 0.5
  • My value: 0.5
  • Matches default. A tolerance of 0.5 meters around the goal is reasonable for most applications.

GridBased.use_astar: (boolean)

  • Default: false
  • My value: false
  • Same as default. The default Dijkstra’s algorithm is efficient for most navigation tasks, no need to use A*.

GridBased.allow_unknown: (boolean)

  • Default: true
  • My value: true
  • Matches default. Allowing planning in unknown space gives more flexibility if the map is not fully explored.
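
Pulling the planner settings together, the planner_server section of the YAML looks roughly like this (a sketch of the values discussed above, not my complete file):

planner_server:
  ros__parameters:
    expected_planner_frequency: 20.0
    planner_plugins: ["GridBased"]
    GridBased:
      plugin: "nav2_navfn_planner::NavfnPlanner"
      tolerance: 0.5                # accept goals within 0.5 m of the exact goal cell
      use_astar: false              # use Dijkstra's algorithm instead of A*
      allow_unknown: true           # permit planning through unexplored space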

smoother_server

Description

Here is the official configuration guide for the smoother_server.

The Smoother Server refines the paths created by the Planner Server, much like how you might smooth out a rough sketch into a flowing drawing. While the planner creates a basic path that gets the robot from start to goal, the smoother makes this path more natural and efficient for the robot to follow.

Think of the raw planned path like walking through a building by making sharp turns at every corner. The smoother transforms this into a more natural path, like how people tend to curve around corners rather than making exact 90-degree turns. This smoother motion is generally more efficient and puts less stress on the robot’s motors.

The Smoother Server accomplishes this by:

  1. Taking the original path from the planner
  2. Analyzing each segment and turn
  3. Creating gentle curves where appropriate
  4. Ensuring these smoother paths still maintain safe distances from obstacles
  5. Optimizing the path for the robot’s movement capabilities

Parameters

smoother_plugins: (vector<string>)

  • Default:  [“simple_smoother”]
  • My value: [“simple_smoother”]
  • Matches the default.

This parameter tells the robot which path smoothing method to use. The simple_smoother is great for basic path smoothing needs. You can add multiple smoothers if needed, but one is usually sufficient. Each smoother needs its own configuration section.

simple_smoother.plugin: (string)

  • Default: “nav2_smoother::SimpleSmoother”
  • My value: “nav2_smoother::SimpleSmoother”
  • Matches default. This parameter specifies the exact smoother code to use. 

Think of this as telling the robot exactly which smoothing algorithm to load. This particular smoother is like using a basic path-smoothing tool that rounds off sharp corners.

simple_smoother.tolerance: (double)

  • Default: 1.0e-10
  • My value: 1.0e-10
  • Matches default. How precise the smoothing needs to be before considering it done. 

This very small number (0.0000000001) means it will try to get very precise results. Increasing this value makes smoothing faster but less precise, while decreasing it makes smoothing more precise but slower.

simple_smoother.max_its: (integer)

  • Default: None specified
  • My value: 1000
  • Maximum number of attempts to smooth the path. 

At 1000 attempts, it gives plenty of chances to get a good result. Increasing this allows more attempts for better smoothing but takes longer, while decreasing it makes smoothing faster but might give less optimal results.
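
As YAML, the smoother configuration is compact. A minimal sketch of the values above:

smoother_server:
  ros__parameters:
    smoother_plugins: ["simple_smoother"]
    simple_smoother:
      plugin: "nav2_smoother::SimpleSmoother"
      tolerance: 1.0e-10            # stop refining once changes fall below this
      max_its: 1000                 # cap on smoothing iterations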

behavior_server

Description

Here is the official configuration guide for the behavior_server.

The Behavior Server acts as a specialized component in the Nav2 system, managing and executing specific, well-defined actions that a robot might need during navigation. While the BT Navigator handles overall navigation strategy (like “navigate to the conference room”), the Behavior Server executes precise, individual behaviors that might be needed along the way (like “back up when stuck”).

Think of the Behavior Server like a team of specialists, each expert at a particular maneuver. These specialists include:

  1. A Spin specialist that knows exactly how to rotate the robot safely in place
  2. A Backup specialist that can guide the robot backward when needed
  3. A Drive-on-heading specialist that keeps the robot moving straight along a specific direction
  4. A Wait specialist that handles proper pausing behavior
  5. An Assisted Teleop specialist that combines human control with autonomous safety features

Each behavior is implemented as a plugin, sharing resources like costmaps and transformation data to maintain efficiency. When a behavior is called upon, the server ensures safe execution by:

  • Checking both local and global costmaps for obstacles
  • Monitoring the robot’s position and orientation
  • Managing proper movement speeds and accelerations
  • Coordinating timing of actions
  • Maintaining appropriate update frequencies (default 10Hz)

What makes the Behavior Server particularly powerful is how it complements the broader navigation system. When the BT Navigator encounters a situation requiring a specific action, it can call on the Behavior Server to execute that precise maneuver with expert-level skill and safety considerations.

Parameters

local_costmap_topic: (string)

  • Default: “local_costmap/costmap_raw” 
  • My value: “local_costmap/costmap_raw”
  • Matches default. This is the standard topic for the raw local costmap.

global_costmap_topic: (string)

  • Default: “global_costmap/costmap_raw”
  • My value: “global_costmap/costmap_raw” 
  • Matches default. This is the standard topic for the raw global costmap.

local_footprint_topic: (string)

  • Default: “local_costmap/published_footprint”
  • My value: “local_costmap/published_footprint”
  • Matches default. This is the standard topic for the robot’s footprint in the local costmap frame.

global_footprint_topic: (string)

  • Default: “global_costmap/published_footprint”
  • My value: “global_costmap/published_footprint”
  • Matches default. This is the standard topic for the robot’s footprint in the global costmap frame.

cycle_frequency: (Hz)

  • Default: 10.0
  • My value: 10.0
  • Matches default. 10 Hz is a good frequency for running the behavior plugins.

behavior_plugins: (vector<string>) 

  • Default: [“spin”, “backup”, “drive_on_heading”, “wait”]
  • My value: [“spin”, “backup”, “drive_on_heading”, “assisted_teleop”, “wait”]
  • Different from default. I added “assisted_teleop” on top of the core recovery and helper behaviors typically needed.

simulate_ahead_time: (seconds)

  • Default: 2.0 
  • My value: 2.0
  • Matches default. 2 seconds is a reasonable amount of time to look ahead for collisions.

max_rotational_vel: (rad/s)

  • Default: 1.0
  • My value: 0.5
  • Lower than default. 0.5 rad/s puts a safer limit on the maximum rotational speed.

min_rotational_vel: (rad/s) 

  • Default: 0.4
  • My value: 0.4
  • Matches default. 0.4 rad/s is a good minimum rotational speed to allow the robot to rotate in place.

rotational_acc_lim: (rad/s^2)

  • Default: 3.2
  • My value: 3.2 
  • Matches default. 3.2 rad/s^2 is a reasonable acceleration limit for rotational movement.

enable_stamped_cmd_vel: (bool)

  • Default: true for new versions (Kilted+), false for older versions (Jazzy or older)
  • My value: false
  • Determines whether to use basic or timestamped velocity commands. 

This parameter is like choosing between a simple speed command or one with a timestamp. Setting this false uses basic Twist messages, while true uses TwistStamped which is a velocity command with a timestamp and coordinate reference frame included. 

local_frame: (string)

  • Default: “odom”
  • My value: “odom”
  • Matches default. The odometry frame is the standard local Extended Kalman Filter reference frame.

global_frame: (string) 

  • Default: “map”
  • My value: “map”
  • Matches default. The map frame is the standard global reference frame.

robot_base_frame: (string)

  • Default: “base_link” 
  • My value: “base_link”
  • Matches default. The base_link frame is the standard frame for the robot’s body.

transform_timeout: (seconds)

  • Default: 0.1
  • My value: 0.1
  • Matches default. 0.1s is a reasonable timeout for transforms from the tf buffer.
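
Here is a hedged sketch of the behavior_server section with the parameters above. The per-behavior plugin lines follow the stock Jazzy params file and are assumptions here rather than values quoted from this guide:

behavior_server:
  ros__parameters:
    local_costmap_topic: local_costmap/costmap_raw
    global_costmap_topic: global_costmap/costmap_raw
    local_footprint_topic: local_costmap/published_footprint
    global_footprint_topic: global_costmap/published_footprint
    cycle_frequency: 10.0
    behavior_plugins: ["spin", "backup", "drive_on_heading", "assisted_teleop", "wait"]
    spin:
      plugin: "nav2_behaviors::Spin"
    backup:
      plugin: "nav2_behaviors::BackUp"
    drive_on_heading:
      plugin: "nav2_behaviors::DriveOnHeading"
    assisted_teleop:
      plugin: "nav2_behaviors::AssistedTeleop"
    wait:
      plugin: "nav2_behaviors::Wait"
    simulate_ahead_time: 2.0
    max_rotational_vel: 0.5
    min_rotational_vel: 0.4
    rotational_acc_lim: 3.2
    enable_stamped_cmd_vel: false
    local_frame: odom
    global_frame: map
    robot_base_frame: base_link
    transform_timeout: 0.1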

waypoint_follower

Description

Here is the official configuration guide for the waypoint follower.

The waypoint_follower package allows the robot to follow a predefined set of waypoints in the environment. This is useful for tasks that require the robot to visit specific locations in a sequence. It ensures that the robot can navigate through multiple points efficiently.

It also has a special plugin which you can use to perform custom behaviors at each waypoint, like taking a photo or picking up an object.

Parameters

loop_rate: (Hz)

  • Default: 20
  • My value: 2
  • Different from default. A lower rate of 2 Hz is sufficient for checking navigation task results, reducing computational load while still providing timely updates.

stop_on_failure: (bool)

  • Default: true
  • My value: false
  • Different from default. Setting this to false allows the robot to continue to the next waypoint even if one fails, which can be more robust in real-world scenarios with dynamic obstacles.

waypoint_task_executor_plugin: (string)

  • Default: ‘wait_at_waypoint’
  • My value: ‘wait_at_waypoint’
  • Matches default. This plugin is suitable for basic waypoint following tasks.

wait_at_waypoint.plugin: (string)

  • Default: “nav2_waypoint_follower::WaitAtWaypoint”
  • My value: “nav2_waypoint_follower::WaitAtWaypoint”
  • Matches default. This is the correct plugin name for the wait_at_waypoint functionality.

wait_at_waypoint.enabled: (bool)

  • Default: Not specified
  • My value: True
  • Explicitly enables the wait_at_waypoint plugin, ensuring it’s active.

wait_at_waypoint.waypoint_pause_duration: (seconds)

  • Default: Not specified
  • My value: 10
  • Sets a 10-second pause at each waypoint, which can be useful for allowing the robot to stabilize or perform tasks at each point. You can set this value to whatever you want.

global_frame_id: (string)

  • Default: ‘map’
  • My value: Not specified
  • The default ‘map’ is typically sufficient for most setups, so not specifying it in your YAML is fine.

bond_heartbeat_period: (seconds)

  • Default: 0.1
  • My value: Not specified
  • The default of 0.1 seconds works well for most systems, so not specifying it is fine.

action_server_result_timeout: (seconds)

  • Default: 900.0
  • My value: 900.0
  • Matches default. This timeout value is for action servers to discard a goal handle if a result hasn’t been produced within 900 seconds (15 minutes). This long timeout allows for complex or long-running navigation tasks to complete without being prematurely terminated.
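
The waypoint follower settings above translate into a short YAML block. A sketch, with waypoint_pause_duration mirroring the value discussed earlier:

waypoint_follower:
  ros__parameters:
    loop_rate: 2
    stop_on_failure: false
    action_server_result_timeout: 900.0
    waypoint_task_executor_plugin: "wait_at_waypoint"
    wait_at_waypoint:
      plugin: "nav2_waypoint_follower::WaitAtWaypoint"
      enabled: True
      waypoint_pause_duration: 10   # pause at each waypoint (see note above)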

velocity_smoother

Description

Here is the official configuration guide for the velocity_smoother.

The velocity_smoother package ensures that the velocity commands sent to the robot are smooth and gradual. This prevents jerky movements and helps in maintaining a stable and controlled motion. It is particularly useful for preventing sudden starts and stops that can be hard on the robot’s hardware.

Parameters

smoothing_frequency: (Hz)

  • Default: 20.0
  • My value: 20.0
  • Matches default. 20 Hz is a good frequency for smoothing out velocity commands to reduce wear on the motors.

scale_velocities: (boolean)

  • Default: false
  • My value: false
  • Same as default. Scaling velocities proportionally is not necessary for most applications.

feedback: (string)

  • Default: “OPEN_LOOP”
  • My value: “OPEN_LOOP” 
  • Matches default. Open loop control, assuming the last commanded velocity, is sufficient when acceleration limits are set appropriately.

max_velocity: (m/s or rad/s)

  • Default: [0.5, 0.0, 2.5]
  • My value: [0.5, 0.5, 2.5]
  • Different from default in the y-axis. Allowing 0.5 m/s in the y-axis enables omni-directional movement if the robot supports it. If you do not have an omni-directional robot, leave this parameter as the default.

min_velocity: (m/s or rad/s)

  • Default: [-0.5, 0.0, -2.5]  
  • My value: [-0.5, -0.5, -2.5]
  • Different from default in the y-axis. Allowing -0.5 m/s in the y-axis enables reverse omni-directional movement if the robot supports it. If you do not have an omni-directional robot, leave this parameter as the default.

deadband_velocity: (m/s or rad/s)

  • Default: [0.0, 0.0, 0.0]
  • My value: [0.0, 0.0, 0.0]
  • Same as default. No deadband is needed in most cases to prevent hardware damage.

velocity_timeout: (s)

  • Default: 1.0
  • My value: 1.0 
  • Matches default. 1 second is a reasonable timeout after which the smoother should stop publishing commands if no new ones are received.

max_accel: (m/s^2 or rad/s^2)

  • Default: [2.5, 0.0, 3.2]
  • My value: [0.3, 0.3, 3.2]
  • Different from default in x and y. 0.3 m/s^2 provides gentler acceleration for x and y while still allowing fast rotational acceleration.

max_decel: (m/s^2 or rad/s^2)

  • Default: [-2.5, 0.0, -3.2]
  • My value: [-0.5, -0.5, -3.2]
  • Different from default in x and y. -0.5 m/s^2 allows for smooth deceleration in x and y to reduce wear on the motors.

odom_topic: (string)

  • Default: “odom”
  • My value: “odometry/filtered”
  • If I changed the feedback mode to CLOSED_LOOP, the smoother would use this odometry topic, which is generated by the Extended Kalman Filter (the robot_localization package).

odom_duration: (s)

  • Default: 0.1
  • My value: 0.1
  • Matches default. 0.1 seconds is a good duration to average odometry data for estimating current velocity in closed loop mode.

use_realtime_priority: (boolean)

  • Default: false
  • My value: false 
  • Same as default. Realtime priority is not needed for the velocity smoother in most applications.

enable_stamped_cmd_vel: (bool)

  • Default: true for new versions (Kilted+), false for older versions (Jazzy or older)
  • My value: false
  • Determines whether to use basic or timestamped velocity commands. 
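
Collected into YAML, the velocity smoother block looks roughly like this. Remember that the y-axis entries only matter for omnidirectional bases like the mecanum-wheeled ROSMASTER X3:

velocity_smoother:
  ros__parameters:
    smoothing_frequency: 20.0
    scale_velocities: false
    feedback: "OPEN_LOOP"
    max_velocity: [0.5, 0.5, 2.5]      # [x, y, theta]
    min_velocity: [-0.5, -0.5, -2.5]
    deadband_velocity: [0.0, 0.0, 0.0]
    velocity_timeout: 1.0
    max_accel: [0.3, 0.3, 3.2]         # gentle linear acceleration
    max_decel: [-0.5, -0.5, -3.2]
    odom_topic: "odometry/filtered"    # used only in CLOSED_LOOP mode
    odom_duration: 0.1
    use_realtime_priority: false
    enable_stamped_cmd_vel: false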

collision_monitor

Description

Here is the official configuration guide for the collision monitor.

The Collision Monitor serves as a critical safety system in Nav2, providing an extra layer of protection beyond the standard navigation stack. Think of it as a vigilant safety officer who constantly watches for potential collisions and can quickly intervene to prevent accidents, much like how advanced cars have emergency braking systems that work independently from normal braking.

What makes the Collision Monitor special is that it operates directly with sensor data, bypassing the usual costmap and planning systems. This allows for much faster reaction times to sudden obstacles. It’s particularly valuable for:

  • Large industrial robots where safety is paramount
  • Fast-moving robots that need quick reaction times
  • Robots operating around people or other moving robots
  • Situations where obstacles might appear suddenly

The Collision Monitor uses a concept of “zones” around the robot – imagine invisible safety bubbles that trigger different responses when obstacles enter them. These zones can be:

  • Stop zones: The robot stops completely if obstacles enter this area
  • Slowdown zones: The robot reduces speed when obstacles are detected here
  • Limit zones: The robot’s speed is capped when obstacles are present
  • Approach zones: The robot maintains a safe time-to-collision with detected obstacles

Each zone can be configured as different shapes:

  1. Custom polygons you define around the robot
  2. Simple circles for efficient processing
  3. The robot’s own footprint
  4. Velocity-based polygons that change size based on how fast the robot is moving

The monitor works with various types of sensor data:

  • Laser scans for precise 2D detection
  • Point clouds from 3D sensors
  • Range data from IR sensors or sonars

While this system doesn’t replace certified safety hardware, it provides a valuable additional safety layer that can help prevent collisions through quick reaction times and configurable safety behaviors. Think of it as adding defensive driving capabilities to your robot’s navigation system.

Parameters

base_frame_id: (string)

  • Default: “base_footprint”
  • My value: “base_footprint”
  • Matches default. This is the reference frame attached to your robot’s base. 

Think of this as the robot’s center point for all measurements. Changing this is only needed if your robot uses different frame names, but “base_footprint” is the standard name most robots use.

odom_frame_id: (string)

  • Default: “odom”
  • My value: “odom”
  • Matches default. This is the frame used for tracking robot movement. 

Like a coordinate system that moves with the robot. The standard “odom” frame name works for most robots. Only change this if your robot uses a different name for its odometry frame.

transform_tolerance: (seconds)

  • Default: 0.1
  • My value: 0.2
  • Different from default. How old we allow position data to be. 

My higher value (0.2s vs 0.1s) allows for slightly older position data, which can help on slower computers. Increasing this makes the system more tolerant of delays but might use outdated information.

source_timeout: (seconds)

  • Default: 2.0
  • My value: 1.0
  • Different from default. How long to wait before assuming sensor data is too old. 

My lower value (1.0s vs 2.0s) means we’re more strict about needing fresh sensor data. Increasing this helps with slow sensors but might react slower to obstacles.

cmd_vel_in_topic: (string)

  • Default: “cmd_vel_smoothed”
  • My value: “cmd_vel_smoothed”
  • Matches default. The topic where the robot receives velocity commands to check.

This is where the collision monitor looks for commands to verify. Change this if your velocity commands come from a different topic.

cmd_vel_out_topic: (string)

  • Default: “cmd_vel”
  • My value: “cmd_vel”
  • Matches default. The topic where safety-checked commands are sent. 

After checking for potential collisions, commands are sent here. This is typically “cmd_vel” as most robots listen for commands on this topic.

state_topic: (string)

  • Default: “” (empty)
  • My value: “collision_monitor_state”
  • Different from default. Where to publish information about active safety behaviors.

By setting this (instead of leaving it empty), we can monitor which safety zones are active. This helps with debugging and monitoring the system’s behavior.

base_shift_correction: (bool)

  • Default: true
  • My value: true
  • Matches default. Whether to account for robot movement when processing sensor data. 

Like compensating for taking a photo from a moving car…keeping this true makes collision detection more accurate but uses more CPU power. Setting it false would be faster but less accurate for fast-moving robots.

polygons: (vector<string>)

  • Default: None specified
  • My value: [“FootprintApproach”]
  • Only using the approach-based collision checking method. 

Think of this as defining different safety zones around the robot. I’m only using a footprint-based approach that looks ahead to predict collisions. You can add more zones for immediate stopping or slowing, but I find the approach method works well alone.

FootprintApproach.type: (string)

  • Default: None specified
  • My value: “polygon”
  • Defines that we’re using a polygon shape to check for collisions. 

Like drawing a shape around the robot to check for obstacles. Using “polygon” lets us match the robot’s actual shape using its footprint. The other option “circle” would be simpler but less precise.

FootprintApproach.time_before_collision: (seconds)

  • Default: 2.0
  • My value: 1.2
  • Different from default. How far ahead in time to check for potential collisions. 

We’re being a bit more aggressive than the default 2.0 seconds. Increasing this value makes the robot more cautious but might make it stop unnecessarily far from obstacles. Decreasing it allows closer approaches but gives less reaction time.

FootprintApproach.simulation_time_step: (seconds)

  • Default: 0.1
  • My value: 0.1
  • Matches default. How often to check for collisions during the prediction. 

Think of this as how detailed our collision prediction is. At 0.1 seconds, it checks 10 times per second of prediction. Decreasing this makes prediction more accurate but uses more CPU. Increasing it saves CPU but might miss potential collisions.

observation_sources: (vector<string>)

  • Default: None specified
  • My value: [“scan”]
  • Using only the LIDAR as a source of obstacle detection. Like choosing which sensors to use for safety. 

I’m only using the laser scanner in my example on GitHub, while you could also use other sensors like RGBD cameras (pointcloud) or range sensors. Using fewer sensors is simpler but might miss obstacles that only certain sensors can see.
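
Here is a hedged sketch of the collision monitor configuration with the parameters above. The action_type, footprint_topic, and scan source lines follow the stock Nav2 example and are assumptions here rather than values quoted in this guide:

collision_monitor:
  ros__parameters:
    base_frame_id: "base_footprint"
    odom_frame_id: "odom"
    cmd_vel_in_topic: "cmd_vel_smoothed"
    cmd_vel_out_topic: "cmd_vel"
    state_topic: "collision_monitor_state"
    transform_tolerance: 0.2
    source_timeout: 1.0
    base_shift_correction: True
    polygons: ["FootprintApproach"]
    FootprintApproach:
      type: "polygon"
      action_type: "approach"          # slow or stop based on predicted time to collision
      footprint_topic: "/local_costmap/published_footprint"
      time_before_collision: 1.2
      simulation_time_step: 0.1
      enabled: True
    observation_sources: ["scan"]
    scan:
      type: "scan"
      topic: "scan"
      enabled: True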

docking_server

Description

Here is the official configuration guide for the docking server.

The docking_server manages the precise process of connecting robots to charging stations or other docking points. Think of it like an automated parking system that needs to carefully guide vehicles into exact positions for charging or loading.

The server coordinates the complete docking sequence: approaching a pre-staging position, using sensors for precise alignment, executing the final approach, and confirming successful connection. 

For undocking, it reverses this process to safely disconnect and move away. The system maintains controlled speeds and precise positioning throughout while monitoring for any issues.

Using a plugin-based architecture, the server adapts to different robot types, charging methods, and sensor systems. It can handle multiple docking stations in an environment, making it useful for facilities where robots need to dock at various locations for charging or material handling.

I will create a separate tutorial devoted to the docking server. If you don’t have a dock set up, you can use these default parameters.

slam_toolbox

Description

Here is the official configuration guide for the slam_toolbox.

The slam_toolbox package is used for simultaneous localization and mapping (SLAM) in ROS 2, which means it helps the robot build a map of an unknown environment while keeping track of its location using sensors like LIDAR. This package allows the robot to explore and map new areas autonomously. It is essential for robots operating in dynamic or previously unmapped environments.

I will not go through a detailed step-by-step analysis of each of the parameters for the slam_toolbox because you’re generally better off using the default parameters (which are the same ones in my yaml file). The package’s author has invested significant effort in fine-tuning these parameters to work well out of the box for a wide range of robots. This means you can usually achieve good results without needing to tune the settings yourself.

The slam_toolbox includes both synchronous and asynchronous modes for mapping. Synchronous mode, which is the default, will work best in most use cases. 

In both modes, the robot is constantly moving and collecting data from its LIDAR sensor. The key difference lies in how this data is processed:

Synchronous mapping (default for Nav2):

  • The robot processes LIDAR scans in a strict sequence, one after another.
  • Each scan is fully integrated into the map before the next one is processed.
  • This can result in more consistent and accurate maps, but might introduce a slight delay in map updates.

Asynchronous mapping:

  • The robot processes LIDAR scans as soon as they become available.
  • Multiple scans can be processed simultaneously.
  • This can lead to faster map updates, but might occasionally result in slight inconsistencies in the map.

For most applications, the default synchronous mode will provide the best balance of accuracy and performance. However, in scenarios where rapid map updates are important, asynchronous mode might be beneficial.

To use asynchronous mode, set the “slam” launch configuration parameter in the main bringup launch file to False, and then launch a separate launch file dedicated to asynchronous mapping:

ros2 launch slam_toolbox online_async_launch.py

Remember, these modes affect only how the data is processed, not how it’s collected. The robot continues to move and gather data constantly in both modes.

Final Notes on Navigation Tuning

This guide represents my preferred configuration after thousands of hours of working with mobile robots and ROS 2. However, remember that:

  1. Every robot is unique – use these parameters as a starting point, not absolute rules
  2. Test changes systematically – modify one parameter at a time and observe the effects
  3. Safety first – always test new configurations in a safe environment
  4. Consider your specific needs:
    • Is smooth motion more important than precise positioning?
    • Do you need to prioritize CPU efficiency?
    • How dynamic is your environment?

Getting Help

If you run into issues while tuning, the official Nav2 documentation is the best first stop. You can also search the Nav2 GitHub issues or ask a question on Robotics Stack Exchange.

Next Steps

Try these parameters on your robot, observe its behavior, and adjust based on your specific needs. Navigation tuning is an iterative process – don’t be afraid to experiment and find what works best for your application.

Keep Building!

Autonomous Navigation for a Mobile Robot Using ROS 2 Jazzy

Introduction

In the previous tutorial, we learned how to use SLAM (Simultaneous Localization and Mapping) to create a map of the environment while keeping track of the robot’s location. Now that we have a map, it’s time to take things to the next level and make our robot navigate autonomously. 

The official autonomous navigation tutorial is here on the Nav2 website

By the end of this tutorial, you’ll have a robot that can independently navigate from point A to point B while avoiding obstacles. Here is what you will make happen in this tutorial:

go-to-goal-gazebo-nav2-small
go-to-goal-cafe-world-rviz-small

Real-World Applications

Autonomous navigation has numerous applications in the real world. Here are a few examples of how the concepts you’ll learn in this tutorial can be applied:

  • Home service robots: Imagine a robot that can navigate your home to perform tasks like delivering snacks, collecting laundry, or even reminding you to take your medicine. Autonomous navigation enables robots to move around homes safely and efficiently.
  • Warehouse automation: In large warehouses, autonomous mobile robots can be used to transport goods from one location to another. They can navigate through the warehouse aisles, avoid obstacles, and deliver items to the correct storage locations or shipping stations.
  • Agriculture: Autonomous navigation can be used in agricultural robots that perform tasks such as harvesting, planting, or soil analysis. These robots can navigate through fields, greenhouses, or orchards without human intervention, increasing efficiency and reducing labor costs.
  • Hospital and healthcare: In hospitals, autonomous mobile robots can be used to transport medical supplies, medication, or even patients. They can navigate through the hospital corridors and elevators, ensuring timely and safe delivery of essential items.
  • Search and rescue: In emergency situations, autonomous robots can be deployed to search for and rescue people in hazardous environments. These robots can navigate through rubble, collapsed buildings, or other challenging terrains to locate and assist victims.

By mastering autonomous navigation, you’ll be opening doors to a wide range of exciting applications that can benefit various industries and improve people’s lives.

Prerequisites

All my code for this project is located here on GitHub.

Understanding the Configuration File

First, let’s navigate to the folder where the YAML configuration file is located.

Open a terminal and move to the config folder:

cd ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/config/

Now, let’s open the YAML file using a text editor. 

gedit rosmaster_x3_nav2_default_params.yaml

With the file open, we can now explore and understand the different sections and parameters within the configuration file. 

Let’s break down the parts that are most important for autonomous navigation.

amcl (Adaptive Monte Carlo Localization)

AMCL helps the robot determine its location within the map. It uses a particle filter approach, where each particle represents a possible pose (position and orientation) of the robot. 

The parameters in the amcl section control how AMCL updates and resamples these particles based on sensor data and the robot’s motion. 

Key parameters include the number of particles (max_particles), the frequency of updates (update_min_d and update_min_a), and the noise models for the motion and the LIDAR (robot_model_type, laser_model_type).
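
For orientation, here is a sketch of those key amcl parameters with the stock Nav2 defaults. Note that an omnidirectional base like the mecanum-wheeled ROSMASTER X3 would typically use nav2_amcl::OmniMotionModel instead of the differential model shown here:

amcl:
  ros__parameters:
    min_particles: 500
    max_particles: 2000
    update_min_d: 0.25                # meters of travel before a filter update
    update_min_a: 0.2                 # radians of rotation before a filter update
    robot_model_type: "nav2_amcl::DifferentialMotionModel"
    laser_model_type: "likelihood_field"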

bt_navigator (Behavior Tree Navigator)

The Behavior Tree Navigator is responsible for high-level decision making during navigation. It uses a behavior tree structure to define the logic for tasks like following a path, avoiding obstacles, and recovering from stuck situations. 

The bt_navigator section specifies the plugins and behaviors used by the navigator, such as the path following algorithm (FollowPath), the obstacle avoidance strategy, and the recovery behaviors.

controller_server

The controller server handles the execution of the robot’s motion commands. It receives the path from the planner and generates velocity commands to follow that path. 

The parameters in the controller_server section configure the controller plugins, such as the path tracking algorithm (FollowPath), the goal tolerance (xy_goal_tolerance, yaw_goal_tolerance), and the velocity thresholds (min_x_velocity_threshold, min_theta_velocity_threshold). 

It also includes parameters for the progress checker (progress_checker_plugin), which monitors the robot’s progress along the path.

You will notice that we are using the Model Predictive Path Integral Controller. I chose this controller because it works well for mecanum wheeled robots. 

You can find a high-level description of the other controllers on this page.
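
To give a feel for the structure, here is a trimmed sketch of the controller_server section. The values mirror the stock Nav2 example and are assumptions here; the MPPI controller itself has many more parameters (batch size, critics, and so on) that I am omitting. See my full yaml file on GitHub for the complete configuration:

controller_server:
  ros__parameters:
    controller_frequency: 20.0
    min_x_velocity_threshold: 0.001
    min_y_velocity_threshold: 0.5
    min_theta_velocity_threshold: 0.001
    progress_checker_plugin: "progress_checker"
    goal_checker_plugins: ["general_goal_checker"]
    controller_plugins: ["FollowPath"]
    progress_checker:
      plugin: "nav2_controller::SimpleProgressChecker"
      required_movement_radius: 0.5
      movement_time_allowance: 10.0
    general_goal_checker:
      plugin: "nav2_controller::SimpleGoalChecker"
      xy_goal_tolerance: 0.25
      yaw_goal_tolerance: 0.25
    FollowPath:
      plugin: "nav2_mppi_controller::MPPIController"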

velocity_smoother

The velocity smoother takes the velocity commands from the controller and smooths them to ensure smooth and stable robot motion. It helps to reduce sudden changes in velocity and acceleration. 

The velocity_smoother section controls the smoothing algorithm, such as the smoothing frequency (smoothing_frequency), velocity limits (max_velocity, min_velocity), and acceleration limits (max_accel, max_decel).

planner_server

The planner server is responsible for generating paths from the robot’s current position to the goal location. It uses the global costmap to find the optimal path while avoiding obstacles. The planner_server section specifies the planner plugin (GridBased) and its associated parameters, such as the tolerance for the path search (tolerance) and the use of A* algorithm (use_astar).

smoother_server

The smoother server is responsible for optimizing and smoothing the global path generated by the planner before it’s sent to the controller. This helps create more natural and efficient robot trajectories.

behavior_server

The behavior server handles recovery behaviors when the robot gets stuck or encounters an error during navigation. It includes plugins for actions like spinning in place (Spin), backing up (BackUp), or waiting (Wait). 

These sections work together to provide a comprehensive configuration for the robot’s autonomous navigation system. By adjusting these parameters in the YAML file, you can fine-tune the robot’s behavior to suit your specific requirements and environment.

collision_monitor

The collision monitor is a safety system that continuously checks for potential collisions and can trigger emergency behaviors when obstacles are detected.

Launch Autonomous Navigation

Let’s start navigating. Open a terminal window, and use this command to launch the robot:

nav 

or

bash ~/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_bringup/scripts/rosmaster_x3_navigation.sh 

In the bash script, you can change cafe -> house everywhere you see it in the file in case you want to autonomously navigate your robot in the house world.

#!/bin/bash
# Single script to launch the Yahboom ROSMASTERX3 with Gazebo, Nav2 and ROS 2 Controllers

cleanup() {
    echo "Cleaning up..."
    echo "Killing all ROS 2 and Gazebo related processes..."
    sleep 5
    pkill -9 -f "ros2|gazebo|gz|nav2|amcl|bt_navigator|nav_to_pose|rviz2|python3|assisted_teleop|cmd_vel_relay|robot_state_publisher|joint_state_publisher|move_to_free|mqtt|autodock|cliff_detection|moveit|move_group|basic_navigator"
    sleep 0.2

    echo "Removing IPC resources..."
    ipcrm -a
    sleep 0.2

    echo "Cleaning up shared memory..."
    rm -rf /dev/shm/fastrtps_* /dev/shm/sem.fastrtps_*
    sleep 0.2

    echo "Cleaning up temporary files..."
    rm -rf /tmp/fastrtps* /tmp/ros* /tmp/snap.ros* /tmp/*.sem /var/run/ros*
    sleep 0.2

    echo "Restarting ROS 2 daemon..."
    ros2 daemon stop
    sleep 0.2
    ros2 daemon start

    kill 0
    exit
}

# Set up cleanup trap
trap 'cleanup' SIGINT

# Check if SLAM argument is provided
if [ "$1" = "slam" ]; then
    SLAM_ARG="slam:=True"
else
    SLAM_ARG="slam:=False"
fi

# For cafe.world -> z:=0.20
# For house.world -> z:=0.05
# To change Gazebo camera pose: gz service -s /gui/move_to/pose --reqtype gz.msgs.GUICamera --reptype gz.msgs.Boolean --timeout 2000 --req "pose: {position: {x: 0.0, y: -2.0, z: 2.0} orientation: {x: -0.2706, y: 0.2706, z: 0.6533, w: 0.6533}}"

echo "Launching Gazebo simulation with Nav2..."
ros2 launch yahboom_rosmaster_bringup rosmaster_x3_navigation_launch.py \
   enable_odom_tf:=false \
   headless:=False \
   load_controllers:=true \
   world_file:=cafe.world \
   use_rviz:=true \
   use_robot_state_pub:=true \
   use_sim_time:=true \
   x:=0.0 \
   y:=0.0 \
   z:=0.20 \
   roll:=0.0 \
   pitch:=0.0 \
   yaw:=0.0 \
   "$SLAM_ARG" \
   map:=/home/ubuntu/ros2_ws/src/yahboom_rosmaster/yahboom_rosmaster_navigation/maps/cafe_world_map.yaml &

echo "Waiting 25 seconds for simulation to initialize..."
sleep 25
echo "Adjusting camera position..."
gz service -s /gui/move_to/pose --reqtype gz.msgs.GUICamera --reptype gz.msgs.Boolean --timeout 2000 --req "pose: {position: {x: 0.0, y: -2.0, z: 2.0} orientation: {x: -0.2706, y: 0.2706, z: 0.6533, w: 0.6533}}"

# Keep the script running until Ctrl+C
wait

Running the bash script will launch the Gazebo simulator with the Yahboom ROSMASTER X3 robot and the necessary navigation nodes.

You should see Gazebo and RViz windows open up. In Gazebo, you will see the simulated environment with the robot. RViz will display the robot’s sensor data, the map, and the navigation-related visualizations.

Wait for all the nodes to initialize and for the robot to be spawned in the Gazebo environment. You can check the terminal output for any error messages or warnings.

Once everything is up and running, you will see the robot in the simulated environment, ready to navigate autonomously.

1-starting-point

Initialize the Location of the Robot Using the 2D Pose Estimate Button in RViz

Before the robot can start navigating autonomously, it needs to know its initial position and orientation (“initial pose”) within the map. This process is known as localization. In this section, we will learn how to set the initial pose of the robot using the “2D Pose Estimate” button in RViz.

1. In the RViz window, locate the “2D Pose Estimate” button in the toolbar at the top.

1-2d-pose-estimate

2. Click on the “2D Pose Estimate” button to activate the pose estimation tool.

3. Move your mouse cursor to the location on the map where you want to set the initial pose of the robot. This should be the robot’s actual starting position in the simulated environment.

4. Click and hold the left mouse button at the desired location on the map.

5. While holding the left mouse button, drag the mouse in the direction that represents the robot’s initial orientation. The arrow will follow your mouse movement, indicating the direction the robot is facing.

6. Release the left mouse button to set the initial pose of the robot.

7. The robot’s localization system (AMCL) will now use this initial pose as a starting point and continuously update its estimated position and orientation based on sensor data and movement commands.

Setting the initial pose is important because it gives the robot a reference point to start localizing itself within the map. Without an accurate initial pose, the robot may have difficulty determining its precise location and orientation, which can lead to navigation issues.

Remember to set the initial pose whenever you restart the autonomous navigation system or if you manually relocate the robot in the simulated environment.

By the way, for a real-world application, the initial pose can be set automatically using the NVIDIA Isaac ROS Map Localization package. This package uses LIDAR scans and deep learning to automatically estimate the robot’s pose within a pre-built map. It provides a more automated and accurate way of initializing the robot’s location compared to manually setting the pose in RViz.

Send a Goal Pose

Once the robot’s initial pose is set, you can command it to navigate autonomously to a specific goal location on the map. RViz provides an intuitive way to send navigation goals using the “Nav2 Goal” button. Follow these steps to send a goal pose to the robot:

1. In the RViz window, locate the “Nav2 Goal” button in the toolbar at the top.

2-nav2-goal

2. Click on the “Nav2 Goal” button to activate the goal setting tool.

3. Move your mouse cursor to the location on the map where you want the robot to navigate. This will be the goal position.

4. Click and hold the left mouse button at the desired goal location on the map.

5. While holding the left mouse button, drag the mouse in the direction that represents the desired orientation of the robot at the goal position. An arrow will appear, indicating the goal pose.

6. Release the left mouse button to set the goal pose.

7. The robot will now plan a path from its current position to the goal pose, taking into account the obstacles in the map and the configured navigation parameters.

8. Once the path is planned, the robot will start navigating towards the goal pose, following the planned trajectory.

9. As the robot moves, you will see its position and orientation updating in real-time on the map in RViz. 

10. The robot will continue navigating until it reaches the goal pose or until it encounters an obstacle that prevents it from reaching the goal.

You can send multiple goal poses to the robot by repeating the above steps. Each time you set a new goal pose, the robot will replan its path and navigate towards the new goal.

autonomous-navigation-1

Keep in mind that the robot’s ability to reach the goal pose depends on various factors, such as the accuracy of the map, the presence of obstacles, and the configuration of the navigation stack. If the robot is unable to reach the goal pose, it may attempt to replan or abort the navigation task based on the configured behavior.

During autonomous navigation, you can monitor the robot’s progress, path planning, and other relevant information through the RViz visualizations. The navigation stack provides feedback on the robot’s status, including any errors or warnings.

By sending goal poses, you can test the robot’s autonomous navigation capabilities and observe how it handles different scenarios in the simulated environment.

Send Waypoints

In addition to sending a single goal pose, you can also command the robot to navigate through a sequence of waypoints. Waypoints are intermediate goal positions that the robot should pass through before reaching its final destination. This is useful when you want the robot to follow a specific path or perform tasks at different locations.

Here’s how to do it…

Set the initial pose of the robot by clicking the “2D Pose Estimate” on top of the RViz2 screen. 

Then click on the map in the estimated position where the robot is in Gazebo.

Now click the “Waypoint/Nav Through Poses” mode button in the bottom left corner of RViz. Clicking this button puts the system in waypoint follower mode.

3-waypoint-nav-through-poses-rviz

Click the “Nav2 Goal” button, and click on areas of the map where you would like your robot to go (i.e. select your waypoints). 

Click the Nav2 Goal button, set a waypoint. Click it again, and set another waypoint. Select as many waypoints as you want. 

4-as-many-waypoints-as-you-want

Each waypoint is labeled wp_#, where # is the number of the waypoint.

4-each-waypoint-is-labeled

When you’re ready for the robot to follow the waypoints, click the Start Waypoint Following button.

5-start-waypoint-following

If you want the robot to visit each location without stopping, click the Start Nav Through Poses button.

You should see your robot autonomously navigate to all the waypoints. 

Check the CPU and Memory Usage

I often like to check the CPU and memory usage. In general, I want to see usage under 50%. Once you get above 80%, you really start to run into performance issues.

sudo apt install htop
htop

My CPU usage isn’t great. You can see it is quite high.

6-htop-not-great

Let’s close the RViz window, and see what we get:

7-much-better

Looks much better.

You can see how RViz really is a resource hog. For this reason, for the robots I develop professionally, I do not run RViz in a production environment. I only use it during development and for debugging.

That’s it! Keep building.