ROS 2 Navigation Tuning Guide – Nav2

In this tutorial, we will explore the process of fine-tuning the parameters for Nav2 (the ROS 2 Navigation stack), which is essential for achieving optimal navigation performance in your mobile robot. With hundreds of parameters in the YAML file for Nav2, the configuration process can be quite confusing and time-consuming. However, this guide is designed to save you time and effort by leveraging my years of experience in tuning parameters for ROS-based mobile robots.

If you are working with ROS 1, I encourage you to check out this ROS Navigation Tuning Guide by Kaiyu Zheng. Otherwise, if you are working with ROS 2, you have come to the right place.

Prerequisites

All my code for this project is located here on GitHub. The file we will be going through is located here.

For the purposes of comparison, you can also take a look at these parameters here from the Nav2 GitHub for some other default values.

Introduction

It is important to understand that the tuning process is more of an art than a science. It involves a significant amount of trial and error and experimentation to find the right balance of parameters that work best for your specific robot and its operating environment. Don’t be discouraged if it takes some time to get the desired results.

In this guide, I will share my insights on which parameters are best left at their default values and which ones are worth investing time in tuning, regardless of the type of mobile robot you are working with. I will also provide my recommended values that have proven to work well for most of my projects.

By following the advice in this guide, you can streamline your Nav2 parameter tuning process and avoid the common pitfalls that many roboticists face when optimizing their navigation stack. Whether you’re a beginner or an experienced roboticist, this guide will help you understand the key concepts and reasoning behind each parameter, enabling you to make informed decisions and achieve the best possible navigation performance for your mobile robot.

amcl

Description

Here is the official configuration guide for AMCL.

The AMCL (Adaptive Monte Carlo Localization) package helps the robot understand its location on a map. It uses a particle filter to estimate the robot’s position by matching laser scans from a LIDAR sensor with the existing map. This process allows the robot to navigate more accurately in known environments.

Imagine you’re blindfolded in your house. How would you figure out exactly where you are? You would probably reach out to touch walls and furniture, comparing what you feel to your mental map of your home. This is similar to how AMCL helps a robot understand its position in a known environment.

Let’s break down how AMCL works…

First, think about the name itself. “Adaptive” means it can adjust itself based on circumstances. “Monte Carlo” refers to using random sampling to solve problems (named after the famous casino!), and “Localization” simply means finding out where something is.

AMCL starts by spreading out thousands of virtual “guesses” (we call them particles) across the map. Each particle represents a possible position and direction the robot might be in. Think of it like dropping thousands of pins on a map, with each pin being a “maybe I’m here” guess.

As the robot moves and uses its LIDAR, it does the following:

  • It looks at what the LIDAR is seeing – walls, corners, furniture, etc.
  • It compares these readings to what it would expect to see at each of those guessed positions
  • It gives higher “scores” to guesses that match well with what the sensors are actually seeing
  • It gradually eliminates unlikely guesses and creates new ones around the more promising positions

The “Adaptive” part is particularly smart: when the robot is uncertain about its position (like when you first turn it on), it uses more particles to cast a wider net. Once it is pretty sure where it is, it reduces the number of particles to save computing power while maintaining accuracy.

Why is this so important? Think of it this way: if you’re using a GPS navigation app but it shows you on the wrong street, all its directions will be useless! Similarly, a robot needs to know exactly where it is before it can effectively plan paths or navigate to goals.

The AMCL server in Nav2 handles all of this complex math and probability behind the scenes. It:

  • Can start with either a rough or precise initial guess of the robot’s position
  • Constantly updates its position estimate as the robot moves
  • Handles situations where the environment has changed somewhat (30% or less in my experience) from the original map
  • Shares its position information with other parts of the navigation system

One of the most fascinating aspects of AMCL is how it mirrors human navigation behaviors. Just as you become more confident about your position when you recognize more landmarks, AMCL becomes more certain as it matches more sensor readings to known map features.

Parameters

alpha1: (unitless)

  • Default: 0.2
  • My value: 0.2
  • Matches the default value in the official guide. This is a reasonable default for the expected process noise in odometry’s rotation estimate from rotation.

alpha1 controls how much rotation affects the robot’s rotation estimate. Increasing this value makes the system more cautious about rotational movements, assuming more uncertainty when the robot turns. Decreasing it makes the system more confident about its rotation estimates, which can be risky if your robot’s sensors aren’t very accurate.

alpha2: (unitless)

  • Default: 0.2
  • My value: 0.2
  • Matches the default value. A suitable default for expected process noise in rotation estimate from translation.

alpha2 handles how forward motion affects rotation estimates. If you increase this value, the robot becomes more uncertain about its rotation when moving forward, leading to more conservative behavior. Decreasing it means the robot will be more confident that moving forward won’t affect its rotational accuracy, which might be too optimistic for many robots.

alpha3: (unitless)

  • Default: 0.2
  • My value: 0.2
  • Matches the default. Appropriate for expected process noise in translation estimate from translation.

alpha3 determines how forward motion affects position estimates. Increasing this makes the robot more uncertain about its position when moving forward, causing it to rely more on sensor data. Decreasing it means the robot will trust its forward motion more, which can be good for robots with very accurate wheel encoders but risky for others.

alpha4: (unitless)

  • Default: 0.2
  • My value: 0.2
  • Same as default. Works well for expected process noise in translation estimate from rotation in most cases.

alpha4 manages how rotation affects position estimates. When increased, the system assumes more position uncertainty during turns, making it more reliant on sensor data to confirm its location. Decreasing it means the robot will be more confident about maintaining position accuracy during turns, which might not be realistic for many robots.

alpha5: (unitless)

  • Default: 0.2
  • My value: 0.2
  • Aligns with default. Good value for translation noise in omni models.

Specific to omnidirectional robots, this controls translation noise. Increasing this value makes the robot more cautious about its position estimates during omnidirectional movement. Decreasing it makes the robot more confident in its position during sideways motion, which should only be done if you have very accurate encoders.
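
Taken together, the five alpha parameters form the odometry noise model. Here is a sketch of the corresponding YAML, assuming the standard Nav2 `amcl`/`ros__parameters` layout, with the values discussed above:

```yaml
amcl:
  ros__parameters:
    # Odometry motion-model noise (unitless). 0.2 across the board is a
    # safe starting point for most robots.
    alpha1: 0.2  # rotation noise caused by rotation
    alpha2: 0.2  # rotation noise caused by translation
    alpha3: 0.2  # translation noise caused by translation
    alpha4: 0.2  # translation noise caused by rotation
    alpha5: 0.2  # translation noise, omni motion models only
```

If your wheel odometry is noticeably noisy, raising the relevant alpha is usually safer than lowering it.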

base_frame_id: (string)

  • Default: “base_footprint”
  • My value: “base_footprint”
  • Matches the typical default base frame for many robots.

beam_skip_distance: (meters)

  • Default: 0.5
  • My value: 0.5
  • Same as default. 0.5m is a reasonable max distance to consider skipping beams that don’t match the map.

This parameter tells AMCL how different a laser reading can be from the expected map reading before being considered for skipping. For example, if a laser beam expects to hit a wall at 3 meters according to the map, but actually hits something at 2.3 meters, the difference is 0.7 meters. If this difference is larger than beam_skip_distance (i.e. 0.5 meters), this reading becomes a candidate for being skipped. 

Increasing the beam_skip_distance means more readings that don’t match the map will be considered for skipping, which helps when your environment changes often but might make localization less precise. Decreasing it means only readings that are very different from expectations will be considered for skipping, making localization more precise but potentially slower in changing environments.

beam_skip_error_threshold: (unitless)

  • Default: 0.9
  • My value: 0.9
  • Matches default. 0.9 is a good percentage of beams to skip before forcing a full update due to bad convergence.

Percentage of beams that can be skipped before forcing a full update. Increasing this allows more beams to be skipped before requiring a full update, which can help in noisy environments. Decreasing it makes the system more conservative about skipping beams, potentially improving accuracy but increasing computational load.

beam_skip_threshold: (unitless)

  • Default: 0.3
  • My value: 0.3
  • Aligns with default. 0.3 works well as the percentage of beams required to skip.

This is the minimum fraction of a scan’s beams that must qualify as skip candidates before any beams are actually skipped. Increasing it makes beam skipping harder to trigger, so outlier readings are only ignored when many beams disagree with the map. Decreasing it lets skipping kick in with fewer mismatched beams, which helps in frequently changing environments but can discard genuinely informative readings.

do_beamskip: (bool)

  • Default: False
  • My value: false
  • Same as default. Beam skipping is not necessary in the likelihood field model.

This switch enables or disables beam skipping. When true, AMCL ignores beams that disagree strongly with the map (governed by the three beam_skip parameters above), which can help when people or unmapped obstacles regularly block the LIDAR. When false, every beam is used, which is simpler and usually sufficient.
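
The four beam-skipping parameters work as a group. A YAML sketch with the values discussed above (layout assumed from the standard Nav2 params file):

```yaml
amcl:
  ros__parameters:
    do_beamskip: false              # master switch; leave off for most setups
    beam_skip_distance: 0.5         # meters of map mismatch before a beam is a skip candidate
    beam_skip_threshold: 0.3        # fraction of beams that must qualify before skipping kicks in
    beam_skip_error_threshold: 0.9  # fraction of skipped beams that forces a full update
```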

global_frame_id: (string)

  • Default: “map”
  • My value: “map”
  • Matches the standard global frame name.

lambda_short: (unitless)

  • Default: 0.1
  • My value: 0.1
  • Same as default. A good exponential decay parameter for the z_short part of the model.

Increasing this value makes the system more sensitive to unexpected short readings, which can help detect obstacles. Decreasing it makes the system more tolerant of short readings, which might help in environments with lots of small objects.

laser_likelihood_max_dist: (meters)

  • Default: 2.0
  • My value: 2.0
  • Matches default. 2.0m is a reasonable maximum distance for obstacle inflation in the likelihood field model.

Increasing this value makes the system consider obstacles further away, which can improve accuracy but increases computation time. Decreasing it makes the system focus on closer obstacles, which can be faster but might miss important features.

laser_max_range: (meters)

  • Default: 100.0
  • My value: 100.0
  • Aligns with default. 100m is far enough to cover most laser scanners. -1.0 would use the laser’s reported max range instead.

Increasing this lets the robot use more distant readings but can slow down processing and might include more noise. Decreasing it means ignoring distant readings, which can be good in cluttered environments but might make localization harder in large open spaces.

laser_min_range: (meters)

  • Default: -1.0
  • My value: -1.0
  • Same as default. -1.0 will use the laser’s reported minimum range.

Increasing this ignores closer readings, which can help if your LIDAR gets a lot of false close readings. Decreasing it (or keeping at -1.0) uses the sensor’s built-in minimum range, which is usually fine.

laser_model_type: (string)

  • Default: “likelihood_field”
  • My value: “likelihood_field”
  • Matches the recommended default, which works well for most use cases. Considers the beamskip feature if enabled.

There are three main options for laser_model_type:

  1. “likelihood_field” (default)
    • This is the recommended default
    • Most computationally efficient
    • Works well for most scenarios
    • Creates a pre-computed likelihood field of expected laser readings
  2. “beam”
    • Also known as the ray-cast model
    • More computationally intensive
    • Can be more accurate in some situations
    • Directly simulates individual laser beams
    • Better at handling dynamic environments but slower
  3. “likelihood_field_prob”
    • A probabilistic variant of the likelihood field model
    • Adds probability calculations to the standard likelihood field
    • Can be more accurate than basic likelihood field but more computationally intensive
    • Good for environments where you need more precise probability estimates
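
As a YAML excerpt, the sensor-model choice and the laser range limits discussed above look like this (a sketch assuming the standard Nav2 params layout):

```yaml
amcl:
  ros__parameters:
    laser_model_type: "likelihood_field"
    laser_likelihood_max_dist: 2.0  # meters of obstacle inflation in the likelihood field
    laser_max_range: 100.0          # or -1.0 to use the scanner's reported maximum
    laser_min_range: -1.0           # -1.0 uses the scanner's reported minimum
```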

max_beams: (unitless)

  • Default: 60
  • My value: 60
  • Matches default. 60 is a good number of evenly-spaced beams to use from each scan.

Increasing this can improve accuracy but slows down processing significantly. Decreasing it speeds up processing but might make localization less accurate.

max_particles: (unitless)

  • Default: 2000
  • My value: 2000
  • Same as default. 2000 is the recommended maximum number of particles for the particle filter.

Increasing this can improve accuracy but uses more computational power. Decreasing it saves processing power but might make localization less reliable, especially in complex environments.

min_particles: (unitless)

  • Default: 500
  • My value: 500
  • Matches default. 500 is a good minimum to ensure a sufficient particle distribution.

Increasing this value ensures more thorough position checking but uses more processing power. Decreasing it saves computation but might make the system less reliable at finding the robot’s position.

odom_frame_id: (string)

  • Default: “odom”
  • My value: “odom”
  • Same as default. “odom” is the standard frame for odometry data.

pf_err: (unitless)

  • Default: 0.05
  • My value: 0.05
  • Matches default. 0.05 is a reasonable value for the particle filter population error.

Increasing this value makes the system reduce particles more aggressively, saving computation but potentially reducing accuracy. Decreasing it maintains more particles, using more computation but potentially improving accuracy.

pf_z: (unitless)

  • Default: 0.99
  • My value: 0.99
  • Same as default. 0.99 works well for the particle filter population density.

Increasing this value maintains more particles longer, using more computation but potentially improving accuracy. Decreasing it reduces particles more quickly, saving computation but possibly reducing accuracy.
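
min_particles, max_particles, pf_err, and pf_z together govern the adaptive sampling: the filter grows or shrinks the particle set between the two bounds based on the error and density targets. A YAML sketch with the values above:

```yaml
amcl:
  ros__parameters:
    min_particles: 500   # floor: never drop below this many guesses
    max_particles: 2000  # ceiling: caps CPU use when very uncertain
    pf_err: 0.05         # particle filter population error
    pf_z: 0.99           # particle filter population density
```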

recovery_alpha_fast: (unitless)

  • Default: 0.0
  • My value: 0.0
  • Matches default. 0.0 disables the fast particle filter recovery.

When set above 0.0, it enables a fast recovery mode that quickly spreads particles to find the robot’s position. Increasing this makes recovery more aggressive but potentially less stable. Keeping it at 0.0 means relying on normal localization methods.

recovery_alpha_slow: (unitless)

  • Default: 0.0
  • My value: 0.0
  • Same as default. 0.0 disables the slow particle filter recovery.

When set above 0.0, it enables a slower, more methodical recovery mode. Increasing this value makes recovery more thorough but slower. Keeping it at 0.0 means relying on normal localization methods.

resample_interval: (unitless)

  • Default: 1
  • My value: 1
  • Matches default. Resampling after each filter update is usually appropriate.

A value of 1 means the filter resamples after every update. Increasing this number means resampling happens less often, which can help maintain diverse position estimates but might make the robot slower to adapt. Decreasing it below 1 isn’t recommended.

robot_model_type: (string)

  • Default: “nav2_amcl::DifferentialMotionModel”
  • My value: “nav2_amcl::OmniMotionModel”
  • In my navigation tutorials, I am using an omnidirectional mecanum wheeled robot. If you are using a differential drive robot, use the nav2_amcl::DifferentialMotionModel. Differential drive robots are the most common mobile robots in industry.

save_pose_rate: (Hz)

  • Default: 0.5
  • My value: 0.5
  • Same as default. 0.5Hz is a good rate for saving the estimated pose to the parameter server.

A value of 0.5 means it saves once every two seconds. Increasing this saves positions more frequently but uses more system resources. Decreasing it saves less often but is more efficient.

sigma_hit: (meters)

  • Default: 0.2
  • My value: 0.2
  • Matches default. 0.2 is a reasonable standard deviation for the Gaussian model in the z_hit part.

Increasing this value makes the robot more tolerant of small sensor inaccuracies but might reduce precision. Decreasing it makes the robot more strict about matching sensor readings to the map.

tf_broadcast: (bool)

  • Default: True
  • My value: true
  • Same as default. Broadcasting the map->odom transform is why we use AMCL.

This parameter should almost always be true. The main job of AMCL is to figure out how to connect the map to the robot’s odometry, and this parameter enables that connection.

transform_tolerance: (seconds)

  • Default: 1.0
  • My value: 1.0
  • Matches default. 1.0s is a good duration for how long the published transform should be considered valid.

If set to 1.0, the robot will trust its position data for 1 second. Increasing this helps handle communication delays but might use outdated positions. Decreasing it ensures more current data but might cause jittery behavior.

update_min_a: (radians)

  • Default: 0.2
  • My value: 0.05
  • Different from the default of 0.2, but 0.05 requires less rotational movement of the robot before performing a filter update.

The default 0.2 is about 11.5 degrees. Increasing this value means fewer updates, saving computation. Decreasing it (like to 0.05) makes the robot update more often, which is better for slow or precise movements.

update_min_d: (meters)

  • Default: 0.25
  • My value: 0.05
  • Smaller than the default of 0.25, requiring less translational movement before updating. Better for slow-moving robots.
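
Lowering both update thresholds makes the filter refresh during slow, precise maneuvers. The corresponding YAML sketch (values as above):

```yaml
amcl:
  ros__parameters:
    update_min_a: 0.05   # radians (~2.9 degrees) of rotation before a filter update
    update_min_d: 0.05   # meters of travel before a filter update
    resample_interval: 1 # resample after every filter update
```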

z_hit: (unitless)

  • Default: 0.5
  • My value: 0.5
  • Same as default. A good mixture weight for the z_hit part of the likelihood field model.

How much to trust sensor readings that match the map perfectly. A weight of 0.5 means 50% importance. Increasing this makes the robot trust perfect matches more. Decreasing it makes the robot more forgiving of small mismatches.

z_max: (unitless)

  • Default: 0.05
  • My value: 0.05
  • Matches default. A reasonable mixture weight for the z_max part.

How much to trust maximum-range readings (when the sensor doesn’t see anything in range). A small value of 0.05 means these readings aren’t very important. Increasing this makes the robot trust empty space more. Decreasing it makes the robot mostly ignore maximum-range readings.

z_rand: (unitless)

  • Default: 0.5
  • My value: 0.5
  • Same as default. Works well as the mixture weight for the z_rand part.

How much to expect random, unexplainable sensor readings. A value of 0.5 means expecting quite a few random readings. Increasing this helps in busy, unpredictable environments. Decreasing it assumes sensor readings should mostly match the map.

z_short: (unitless)

  • Default: 0.005
  • My value: 0.05
  • Different from the default of 0.005. A higher value of 0.05 gives more weight to the z_short part of the model, which accounts for unexpected short readings. This may be beneficial if your robot frequently encounters objects that cause short readings, such as small obstacles or reflective surfaces.
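
The four z_* weights, plus sigma_hit and lambda_short, make up the measurement mixture model. A YAML sketch with the values above:

```yaml
amcl:
  ros__parameters:
    z_hit: 0.5         # weight for readings that match the map
    z_rand: 0.5        # weight for random, unexplained readings
    z_max: 0.05        # weight for max-range (nothing seen) readings
    z_short: 0.05      # weight for unexpected short readings (raised from 0.005)
    sigma_hit: 0.2     # meters; Gaussian std dev for the z_hit part
    lambda_short: 0.1  # exponential decay for the z_short part
```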

scan_topic: (string)

  • Default: scan
  • My value: scan
  • Same as example. “scan” is the typical ROS 2 topic for laser scan data.

This parameter must match exactly what your robot’s laser scanner is using. Changing this is only needed if your robot uses a different topic name for its laser data.

bt_navigator

Description

Here is the official configuration guide for bt_navigator.

The bt_navigator (Behavior Tree Navigator) package acts as the central decision-maker in Nav2, coordinating the robot’s navigation tasks through behavior trees. 

Think of it like giving directions to someone in a complex building: “Go down the hall, if you see a closed door, either wait or try another route. If you get lost, go back to the last place you were sure about.” This is how bt_navigator helps robots make navigation decisions.

The bt_navigator implements two main types of navigation:

  1. NavigateToPose: Gets the robot to a single destination
  2. NavigateThroughPoses: Guides the robot through multiple waypoints

When navigating, the bt_navigator coordinates the entire process. It takes your goal position, works with planners to create and follow paths, monitors for problems, triggers recovery behaviors when needed, and provides status updates. The behavior tree structure allows the robot to handle real-world complexity by making dynamic decisions, like finding new paths or initiating recovery behaviors when obstacles appear.

Parameters

global_frame: (string)

  • Default: “map”
  • My value: “map”
  • Matches default. “map” is the conventional name for the global reference frame.

You’ll almost never change this from “map” unless you have a very specific multi-map setup.

robot_base_frame: (string)

  • Default: “base_link”
  • My value: “base_link”
  • Matches default. “base_link” is the conventional name for the robot’s base frame.

You’ll typically only change this value if your robot uses a different naming convention. Using the wrong name will prevent the robot from navigating properly.

odom_topic: (string)

  • Default: “odom”
  • My value: “/odometry/filtered”
  • Different from the default. Using “/odometry/filtered” means I am using a filtered odometry source, which can provide better localization estimates.

bt_loop_duration: (milliseconds)

  • Default: 10
  • My value: 10
  • Matches default. 10 ms is a reasonable value for the behavior tree execution loop duration.

How often the robot’s “brain” (behavior tree) makes decisions. At 10ms, it makes 100 decisions per second. Increasing this gives the CPU more breathing room but makes the robot react slower. Decreasing it makes the robot more responsive but uses more CPU power.

default_server_timeout: (milliseconds)

  • Default: 20
  • My value: 20
  • Matches default. 20 ms is a good timeout value for the behavior tree action node to wait for an action server response.

How long to wait for navigation components to respond. At 20ms, it’s a brief timeout that catches quick failures. Increasing this makes the system more tolerant of slow responses but could delay error detection. Decreasing it might cause premature timeouts if components are slow.

wait_for_service_timeout: (milliseconds)

  • Default: 1000
  • My value: 1000
  • Matches default. This is the timeout value for BT nodes waiting for acknowledgement from a service or action server during initialization. It can be overwritten for individual BT nodes if needed.

How long to wait for navigation services to start up. At 1 second (1000ms), it’s usually enough for normal startup. Increasing this helps with slow-starting systems but delays overall startup. Decreasing it might cause failures if services start slowly.

action_server_result_timeout: (seconds)

  • Default: 900.0
  • My value: 900.0
  • Matches default. This is the timeout value for action servers to discard a goal handle if a result hasn’t been produced. The high value of 900 seconds (15 minutes) allows for long-running navigation tasks.

Increasing this parameter allows longer missions but might keep failed tasks running too long. Decreasing it might interrupt valid long-running tasks.

navigators: (vector<string>)

  • Default: [‘navigate_to_pose’, ‘navigate_through_poses’]
  • My value: [“navigate_to_pose”, “navigate_through_poses”]
  • Matches default. These are the plugins for navigator types implementing the nav2_core::BehaviorTreeNavigator interface. They define custom action servers with custom interface definitions for behavior tree navigation requests.

The types of navigation commands your robot understands:

  • ‘navigate_to_pose’: Go to a single location
  • ‘navigate_through_poses’: Follow a series of waypoints 

Adding custom navigators lets you create new types of movement commands. Removing options limits what navigation commands your robot can accept.

error_code_names: (vector<string>)

  • Default: [“compute_path_error_code”, “follow_path_error_code”]
  • My value: [“compute_path_error_code”, “follow_path_error_code”]
  • Matches default. This is a list of error codes that we want to keep track of during navigation tasks. Other error codes you can add include: smoother_error_code, navigate_to_pose_error_code, navigate_through_poses_error_code, etc.

The defaults track:

  • Problems finding a path
  • Problems following a path

Adding more codes (like “smoother_error_code”) lets you track more specific issues. Removing codes means you’ll get less detailed error reporting.

transform_tolerance: (seconds)

  • Default: 0.1
  • My value: 0.1
  • Matches default. This specifies the TF transform tolerance, which is the time window in which transforms are considered valid.

How old position information can be before it’s considered outdated. At 0.1 seconds, it’s fairly strict. Increasing this helps with laggy systems but might use outdated position data. Decreasing it ensures more current data but might cause jitter with slight delays.
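
Pulling the bt_navigator parameters together, here is a sketch of the section (standard Nav2 layout assumed; values as discussed above):

```yaml
bt_navigator:
  ros__parameters:
    global_frame: map
    robot_base_frame: base_link
    odom_topic: /odometry/filtered       # filtered odometry source
    bt_loop_duration: 10                 # ms per behavior-tree tick
    default_server_timeout: 20           # ms to wait for an action server response
    wait_for_service_timeout: 1000       # ms to wait for servers at startup
    action_server_result_timeout: 900.0  # seconds before discarding a goal handle
    transform_tolerance: 0.1             # seconds a TF transform stays valid
    navigators: ["navigate_to_pose", "navigate_through_poses"]
    error_code_names: ["compute_path_error_code", "follow_path_error_code"]
```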

controller_server

Description

Here is the official configuration guide for the controller_server.

The controller_server takes in planned routes and figures out the exact velocity commands needed to move the robot along these paths smoothly and accurately. It ensures the robot follows the path correctly, avoids obstacles using the local map around it, and uses various plugins to check its progress and confirm when the robot reaches its goal.

Parameters

controller_frequency: (Hz)

  • Default: 20.0
  • My value: 5.0  
  • Different from default. This parameter depends on what your CPU can handle. I often use a lower frequency of 5 Hz to reduce computational load, but you are free to keep it as 20.0 Hz if you wish. 

costmap_update_timeout: (seconds)

  • Default: 0.30
  • My value: 0.30
  • Matches default. This is how long to wait for the costmap to update before giving up.

If the costmap hasn’t updated within 0.3 seconds, the controller assumes something’s wrong. Increasing this helps with slow computers but might let the robot use outdated obstacle information. Decreasing it makes the system more responsive to obstacles but might cause more timeouts.

min_x_velocity_threshold: (m/s)

  • Default: 0.0001
  • My value: 0.001
  • Slightly larger than the default. 0.001 m/s is a reasonable minimum velocity threshold to filter odometry noise in the x direction.

At 0.001 m/s, tiny movements are ignored. Increasing this value means the robot must move faster to be considered “moving”. Decreasing it makes the system more sensitive to tiny movements.

min_y_velocity_threshold: (m/s)

  • Default: 0.5
  • My value: 0.001
  • Different from default. 0.001 m/s is a suitable minimum velocity threshold for filtering odometry noise in the y direction for robots that can move in the y-direction like a mecanum wheel omnidirectional robot.

The default (0.5) is for differential drive robots that can’t move sideways. Increasing this ignores small sideways movements. Decreasing it makes the robot more responsive to sideways commands.

min_theta_velocity_threshold: (rad/s)

  • Default: 0.0001
  • My value: 0.001
  • Slightly larger than the default. 0.001 rad/s works well as a minimum angular velocity threshold to filter odometry noise.

At 0.001 rad/s, very slow rotations are still tracked. Increasing this value ignores slower rotations. Decreasing it makes the system track even tinier rotational movements.

failure_tolerance: (seconds)

  • Default: 0.0
  • My value: 0.3
  • Different from default. Allowing the controller to fail for up to 0.3 seconds before the FollowPath action fails provides some tolerance for temporary failures.

Setting this parameter to -1.0 makes it never give up. 0.0 disables failure tolerance completely. Increasing this value makes the system more tolerant of temporary failures, while decreasing it makes it give up more quickly on problematic paths.
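
A sketch of the top of the controller_server section with the values above (standard Nav2 layout assumed):

```yaml
controller_server:
  ros__parameters:
    controller_frequency: 5.0            # Hz; lower than the 20.0 default to save CPU
    costmap_update_timeout: 0.30         # seconds to wait for a costmap refresh
    min_x_velocity_threshold: 0.001      # m/s; below this, x motion is treated as noise
    min_y_velocity_threshold: 0.001      # m/s; keep small for holonomic robots
    min_theta_velocity_threshold: 0.001  # rad/s
    failure_tolerance: 0.3               # seconds of controller failure before FollowPath aborts
```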

progress_checker_plugin: (string)

  • Default: “progress_checker”
  • My value: “progress_checker” 
  • Matches default. Using the SimpleProgressChecker plugin is suitable for monitoring the robot’s progress.

progress_checker.required_movement_radius: (meters)

  • Default: 0.5
  • My value: 0.5
  • A movement radius of 0.5 meters is a good threshold for determining if the robot has made sufficient progress.

Increasing this value requires more movement to prove progress, which might cause false “stuck” detections. Decreasing it makes the system more lenient but might miss actual stuck conditions.

progress_checker.movement_time_allowance: (seconds)

  • Default: 10.0
  • My value: 10.0
  • Allowing 10 seconds for the robot to move the required distance is a reasonable time limit before considering it stuck.

If the robot doesn’t move the required radius within this time, it’s considered stuck. Increasing this gives more time for slow maneuvers but delays stuck detection. Decreasing it catches stuck conditions faster but might interrupt valid slow movements.

goal_checker_plugins: (vector<string>)

  • Default: [“goal_checker”]
  • My value: [“general_goal_checker”]
  • Different from default. The general_goal_checker namespace is used instead of goal_checker.

general_goal_checker.xy_goal_tolerance: (meters)

  • Default: 0.25
  • My value: 0.35
  • An xy goal tolerance of 0.35 meters is suitable for considering the goal reached in terms of position. You can set a smaller value if you want the robot to stop closer to your goal.

A larger value of 0.35m makes it easier for the robot to consider goals achieved. Increasing this value makes navigation more reliable but less precise. Decreasing it ensures more precise positioning but might cause the robot to spend more time trying to reach exact positions (i.e. dancing and twirling around the goal).

general_goal_checker.yaw_goal_tolerance: (radians)

  • Default: 0.25
  • My value: 0.50
  • A yaw goal tolerance of 0.50 radians (about 28.6 degrees) is appropriate for considering the goal orientation reached.

You can set this at 0.25 if you want; however, setting it too low can cause the robot to “dance” when it reaches the goal as it struggles to meet the goal tolerance.
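
The progress and goal checkers are configured as nested plugin namespaces. A sketch follows; the plugin class names are the standard Nav2 ones (SimpleProgressChecker, SimpleGoalChecker), so verify them against your Nav2 version:

```yaml
controller_server:
  ros__parameters:
    progress_checker_plugin: "progress_checker"
    goal_checker_plugins: ["general_goal_checker"]
    progress_checker:
      plugin: "nav2_controller::SimpleProgressChecker"
      required_movement_radius: 0.5  # meters the robot must move...
      movement_time_allowance: 10.0  # ...within this many seconds
    general_goal_checker:
      plugin: "nav2_controller::SimpleGoalChecker"
      xy_goal_tolerance: 0.35   # meters; looser than the 0.25 default
      yaw_goal_tolerance: 0.50  # radians; looser to avoid goal "dancing"
```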

controller_plugins: (vector<string>)

  • Default: [‘FollowPath’]
  • My value: [“FollowPath”]
  • Matches default. Using the FollowPath controller plugin is suitable for following paths.

FollowPath.plugin: (string)

Option 1 (MPPI): Model Predictive Path Integral controller simulates many possible trajectories and picks the best one. It’s computationally intensive but very smooth and handles dynamic constraints well. 

Option 2 (Rotation Shim): First rotates the robot to face the path, then uses a simpler controller for following it. This can make navigation more predictable but might be slower due to the initial rotation.

FollowPath.primary_controller: (string)

This parameter is only used with the Rotation Shim controller to specify which controller handles path following after rotation.

While you can use the DWBLocalPlanner if you want, I’ve had more success with the Regulated Pure Pursuit controller. 

  • The Regulated Pure Pursuit controller steers the robot towards a point ahead on the path, resulting in smoother and more natural-looking motion.
  • The Regulated Pure Pursuit controller also makes better turns into doorways. I have found that DWB can come close to scraping the wall.
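If you go the Rotation Shim route with Regulated Pure Pursuit underneath, the wiring looks roughly like this (the plugin type strings follow the Nav2 docs; verify them against your Nav2 version):

```yaml
controller_server:
  ros__parameters:
    controller_plugins: ["FollowPath"]
    FollowPath:
      # Rotates in place to face the path, then hands off to the primary controller
      plugin: "nav2_rotation_shim_controller::RotationShimController"
      primary_controller: "nav2_regulated_pure_pursuit_controller::RegulatedPurePursuitController"
```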

Model Predictive Path Integral Controller Parameters

time_steps: (integer)

  • Default: 56
  • My value: 15
  • Different from default. Using fewer timesteps reduces computational load while maintaining an adequate planning horizon.

Increasing this value allows the controller to plan further ahead but significantly increases computational load. Each timestep represents a future point where trajectories are evaluated.

model_dt: (seconds)

  • Default: 0.05
  • My value: 0.2
  • Different from default. A larger timestep matches my controller frequency and reduces computational overhead.

This is how far apart each prediction step is in time. Increasing it means bigger jumps between predictions, saving computation but potentially missing obstacles. Decreasing it gives finer predictions but increases computational load substantially.

batch_size: (integer)

  • Default: 1000
  • My value: 10000
  • Different from default. Larger batch size provides better sampling of possible trajectories.

This is how many potential trajectories are simulated in parallel. Increasing it gives better chances of finding optimal paths but uses more memory and CPU. Decreasing it saves resources but might miss better path options.
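Note that time_steps and model_dt together set the prediction horizon: with my values, 15 steps × 0.2 s = 3.0 s of lookahead, versus 56 × 0.05 s = 2.8 s for the defaults, with far less computation per trajectory. A sketch of these core MPPI sampling settings in the yaml:

```yaml
FollowPath:
  plugin: "nav2_mppi_controller::MPPIController"
  time_steps: 15      # prediction points per trajectory
  model_dt: 0.2       # seconds between points; horizon = 15 * 0.2 = 3.0 s
  batch_size: 10000   # trajectories sampled per control cycle
```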

vx_std: (m/s)

  • Default: 0.2
  • My value: 0.2
  • Matches default. This standard deviation provides good sampling for linear velocity.

Increasing this value makes the controller try more diverse speeds but might lead to erratic behavior. Decreasing it makes speed choices more conservative but might miss useful options.

vy_std: (m/s)

  • Default: 0.2
  • My value: 0.2
  • Matches default. Appropriate for holonomic robots.

Controls how varied the sideways velocity samples are. Increasing this value allows more diverse lateral movements but might cause unstable behavior. Decreasing it makes lateral motion more predictable but might limit maneuverability.

wz_std: (rad/s)

  • Default: 0.2
  • My value: 0.4
  • Different from default. Higher angular velocity standard deviation allows for more varied rotational sampling.

Controls how much the rotation speeds vary in samples. My higher value tries more diverse turning speeds, useful for finding better rotational movements. Increasing it further might lead to erratic rotation behavior.

vx_max: (m/s)

  • Default: 0.5
  • My value: 0.5
  • Matches default. Maximum forward velocity suitable for most indoor robots.

Increasing this allows faster motion but might make the robot harder to control. Decreasing it makes the robot move more cautiously but takes longer to reach goals.

vx_min: (m/s)

  • Default: -0.35
  • My value: 0.0
  • Different from default. Set to 0.0 to prevent reverse motion.

vy_max: (m/s)

  • Default: 0.5
  • My value: 0.5
  • Matches default. Maximum lateral velocity for holonomic robots.

wz_max: (rad/s)

  • Default: 1.9
  • My value: 1.9
  • Matches default. Suitable maximum angular velocity for controlled turns.

ax_max: (m/s²)

  • Default: 3.0
  • My value: 3.0
  • Matches default. Maximum forward acceleration.

ax_min: (m/s²)

  • Default: -3.0
  • My value: -3.0
  • Matches default. Maximum deceleration.

ay_max: (m/s²)

  • Default: 3.0
  • My value: 3.0
  • Matches default. Maximum lateral acceleration.

az_max: (rad/s²)

  • Default: 3.5
  • My value: 3.5
  • Matches default. Maximum angular acceleration.

iteration_count: (integer)

  • Default: 1
  • My value: 1
  • Matches default. Single iteration is recommended with larger batch sizes.

Multiple iterations can improve path quality but are rarely needed with large batch sizes. Keeping it at 1 is efficient when using many samples.

temperature: (double)

  • Default: 0.3
  • My value: 0.3
  • Matches default. Good balance for trajectory selection based on costs.

Controls how strongly the controller favors lower-cost trajectories. Higher values make selection more random, potentially finding creative solutions. Lower values make it stick to the most obvious low-cost paths.

gamma: (double)

  • Default: 0.015
  • My value: 0.015
  • Matches default. Appropriate trade-off between smoothness and low energy.

gamma controls how the controller weighs trajectory costs in its decision-making. This parameter determines how “picky” the controller is about choosing trajectories based on their costs. Think of it as a cost sensitivity setting. 

With a higher gamma (like 0.03), the controller will strongly prefer the very best trajectories and mostly ignore mediocre ones. With a lower gamma (like 0.005), it will be more willing to consider trajectories even if they’re not optimal, which can help in complex situations where the perfect path isn’t obvious. 

The default of 0.015 provides a good balance between being selective enough to find good paths while still being flexible enough to handle various situations.
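The sampling standard deviations, velocity and acceleration limits, and trajectory-selection parameters discussed above all sit in the same FollowPath block. A sketch with my values:

```yaml
FollowPath:
  vx_std: 0.2       # linear velocity sampling spread (m/s)
  vy_std: 0.2
  wz_std: 0.4       # wider rotational sampling than the 0.2 default
  vx_max: 0.5
  vx_min: 0.0       # no reverse motion
  vy_max: 0.5
  wz_max: 1.9
  ax_max: 3.0
  ax_min: -3.0
  ay_max: 3.0
  az_max: 3.5
  iteration_count: 1
  temperature: 0.3
  gamma: 0.015
```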

ConstraintCritic (robot limits):

  • Default: enabled (true), cost_power (1), cost_weight (4.0)
  • My values: enabled (true), cost_power (1), cost_weight (4.0)
  • Matches default. This critic makes sure the robot doesn’t try to move in impossible ways.

Think of this like a speed limit enforcer. It makes sure the robot doesn’t try to turn too sharply or move too fast – essentially keeping it within what it can physically do. Increasing the weight makes it more strict about these limits, while decreasing it makes it more relaxed but might let the controller pick motions the robot can’t physically execute.

CostCritic (obstacle avoidance):

  • Default: enabled (true), cost_power (1), cost_weight (3.81), critical_cost (300.0), consider_footprint (false), collision_cost (1000000.0)
  • My values: enabled (true), cost_power (1), cost_weight (3.81), critical_cost (300.0), consider_footprint (true), collision_cost (1000000.0), near_goal_distance (1.0), trajectory_point_step (2)
  • Different from default in consider_footprint; I also explicitly set near_goal_distance and trajectory_point_step, which are not listed in the defaults above.

This parameter helps the robot understand its size and stay safe from obstacles. Think of this like a bubble around the robot to keep it safe. The critical_cost is like a warning zone, telling the robot “this is getting dangerous” when it’s too close to obstacles. 

The collision_cost is like a big red stop sign – the robot will do almost anything to avoid an actual collision. By setting consider_footprint to true (different from default), we make the robot more aware of its actual shape instead of just treating it like a point, which is safer but takes more computer power.

The near_goal_distance of 1.0 meter means the robot can get a bit closer to obstacles when it’s trying to reach its final destination.

GoalCritic (getting to destination):

  • Default: enabled (true), cost_power (1), cost_weight (5.0), threshold_to_consider (1.4)
  • My values: enabled (true), cost_power (1), cost_weight (5.0), threshold_to_consider (1.4)
  • Matches default. Appropriate weight for goal-oriented behavior.

This parameter helps the robot focus on getting to its destination point. Think of this like a magnet pulling the robot toward its goal. 

The weight of 5.0 determines how strong this “pull” is. The threshold_to_consider of 1.4 meters means the robot starts really focusing on getting to the exact goal point when it’s within this distance. Before that, it’s more focused on following the path. Increasing the weight makes the robot more aggressive about getting to the goal, while decreasing it makes it more relaxed about reaching the exact spot.

GoalAngleCritic (final rotation):

  • Default: enabled (true), cost_power (1), cost_weight (3.0), threshold_to_consider (0.5)
  • My values: enabled (true), cost_power (1), cost_weight (3.0), threshold_to_consider (0.5)
  • Matches default. Good balance for goal orientation alignment.

This parameter helps the robot turn to face the right direction at the goal, like making sure you’re facing the right way when you park a car.

When the robot gets within 0.5 meters (threshold_to_consider) of its goal, it starts focusing on turning to the requested final direction. The weight of 3.0 determines how much the robot cares about getting this final rotation right. Increasing it makes the robot more picky about its final orientation, while decreasing it means it won’t try as hard to get the exact angle right.

PathAlignCritic (staying on path):

  • Default: enabled (true), cost_power (1), cost_weight (10.0), threshold_to_consider (0.5)
  • My value: enabled (true), cost_power (1), cost_weight (14.0), threshold_to_consider (0.5)
  • Different from default. 

I use a higher weight to make the robot stick to the path more strictly. Think of this like trying to walk on a line drawn on the ground. My higher weight (14.0 vs 10.0) means the robot tries harder to stay exactly on the planned path. The threshold of 0.5 meters means it stops worrying about strict path following when it’s very close to the goal. Increasing the weight makes the robot follow the path more precisely, while decreasing it allows more deviation from the path.

PathAngleCritic (path direction):

  • Default: enabled (true), cost_power (1), cost_weight (2.2), threshold_to_consider (0.5)
  • My value: enabled (true), cost_power (1), cost_weight (2.0), threshold_to_consider (0.5)
  • Different from default. I use a slightly lower weight to allow more flexible turning. 

My slightly lower weight (2.0 vs 2.2) means the robot is a bit more relaxed about matching the exact path direction. Within 0.5 meters of the goal, it stops caring about path direction and focuses on final positioning. Increasing this weight makes the robot more strictly follow the path’s direction, while decreasing it allows more freedom in orientation.

PathFollowCritic (forward progress):

  • Default: enabled (true), cost_power (1), cost_weight (5.0), threshold_to_consider (1.4)
  • My value: enabled (true), cost_power (1), cost_weight (5.0), threshold_to_consider (1.4)
  • Matches default. This encourages the robot to make progress along the path. 

Think of this parameter like a gentle push forward along the path. The weight of 5.0 determines how strongly the robot is urged to move forward. When within 1.4 meters of the goal, it switches from path following to focusing on the final approach. Increasing the weight makes the robot more aggressive about moving forward, while decreasing it makes it more willing to slow down or adjust position.

PreferForwardCritic (forward motion):

  • Default: enabled (true), cost_power (1), cost_weight (5.0), threshold_to_consider (0.5)
  • My value: enabled (true), cost_power (1), cost_weight (5.0), threshold_to_consider (0.5)
  • Matches default. This encourages the robot to drive forward rather than backward, much like most people prefer walking forward.

The weight of 5.0 determines how much the robot prefers forward motion. Within 0.5 meters of the goal, it stops caring about forward motion to allow any adjustments needed for final positioning. Increasing this value makes the robot more strongly prefer forward motion, while decreasing it makes it more willing to drive backward.

TwirlingCritic (smooth rotation):

  • Default: enabled (true), cost_power (1), cost_weight (10.0)
  • My values: enabled (true), cost_power (1), cost_weight (10.0)
  • Matches default. Good weight for preventing unnecessary rotation for omnidirectional robots.

This parameter prevents unnecessary dancing and spinning. The weight of 10.0 determines how much the robot tries to avoid extra rotation. Increasing this value makes the robot more resistant to changing its rotation, while decreasing it allows more rotational movement.
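In the yaml, the critics are listed by name and each gets its own parameter block under the controller. Here is a sketch showing only the blocks that differ from the defaults above (the critics list itself follows the Nav2 MPPI docs):

```yaml
FollowPath:
  critics: ["ConstraintCritic", "CostCritic", "GoalCritic", "GoalAngleCritic",
            "PathAlignCritic", "PathAngleCritic", "PathFollowCritic",
            "PreferForwardCritic", "TwirlingCritic"]
  CostCritic:
    enabled: true
    cost_power: 1
    cost_weight: 3.81
    critical_cost: 300.0
    consider_footprint: true   # use the full footprint, not a point robot
    collision_cost: 1000000.0
    near_goal_distance: 1.0
    trajectory_point_step: 2
  PathAlignCritic:
    enabled: true
    cost_power: 1
    cost_weight: 14.0          # stricter path adherence than the 10.0 default
    threshold_to_consider: 0.5
  PathAngleCritic:
    enabled: true
    cost_power: 1
    cost_weight: 2.0           # slightly more relaxed than the 2.2 default
    threshold_to_consider: 0.5
```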

Regulated Pure Pursuit 

Here’s the analysis of the Regulated Pure Pursuit controller parameters, comparing the defaults with my values:

desired_linear_vel: (m/s)

  • Default: 0.5
  • My value: 0.4
  • Different from default. I use a slightly lower cruising speed for smoother motion.

Think of this as the robot’s preferred cruising speed. Increasing this makes the robot move faster overall, decreasing it makes it move slower but more carefully. Choose a speed that makes sense for your environment and robot’s capabilities.

lookahead_dist: (m)

  • Default: 0.6
  • My value: 0.7
  • Different from default. A lookahead distance of 0.7 m is used, which is slightly higher than the default. This allows the robot to anticipate and react to the path just a touch further ahead.

Increasing this makes the robot take smoother, wider turns but might cut corners more. Decreasing it makes the robot follow the path more precisely but might make motion more jerky.

min_lookahead_dist: (m)

  • Default: 0.3
  • My value: 0.5
  • Different from default. The minimum lookahead distance is set to 0.5 m, higher than the default. This ensures a minimum level of path anticipation, even at lower speeds.

Increasing this value makes turns smoother but might make tight maneuvers harder. Decreasing it allows for tighter maneuvers but might make motion less smooth.

max_lookahead_dist: (m)

  • Default: 0.9
  • My value: 0.7
  • Different from default. The maximum lookahead distance is limited to 0.7 m, lower than the default. This prevents the robot from looking too far ahead, which can be beneficial in environments with frequent turns or obstacles.

lookahead_time: (s)

  • Default: 1.5
  • My value: 1.5
  • Matches default. The lookahead time of 1.5 seconds is used, which is a reasonable value for maintaining a balance between path anticipation and responsiveness.

The 1.5 second preview allows the robot to anticipate and smoothly adjust to upcoming path changes. Increasing this makes the robot more forward-thinking but might react slower to sudden changes, while decreasing it makes it more reactive but potentially less smooth.

rotate_to_heading_angular_vel: (rad/s)

  • Default: 1.8
  • My value: 0.375
  • Different from default. A significantly lower angular velocity of 0.375 rad/s is used for rotating to the desired heading. This slower rotation speed allows for more controlled and stable heading adjustments. You can use whatever value makes sense for your robot.

transform_tolerance: (s)

  • Default: 0.1
  • My value: 0.1
  • Matches default. The transform tolerance of 0.1 seconds is used, which is a reasonable value for allowing small time differences between transforms.

The 0.1 second tolerance means position info must be very fresh. Increasing this helps with slower computers but might use outdated information, while decreasing it demands more up-to-date data but might cause more errors.

use_velocity_scaled_lookahead_dist: (bool)

  • Default: false
  • My value: true
  • Different from default. Velocity scaling of the lookahead distance is enabled. This means that the lookahead distance is adjusted based on the robot’s velocity, providing better path following behavior.

Setting this to true means the robot automatically adjusts how far it looks ahead based on its speed. This generally gives better path following behavior than a fixed distance.

min_approach_linear_velocity: (m/s)

  • Default: 0.05
  • My value: 0.05
  • Matches default. The minimum approach linear velocity of 0.05 m/s is used, which is a suitable value for the final approach to the goal.

approach_velocity_scaling_dist: (m)

  • Default: 1.0
  • My value: 0.6
  • Different from default. The approach velocity scaling distance is set to 0.6 m, which is smaller than the default. This means that the robot starts slowing down earlier when approaching the goal.

use_cost_regulated_linear_velocity_scaling: (bool)

  • Default: false
  • My value: true
  • Different from default. Cost-regulated linear velocity scaling is enabled. If there are obstacles nearby, the robot will slow down.

Like slowing down when driving close to parked cars, setting this to true makes the robot automatically slow down when near obstacles. This makes navigation safer but might make the robot move slower in tight spaces.

regulated_linear_scaling_min_radius: (m)

  • Default: 0.9
  • My value: 0.85
  • Different from default. The minimum radius for regulated linear velocity scaling is set to 0.85 m, slightly lower than the default. This allows for velocity scaling in tighter spaces.

Think of this as your comfort zone around obstacles. At 0.85 m, it’s slightly more tolerant of tight spaces than the default. Increasing this makes the robot slow down further from obstacles, while decreasing it allows closer approaches at higher speeds.

regulated_linear_scaling_min_speed: (m/s)

  • Default: 0.25
  • My value: 0.25
  • Matches default. The minimum speed for regulated linear velocity scaling is set to 0.25 m/s, which is a reasonable value for maintaining a minimum speed while scaling.

Increasing this value makes the robot maintain higher speeds near obstacles, while decreasing it allows slower, more cautious movement in tight spaces.

use_fixed_curvature_lookahead: (bool)

  • Default: false
  • My value: false
  • Matches default. Fixed curvature lookahead is not used, allowing for dynamic lookahead distance based on the path curvature.

Think of this like looking closer when going around corners and further on straight paths. Keeping this false lets the robot automatically adjust how far it looks ahead based on the path. Setting it to true would force a fixed lookahead distance.

curvature_lookahead_dist: (m)

  • Default: 1.0
  • My value: 0.6
  • Different from default. The curvature lookahead distance is set to 0.6 m, which is shorter than the default. Note that this parameter only takes effect when use_fixed_curvature_lookahead is set to true (it is false in my configuration); if you enable that option, the shorter distance makes the robot more responsive and precise in following curved paths, at the potential cost of not planning as far ahead.

use_rotate_to_heading: (bool)

  • Default: true
  • My value: true
  • Matches default. The rotate to heading behavior is enabled, allowing the robot to rotate in place to align with the path heading when necessary.

rotate_to_heading_min_angle: (rad)

  • Default: 0.785
  • My value: 0.785
  • Matches default. The minimum angle for rotating to heading is set to 0.785 radians (approximately 45 degrees), which is a reasonable threshold for triggering the rotate to heading behavior.

max_angular_accel: (rad/s^2)

  • Default: 3.2
  • My value: 3.2
  • Matches default. The maximum angular acceleration is set to 3.2 rad/s^2, which is a suitable value for limiting the angular acceleration of the robot.

At 3.2 rad/s², this value provides smooth transitions when starting or stopping turns. Increasing this allows faster turn adjustments but might make motion jerky, while decreasing it makes turns smoother but less responsive.

interpolate_curvature_after_goal: (bool)

  • Default: false
  • My value: false
  • Matches default. If enabled, this parameter can lead to smoother path following near the goal by mitigating oscillations. Note that it requires use_fixed_curvature_lookahead to be set to true, since the curvature is then calculated from a fixed lookahead distance.

use_cancel_deceleration: (bool)

  • Default: false
  • My value: false
  • Matches default. If set to true, the robot will decelerate gracefully by the cancel_deceleration value (see below) when a goal is canceled. You can set this to true if you want.

Keeping this false means the robot stops normally when a goal is canceled. Setting it true would make it slow down more gradually using the cancel_deceleration value.

cancel_deceleration: (m/s^2)

  • Default: 3.2
  • My value: 3.2
  • Matches default. You can lower this value if you want a more gradual deceleration when a goal is canceled.

max_robot_pose_search_dist: (m)

  • Default: 10.0
  • My value: 10.0
  • Matches default. The maximum distance to search for the robot’s pose during path following is set to 10.0 m. 

At 10 meters, it gives plenty of room to find where the robot is on the path. Increasing this value allows recovery from larger position errors but takes more computation, while decreasing it might make the robot get lost more easily but uses less computation.

use_collision_detection: (bool)

  • Default: true
  • My value: true
  • Matches default. Collision detection is enabled, allowing the controller to consider potential collisions while following the path.

max_allowed_time_to_collision_up_to_carrot: (s)

  • Default: 1.0
  • My value: 1.5
  • Different from default. The maximum allowed time to collision up to the carrot (lookahead point) is set to 1.5 seconds, giving the robot more time to react to potential collisions.

My higher value (1.5s vs 1.0s) gives more time to react to potential collisions. Increasing this makes navigation safer but more conservative, while decreasing it allows more aggressive movement but might catch obstacles later.
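Pulling the Regulated Pure Pursuit values above into one place, here is a sketch of just the parameters that differ from the defaults (the block lives under FollowPath, even when the Rotation Shim is the top-level plugin):

```yaml
FollowPath:
  desired_linear_vel: 0.4
  lookahead_dist: 0.7
  min_lookahead_dist: 0.5
  max_lookahead_dist: 0.7
  use_velocity_scaled_lookahead_dist: true
  rotate_to_heading_angular_vel: 0.375   # slow, controlled in-place rotation
  use_cost_regulated_linear_velocity_scaling: true
  regulated_linear_scaling_min_radius: 0.85
  approach_velocity_scaling_dist: 0.6    # start slowing earlier near the goal
  curvature_lookahead_dist: 0.6
  max_allowed_time_to_collision_up_to_carrot: 1.5
```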

local_costmap

Description

Here is the official configuration guide for the local_costmap.

The local costmap acts like the robot’s immediate awareness of its surroundings, similar to how you use your eyes while walking or driving. Just as you constantly watch for obstacles in your path, the local costmap creates a detailed map of the area immediately around the robot using sensors like laser scanners, depth cameras, or sonars.

Think of it as a moving “safety bubble” that travels with the robot. Inside this bubble, the robot creates a grid where each cell is marked based on sensor data. When sensors detect an obstacle – like a wall, furniture, or a person walking by – those areas get marked as unsafe in the grid. This constantly updating grid helps the robot make smart decisions about how to move safely through its immediate space.

The local costmap works hand-in-hand with the robot’s controller server, which decides the robot’s actual movements. As the robot moves, the local costmap feeds information about nearby obstacles to the controller, allowing it to adjust the robot’s speed and direction to avoid collisions – much like how you might slow down or change direction when you see something in your path.

This immediate awareness complements the global costmap, which provides the bigger picture needed for overall route planning. While the global costmap helps plot the general path (like a map showing your entire route), the local costmap handles the moment-to-moment navigation decisions (like watching where you’re going while walking).

Parameters

update_frequency: (Hz)

  • Default: 5.0
  • My value: 5.0
  • Same as default. 5 Hz provides a good balance between keeping the costmap updated and computational load.

publish_frequency: (Hz)

  • Default: 5.0 
  • My value: 5.0
  • Matches default. Publishing the costmap at 5 Hz is sufficient for most use cases.

global_frame: (string)

  • Default: “map”
  • My value: “odom”
  • Different from default. Using the “odom” frame helps keep the local costmap consistent with the robot’s movement, making it easier for the robot to navigate and avoid obstacles in its immediate surroundings, regardless of any changes or updates to the global map.

robot_base_frame: (string)

  • Default: “base_link”
  • My value: “base_link” 
  • Same as default. The “base_link” frame represents the robot’s center of rotation.

rolling_window: (bool)

  • Default: false
  • My value: true
  • Different from default. Enabling the rolling window allows the costmap to move with the robot, providing a local view around the robot.

width: (meters)

  • Default: 5.0
  • My value: 5.0
  • Same as default. A width of 5 meters provides a sufficiently large area around the robot for local planning.

height: (meters)

  • Default: 5.0
  • My value: 5.0
  • Matches default. A height of 5 meters is suitable for covering the local area around the robot.

resolution: (meters/cell)

  • Default: 0.1
  • My value: 0.05
  • Different from default. A higher resolution of 0.05 m/cell provides more detailed obstacle representation for precise navigation.

robot_radius: (meters)

  • Default: 0.1
  • My value: 0.15
  • Slightly different from default. You need to set this value to the radius of your robot base.

plugins: (list of strings)

  • Default: [“static_layer”, “obstacle_layer”, “inflation_layer”]
  • Common values for real-world robots: [“obstacle_layer”, “voxel_layer”, “range_sensor_layer”, “denoise_layer”, “inflation_layer”] (order matters here)
  • Different from default.
    • Added “voxel_layer” for 3D obstacle representation using a depth camera.
    • Added “range_sensor_layer” for handling range sensor data from an ultrasonic sensor (if you have one). 
    • Added “denoise_layer” for removing salt and pepper noise from the sensors.
    • Removed “static_layer” since it’s not needed for the local costmap.
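These local costmap settings map onto the yaml like so (note the doubly nested node name, which is how Nav2 namespaces its costmap parameters):

```yaml
local_costmap:
  local_costmap:
    ros__parameters:
      update_frequency: 5.0
      publish_frequency: 5.0
      global_frame: odom
      robot_base_frame: base_link
      rolling_window: true
      width: 5
      height: 5
      resolution: 0.05
      robot_radius: 0.15   # set to your robot's actual base radius
      plugins: ["obstacle_layer", "voxel_layer", "range_sensor_layer", "denoise_layer", "inflation_layer"]
```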

obstacle_layer

The obstacle layer creates a grid-like map of the robot’s surroundings using data from sensors like LIDAR. It marks cells in the grid as either occupied by obstacles or free space, helping the robot understand where it can and cannot safely move. As the robot navigates, the obstacle layer continuously updates this map based on the latest sensor readings.

Here is the official configuration guide for the obstacle layer.

enabled: (bool)

  • Default: True
  • My value: True
  • Matches default. The obstacle layer is enabled.

observation_sources: (vector<string>)

  • Default: {“”}
  • My value: scan
  • Different from default. The “scan” observation source is specified, which corresponds to the LIDAR.

scan.topic: (string)

  • Default: “”
  • My value: /scan
  • Different from default. The topic “/scan” is specified as the source of laser scan data.

scan.raytrace_min_range: (meters)

  • Default: 0.0
  • My value: 0.20
  • Different from default. The minimum range for raytracing is set to 0.20 meters, which means that the obstacle layer will start clearing obstacles from this distance. I use this distance because part of the robot’s body is within 0.20 meters of the LIDAR.

scan.obstacle_min_range: (meters)

  • Default: 0.0
  • My value: 0.20
  • Different from default. The minimum range for marking obstacles is set to 0.20 meters, which means that obstacles closer than this distance will not be marked in the costmap.

scan.max_obstacle_height: (meters)

  • Default: 0.0
  • My value: 2.0
  • Different from default. The maximum height of obstacles to be marked in the costmap is set to 2.0 meters.

scan.clearing: (bool)

  • Default: False
  • My value: True
  • Different from default. Clearing is enabled, allowing the obstacle layer to clear free space in the costmap based on laser data.

scan.marking: (bool)

  • Default: True
  • My value: True
  • Same as default. Marking is enabled, allowing the obstacle layer to mark obstacles in the costmap.

scan.data_type: (string)

  • Default: “LaserScan”
  • My value: “LaserScan”
  • Matches default. The data type is correctly set to “LaserScan” for laser scanner data.
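A sketch of the obstacle layer block with the scan source values above (the plugin type string follows the Nav2 costmap docs):

```yaml
obstacle_layer:
  plugin: "nav2_costmap_2d::ObstacleLayer"
  enabled: True
  observation_sources: scan
  scan:
    topic: /scan
    raytrace_min_range: 0.20   # ignore returns inside the robot's own body
    obstacle_min_range: 0.20
    max_obstacle_height: 2.0
    clearing: True
    marking: True
    data_type: "LaserScan"
```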

voxel_layer

The voxel layer is like a 3D version of the obstacle layer. Instead of creating a 2D grid map, it divides the space around the robot into small 3D cubes called voxels. This allows the robot to understand the environment in three dimensions, detecting obstacles not only on the ground but also at different heights, like tabletops or overhanging objects. The data source for this layer is typically a depth camera like the Intel RealSense.

Here is the official configuration guide for the voxel layer.

Here is the tuning guide for the voxel layer parameters based on my experience messing around with this on various mobile robots:

enabled: (bool)

  • Default: True
  • My value: True
  • Same as default. Enabling the voxel layer allows the costmap to incorporate 3D obstacle information from depth sensors.

footprint_clearing_enabled: (bool)

  • Default: True
  • My value: true
  • Matches default. Clearing the robot’s footprint in the 3D costmap helps prevent the robot from being stuck by obstacles detected in its own footprint.

max_obstacle_height: (meters)

  • Default: 2.0
  • My value: 2.0
  • Same as default. A maximum obstacle height of 2 meters is suitable for most indoor environments.

z_voxels: (int)

  • Default: 10
  • My value: 16
  • Different from default. Using 16 voxels in the height dimension provides a higher resolution representation of the 3D space, up to the maximum allowed value.

origin_z: (meters)

  • Default: 0.0
  • My value: 0.0
  • Matches default. The origin of the voxel grid in the z-axis is set to the ground level.

z_resolution: (meters)

  • Default: 0.2
  • My value: 0.2
  • Same as default. A z-resolution of 0.2 meters provides a good balance between detail and computational efficiency.

unknown_threshold: (int)

  • Default: 15
  • My value: 15
  • Matches default. A minimum of 15 empty voxels in a column is required to mark that cell as unknown in the 2D occupancy grid.

mark_threshold: (int)

  • Default: 0
  • My value: 0
  • Same as default. Any occupied voxel in a column will result in marking that cell as occupied in the 2D occupancy grid.

combination_method: (int)

  • Default: 1
  • My value: 1
  • Matches default. The voxel layer will update the master costmap by taking the maximum value between the existing costmap value and the voxel layer’s value for each cell.

publish_voxel_map: (bool)

  • Default: False
  • My value: False
  • Same as default. Publishing the 3D voxel grid is disabled to reduce computational overhead.

observation_sources: (vector<string>)

  • Default: {“”}
  • My value: realsense1
  • Different from default. The “realsense1” observation source is specified, which corresponds to the 3D depth sensor.

realsense1.topic: (string)

  • Default: “”
  • My value: /cam_1/depth/color/points
  • Different from default. The topic must be your source of 3D pointcloud data.

realsense1.max_obstacle_height: (meters)

  • Default: 0.0
  • My value: 2.0
  • Different from default. The maximum height of obstacles to be marked in the costmap is set to 2.0 meters.

realsense1.min_obstacle_height: (meters)

  • Default: 0.0
  • My value: 0.0
  • Same as default. The minimum height of obstacles to be marked in the costmap is set to ground level.

realsense1.obstacle_max_range: (meters)

  • Default: 2.5
  • My value: 1.25
  • Different from default. The maximum range for marking obstacles is set to 1.25 meters, a lower value than the default of 2.5 meters, providing a more focused view of nearby obstacles. 
  • This value is largely based on my experience with the Intel RealSense D435 depth camera. 

realsense1.obstacle_min_range: (meters)

  • Default: 0.0
  • My value: 0.05
  • Different from default. The minimum range for marking obstacles is set to 0.05 meters, filtering out noise very close to the sensor.

realsense1.raytrace_max_range: (meters)

  • Default: 3.0
  • My value: 3.0
  • Same as default. The maximum range for raytracing to clear obstacles is set to 3.0 meters.

realsense1.raytrace_min_range: (meters)

  • Default: 0.0
  • My value: 0.05
  • Different from default. The minimum range for raytracing to clear obstacles is set to 0.05 meters, filtering out noise very close to the sensor.

realsense1.clearing: (bool)

  • Default: False
  • My value: False
  • Matches default. Clearing of obstacles using raytracing is disabled.

realsense1.marking: (bool)

  • Default: True
  • My value: True
  • Same as default. Marking of obstacles in the costmap is enabled.

realsense1.data_type: (string)

  • Default: “LaserScan”
  • My value: “PointCloud2”
  • Different from default. The data type is set to “PointCloud2” to match the 3D depth camera data format.
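Putting the voxel layer values above together (the realsense1 source name and topic come from my setup; substitute your own depth camera topic):

```yaml
voxel_layer:
  plugin: "nav2_costmap_2d::VoxelLayer"
  enabled: True
  footprint_clearing_enabled: true
  max_obstacle_height: 2.0
  z_voxels: 16          # maximum allowed; finer vertical resolution
  origin_z: 0.0
  z_resolution: 0.2
  unknown_threshold: 15
  mark_threshold: 0
  combination_method: 1
  publish_voxel_map: False
  observation_sources: realsense1
  realsense1:
    topic: /cam_1/depth/color/points
    max_obstacle_height: 2.0
    min_obstacle_height: 0.0
    obstacle_max_range: 1.25   # focused view of nearby obstacles
    obstacle_min_range: 0.05
    raytrace_max_range: 3.0
    raytrace_min_range: 0.05
    clearing: False
    marking: True
    data_type: "PointCloud2"
```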

range_sensor_layer 

The range sensor layer is similar to the obstacle layer, but it uses data from range sensors like ultrasonic sensors or infrared sensors instead of laser scanners. It helps the robot detect obstacles at closer ranges and in areas where laser scanners might not be effective. The range sensor layer adds an extra level of safety by ensuring the robot is aware of nearby obstacles.

Here is the official configuration guide for the range sensor layer.

Here is the tuning guide for the range sensor layer parameters based on my experience messing around with this on various mobile robots:

enabled: (bool)

  • Default: True
  • My value: False
  • Different from default. The range sensor layer is disabled, meaning that it will not be used to update the costmap.

topics: (vector<string>)

  • Default: [“”]
  • My value: [“/ultrasonic1”]
  • Different from default. I have specified the topic “/ultrasonic1” as the input source for the range sensor layer. This assumes that you have a range sensor (e.g., ultrasonic sensor) publishing data on this topic.

phi: (double)

  • Default: 1.2
  • My value: 1.2
  • Same as default. The phi value determines the width of the sensor’s field of view. A value of 1.2 means that the sensor’s coverage area will be 1.2 radians wide.

inflate_cone: (double)

  • Default: 1.0
  • My value: 1.0
  • Same as default. The inflate_cone parameter determines how much the triangular area covered by the sensor is inflated. A value of 1.0 means no inflation.

no_readings_timeout: (double)

  • Default: 0.0
  • My value: 0.0
  • Same as default. If the layer does not receive sensor data for this amount of time (in seconds), it will warn the user and mark the layer as not current. A value of 0.0 disables this timeout.

clear_threshold: (double)

  • Default: 0.2
  • My value: 0.2
  • Same as default. Cells with a probability below this threshold are marked as free in the costmap.

mark_threshold: (double)

  • Default: 0.8
  • My value: 0.8
  • Same as default. Cells with a probability above this threshold are marked as occupied in the costmap.

clear_on_max_reading: (bool)

  • Default: False
  • My value: True
  • Different from default. When set to True, the sensor readings will be cleared when the maximum range is reached. This can help to remove false positives and stale readings.

input_sensor_type: (string)

  • Default: ALL
  • My value: ALL
  • Same as default. The input sensor type is set to “ALL”, which means the layer will automatically select the appropriate type based on the sensor’s minimum and maximum range values.
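Pulling the values above together, here is a sketch of the range sensor layer as I configure it. The plugin class name is my assumption based on the standard Nav2 naming convention.

```yaml
range_sensor_layer:
  plugin: "nav2_costmap_2d::RangeSensorLayer"
  enabled: False                 # layer is defined but switched off
  topics: ["/ultrasonic1"]       # range sensor input topic
  phi: 1.2                       # field-of-view width in radians
  inflate_cone: 1.0              # no inflation of the sensor cone
  no_readings_timeout: 0.0       # 0.0 disables the stale-data warning
  clear_threshold: 0.2           # below this probability -> free
  mark_threshold: 0.8            # above this probability -> occupied
  clear_on_max_reading: True     # max-range readings clear stale marks
  input_sensor_type: ALL         # infer sensor type from min/max range
```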

denoise_layer

The denoise layer acts like a digital filter for your robot’s map. It helps clean up false obstacles that might appear due to sensor errors, especially from LIDARs.

Imagine a LIDAR sometimes sees a speck of dust and thinks it’s a wall. This can confuse the robot, making it think there are obstacles where there aren’t any. The denoise layer works like a filter, removing these “ghost” obstacles.

How it works:

  • It looks at the map data and identifies small groups of cells marked as obstacles.
  • By default, it removes single isolated obstacle points.
  • You can adjust its settings to remove larger groups of false obstacles if needed.

Key points:

  • The layer processes only obstacle information in the costmap.
  • Cells identified as noise are replaced with free space.
  • It’s typically placed before the inflation layer to prevent inflating noise-induced obstacles.

Here is the official configuration guide for the denoise layer.

enabled: (bool)

  • Default: True
  • My value: true
  • This parameter turns the denoise layer on or off. I’ve kept it on to help clean up my map.

minimal_group_size: (int)

  • Default: 2
  • My value: 2
  • This parameter sets how big a group of obstacle points needs to be to stay on the map. With 2, single dots get erased, but anything bigger stays. It’s like telling the robot, “If you see just one obstacle point all alone, ignore it. It’s probably not real.”

group_connectivity_type: (int)

  • Default: 8
  • My value: 8
  • This parameter decides how obstacle points connect to form groups. 8 means cells touching at corners count as connected (diagonal neighbors included), while 4 only counts direct side-to-side and up-down connections.

Important Note:

  • While this layer can really help clean up your map, it might slow things down, especially if you’re using a long-range LIDAR (like 20+ meters) or have a big map. It’s like running a spell-check on a huge document – helpful, but it takes time. 
  • For my indoor robots, the benefits outweigh the small speed loss, but you might need to test it out in your specific setup.

Remember, the goal is to help your robot navigate better by removing false obstacles, but without erasing real ones. These settings have worked well for me, but feel free to adjust based on how your robot behaves in its environment.
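As a quick reference, here is a sketch of the denoise layer with the settings discussed above. The plugin class name is my assumption based on the standard Nav2 naming convention.

```yaml
denoise_layer:
  plugin: "nav2_costmap_2d::DenoiseLayer"
  enabled: True                # filter is active
  minimal_group_size: 2        # erase isolated single obstacle cells
  group_connectivity_type: 8   # diagonal neighbors count as connected
```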

inflation_layer

The inflation layer adds a safety buffer around obstacles in the costmap. It expands the size of the obstacles by a certain amount, making the robot keep a distance from them. This helps the robot navigate more safely and avoid getting too close to walls, furniture, or other objects.

Here is the official configuration guide for the inflation layer.

enabled: (bool)

  • Default: True
  • My value: true
  • Same as default. The inflation layer is enabled to create a cost decay around obstacles.

inflation_radius: (meters)

  • Default: 0.55
  • My value: 1.75
  • Different from default. The inflation radius is set to 1.75 meters, which is larger than the default value. This means that the cost will decay over a larger distance around obstacles, making the robot maintain a greater clearance from them. 
  • Credit to this ROS Tuning Guide for finding this magic number which has worked really well on my own mobile robots for indoor environments.

cost_scaling_factor: (unitless)

  • Default: 1.0
  • My value: 2.58
  • Different from default. The cost scaling factor determines the rate at which the cost decays exponentially across the inflation radius. A value of 2.58 means that the cost will decay more quickly compared to the default value of 1.0. 
  • Credit to this ROS Tuning Guide for finding this magic number which has worked really well on my own mobile robots for indoor environments.

inflate_unknown: (bool)

  • Default: False
  • My value: Not specified
  • Stick with the default value. Unknown cells will not be inflated as if they were lethal obstacles.

inflate_around_unknown: (bool)

  • Default: False
  • My value: Not specified
  • Assuming default value of False. The inflation layer will not inflate costs around unknown cells.
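For intuition, the inflated cost falls off roughly as 252 · exp(−cost_scaling_factor · (distance − inscribed_radius)), so a larger cost_scaling_factor makes the safety buffer drop away faster. Here is a sketch of the inflation layer with the values above:

```yaml
inflation_layer:
  plugin: "nav2_costmap_2d::InflationLayer"
  enabled: true
  inflation_radius: 1.75       # decay costs out to 1.75 m from obstacles
  cost_scaling_factor: 2.58    # faster exponential decay than the default 1.0
  # inflate_unknown and inflate_around_unknown are left at their
  # default of False, so unknown cells are not inflated.
```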

global_costmap

Description

Here is the official configuration guide for the global_costmap.

The global_costmap creates a global occupancy grid map that represents the entire environment in which the robot operates. It combines data from the robot’s localization system, static map, and sensor observations to build and update the global map. The global_costmap is used to generate high-level paths from the robot’s current position to its goal.

Here is an analogy…

The global costmap is like a full building map that helps the robot plan its overall routes. Imagine having a detailed floor plan where every area is color-coded based on whether the robot can safely travel there or not.

The global costmap creates this safety map by combining three key sources of information:

  1. The static map – like a basic building floor plan showing permanent features like walls and doorways
  2. The robot’s location tracking system – so it knows where it is within this map
  3. Sensor data – to detect changes in the environment over time

This complete map helps the robot plan efficient routes from its current location to any goal, much like how you might look at a building map to plan your route from the entrance to a specific room. The robot can see all possible paths and choose the best one while avoiding known obstacles.

The global costmap is particularly important because it gives the robot the “big picture” view it needs for smart path planning. While the local costmap handles immediate navigation (like watching where you step), the global costmap ensures the robot can efficiently reach its final destination (like knowing which hallways and rooms to use to reach your goal).

Parameters

Below are the parameters I often use for the global costmap along with my explanation.

update_frequency: (Hz)

  • Default: 5.0
  • My value: 5.0
  • Same as default. The global costmap is updated at a frequency of 5 Hz.

publish_frequency: (Hz)

  • Default: 1.0
  • My value: 5.0
  • Different from default. The global costmap is published at a frequency of 5 Hz, which is higher than the default value of 1 Hz. This means that the costmap will be sent to other nodes more frequently.

global_frame: (string)

  • Default: “map”
  • My value: “map”
  • Same as default. The global frame is set to “map”, which is typically the fixed frame of the environment.

robot_base_frame: (string)

  • Default: “base_link”
  • My value: “base_link”
  • Same as default. The robot’s base frame is set to “base_link”.

robot_radius: (meters)

  • Default: 0.1
  • My value: 0.15
  • Slightly different from default. You need to set this value to the radius of your robot base. 

resolution: (meters/cell)

  • Default: 0.1
  • My value: 0.05
  • Different from default. The costmap resolution is set to 0.05 meters per cell, providing a higher resolution than the default value of 0.1 meters per cell.

track_unknown_space: (bool)

  • Default: False
  • My value: true
  • Different from default. Unknown space is tracked in the costmap, meaning that the costmap will distinguish between free, occupied, and unknown space.

plugins: (list)

  • Default: [“static_layer”, “obstacle_layer”, “inflation_layer”]
  • My value: [“static_layer”, “obstacle_layer”, “voxel_layer”, “range_sensor_layer”, “inflation_layer”]
  • Different from default. Additional plugins are used in the global costmap, including the “voxel_layer” for 3D obstacle detection and the “range_sensor_layer” for incorporating range sensor data.

static_layer.map_subscribe_transient_local: (bool)

  • Default: True
  • My value: True
  • Same as default. The static layer subscribes to the map topic using the “transient local” durability.
  • “Transient local” durability means the publisher keeps the last map it published, so a subscriber that joins late still receives it. Without this durability setting, the static layer would miss any map that was published before the layer subscribed.
  • This is useful because the map server typically publishes the map only once at startup; transient local durability ensures the static layer still receives that map even if it starts up afterward.
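Here is a sketch of how these general global_costmap parameters fit together in the yaml file (layer-specific parameters are covered in the sections that follow):

```yaml
global_costmap:
  global_costmap:
    ros__parameters:
      update_frequency: 5.0       # recompute the costmap at 5 Hz
      publish_frequency: 5.0      # publish it to other nodes at 5 Hz
      global_frame: map
      robot_base_frame: base_link
      robot_radius: 0.15          # set to your own robot's radius
      resolution: 0.05            # 5 cm per cell
      track_unknown_space: true   # distinguish unknown from free space
      plugins: ["static_layer", "obstacle_layer", "voxel_layer", "range_sensor_layer", "inflation_layer"]
      static_layer:
        plugin: "nav2_costmap_2d::StaticLayer"
        map_subscribe_transient_local: True
```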


obstacle_layer

The obstacle layer in the global costmap is responsible for adding obstacle information detected by sensors, such as laser scanners, to the costmap. It marks cells in the costmap as occupied if obstacles are detected within a certain range and height. The obstacle layer helps the robot navigate by providing a representation of the obstacles in the environment, allowing it to plan paths that avoid collisions.

Here is the official configuration guide for the obstacle layer.

enabled: (bool)

  • Default: True
  • My value: True
  • Same as default. The obstacle layer is enabled, allowing the costmap to incorporate obstacle information from sensor data.

observation_sources: (string)

  • Default: “”
  • My value: scan
  • Different from default. The obstacle layer is configured to use the “scan” observation source, which corresponds to a laser scanner sensor.

scan.topic: (string)

  • Default: “”
  • My value: /scan
  • Different from default. The obstacle layer subscribes to the “/scan” topic to receive laser scan data.

scan.raytrace_min_range: (meters)

  • Default: 0.0
  • My value: 0.20
  • Different from default. The minimum range for raytracing is set to 0.20 meters. This means that the obstacle layer will start clearing obstacles from this distance.
  • I set this value to 0.20 because part of the robot’s body is next to the LIDAR.

scan.obstacle_min_range: (meters)

  • Default: 0.0
  • My value: 0.20
  • Different from default. The minimum range for adding obstacles to the costmap is set to 0.20 meters. Obstacles closer than this distance will not be added.
  • I set this value to 0.20 because part of the robot’s body is next to the LIDAR.

scan.max_obstacle_height: (meters)

  • Default: 0.0
  • My value: 2.0
  • Different from default. The maximum height of obstacles to be added to the costmap is set to 2.0 meters.

scan.clearing: (bool)

  • Default: False
  • My value: True
  • Different from default. Clearing is enabled, allowing the obstacle layer to clear free space in the costmap based on laser scan data.

scan.marking: (bool)

  • Default: True
  • My value: True
  • Same as default. Marking is enabled, allowing the obstacle layer to mark obstacles in the costmap based on laser scan data.

scan.data_type: (string)

  • Default: “LaserScan”
  • My value: “LaserScan”
  • Same as default. The data type is set to “LaserScan”, indicating that the obstacle layer expects laser scan data.
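Here is a sketch of the global costmap’s obstacle layer with the scan settings above. The plugin class name is my assumption based on the standard Nav2 naming convention.

```yaml
obstacle_layer:
  plugin: "nav2_costmap_2d::ObstacleLayer"
  enabled: True
  observation_sources: scan
  scan:
    topic: /scan
    data_type: "LaserScan"
    max_obstacle_height: 2.0
    raytrace_min_range: 0.20   # robot body sits next to the LIDAR
    obstacle_min_range: 0.20   # ignore returns off the robot itself
    clearing: True             # clear free space along each beam
    marking: True              # mark detected obstacles
```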

range_sensor_layer

The range sensor layer in the global costmap is similar to the obstacle layer but uses data from range sensors like ultrasonic or infrared sensors to detect obstacles. It helps to incorporate obstacle information from sensors that have different characteristics than laser scanners. 

Here is the official configuration guide for the range sensor layer.

Here is the tuning guide for the range sensor layer parameters in the global costmap, based on my experience:

enabled: (bool)

  • Default: True
  • My value: False
  • Different from default. The range sensor layer is disabled in my configuration, meaning that it will not contribute to the global costmap.

topics: (vector<string>)

  • Default: [“”]
  • My value: [“/ultrasonic1”]
  • Different from default. The range sensor layer is configured to subscribe to the “/ultrasonic1” topic, which is expected to provide range sensor data.

phi: (double)

  • Default: 1.2
  • My value: 1.2
  • Same as default. The phi parameter determines the width of the sensor’s field of view. A value of 1.2 means that the sensor’s coverage area will be 1.2 radians wide.

inflate_cone: (double)

  • Default: 1.0
  • My value: 1.0
  • Same as default. The inflate_cone parameter determines how much the triangular area covered by the sensor is inflated. A value of 1.0 means no inflation.

no_readings_timeout: (double)

  • Default: 0.0
  • My value: 0.0
  • Same as default. If the range sensor layer does not receive any sensor readings for this duration (in seconds), it will mark the layer as not current. A value of 0.0 disables this timeout.

clear_threshold: (double)

  • Default: 0.2
  • My value: 0.2
  • Same as default. The clear_threshold parameter determines the probability below which cells are marked as free in the costmap.

mark_threshold: (double)

  • Default: 0.8
  • My value: 0.8
  • Same as default. The mark_threshold parameter determines the probability above which cells are marked as occupied in the costmap.

clear_on_max_reading: (bool)

  • Default: False
  • My value: True
  • Different from default. When clear_on_max_reading is set to True, the range sensor layer will clear obstacles from the costmap when the sensor reports its maximum range reading.

input_sensor_type: (string)

  • Default: ALL
  • My value: ALL
  • Same as default. The input_sensor_type parameter is set to “ALL”, meaning that the range sensor layer will automatically detect the sensor type based on the minimum and maximum range values reported by the sensor.

voxel_layer

The voxel layer in the global costmap is responsible for adding 3D obstacle information from depth sensors, such as RGB-D cameras, to the costmap. It divides the space into 3D voxels and marks them as occupied or free based on the depth sensor data. This allows the global costmap to represent obstacles not only on the ground plane but also at different heights, providing a more comprehensive understanding of the environment for navigation.

Here is the official configuration guide for the voxel layer.

enabled: (bool)

  • Default: True
  • My value: True
  • Same as default. The voxel layer is enabled, allowing the global costmap to incorporate 3D obstacle information from depth sensors.

publish_voxel_map: (bool)

  • Default: False
  • My value: False
  • Same as default. The voxel layer will not publish the 3D voxel map, which can save computational resources.

origin_z: (double)

  • Default: 0.0
  • My value: 0.0
  • Same as default. The origin_z parameter sets the height of the first voxel layer relative to the robot’s base frame.

z_resolution: (double)

  • Default: 0.2
  • My value: 0.2
  • Same as default. The z_resolution parameter determines the height resolution of each voxel layer.

z_voxels: (int)

  • Default: 10
  • My value: 10
  • Same as default. The z_voxels parameter sets the number of voxel layers in the voxel grid.

min_obstacle_height: (double)

  • Default: 0.0
  • My value: 0.0
  • Same as default. The min_obstacle_height parameter defines the minimum height of obstacles to be considered in the voxel layer.

max_obstacle_height: (double)

  • Default: 2.0
  • My value: 2.0
  • Same as default. The max_obstacle_height parameter defines the maximum height of obstacles to be considered in the voxel layer.

mark_threshold: (int)

  • Default: 0
  • My value: 0
  • Same as default. The mark_threshold parameter determines the minimum number of voxels required to mark a cell as occupied in the 2D costmap.

observation_sources: (string)

  • Default: “”
  • My value: robot_depth_camera
  • Different from default. The observation_sources parameter specifies the depth sensor source for the voxel layer. In my configuration, it is set to “robot_depth_camera”.

robot_depth_camera.topic: (string)

  • Default: “”
  • My value: /rgbd_camera
  • Different from default. The topic parameter specifies the ROS topic where the depth sensor data is published.

robot_depth_camera.raytrace_min_range: (double)

  • Default: 0.0
  • My value: 0.05
  • Different from default. The raytrace_min_range parameter sets the minimum range for raytracing in the voxel layer.
  • My value is based on my experiences with the Intel RealSense D435.

robot_depth_camera.raytrace_max_range: (double)

  • Default: 3.0
  • My value: 1.25
  • Different from default. The raytrace_max_range parameter sets the maximum range for raytracing in the voxel layer.
  • My value is based on my experiences with the Intel RealSense D435.

robot_depth_camera.obstacle_min_range: (double)

  • Default: 0.0
  • My value: 0.05
  • Different from default. The obstacle_min_range parameter sets the minimum range for considering obstacles in the voxel layer.

robot_depth_camera.obstacle_max_range: (double)

  • Default: 2.5
  • My value: 1.25
  • Different from default. The obstacle_max_range parameter sets the maximum range for considering obstacles in the voxel layer.

robot_depth_camera.min_obstacle_height: (double)

  • Default: 0.0
  • My value: 0.0
  • Same as default. The min_obstacle_height parameter sets the minimum height of obstacles to be considered in the voxel layer.

robot_depth_camera.max_obstacle_height: (double)

  • Default: 2.0
  • My value: 2.0
  • Same as default. The max_obstacle_height parameter sets the maximum height of obstacles to be considered in the voxel layer.

robot_depth_camera.clearing: (bool)

  • Default: False
  • My value: False
  • Same as default. The clearing parameter determines whether the voxel layer should clear free space in the costmap based on the depth sensor data.

robot_depth_camera.marking: (bool)

  • Default: True
  • My value: True
  • Same as default. The marking parameter determines whether the voxel layer should mark obstacles in the costmap based on the depth sensor data.

robot_depth_camera.data_type: (string)

  • Default: “PointCloud2”
  • My value: “PointCloud2”
  • Same as default. The data type is set to “PointCloud2”, matching the depth camera’s point cloud output.
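Pulling the voxel layer parameters together, here is a sketch of the configuration. The plugin class name is my assumption based on the standard Nav2 naming convention.

```yaml
voxel_layer:
  plugin: "nav2_costmap_2d::VoxelLayer"
  enabled: True
  publish_voxel_map: False
  origin_z: 0.0
  z_resolution: 0.2        # each voxel layer is 0.2 m tall
  z_voxels: 10             # 10 layers -> 2.0 m of vertical coverage
  min_obstacle_height: 0.0
  max_obstacle_height: 2.0
  mark_threshold: 0
  observation_sources: robot_depth_camera
  robot_depth_camera:
    topic: /rgbd_camera
    data_type: "PointCloud2"
    raytrace_min_range: 0.05
    raytrace_max_range: 1.25
    obstacle_min_range: 0.05
    obstacle_max_range: 1.25
    min_obstacle_height: 0.0
    max_obstacle_height: 2.0
    clearing: False
    marking: True
```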

inflation_layer

The inflation layer in the global costmap adds a safety buffer around obstacles by gradually increasing the cost of the cells near the obstacles. This encourages the robot to maintain a safe distance from obstacles when planning paths. The inflation radius and cost scaling factor determine the size of the safety buffer and how quickly the cost increases as the robot gets closer to obstacles.

Here is the official configuration guide for the inflation layer.

plugin: (string)

  • Default: “nav2_costmap_2d::InflationLayer”
  • My value: “nav2_costmap_2d::InflationLayer”
  • Same as default. The plugin parameter specifies the plugin type for the inflation layer, which is “nav2_costmap_2d::InflationLayer”.

cost_scaling_factor: (double)

  • Default: 1.0
  • My value: 2.58
  • Different from default. The cost_scaling_factor parameter determines the rate at which the cost values decrease with distance from obstacles. 
  • Credit to this ROS Tuning Guide for finding this magic number which has worked really well on my own mobile robots for indoor environments.

inflation_radius: (double)

  • Default: 0.55
  • My value: 1.75
  • Different from default. The inflation_radius parameter sets the maximum distance from an obstacle at which the cost will be inflated. A higher value means that the cost will be inflated over a larger area around obstacles.
  • The inflation radius is set to 1.75 meters, which is larger than the default value. This means that the cost will decay over a larger distance around obstacles, making the robot maintain a greater clearance from them. 
  • Credit to this ROS Tuning Guide for finding this magic number which has worked really well on my own mobile robots for indoor environments.

map_saver

Description

Here is the official configuration guide for the map_saver.

The map_saver package allows the robot to save its current map of the environment to a file. This can be useful for creating a map of a new environment or updating an existing map with new data. It provides an easy way to preserve the robot’s understanding of its surroundings.

When you use the map_saver, it creates two essential files:

  1. A .pgm file – This is like a black and white photograph of the map, showing which areas are open space, unknown, or contain obstacles
  2. A .yaml file – This contains important details about the map, such as its scale (resolution), origin point, and how to interpret the black and white values

The map_saver takes the occupancy grid data that the robot has built up through its mapping process and converts it into these permanent files. The occupancy data uses three main values:

  • Free space (marked as white in the .pgm)
  • Occupied space like walls (marked as black in the .pgm)
  • Unknown areas (marked as gray in the .pgm)

This saved map becomes important for autonomous navigation, as it provides the foundation for the robot to understand where it can and cannot go. Just as you might save a building’s floor plan to use for future visits, the robot uses this saved map to navigate effectively when it returns to the same space later.

Parameters

save_map_timeout: (seconds)

  • Default: 2.0
  • My value: 5.0
  • Different from default.

I use a longer timeout to ensure large maps save completely. Think of this like giving yourself extra time to save a large file. At 5.0 seconds (vs. 2.0), the save is less likely to time out on bigger or more detailed maps. Increasing this means more patience when saving but more reliable results, while decreasing it makes saving faster but might fail with large maps.

free_thresh_default: (probability)

  • Default: 0.25
  • My value: 0.25
  • Matches default. 

This parameter decides what counts as “empty space” in your map…like deciding how sure you need to be that a space is empty. 

At 0.25, if the robot is 25% or less sure something is there, it marks it as empty space. Increasing this means being more strict about what counts as empty space, while decreasing it means being more lenient about calling spaces empty.

occupied_thresh_default: (probability)

  • Default: 0.65 
  • My value: 0.65
  • Same as default. 0.65 is a good threshold for marking a cell as occupied in the occupancy grid.

map_subscribe_transient_local: (bool)

  • Default: True
  • My value: Not specified
  • Assuming the default value of True.

This parameter affects how the map data is received. With “transient local” durability, the map saver still receives a map that was published before it subscribed, giving it reliable, complete map data. Setting it to false means the saver only sees maps published after it starts listening, which can miss the latched map.
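Here is a sketch of the map_saver configuration with the values above:

```yaml
map_saver:
  ros__parameters:
    save_map_timeout: 5.0           # extra time so large maps save completely
    free_thresh_default: 0.25       # <= 25% occupancy probability -> free (white)
    occupied_thresh_default: 0.65   # >= 65% occupancy probability -> occupied (black)
```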

planner_server

Description

Here is the official configuration guide for the planner_server.

The planner_server package is responsible for computing the optimal path for the robot to reach its goal. This package ensures that the robot can navigate through complex environments by following a well-thought-out path.

When given a goal location, the Planner Server studies the global costmap (the complete map with all known obstacles and safe areas) to calculate an optimal path. It considers important factors like:

  • Finding the shortest reasonable route
  • Maintaining safe distances from obstacles
  • Avoiding dead ends or impossible paths
  • Considering the robot’s size and movement capabilities

The Planner Server works through plugins, which are like different strategies for path planning. The default plugin (NavfnPlanner) uses a navigation function to find efficient paths, but you can also use other plugins like SMAC planner or Theta Star planner depending on your robot’s needs. Each plugin might be better suited for different situations – just as you might use different strategies to navigate through an open warehouse versus a crowded office.

Parameters

expected_planner_frequency: (Hz)

  • Default: 20.0
  • My value: 20.0
  • Matches default. 20 Hz is a reasonable expected frequency for the planner to operate at.

planner_plugins: (list of strings)

  • Default: [‘GridBased’]
  • My value: [“GridBased”]
  • Same as default. The GridBased plugin using the NavfnPlanner is a good default choice.

GridBased.plugin: (string)

  • Default: “nav2_navfn_planner/NavfnPlanner”
  • My value: “nav2_navfn_planner::NavfnPlanner”
  • Slightly different syntax, but refers to the same NavfnPlanner plugin. The “::” namespace separator is preferred in newer ROS 2 versions over “/”.

GridBased.tolerance: (meters)

  • Default: 0.5
  • My value: 0.5
  • Matches default. A tolerance of 0.5 meters around the goal is reasonable for most applications.

GridBased.use_astar: (boolean)

  • Default: false
  • My value: false
  • Same as default. The default Dijkstra’s algorithm is efficient for most navigation tasks, no need to use A*.

GridBased.allow_unknown: (boolean)

  • Default: true
  • My value: true
  • Matches default. Allowing planning in unknown space gives more flexibility if the map is not fully explored.
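Here is a sketch of the planner_server configuration with the values above:

```yaml
planner_server:
  ros__parameters:
    expected_planner_frequency: 20.0
    planner_plugins: ["GridBased"]
    GridBased:
      plugin: "nav2_navfn_planner::NavfnPlanner"
      tolerance: 0.5         # accept plans ending within 0.5 m of the goal
      use_astar: false       # use Dijkstra's algorithm rather than A*
      allow_unknown: true    # permit planning through unknown space
```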

smoother_server

Description

Here is the official configuration guide for the smoother_server.

The Smoother Server refines the paths created by the Planner Server, much like how you might smooth out a rough sketch into a flowing drawing. While the planner creates a basic path that gets the robot from start to goal, the smoother makes this path more natural and efficient for the robot to follow.

Think of the raw planned path like walking through a building by making sharp turns at every corner. The smoother transforms this into a more natural path, like how people tend to curve around corners rather than making exact 90-degree turns. This smoother motion is generally more efficient and puts less stress on the robot’s motors.

The Smoother Server accomplishes this by:

  1. Taking the original path from the planner
  2. Analyzing each segment and turn
  3. Creating gentle curves where appropriate
  4. Ensuring these smoother paths still maintain safe distances from obstacles
  5. Optimizing the path for the robot’s movement capabilities

Parameters

smoother_plugins: (vector<string>)

  • Default:  [“simple_smoother”]
  • My value: [“simple_smoother”]
  • Matches the default.

This parameter tells the robot which path smoothing method to use. The simple_smoother is great for basic path smoothing needs. You can add multiple smoothers if needed, but one is usually sufficient. Each smoother needs its own configuration section.

simple_smoother.plugin: (string)

  • Default: “nav2_smoother::SimpleSmoother”
  • My value: “nav2_smoother::SimpleSmoother”
  • Matches default. This parameter specifies the exact smoother code to use. 

Think of this as telling the robot exactly which smoothing algorithm to load. This particular smoother is like using a basic path-smoothing tool that rounds off sharp corners.

simple_smoother.tolerance: (double)

  • Default: 1.0e-10
  • My value: 1.0e-10
  • Matches default. How precise the smoothing needs to be before considering it done. 

This very small number (0.0000000001) means it will try to get very precise results. Increasing this value makes smoothing faster but less precise, while decreasing it makes smoothing more precise but slower.

simple_smoother.max_its: (integer)

  • Default: None specified
  • My value: 1000
  • Maximum number of attempts to smooth the path. 

At 1000 attempts, it gives plenty of chances to get a good result. Increasing this allows more attempts for better smoothing but takes longer, while decreasing it makes smoothing faster but might give less optimal results.
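Here is a sketch of the smoother_server configuration with the values above:

```yaml
smoother_server:
  ros__parameters:
    smoother_plugins: ["simple_smoother"]
    simple_smoother:
      plugin: "nav2_smoother::SimpleSmoother"
      tolerance: 1.0e-10   # very tight convergence criterion
      max_its: 1000        # up to 1000 smoothing iterations
```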

behavior_server

Description

Here is the official configuration guide for the behavior_server.

The Behavior Server acts as a specialized component in the Nav2 system, managing and executing specific, well-defined actions that a robot might need during navigation. While the BT Navigator handles overall navigation strategy (like “navigate to the conference room”), the Behavior Server executes precise, individual behaviors that might be needed along the way (like “back up when stuck”).

Think of the Behavior Server like a team of specialists, each expert at a particular maneuver. These specialists include:

  1. A Spin specialist that knows exactly how to rotate the robot safely in place
  2. A Backup specialist that can guide the robot backward when needed
  3. A Drive-on-heading specialist that keeps the robot moving straight along a specific direction
  4. A Wait specialist that handles proper pausing behavior
  5. An Assisted Teleop specialist that combines human control with autonomous safety features

Each behavior is implemented as a plugin, sharing resources like costmaps and transformation data to maintain efficiency. When a behavior is called upon, the server ensures safe execution by:

  • Checking both local and global costmaps for obstacles
  • Monitoring the robot’s position and orientation
  • Managing proper movement speeds and accelerations
  • Coordinating timing of actions
  • Maintaining appropriate update frequencies (default 10Hz)

What makes the Behavior Server particularly powerful is how it complements the broader navigation system. When the BT Navigator encounters a situation requiring a specific action, it can call on the Behavior Server to execute that precise maneuver with expert-level skill and safety considerations.

Parameters

local_costmap_topic: (string)

  • Default: “local_costmap/costmap_raw” 
  • My value: “local_costmap/costmap_raw”
  • Matches default. This is the standard topic for the raw local costmap.

global_costmap_topic: (string)

  • Default: “global_costmap/costmap_raw”
  • My value: “global_costmap/costmap_raw” 
  • Matches default. This is the standard topic for the raw global costmap.

local_footprint_topic: (string)

  • Default: “local_costmap/published_footprint”
  • My value: “local_costmap/published_footprint”
  • Matches default. This is the standard topic for the robot’s footprint in the local costmap frame.

global_footprint_topic: (string)

  • Default: “global_costmap/published_footprint”
  • My value: “global_costmap/published_footprint”
  • Matches default. This is the standard topic for the robot’s footprint in the global costmap frame.

cycle_frequency: (Hz)

  • Default: 10.0
  • My value: 10.0
  • Matches default. 10 Hz is a good frequency for running the behavior plugins.

behavior_plugins: (vector<string>) 

  • Default: {“spin”, “backup”, “drive_on_heading”, “wait”}
  • My value: [“spin”, “backup”, “drive_on_heading”, “assisted_teleop”, “wait”]
  • These are the core recovery and helper behaviors typically needed.

simulate_ahead_time: (seconds)

  • Default: 2.0 
  • My value: 2.0
  • Matches default. 2 seconds is a reasonable amount of time to look ahead for collisions.

max_rotational_vel: (rad/s)

  • Default: 1.0
  • My value: 0.5
  • Lower than default. 0.5 rad/s puts a safer limit on the maximum rotational speed.

min_rotational_vel: (rad/s) 

  • Default: 0.4
  • My value: 0.4
  • Matches default. 0.4 rad/s is a good minimum rotational speed to allow the robot to rotate in place.

rotational_acc_lim: (rad/s^2)

  • Default: 3.2
  • My value: 3.2 
  • Matches default. 3.2 rad/s^2 is a reasonable acceleration limit for rotational movement.

enable_stamped_cmd_vel: (bool)

  • Default: true for new versions (Kilted+), false for older versions (Jazzy or older)
  • My value: false
  • Determines whether to use basic or timestamped velocity commands. 

This parameter is like choosing between a simple speed command and one with a timestamp. Setting it to false uses basic Twist messages, while setting it to true uses TwistStamped messages, which include a timestamp and a coordinate reference frame along with the velocity command. 

local_frame: (string)

  • Default: “odom”
  • My value: “odom”
  • Matches default. The odometry frame is the standard local reference frame, typically published by an Extended Kalman Filter (e.g., the robot_localization package).

global_frame: (string) 

  • Default: “map”
  • My value: “map”
  • Matches default. The map frame is the standard global reference frame.

robot_base_frame: (string)

  • Default: “base_link” 
  • My value: “base_link”
  • Matches default. The base_link frame is the standard frame for the robot’s body.

transform_timeout: (seconds)

  • Default: 0.1
  • My value: 0.1
  • Matches default. 0.1s is a reasonable timeout for transforms from the tf buffer.
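Putting the values above together, the behavior_server section of the Nav2 params yaml might look like the sketch below. The per-plugin `plugin` class names (e.g. "nav2_behaviors/Spin") are taken from common Nav2 examples rather than from this guide, and can differ slightly between ROS 2 distros, so verify them against the official configuration guide for your version:

```yaml
behavior_server:
  ros__parameters:
    local_costmap_topic: local_costmap/costmap_raw
    global_costmap_topic: global_costmap/costmap_raw
    local_footprint_topic: local_costmap/published_footprint
    global_footprint_topic: global_costmap/published_footprint
    cycle_frequency: 10.0
    behavior_plugins: ["spin", "backup", "drive_on_heading", "assisted_teleop", "wait"]
    # Plugin class names below follow common Nav2 examples; check your distro.
    spin:
      plugin: "nav2_behaviors/Spin"
    backup:
      plugin: "nav2_behaviors/BackUp"
    drive_on_heading:
      plugin: "nav2_behaviors/DriveOnHeading"
    assisted_teleop:
      plugin: "nav2_behaviors/AssistedTeleop"
    wait:
      plugin: "nav2_behaviors/Wait"
    simulate_ahead_time: 2.0
    max_rotational_vel: 0.5
    min_rotational_vel: 0.4
    rotational_acc_lim: 3.2
    enable_stamped_cmd_vel: false
    local_frame: odom
    global_frame: map
    robot_base_frame: base_link
    transform_timeout: 0.1
```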

waypoint_follower

Description

Here is the official configuration guide for the waypoint follower.

The waypoint_follower package allows the robot to follow a predefined set of waypoints in the environment. This is useful for tasks that require the robot to visit specific locations in a sequence. It ensures that the robot can navigate through multiple points efficiently.

It also has a special plugin which you can use to perform custom behaviors at each waypoint, like taking a photo or picking up an object.

Parameters

loop_rate: (Hz)

  • Default: 20
  • My value: 2
  • Different from default. A lower rate of 2 Hz is sufficient for checking navigation task results, reducing computational load while still providing timely updates.

stop_on_failure: (bool)

  • Default: true
  • My value: false
  • Different from default. Setting this to false allows the robot to continue to the next waypoint even if one fails, which can be more robust in real-world scenarios with dynamic obstacles.

waypoint_task_executor_plugin: (string)

  • Default: ‘wait_at_waypoint’
  • My value: ‘wait_at_waypoint’
  • Matches default. This plugin is suitable for basic waypoint following tasks.

wait_at_waypoint.plugin: (string)

  • Default: “nav2_waypoint_follower::WaitAtWaypoint”
  • My value: “nav2_waypoint_follower::WaitAtWaypoint”
  • Matches default. This is the correct plugin name for the wait_at_waypoint functionality.

wait_at_waypoint.enabled: (bool)

  • Default: Not specified
  • My value: True
  • Explicitly enables the wait_at_waypoint plugin, ensuring it’s active.

wait_at_waypoint.waypoint_pause_duration: (seconds)

  • Default: Not specified
  • My value: 10
  • Sets a 10-second pause at each waypoint, which can be useful for allowing the robot to stabilize or perform tasks at each point. Adjust this value to suit your application.

global_frame_id: (string)

  • Default: ‘map’
  • My value: Not specified
  • The default ‘map’ is typically sufficient for most setups, so not specifying it in your YAML is fine.

bond_heartbeat_period: (seconds)

  • Default: 0.1
  • My value: Not specified
  • The default of 0.1 seconds works well for most systems, so not specifying it is fine.

action_server_result_timeout: (seconds)

  • Default: 900.0
  • My value: 900.0
  • Matches default. This timeout value is for action servers to discard a goal handle if a result hasn’t been produced within 900 seconds (15 minutes). This long timeout allows for complex or long-running navigation tasks to complete without being prematurely terminated.
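Collected into yaml form, the waypoint_follower section above might look like this sketch (not a drop-in file):

```yaml
waypoint_follower:
  ros__parameters:
    loop_rate: 2
    stop_on_failure: false
    action_server_result_timeout: 900.0
    waypoint_task_executor_plugin: "wait_at_waypoint"
    wait_at_waypoint:
      plugin: "nav2_waypoint_follower::WaitAtWaypoint"
      enabled: True
      waypoint_pause_duration: 10
```

One caveat: double-check the units of waypoint_pause_duration against the official configuration guide for your distro before relying on a particular pause length.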

velocity_smoother

Description

Here is the official configuration guide for the velocity_smoother.

The velocity_smoother package ensures that the velocity commands sent to the robot are smooth and gradual. This prevents jerky movements and helps in maintaining a stable and controlled motion. It is particularly useful for preventing sudden starts and stops that can be hard on the robot’s hardware.

Parameters

smoothing_frequency: (Hz)

  • Default: 20.0
  • My value: 20.0
  • Matches default. 20 Hz is a good frequency for smoothing out velocity commands to reduce wear on the motors.

scale_velocities: (boolean)

  • Default: false
  • My value: false
  • Same as default. Scaling velocities proportionally is not necessary for most applications.

feedback: (string)

  • Default: “OPEN_LOOP”
  • My value: “OPEN_LOOP” 
  • Matches default. Open loop control, assuming the last commanded velocity, is sufficient when acceleration limits are set appropriately.

max_velocity: (m/s or rad/s)

  • Default: [0.5, 0.0, 2.5]
  • My value: [0.5, 0.5, 2.5]
  • Different from default in the y-axis. Allowing 0.5 m/s in the y-axis enables omni-directional movement if the robot supports it. If you do not have an omni-directional robot, leave this parameter as the default.

min_velocity: (m/s or rad/s)

  • Default: [-0.5, 0.0, -2.5]  
  • My value: [-0.5, -0.5, -2.5]
  • Different from default in the y-axis. Allowing -0.5 m/s in the y-axis enables reverse omni-directional movement if the robot supports it. If you do not have an omni-directional robot, leave this parameter as the default.

deadband_velocity: (m/s or rad/s)

  • Default: [0.0, 0.0, 0.0]
  • My value: [0.0, 0.0, 0.0]
  • Same as default. A deadband (a minimum velocity threshold meant to protect hardware from very small commands) is not needed in most cases.

velocity_timeout: (s)

  • Default: 1.0
  • My value: 1.0 
  • Matches default. 1 second is a reasonable timeout after which the smoother should stop publishing commands if no new ones are received.

max_accel: (m/s^2 or rad/s^2)

  • Default: [2.5, 0.0, 3.2]
  • My value: [0.3, 0.3, 3.2]
  • Different from default in x and y. 0.3 m/s^2 provides gentler acceleration for x and y while still allowing fast rotational acceleration.

max_decel: (m/s^2 or rad/s^2)

  • Default: [-2.5, 0.0, -3.2]
  • My value: [-0.5, -0.5, -3.2]
  • Different from default in x and y. -0.5 m/s^2 allows for smooth deceleration in x and y to reduce wear on the motors.

odom_topic: (string)

  • Default: “odom”
  • My value: “odometry/filtered”
  • If the feedback mode were changed to CLOSED_LOOP, the smoother would use this odometry topic, which is generated by the Extended Kalman Filter (i.e., the robot_localization package).

odom_duration: (s)

  • Default: 0.1
  • My value: 0.1
  • Matches default. 0.1 seconds is a good duration to average odometry data for estimating current velocity in closed loop mode.

use_realtime_priority: (boolean)

  • Default: false
  • My value: false 
  • Same as default. Realtime priority is not needed for the velocity smoother in most applications.

enable_stamped_cmd_vel: (bool)

  • Default: true for new versions (Kilted+), false for older versions (Jazzy or older)
  • My value: false
  • Determines whether to use basic or timestamped velocity commands. 
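The velocity_smoother values above, gathered into yaml form, might look like this sketch:

```yaml
velocity_smoother:
  ros__parameters:
    smoothing_frequency: 20.0
    scale_velocities: false
    feedback: "OPEN_LOOP"
    # Each triple is [x, y, theta]; keep y at 0.0 for non-omnidirectional robots.
    max_velocity: [0.5, 0.5, 2.5]
    min_velocity: [-0.5, -0.5, -2.5]
    deadband_velocity: [0.0, 0.0, 0.0]
    velocity_timeout: 1.0
    max_accel: [0.3, 0.3, 3.2]
    max_decel: [-0.5, -0.5, -3.2]
    odom_topic: "odometry/filtered"   # only used in CLOSED_LOOP mode
    odom_duration: 0.1
    use_realtime_priority: false
    enable_stamped_cmd_vel: false
```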

collision monitor

Description

Here is the official configuration guide for the collision monitor.

The Collision Monitor serves as a critical safety system in Nav2, providing an extra layer of protection beyond the standard navigation stack. Think of it as a vigilant safety officer who constantly watches for potential collisions and can quickly intervene to prevent accidents, much like how advanced cars have emergency braking systems that work independently from normal braking.

What makes the Collision Monitor special is that it operates directly with sensor data, bypassing the usual costmap and planning systems. This allows for much faster reaction times to sudden obstacles. It’s particularly valuable for:

  • Large industrial robots where safety is paramount
  • Fast-moving robots that need quick reaction times
  • Robots operating around people or other moving robots
  • Situations where obstacles might appear suddenly

The Collision Monitor uses a concept of “zones” around the robot – imagine invisible safety bubbles that trigger different responses when obstacles enter them. These zones can be:

  • Stop zones: The robot stops completely if obstacles enter this area
  • Slowdown zones: The robot reduces speed when obstacles are detected here
  • Limit zones: The robot’s speed is capped when obstacles are present
  • Approach zones: The robot maintains a safe time-to-collision with detected obstacles

Each zone can be configured as different shapes:

  1. Custom polygons you define around the robot
  2. Simple circles for efficient processing
  3. The robot’s own footprint
  4. Velocity-based polygons that change size based on how fast the robot is moving

The monitor works with various types of sensor data:

  • Laser scans for precise 2D detection
  • Point clouds from 3D sensors
  • Range data from IR sensors or sonars

While this system doesn’t replace certified safety hardware, it provides a valuable additional safety layer that can help prevent collisions through quick reaction times and configurable safety behaviors. Think of it as adding defensive driving capabilities to your robot’s navigation system.

Parameters

base_frame_id: (string)

  • Default: “base_footprint”
  • My value: “base_footprint”
  • Matches default. This is the reference frame attached to your robot’s base. 

Think of this as the robot’s center point for all measurements. Changing this is only needed if your robot uses different frame names, but “base_footprint” is the standard name most robots use.

odom_frame_id: (string)

  • Default: “odom”
  • My value: “odom”
  • Matches default. This is the frame used for tracking robot movement. 

Like a coordinate system that moves with the robot. The standard “odom” frame name works for most robots. Only change this if your robot uses a different name for its odometry frame.

transform_tolerance: (seconds)

  • Default: 0.1
  • My value: 0.2
  • Different from default. How old we allow position data to be. 

My higher value (0.2s vs 0.1s) allows for slightly older position data, which can help on slower computers. Increasing this makes the system more tolerant of delays but might use outdated information.

source_timeout: (seconds)

  • Default: 2.0
  • My value: 1.0
  • Different from default. How long to wait before assuming sensor data is too old. 

My lower value (1.0s vs 2.0s) means we’re more strict about needing fresh sensor data. Increasing this helps with slow sensors but might react slower to obstacles.

cmd_vel_in_topic: (string)

  • Default: “cmd_vel_smoothed”
  • My value: “cmd_vel_smoothed”
  • Matches default. The topic where the robot receives velocity commands to check.

This is where the collision monitor looks for commands to verify. Change this if your velocity commands come from a different topic.

cmd_vel_out_topic: (string)

  • Default: “cmd_vel”
  • My value: “cmd_vel”
  • Matches default. The topic where safety-checked commands are sent. 

After checking for potential collisions, commands are sent here. This is typically “cmd_vel” as most robots listen for commands on this topic.

state_topic: (string)

  • Default: “” (empty)
  • My value: “collision_monitor_state”
  • Different from default. Where to publish information about active safety behaviors.

By setting this (instead of leaving it empty), we can monitor which safety zones are active. This helps with debugging and monitoring the system’s behavior.

base_shift_correction: (bool)

  • Default: true
  • My value: true
  • Matches default. Whether to account for robot movement when processing sensor data. 

Like compensating for motion blur when taking a photo from a moving car. Keeping this true makes collision detection more accurate but uses more CPU power. Setting it to false would be faster but less accurate for fast-moving robots.

polygons: (vector<string>)

  • Default: None specified
  • My value: [“FootprintApproach”]
  • Only using the approach-based collision checking method. 

Think of this as defining different safety zones around the robot. I’m only using a footprint-based approach that looks ahead to predict collisions. You can add more zones for immediate stopping or slowing, but I find the approach method works well alone.

FootprintApproach.type: (string)

  • Default: None specified
  • My value: “polygon”
  • Defines that we’re using a polygon shape to check for collisions. 

Like drawing a shape around the robot to check for obstacles. Using “polygon” lets us match the robot’s actual shape using its footprint. The other option “circle” would be simpler but less precise.

FootprintApproach.time_before_collision: (seconds)

  • Default: 2.0
  • My value: 1.2
  • Different from default. How far ahead in time to check for potential collisions. 

We’re being a bit more aggressive than the default 2.0 seconds. Increasing this value makes the robot more cautious but might make it stop unnecessarily far from obstacles. Decreasing it allows closer approaches but gives less reaction time.

FootprintApproach.simulation_time_step: (seconds)

  • Default: 0.1
  • My value: 0.1
  • Matches default. How often to check for collisions during the prediction. 

Think of this as how detailed our collision prediction is. At 0.1 seconds, it checks 10 times per second of prediction. Decreasing this makes prediction more accurate but uses more CPU. Increasing it saves CPU but might miss potential collisions.

observation_sources: (vector<string>)

  • Default: None specified
  • My value: [“scan”]
  • Uses only the LIDAR as a source of obstacle detection. Think of this as choosing which sensors the safety layer listens to. 

I’m only using the laser scanner in my example on GitHub, while you could also use other sensors like RGBD cameras (pointcloud) or range sensors. Using fewer sensors is simpler but might miss obstacles that only certain sensors can see.
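Assembled into yaml, the collision monitor configuration above might look like the sketch below. The fields marked as assumptions (action_type, footprint_topic, and the scan source details) are not discussed above; they follow typical Nav2 collision monitor examples and should be checked against the official configuration guide and your own topic names:

```yaml
collision_monitor:
  ros__parameters:
    base_frame_id: "base_footprint"
    odom_frame_id: "odom"
    cmd_vel_in_topic: "cmd_vel_smoothed"
    cmd_vel_out_topic: "cmd_vel"
    state_topic: "collision_monitor_state"
    transform_tolerance: 0.2
    source_timeout: 1.0
    base_shift_correction: True
    polygons: ["FootprintApproach"]
    FootprintApproach:
      type: "polygon"
      action_type: "approach"                               # assumption: approach-style zone
      footprint_topic: "local_costmap/published_footprint"  # assumption
      time_before_collision: 1.2
      simulation_time_step: 0.1
      enabled: True
    observation_sources: ["scan"]
    scan:
      type: "scan"    # assumption: LaserScan source
      topic: "scan"   # assumption: your LIDAR topic name
      enabled: True
```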

docking server

Description

Here is the official configuration guide for the docking server.

The docking_server manages the precise process of connecting robots to charging stations or other docking points. Think of it like an automated parking system that needs to carefully guide vehicles into exact positions for charging or loading.

The server coordinates the complete docking sequence: approaching a pre-staging position, using sensors for precise alignment, executing the final approach, and confirming successful connection. 

For undocking, it reverses this process to safely disconnect and move away. The system maintains controlled speeds and precise positioning throughout while monitoring for any issues.

Using a plugin-based architecture, the server adapts to different robot types, charging methods, and sensor systems. It can handle multiple docking stations in an environment, making it useful for facilities where robots need to dock at various locations for charging or material handling.

I will create a separate tutorial devoted to the docking server. If you don’t have a dock set up, you can use these default parameters.

slam_toolbox

Description

Here is the official configuration guide for the slam_toolbox.

The slam_toolbox package is used for simultaneous localization and mapping (SLAM) in ROS 2, which means it helps the robot build a map of an unknown environment while keeping track of its location using sensors like LIDAR. This package allows the robot to explore and map new areas autonomously. It is essential for robots operating in dynamic or previously unmapped environments.

I will not go through a detailed step-by-step analysis of each of the parameters for the slam_toolbox because you’re generally better off using the default parameters (which are the same ones in my yaml file). The package’s author has invested significant effort in fine-tuning these parameters to work well out of the box for a wide range of robots. This means you can usually achieve good results without needing to tune the settings yourself.

The slam_toolbox includes both synchronous and asynchronous modes for mapping. Synchronous mode, which is the default, will work best in most use cases. 

In both modes, the robot is constantly moving and collecting data from its LIDAR sensor. The key difference lies in how this data is processed:

Synchronous mapping (default for Nav2):

  • The robot processes LIDAR scans in a strict sequence, one after another.
  • Each scan is fully integrated into the map before the next one is processed.
  • This can result in more consistent and accurate maps, but might introduce a slight delay in map updates.

Asynchronous mapping:

  • The robot processes LIDAR scans as soon as they become available.
  • Multiple scans can be processed simultaneously.
  • This can lead to faster map updates, but might occasionally result in slight inconsistencies in the map.

For most applications, the default synchronous mode will provide the best balance of accuracy and performance. However, in scenarios where rapid map updates are important, asynchronous mode might be beneficial.

To use asynchronous mode, set the “slam” launch configuration parameter in the main bringup launch file to False, and then launch a separate launch file dedicated to asynchronous mapping:

ros2 launch slam_toolbox online_async_launch.py

Remember, these modes affect only how the data is processed, not how it’s collected. The robot continues to move and gather data constantly in both modes.

Final Notes on Navigation Tuning

This guide represents my preferred configuration after thousands of hours of working with mobile robots and ROS 2. However, remember that:

  1. Every robot is unique – use these parameters as a starting point, not absolute rules
  2. Test changes systematically – modify one parameter at a time and observe the effects
  3. Safety first – always test new configurations in a safe environment
  4. Consider your specific needs:
    • Is smooth motion more important than precise positioning?
    • Do you need to prioritize CPU efficiency?
    • How dynamic is your environment?

Getting Help

If you run into issues while tuning, a few good resources are:

  • The official Nav2 configuration guide, which documents every parameter in detail
  • The Nav2 GitHub repository, where you can search existing issues
  • Robotics Stack Exchange, where many common tuning questions have already been answered

Next Steps

Try these parameters on your robot, observe its behavior, and adjust based on your specific needs. Navigation tuning is an iterative process – don’t be afraid to experiment and find what works best for your application.

Keep Building!