By the end of this tutorial, you will be running your first ROS 2 programs.
You Will Need
In order to complete this tutorial, you will need:
A computer or virtual machine running Ubuntu 24.04. If you followed my previous tutorial, you already have that setup.
Set the Locale
The official steps for installing ROS are at this link at ROS.org, but let’s go through this entire process together.
Follow along with me click by click, keystroke by keystroke.
We will begin by installing ROS 2 Jazzy via Debian Packages. Debian packages are software files used to install programs and applications on Ubuntu.
Open a new terminal window.
Type this command inside a terminal window.
locale
A locale is a set of variables that define the language, country, and character encoding settings. These settings are used by applications to determine how to display text, dates, times, and other information.
Now type the following command:
sudo apt update && sudo apt install locales
“sudo apt update” updates the package index list. This list is a database of all the software packages available for your version of Ubuntu.
“sudo apt install locales” installs the locales package, which provides support for different languages and regions.
Now type the following commands into the terminal window to generate locale definition files. After each command, press Enter on your keyboard:
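These commands mirror the locale setup in the official ROS 2 Jazzy installation guide:

sudo locale-gen en_US en_US.UTF-8
sudo update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
export LANG=en_US.UTF-8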
In this case, these commands are generating locale definition files for the English (United States) locale and the English (United States) UTF-8 locale. The UTF-8 locale is a special locale that supports the UTF-8 character encoding, which is the standard encoding for most languages.
Now we need to verify the settings by typing:
locale
Here is what you should see:
Enable the Required Repositories
Let’s add the ROS 2 apt repository to our system. APT stands for “Advanced Package Tool”. This repository provides a convenient way to install and manage ROS 2 packages without having to clone the source code from GitHub and build it yourself.
Open a terminal window, and type the following two commands:
sudo apt install software-properties-common
sudo add-apt-repository universe
Press Enter.
The software-properties-common package provides a number of tools for managing software sources on Ubuntu and Debian systems.
The universe repository is a software repository that contains a wide variety of software packages, including many that are not included in the default Ubuntu and Debian repositories.
Now we need to add the ROS 2 GPG key with apt. The ROS 2 GPG key makes sure the software packages you are installing are from a trusted source.
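At the time of writing, the official installation guide downloads the key and adds the repository with commands like these (double-check the ROS 2 documentation for your distribution, as the exact steps occasionally change):

sudo apt update && sudo apt install curl -y
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key -o /usr/share/keyrings/ros-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) main" | sudo tee /etc/apt/sources.list.d/ros2.list > /dev/null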
Type the following command to install ROS 2 development tools.
sudo apt update && sudo apt install ros-dev-tools
Now update the apt repository:
sudo apt update -y
Upgrade the packages on your system to make sure you have the newest versions.
sudo apt upgrade -y
Install ROS 2
Now for the fun part. Here is where we get to install ROS 2 Jazzy.
Open a terminal window, and type this command:
sudo apt install ros-jazzy-desktop
Set Up the Environment Variables
Once Jazzy has finished installing, you need to set up the important environment variables. Environment variables are settings that tell your computer how to find and use ROS 2 commands and packages.
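Type this command in the terminal:

echo "source /opt/ros/jazzy/setup.bash" >> ~/.bashrc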
When you run this command, it appends the line source /opt/ros/jazzy/setup.bash to your ~/.bashrc file.
What does this do? Each time you open a new terminal window, you are starting what is called a bash session. The bash session needs to know what version of ROS 2 you are using.
By adding this line (source /opt/ros/jazzy/setup.bash) to your ~/.bashrc file, you ensure the necessary environment variables and paths for ROS 2 Jazzy are properly set up each time you open a new terminal window. This allows you to use ROS 2 commands and tools without having to manually run the setup.bash script every time.
For the changes to take effect, you now need to open a new terminal window, or you can type this command in the current terminal:
source ~/.bashrc
You can verify that line was added by typing:
sudo apt-get install gedit -y
gedit ~/.bashrc
Close the gedit window.
Check Your ROS 2 Version
Now let’s see what ROS 2 version we are using:
printenv ROS_DISTRO
You should see “jazzy”.
You can also type:
env | grep ROS
Finally, you can also type:
echo $ROS_DISTRO
Set up the system to manage ROS package dependencies.
sudo rosdep init
rosdep update
Install Gazebo and Other Useful Packages
Let’s install some other useful packages: pip (the Python package manager), Gazebo (a simulation application for robotics), and NumPy (a scientific computing library for Python).
sudo apt-get install python3 python3-pip -y
sudo apt-get install ros-${ROS_DISTRO}-ros-gz -y
sudo apt-get install python3-numpy
Test Your Gazebo Installation
If you want to run an example Gazebo simulation world now, you can type:
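For example (the environment variables in front of the command are explained below):

LIBGL_ALWAYS_SOFTWARE=1 QT_QPA_PLATFORM=xcb gz sim -v 4 shapes.sdf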
The environment variables LIBGL_ALWAYS_SOFTWARE=1 and QT_QPA_PLATFORM=xcb help you avoid the following nasty error, which happens by default when you try to launch Gazebo from a fresh ROS 2 Jazzy installation:
[GUI] [Err] [Ogre2RenderEngine.cc:1301] Unable to create the rendering window: OGRE EXCEPTION(3:RenderingAPIException): currentGLContext was specified with no current GL context in GLXWindow::create at ./.obj-x86_64-linux-gnu/gz_ogre_next_vendor-prefix/src/gz_ogre_next_vendor/RenderSystems/GL3Plus/src/windowing/GLX/OgreGLXWindow.cpp (line 165)
You can see the error if you open a new terminal window and type the command without those environment variables:
gz sim -v 4 shapes.sdf
Let’s make these environment variables permanent. Open a terminal window, and type these commands, one after the other:
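A minimal way to do this (a sketch; you could also edit ~/.bashrc directly in a text editor) is:

echo "export LIBGL_ALWAYS_SOFTWARE=1" >> ~/.bashrc
echo "export QT_QPA_PLATFORM=xcb" >> ~/.bashrc
source ~/.bashrc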
Close the simulation by typing CTRL + C in the terminal window.
Test Your ROS 2 Installation
Now that we have tested Gazebo, let’s test our ROS 2 installation by running some sample programs.
Open a terminal window, and type:
ros2 run demo_nodes_cpp talker
This command runs a pre-built program called “talker” that comes with ROS 2. The “talker” program publishes messages to the ROS 2 system in string format.
Open another terminal window, and type:
ros2 run demo_nodes_py listener
If your output looks similar to the images above, you have installed ROS 2 successfully.
To close these programs, press CTRL + C on your keyboard in both terminal windows.
So what did we just do here, and what do these two hello world programs we just ran have to do with real robots?
So imagine you’re building a robot that needs to navigate through a room. The “talker” program is like a sensor on the robot (e.g., a depth camera) that constantly sends out information about what it sees.
The “listener” program, on the other hand, receives the information that was published by the talker program. This listener program, in a real robot, could do something with that data like stop the wheels if an obstacle is detected by the camera.
In ROS 2, the talker program is what we call a publisher. The listener program is called a subscriber. Remember those two terms. They are the two most important terms in all of ROS 2.
A ROS 2 publisher sends data, and a subscriber receives data.
In future tutorials, we will create our own publishers and subscribers for actual robots.
Publishers and subscribers are the main way programs exchange information with each other inside a robot.
So, again, just remember this…publishers are programs that send data to other programs, and subscribers are programs that receive data from other programs. All these programs are usually written in either Python or C++.
In ROS 2, the programs that contain publishers and subscribers are called nodes. You just saw how to run a publisher node named talker and a subscriber node named listener.
The term node is the third most important term in all of ROS 2. So remember it because you will be using that term again and again over the course of your work with ROS 2.
To close out this tutorial, let me show you a cool program that enables you to show multiple terminal windows in a single window.
The program is called terminator.
Open a terminal window, and type:
sudo apt-get install terminator -y
Now type:
terminator
Right-click on the screen, and click “Split Horizontally”.
On the top panel, type:
ros2 run demo_nodes_cpp talker
On the bottom panel, type:
ros2 run demo_nodes_py listener
Now, press Enter in both terminals.
Here is what you should get:
Press CTRL + C in both terminal windows to close everything.
That’s it! Keep building! I’ll see you in the next tutorial.
In this tutorial, we’ll enhance our previous pick and place application by incorporating visual perception. We will still use the MoveIt Task Constructor for ROS 2. Here is what you will build by the end of this tutorial:
Building upon our earlier work, we’ll now use a depth camera to dynamically detect and locate objects in the scene, replacing the need for hardcoded object positions. This advancement will make our pick and place operation more flexible, robust, and adaptable to real-world scenarios.
We’ll modify our existing application to show how visual input can be integrated with the MoveIt Task Constructor framework. By the end of this tutorial, you’ll have created a vision-enhanced pick and place system that demonstrates the power of combining perception with sophisticated motion planning.
Here are the skills you will learn in this tutorial:
Learn how to integrate a simulated depth camera into Gazebo
Learn how to process point cloud data to detect and locate objects.
A point cloud is a collection of data points in 3D space that represent the surface of an object or environment.
Learn how to dynamically update the MoveIt planning scene based on visual information
Learn how to modify the MoveIt Task Constructor pipeline to use real-time object poses (positions and orientations)
Here is a high-level overview of what our enhanced program will do:
Set up the demo scene with randomly placed objects
Acquire and process point cloud data from the depth camera
Detect and localize objects in the scene
Update the planning scene with the detected objects
Define a pick sequence that includes:
Opening the gripper
Moving to a visually determined pre-grasp position
Approaching the target object
Closing the gripper
Lifting the object
Define a place sequence that includes:
Moving to a designated place location
Lowering the object
Opening the gripper
Retreating from the placed object
Plan the entire pick and place task using the updated scene information
Optionally execute the planned task
Provide detailed feedback on each stage of the process, including visual perception results
Real-World Use Cases
Same real-world use cases as in the previous tutorial.
All the code is here on my GitHub repository. Note that I am working with ROS 2 Iron, so the steps might be slightly different for other versions of ROS 2.
Install Cyclone DDS
First we need to install Cyclone DDS, and make it the default middleware to enable the different nodes to communicate with each other with low latency. You don’t need to understand how Cyclone DDS works.
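The standard way to do this in ROS 2 is to install the Cyclone DDS RMW package and set the RMW_IMPLEMENTATION environment variable:

sudo apt install ros-$ROS_DISTRO-rmw-cyclonedds-cpp -y
echo "export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp" >> ~/.bashrc
source ~/.bashrc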
cd ~/ros2_ws/src/mycobot_ros2/hello_mtc_with_perception/
Create a folder named launch.
mkdir launch
cd launch
Add your first launch file:
gedit run.launch.py
This launch file sets up MoveIt 2 for the myCobot robotic arm. It configures various paths, sets up MoveIt with settings for trajectory execution, kinematics, and planning pipelines, and launches the move_group node for motion planning. It also optionally starts RViz for visualization.
Save the file, and close it.
gedit demo.launch.py
This launch file is designed for running a demo of the MoveIt Task Constructor with perception capabilities. It sets up the necessary configurations, launches the required nodes, and provides parameters for the demo execution, including robot description, kinematics, and planning pipelines.
Save the file, and close it.
gedit robot.launch.py
This launch file is responsible for launching the robotic arm in the Gazebo simulation environment. It sets up the Gazebo world, starts the robot state publisher, initializes various controllers for the arm and gripper, and spawns the robot model.
Save the file, and close it.
gedit point_cloud_viewer.launch.py
This launch file is created for viewing point cloud data (.pcd files) in RViz. It starts a node to convert PCD files to point clouds, allows configuration of file paths and publishing intervals, and launches RViz with a specific configuration for visualizing the point cloud data.
Save the file, and close it.
gedit get_planning_scene_server.launch.py
This launch file starts the GetPlanningSceneServer node (we’ll go over this service in detail later in this tutorial), which is responsible for providing the current planning scene. It loads configuration parameters from a YAML file.
cd ~/ros2_ws/src/mycobot_ros2/hello_mtc_with_perception/
mkdir config
cd config
gedit mtc_node_params.yaml
This YAML file contains configuration parameters for the MoveIt Task Constructor node. It includes settings for robot control, object manipulation, motion planning, and various timeout and scaling factors. These parameters define the behavior of the pick and place task, including grasp generation, approach distances, and cartesian motion settings.
Save the file, and close it.
gedit get_planning_scene_server.yaml
This configuration file sets parameters for the GetPlanningSceneServer node. It includes settings for point cloud processing, plane and object segmentation, support surface detection, and various thresholds for filtering and clustering. These parameters are for processing sensor data and creating an accurate representation of the planning scene.
Save the file, and close it.
gedit ros_gz_bridge.yaml
This YAML file configures the bridge between ROS 2 and Gazebo topics. It specifies how sensor data (camera info and point clouds) should be translated between the Gazebo simulation environment and the ROS 2 ecosystem. The configuration ensures that simulated sensor data can be used by ROS 2 nodes for perception and planning tasks.
Save the file, and close it.
These configuration files are essential for setting up the perception pipeline, defining robot behavior, and ensuring proper communication between the simulation and ROS 2 environment in your pick and place task with perception.
Add the Source Code
Below I will guide you through how I set up the source code. If you want to learn more details about the implementation of each piece of code, go to the Appendix.
cd ~/ros2_ws/src/mycobot_ros2/hello_mtc_with_perception/src
gedit mtc_node.cpp
This file implements the main MoveIt Task Constructor (MTC) node for the pick and place task. It sets up the planning scene, creates the MTC task with various stages such as move to pick, grasp, lift, and place. The file also handles the execution of the task, coordinating the robot’s movements to complete the pick and place operation.
Save the file, and close it.
gedit cluster_extraction.cpp
This file contains functions for extracting clusters from a point cloud using a region growing algorithm. It helps in separating different objects in the scene by grouping points that likely belong to the same object. The extracted clusters can then be processed individually for object recognition or manipulation tasks.
Save the file, and close it.
gedit get_planning_scene_client.cpp
This file implements a test client for the GetPlanningScene service. It’s responsible for requesting the current planning scene, which includes information about objects in the environment.
Save the file, and close it.
gedit get_planning_scene_server.cpp
This file implements the server for the GetPlanningScene service. It processes point cloud and RGB image data to generate CollisionObjects for the MoveIt planning scene. These CollisionObjects represent the obstacles and objects in the robot’s environment, allowing for accurate motion planning and object manipulation.
Save the file, and close it.
gedit normals_curvature_and_rsd_estimation.cpp
This file contains functions to estimate normal vectors, curvature values, and Radius-based Surface Descriptor (RSD) values for each point in a point cloud. These geometric features help in understanding the shape and orientation of surfaces in the scene. The estimated features can be used for tasks such as object recognition, segmentation, and grasp planning.
Save the file, and close it.
gedit object_segmentation.cpp
This file does most of the heavy lifting. It implements object segmentation for the input 3D point cloud. It includes functions for fitting geometric primitives (cylinders and boxes) to point cloud data, which is used to identify and represent objects in the scene.
Save the file, and close it.
gedit plane_segmentation.cpp
This file contains functions to segment the support plane and objects from a given point cloud. It identifies the surface on which objects are placed, separating it from the objects themselves. This segmentation is important for tasks such as determining where objects can be placed.
cd ~/ros2_ws/src/mycobot_ros2/hello_mtc_with_perception/include
gedit cluster_extraction.h
Save the file, and close it.
gedit get_planning_scene_client.h
Save the file, and close it.
gedit normals_curvature_and_rsd_estimation.h
Save the file, and close it.
gedit object_segmentation.h
Save the file, and close it.
gedit plane_segmentation.h
Save the file, and close it.
These header files define the interfaces for the corresponding source files we created earlier. They contain function declarations, class definitions, and necessary include statements.
cd ~/ros2_ws/src/mycobot_ros2/hello_mtc_with_perception/
mkdir scripts
cd scripts
gedit pointcloud.sh
Save the file, and close it.
gedit robot.sh
Save the file, and close it.
chmod +x pointcloud.sh
chmod +x robot.sh
These scripts will help you launch the necessary components for viewing point clouds and running the robot simulation with Gazebo, RViz, and MoveIt 2.
The pointcloud.sh script is designed to launch the robot in Gazebo and then view a specific point cloud file.
The robot.sh script launches the full setup including the robot in Gazebo, RViz for visualization, and sets up MoveIt 2 for motion planning. The chmod commands at the end make both scripts executable, allowing you to run them directly from the terminal.
cd ~/ros2_ws/src/mycobot_ros2/hello_mtc_with_perception/
mkdir rviz
cd rviz
Add the RViz configuration file.
Create a Gazebo World
cd ~/ros2_ws/src/mycobot_ros2/mycobot_gazebo/worlds/
gedit cylinder_perception.world
Save the file, and close it.
Create a Cylinder in Gazebo
Now let’s create a cylinder that we will use for pick and place in Gazebo.
cd ~/ros2_ws/src/mycobot_ros2/mycobot_gazebo/models
mkdir hello_mtc_with_perception_cylinder
cd hello_mtc_with_perception_cylinder
gedit model.sdf
Save the file, and close it.
gedit model.config
Save the file, and close it.
Create an RGBD Camera
Let’s start by creating a URDF file for an RGBD camera. An RGBD camera captures both color (Red Green Blue) images and depth information, allowing it to see the world in 3D like human eyes do.
cd ~/ros2_ws/src/mycobot_ros2/hello_mtc_with_perception/
mkdir urdf
cd urdf
gedit mycobot_280.urdf.xacro
Save the file, and close it.
You can see that we have put the RGBD camera-specific information inside the <gazebo> tag so that it can be passed through to the SDF format needed by Gazebo.
We haven’t built our package yet (using “colcon build”), but it is helpful to know how to test point cloud data in Gazebo once everything has been built.
To confirm the point cloud data is being generated successfully in Gazebo, you would run these commands (you can come back to this section after you build the package later on in this tutorial):
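For example, after building and launching the robot, you could list the bridged topics and check that the point cloud topic is publishing. The topic name below is just a placeholder; the real name depends on your ros_gz_bridge.yaml configuration:

ros2 topic list | grep -i point
ros2 topic hz /camera_head/depth/points  # placeholder topic name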
Create a ROS 2 Service Interface for Point Cloud Processing for MoveIt Planning Scene Generation
Now let’s create a ROS 2 service that processes point cloud data to generate CollisionObjects for a MoveIt planning scene. This service will segment the input point cloud, fit primitive shapes to the segments, and create corresponding CollisionObjects. The service will also provide the necessary data for subsequent grasp generation should you decide to use a grasp generation strategy other than the one we will implement in this tutorial.
Here is a high-level description of what the service returns:
moveit_msgs::msg::PlanningSceneWorld: Contains CollisionObjects for all detected objects
sensor_msgs::msg::PointCloud2: Full scene point cloud
sensor_msgs::msg::Image: RGB image of the scene
std::string: ID of the target object in the PlanningSceneWorld
std::string: ID of the support surface in the PlanningSceneWorld
bool: Success flag
Create a Package
Let’s create a new package called mycobot_interfaces to store our custom service definition. This package will be used across your mycobot projects for custom message and service definitions.
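A sketch of how to create the package and the service definition follows. The field names below are illustrative (check the GitHub repository for the actual GetPlanningScene.srv), but the request/response structure matches the description above:

cd ~/ros2_ws/src/mycobot_ros2/
ros2 pkg create --build-type ament_cmake mycobot_interfaces
mkdir -p mycobot_interfaces/srv
gedit mycobot_interfaces/srv/GetPlanningScene.srv

# Request (illustrative field names)
string target_shape          # e.g. "cylinder" or "box"
float64[] target_dimensions  # approximate dimensions of the target object
---
# Response (illustrative field names)
moveit_msgs/PlanningSceneWorld scene_world
sensor_msgs/PointCloud2 full_cloud
sensor_msgs/Image rgb_image
string target_object_id
string support_surface_id
bool success

Remember to declare the service in the package’s CMakeLists.txt (rosidl_generate_interfaces) and add rosidl_default_generators to package.xml, then build and source the workspace before verifying.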
After creating our custom service interface, it’s important to verify that it has been created correctly. Follow these steps to confirm the interface creation:
Open a new terminal.
Navigate to your workspace:
cd ~/ros2_ws
Use the ros2 interface show command to display the content of our newly created service:
ros2 interface show mycobot_interfaces/srv/GetPlanningScene
Remember, whenever you make changes to your interfaces, you need to rebuild the package and source your workspace again for the changes to take effect.
Now you have created a custom service interface for planning scene generation. This service will take a target shape and dimensions as input, and return a planning scene world, full point cloud, RGB image, target object ID, support surface ID (e.g. a table), and a success flag.
To use this service in other packages, add mycobot_interfaces as a dependency in the package.xml of the hello_mtc_with_perception package where you want to use this service:
<depend>mycobot_interfaces</depend>
In your C++ code, you would include the generated header:
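For a service named GetPlanningScene in the mycobot_interfaces package, the generated header follows the standard ROS 2 naming convention:

#include "mycobot_interfaces/srv/get_planning_scene.hpp"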
You can then create service clients or servers using this interface.
This custom service interface provides a clear contract for communication between your point cloud processing node and other nodes in your system that need planning scene information.
Edit CMakeLists.txt
cd ~/ros2_ws/src/mycobot_ros2/hello_mtc_with_perception/
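gedit CMakeLists.txt

Add the new source files and the additional dependencies (such as mycobot_interfaces and PCL) to this file; the complete CMakeLists.txt is in the GitHub repository. Save the file, and close it. Then build and source the workspace:

cd ~/ros2_ws/
colcon build
source ~/.bashrc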
(OR type source ~/ros2_ws/install/setup.bash if you haven’t set up your ~/.bashrc file to source your workspace automatically.)
Launch the Code
Finally…the moment you have been waiting for.
Run the following commands (each command in a different terminal window) to launch everything:
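The launch file names below come from the files we created earlier in this tutorial; if your setup differs, adjust accordingly:

# Terminal 1: the robot in Gazebo
ros2 launch hello_mtc_with_perception robot.launch.py
# Terminal 2: the planning scene service
ros2 launch hello_mtc_with_perception get_planning_scene_server.launch.py
# Terminal 3: MoveIt 2 and the MTC pick and place demo
ros2 launch hello_mtc_with_perception demo.launch.py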
You can also run the robot.sh file we created earlier, which runs those commands for you.
If you want to visualize the raw 3D point cloud data, you can run the pointcloud.sh script. Make sure the files are in the appropriate location.
That’s it!
Deep Learning Alternatives for MoveIt Planning Scene Generation
In the service we developed to generate the collision objects for the planning scene, we fit primitive shapes to the 3D point cloud generated by our RGBD camera. We could have also generated the MoveIt planning scene using modern deep learning methods (I won’t go through this in this tutorial).
For example, you could use a package like isaac_ros_foundationpose to create 3D bounding boxes around objects in the scene and then add those boxes as collision objects. The advantage of this technique is that you have the pose of the object as well as the class information (e.g. mug, plate, bowl, etc.)
Here is what a Detection3D.msg message would look like if you were to type ‘ros2 topic echo <name_of_detection3d_topic>’:
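The structure below is that of a vision_msgs/msg/Detection3D message; the values are made up for illustration, and the pose covariance arrays are omitted for brevity:

header:
  stamp:
    sec: 1718055600
    nanosec: 0
  frame_id: camera_link
results:
- hypothesis:
    class_id: mug
    score: 0.98
  pose:
    pose:
      position:
        x: 0.41
        y: -0.05
        z: 0.12
      orientation:
        x: 0.0
        y: 0.0
        z: 0.0
        w: 1.0
bbox:
  center:
    position:
      x: 0.41
      y: -0.05
      z: 0.12
    orientation:
      x: 0.0
      y: 0.0
      z: 0.0
      w: 1.0
  size:
    x: 0.08
    y: 0.08
    z: 0.11
id: ''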
Appendix: ROS 2 Service: Generating a MoveIt Planning Scene from a 3D Point Cloud
Overview
The method I used for generating the planning scene was inspired by the following paper:
Goron, Lucian Cosmin, et al. “Robustly segmenting cylindrical and box-like objects in cluttered scenes using depth cameras.” ROBOTIK 2012; 7th German Conference on Robotics. VDE, 2012.
If you get a chance, I highly recommend you read this entire paper. It provides a robust methodology for generating object primitives (i.e. boxes and cylinders) from a 3D point cloud scene even if the depth camera can only see the objects partially (i.e. from one side).
Let’s go through the technical details of the ROS 2 service we created. The purpose of the service is to process point cloud and RGB image data to generate CollisionObjects for a MoveIt planning scene. I will cover how we segmented the input point cloud, fit primitive shapes to the segments, created corresponding CollisionObjects, and provided the necessary data for subsequent grasp generation.
moveit_msgs::msg::PlanningSceneWorld: Contains CollisionObjects for all detected objects
sensor_msgs::msg::PointCloud2: Full scene point cloud
sensor_msgs::msg::Image: RGB image of the scene
std::string: ID of the target object in the PlanningSceneWorld
std::string: ID of the support surface in the PlanningSceneWorld
bool: Success flag
Implementation Details
Point Cloud Preprocessing
Estimate the support plane for the objects in the scene and extract the points in the point cloud that are above the support plane (plane_segmentation.cpp)
The first step of the algorithm identifies the points that make up the flat surface (like a table) that objects are sitting on.
Input
Point cloud (pcl::PointCloud<pcl::PointXYZRGB>)
Output
Point cloud for the support plane
Point cloud for the points above the detected support plane.
Process
Estimate surface normals
A normal is a vector perpendicular to the surface at a given point. It provides information about the local orientation of the surface
For each point in the cloud, estimate the normal by fitting a plane to its k-nearest neighbors.
Store the computed normals, keeping them aligned with the original point cloud.
Identify potential support surfaces
Use the computed surface normals to find approximately horizontal surfaces.
Group points whose normals are approximately parallel to the world Z-axis (vertical). These points likely belong to horizontal surfaces like tables.
Perform Euclidean clustering on these points to get support surface candidate clusters.
Store the support surface candidate clusters
For each support surface candidate cluster:
Use RANSAC to fit a plane model
Validate the plane model based on the robot’s workspace limits
If cropping is enabled:
Check if the plane is within the cropped area:
If cropping is disabled:
Skip the position check
Check if the plane is close to z=0:
Define z_tolerance (e.g., 0.05 meters)
Ensure the absolute value of plane_center.z is less than z_tolerance
Verify the plane is approximately horizontal:
Define up_vector as (0, 0, 1)
Calculate dot_product between plane_normal and up_vector
Define angle_tolerance based on acceptable tilt (e.g., cos of 2.5 degrees)
Ensure dot_product is greater than angle_tolerance
If all applicable conditions are met, consider the plane model valid; otherwise, reject it
Select the best fitting plane as the support surface
From the set of validated plane models, choose the best candidate based on the following criteria:
Inlier count:
Define inlier_count for each plane model as the number of points that fit the model within a specified distance threshold
Prefer plane models with higher inlier_count, as they represent surfaces with more supporting points
Plane size:
Calculate the area of each plane model by finding the 2D bounding box of inlier points projected onto the plane
Prefer larger planes, as they are more likely to represent the main support surface
Distance to z=0:
Calculate z_distance as the absolute distance of the plane’s center to z=0
Prefer planes with smaller z_distance
Orientation accuracy:
Calculate orientation_score as the dot product between the plane’s normal and the up vector (0, 0, 1)
Prefer planes with higher orientation_score (closer to being perfectly horizontal)
Combine these factors using a weighted scoring system:
Define weights for each factor (e.g., w_inliers, w_size, w_distance, w_orientation)
Calculate a total_score for each plane model
Select the plane model with the highest total_score as the best fitting plane
Store the selected best_plane_model for further use in object segmentation
Return the results:
Create support_plane_cloud:
Extract all inlier points from the original point cloud that belong to the best_plane_model
Store these points in support_plane_cloud
Create objects_cloud:
For each point in the original point cloud:
If the point is above the best_plane_model (use the plane equation to check)
And if cropping is enabled, the point is within the crop boundaries
Add the point to objects_cloud
Return both support_plane_cloud and objects_cloud
References:
R. Mario and V. Markus, “Grasping of Unknown Objects from a Table Top,” in Workshop on Vision in Action: Efficient strategies for cognitive agents in complex environments, 2008.
R. B. Rusu, N. Blodow, Z. C. Marton, and M. Beetz, “Close-range Scene Segmentation and Reconstruction of 3D Point Cloud Maps for Mobile Manipulation in Human Environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis, MO, USA, October 2009.
Estimate normal vectors, curvature values, and radius descriptor values for each point in the point cloud (normals_curvature_and_rsd_estimation.cpp)
Input
Point cloud (of type pcl::PointCloud<pcl::PointXYZRGB>)
Output
Point Cloud with Normal Vectors, Curvature Values, and Radius Descriptor Values
Each point has an associated 3D vector representing its normal vector (a 3D vector indicating the direction the surface is facing at that point)
Each point also has an additional value representing how curved the surface is at that point.
Each point has Radius-based Surface Descriptor (RSD) values (the minimum and maximum surface radius that can be fitted to the point’s neighborhood). This step determines whether a point belongs to a linear or circular surface.
Process
This approach provides a method for handling boundary points and refining normal estimation using MaximumLikelihoodSampleConsensus (MLESAC). It estimates normals, curvature, and RSD values for every point, using a more conservative approach for points identified as potentially being on a boundary.
For each point p in the cloud:
Find neighbors:
Identify the k nearest neighboring points around p (k_neighbors as input parameter)
These k points form the “neighborhood” of p.
Check if the point is a boundary point by seeing if it has fewer than k_neighbors neighbors.
For boundary points, adjust the neighborhood size to use a smaller number of neighbors (up to min_boundary_neighbors or whatever is available).
Estimate the normal vector:
Perform initial Principal Component Analysis (PCA) on the neighborhood:
This captures how the neighborhood points are spread out around point p.
The eigenvector corresponding to the smallest eigenvalue is an initial normal estimate.
Use Maximum Likelihood Estimation SAmple Consensus (MLESAC) to refine the local plane estimate:
MLESAC aims to refine the plane fitting by robustly estimating the best plane model and discarding outliers.
It uses the initial normal estimate as a starting point.
If MLESAC fails, fall back to the initial normal estimate.
Compute a weighted covariance matrix:
Assign weights to neighbor points based on their distance to the estimated local plane
Points closer to the plane and inliers from MLESAC get higher weights
This step helps to reduce the influence of outliers
Perform PCA on this weighted covariance matrix:
Compute eigenvectors and eigenvalues
The eigenvector corresponding to the smallest eigenvalue is the final normal estimate
Compute curvature values:
Use the eigenvalues from PCA on the weighted covariance matrix to calculate the curvature values according to the following formula: curvature = λ₀ / (λ₀ + λ₁ + λ₂) where λ₀ ≤ λ₁ ≤ λ₂
Use pcl::RSDEstimation to compute the minimum and maximum surface radius that can be fitted to the point’s neighborhood.
This step determines whether a point belongs to a linear or circular surface.
Output creation: Point cloud that includes:
The original XYZ coordinates
The RGB color information
The estimated normal vector for each point
The estimated curvature value for each point
The RSD values (r_min and r_max) for each point
Key Parameters
k_neighbors: Number of nearest neighbors to consider
max_plane_error: Maximum allowed error for plane fitting in MLESAC
max_iterations: Maximum number of iterations for MLESAC
min_boundary_neighbors: Minimum number of neighbors to consider for boundary points
rsd_radius: Radius to use for RSD estimation
References
R. B. Rusu, Z. C. Marton, N. Blodow, M. Dolha, and M. Beetz, “Towards 3D Point Cloud Based Object Maps for Household Environments,” Robotics and Autonomous Systems Journal (Special Issue on Semantic Knowledge in Robotics), vol. 56, no. 11, pp. 927–941, 30 November 2008.
Z. C. Marton, D. Pangercic, N. Blodow, and M. Beetz, “Combined 2D-3D Categorization and Classification for Multimodal Perception Systems,” International Journal of Robotics Research, 2011.
Cluster the point cloud into connected components using region growing based on nearest neighbors – cluster_extraction.cpp
Input
Point cloud (of type pcl::PointCloud<PointXYZRGBNormalRSD>::Ptr) – Point cloud with XYZ, RGB, normal vectors, curvature values, and Radius-based Surface Descriptor (RSD) values
Output
Vector of point cloud clusters, where each cluster is a separate point cloud containing:
Points that are close to each other in 3D space.
Each point retains its original attributes (xyz, RGB, normal, curvature, RSD values)
Process
Create a KdTree object for efficient nearest neighbor searches
Extract normals from the input cloud into a separate pcl::PointCloud<pcl::Normal> object
Create a RegionGrowing object and set its parameters:
Minimum and maximum cluster size
Number of nearest neighbors to consider
Smoothness threshold (angle threshold for neighboring normals)
Curvature threshold
Apply the region growing algorithm to extract clusters
For each extracted cluster:
Create a new point cloud
Copy points from the input cloud to the new cluster cloud based on the extracted indices
Set the cluster’s width, height, and is_dense properties
Key Points
Purpose: Reduce the search space for subsequent segmentation by grouping nearby points
Note: In cluttered scenes, objects that are touching or very close may end up in the same cluster
This step does not perform final object segmentation, but prepares data for later segmentation steps
The algorithm uses both geometric properties (normals) and curvature information for clustering
Smoothness threshold is converted from degrees to radians in the implementation
Parameters
min_cluster_size: Minimum number of points that a cluster needs to contain
max_cluster_size: Maximum number of points that a cluster can contain
smoothness_threshold: Maximum angle difference between normals (in degrees, converted to radians internally)
curvature_threshold: Maximum difference in curvature between neighboring points
nearest_neighbors: Number of nearest neighbors to consider for region growing
Additional Features
The implementation calculates and logs various statistics for each cluster:
Average RSD (Radius-based Surface Descriptor) values
References
R. B. Rusu, N. Blodow, Z. C. Marton, and M. Beetz, “Close-range Scene Segmentation and Reconstruction of 3D Point Cloud Maps for Mobile Manipulation in Human Environments,” in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), St. Louis, MO, USA, October 2009.
Segmentation of Clutter – object_segmentation.cpp
The input into this step is the vector of point cloud clusters that was generated during the previous step.
Input
Vector of point cloud clusters, where each cluster is a separate point cloud containing:
Points that are close to each other in 3D space.
Each point retains its original attributes (xyz, RGB, normal, curvature, RSD values)
Output
moveit_msgs::CollisionObject[] collision_objects
Outer Loop
The entire process below occurs for each point cloud cluster in the input vector.
Process
Project the Point Cloud Cluster Onto the Surface Plane
For each point (x,y,z,RGB,normal, curvature, RSDmin and RSDmax) in the 3D cluster:
Project the point onto the surface plane (assumed to be z=0). This creates a point (x, y).
Maintain a mapping between each 2D projected point and its original 3D point
Initialize Two Separate Parameter Spaces
Create a vector to store line models for the Hough transform
Each line model stores rho, theta, votes, and inlier count
Create a 3D Hough parameter space for circle models
Dimensions: center_x, center_y, radius
Understanding Hough Space
Hough space is a parameter space used for detecting specific shapes or models. Each point in Hough space represents a possible instance of the shape you’re looking for (in this case, circles or lines).
For lines:
Instead of y = mx + b, a rho-theta parameterization of the line is used: x·cos(theta) + y·sin(theta) = rho
rho is the perpendicular distance from the line to the origin, and theta is the angle that perpendicular makes with the x-axis
This parameterization allows for vertical lines and avoids infinite slopes
For circles:
The Hough space is 3D: (center_x, center_y, radius)
Each point in this space represents a possible circle in the original image
The voting process in Hough space helps find shapes even if they’re partially hidden or broken in the original image.
Inner Loop (repeated num_iterations times)
RANSAC Model Fitting
Line Fitting
Use RANSAC to fit a 2D line to the projected points. This is done to identify potential box-like objects.
Circle Fitting
Use RANSAC to fit a 2D circle to the projected points. This is done to identify cylindrical-like objects.
Filter Inliers
For the fitted models from the RANSAC Model Fitting step, apply a series of filters to refine the corresponding set of inlier points.
Circle Filtering
Euclidean Clustering
Use pcl::EuclideanClusterExtraction to group inliers into clusters.
Accept models with a maximum of two clusters (representing complete circles or two visible arcs of the same cylinder).
Reject models with more than the specified maximum number of clusters.
Height Consistency (for two clusters only)
For models with exactly two clusters, check if the height difference between clusters is within the specified tolerance.
This ensures that the fitted circle represents a cross-section of a single, upright cylindrical object.
Curvature Filtering
Keep inlier points with high curvature (above the specified threshold).
Remove inlier points with low curvature.
Radius-based Surface Descriptor (RSD) Filtering
Compare the minimum surface radius (r_min) of each inlier point to the radius of the fitted circle.
Keep points where the difference is within the specified tolerance.
Surface Normal Filtering
Calculate the angle between the point’s normal (projected onto the xy-plane) and the vector from the circle center to the point.
Keep points where this angle is within the specified threshold.
Line Filtering
Euclidean Clustering
Use pcl::EuclideanClusterExtraction to group inliers into clusters.
Accept models with only one cluster.
Reject models with more than the specified maximum number of clusters.
Curvature Filtering
Keep inlier points with low curvature (below the specified threshold).
Remove inlier points with high curvature.
Model Validation
For both circle and line models, check how many inlier points remain after filtering.
If the number of remaining inliers for the model exceeds the threshold:
The model is considered valid.
Add the model to the appropriate Hough parameter space.
If the number of remaining inliers is below the threshold:
The model is rejected.
An additional validation step compares inlier counts between circle and line models, keeping only the model type with more inliers.
Add Model to the Hough Space
If a model is valid:
For circles:
Add a vote to the 3D Hough space (center_x, center_y, radius bins)
For lines:
Add the line model (rho, theta, votes, inlier count) to the line models vector
rho is the perpendicular distance from the origin (0, 0) to the line
theta is the angle formed between the x-axis and this perpendicular vector (positive theta is counter-clockwise measured from x-axis)
Remove inliers and continue
Remove the inliers of valid models from the working point cloud and continue the inner loop until insufficient points remain or no valid models are found.
Cluster Parameter Spaces
After all iterations on a point cloud cluster:
Cluster line models based on similarity in rho and theta.
Cluster circle models in the 3D Hough space.
Select Model with Most Votes
Compare the top line cluster with the top circle cluster. Select the model type (line or circle) with the highest vote count.
Estimate 3D Shape
Using the parameters from the highest-vote cluster, fit the selected solid geometric primitive model type (box or cylinder) to the original 3D point cloud data.
Cylinder Fitting
Use the 2D circle fit for radius and (x,y) center position.
Set cylinder bottom at z=0.
Set top height to the highest point in the cluster.
Calculate dimensions and position of the cylinder.
Box Fitting
Compute box orientation from the line angle.
Project points onto the line direction and perpendicular direction to determine length and width.
Use z-values of points to determine height.
Calculate dimensions and position of the box.
Add Shape as Collision Object
The box or cylinder is added as a collision object (moveit_msgs::CollisionObject) to the planning scene with a unique id (e.g. box_0, box_1, cylinder_0, cylinder_1, etc).
Move to Next Cluster
Proceed to the next point cloud cluster in the vector (i.e., move to the next iteration of the Outer Loop).
Key Parameters
num_iterations: Number of RANSAC iterations per cluster
inlier_threshold: Minimum number of inliers for a model to be considered valid
ransac_distance_threshold: Maximum distance for a point to be considered an inlier in RANSAC
ransac_max_iterations: Maximum number of iterations for RANSAC
circle_min_cluster_size, line_min_cluster_size: Minimum size for Euclidean clusters
circle_max_clusters, line_max_clusters: Maximum number of allowed clusters
circle_height_tolerance: Maximum allowed height difference between two circle clusters
circle_curvature_threshold, line_curvature_threshold: Curvature thresholds for filtering
circle_radius_tolerance: Tolerance for RSD filtering
circle_normal_angle_threshold: Maximum angle between normal and radial vector for circles
circle_cluster_tolerance, line_cluster_tolerance: Distance threshold for Euclidean clustering
line_rho_threshold, line_theta_threshold: Thresholds for clustering line models in the parameter space
Get Planning Scene Server (get_planning_scene_server.cpp)
This code brings together all the previously developed components into a single, unified ROS 2 service. It integrates functions for point cloud processing, plane segmentation, object clustering, and shape fitting into a cohesive workflow. The service processes point cloud and RGB image data to generate a MoveIt planning scene, which can be called by a node using the MoveIt Task Constructor to obtain environment information for manipulation tasks. The core functionality is encapsulated in the handleService method of the GetPlanningSceneServer class.
handleService Method Walkthrough
Initialize Response: The method starts by setting the success flag of the response to false.
Check Data Availability: It verifies that both point cloud and RGB image data are available. If either is missing, it logs an error and returns.
Validate Input Parameters and Prepare Point Cloud:
Checks if target_shape and target_dimensions are valid
Transforms the point cloud to the target frame
Applies optional cropping to the point cloud
Convert PointCloud2 to PCL Point Cloud: Converts the ROS PointCloud2 message to a PCL point cloud for further processing.
Segment Support Plane and Objects: Uses the segmentPlaneAndObjects function to separate the support surface from objects in the scene.
Create CollisionObject for Support Surface: Generates a CollisionObject representing the support surface and adds it to the planning scene.
Estimate Normals, Curvature, and RSD: Calls estimateNormalsCurvatureAndRSD to calculate geometric features for each point in the object cloud.
Extract Clusters: Uses extractClusters to identify distinct object clusters in the point cloud.
Get Collision Objects: Calls segmentObjects to convert point cloud clusters into collision objects for the planning scene.
Identify Target Object: Searches through the collision objects to find one matching the requested target shape and dimensions.
Assemble PlanningSceneWorld: Combines all collision objects into a complete PlanningSceneWorld structure.
Fill the Response:
Sets the full point cloud, RGB image, target object ID, and support surface ID in the response
Sets the success flag to true if all critical steps were successful
Logs detailed information about the response contents
This method transforms raw sensor data into a structured planning scene that can be used by the MoveIt Task Constructor for motion planning and manipulation tasks.
Congratulations on reaching the end! Keep building!
In this tutorial, we’ll explore how to implement inverse kinematics (IK) with clearance cost optimization using the MoveIt Task Constructor. We’ll create an application from scratch that demonstrates how to plan movements for a robotic arm while considering obstacle clearance. The output of your application will provide detailed insights into the planning process, including the number of solutions found and the performance of each stage.
Here is what your final output will look like (I am flipping back and forth between the two successfully found inverse kinematics solutions):
On a high level, your program will demonstrate a sophisticated approach to motion planning that does the following:
Sets up a scene with the mycobot_280 robot and a spherical obstacle
Defines a target pose for the robot’s gripper (end-effector)
Uses the ComputeIK stage to find valid arm configurations reaching the target
Applies a clearance cost term to favor solutions that keep the robot farther from obstacles
Uses ROS 2 parameters to control the behavior of the clearance cost calculation
While OMPL and Pilz are motion planners that generate full trajectories, they rely on IK solutions like those computed in this code to determine feasible goal configurations for the robot. In a complete motion planning pipeline, this IK solver would typically be used to generate goal states, which OMPL or Pilz would then use to plan full, collision-free paths from the robot’s current position to the desired end-effector pose.
Real-World Use Cases
The code you will develop in this tutorial can serve as a foundation for various practical applications:
Robotic Assembly in Cluttered Environments
Generate arm configurations that avoid collisions with nearby parts or fixtures
Optimize for paths that maintain maximum clearance from obstacles
Bin Picking and Sorting
Plan motions that safely navigate around the edges of bins and other items
Minimize the risk of collisions in tight spaces
Collaborative Robot Operations
Ensure the robot maintains a safe distance from human work areas
Dynamically adjust paths based on changing obstacle positions
Quality Inspection Tasks
Generate smooth, collision-free paths for sensors or cameras to inspect parts
Optimize for viewpoints that balance clearance and inspection requirements
By the end of this tutorial, you’ll have a solid understanding of how to implement IK solutions with clearance cost optimization in your motion planning tasks. This approach will make your robotic applications more robust, efficient, and capable of operating safely in complex environments.
Let’s dive into the code and explore how to build this advanced motion planning application!
All the code is here on my GitHub repository. Note that I am working with ROS 2 Iron, so the steps might be slightly different for other versions of ROS 2.
Create the Code
Open a new terminal window, and type:
cd ~/ros2_ws/src/mycobot_ros2/hello_moveit_task_constructor/src/
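Now create the source file for the demo:

gedit ik_clearance_cost.cpp

Add the following code to the file: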
/**
* @file ik_clearance_cost.cpp
* @brief Demonstrates using MoveIt Task Constructor for motion planning with collision avoidance.
*
* This program sets up a motion planning task for a mycobot_280 robot arm using MoveIt Task Constructor.
* It creates a scene with an obstacle, computes inverse kinematics (IK) solutions, and plans a motion
* while considering clearance from obstacles.
*
* @author Addison Sears-Collins
* @date August 19, 2024
*/
#include <rclcpp/rclcpp.hpp>
#include <moveit/planning_scene/planning_scene.h>
#include <moveit/task_constructor/task.h>
#include <moveit/task_constructor/stages/fixed_state.h>
#include <moveit/task_constructor/stages/compute_ik.h>
#include <moveit/task_constructor/cost_terms.h>
#include "ik_clearance_cost_parameters.hpp"
// Use the moveit::task_constructor namespace for convenience
using namespace moveit::task_constructor;
/* ComputeIK(FixedState) */
int main(int argc, char** argv) {
// Initialize ROS 2
rclcpp::init(argc, argv);
// Create a ROS 2 node
auto node = rclcpp::Node::make_shared("ik_clearance_cost_demo");
// Create a logger
auto logger = node->get_logger();
RCLCPP_INFO(logger, "Starting IK Clearance Cost Demo");
// Start a separate thread to handle ROS 2 callbacks
std::thread spinning_thread([node] { rclcpp::spin(node); });
// Create a parameter listener for IK clearance cost parameters
const auto param_listener = std::make_shared<ik_clearance_cost_demo::ParamListener>(node);
const auto params = param_listener->get_params();
RCLCPP_INFO(logger, "Parameters loaded: cumulative=%s, with_world=%s",
params.cumulative ? "true" : "false",
params.with_world ? "true" : "false");
// Create a Task object to hold the planning stages
Task t;
t.stages()->setName("clearance IK");
RCLCPP_INFO(logger, "Task created: %s", t.stages()->name().c_str());
// Wait for 500 milliseconds to ensure ROS 2 is fully initialized
rclcpp::sleep_for(std::chrono::milliseconds(500));
// Load the robot model (mycobot_280)
t.loadRobotModel(node);
assert(t.getRobotModel()->getName() == "mycobot_280");
RCLCPP_INFO(logger, "Robot model loaded: %s", t.getRobotModel()->getName().c_str());
// Create a planning scene
auto scene = std::make_shared<planning_scene::PlanningScene>(t.getRobotModel());
RCLCPP_INFO(logger, "Planning scene created");
// Set the robot to its default state
auto& robot_state = scene->getCurrentStateNonConst();
robot_state.setToDefaultValues();
RCLCPP_INFO(logger, "Robot state set to default values");
// Set the arm to its "ready" position
[[maybe_unused]] bool found =
robot_state.setToDefaultValues(robot_state.getJointModelGroup("arm"), "ready");
assert(found);
RCLCPP_INFO(logger, "Arm set to 'ready' position");
// Create an obstacle in the scene
moveit_msgs::msg::CollisionObject co;
co.id = "obstacle";
co.primitives.emplace_back();
co.primitives[0].type = shape_msgs::msg::SolidPrimitive::SPHERE;
co.primitives[0].dimensions.resize(1);
co.primitives[0].dimensions[0] = 0.1;
co.header.frame_id = t.getRobotModel()->getModelFrame();
co.primitive_poses.emplace_back();
co.primitive_poses[0].orientation.w = 1.0;
co.primitive_poses[0].position.x = -0.183;
co.primitive_poses[0].position.y = -0.14;
co.primitive_poses[0].position.z = 0.15;
scene->processCollisionObjectMsg(co);
RCLCPP_INFO(logger, "Obstacle added to scene: sphere at position (%.2f, %.2f, %.2f) with radius %.2f",
co.primitive_poses[0].position.x, co.primitive_poses[0].position.y,
co.primitive_poses[0].position.z, co.primitives[0].dimensions[0]);
// Create a FixedState stage to set the initial state
auto initial = std::make_unique<stages::FixedState>();
initial->setState(scene);
initial->setIgnoreCollisions(true);
RCLCPP_INFO(logger, "FixedState stage created");
// Create a ComputeIK stage for inverse kinematics
auto ik = std::make_unique<stages::ComputeIK>();
ik->insert(std::move(initial));
ik->setGroup("arm");
// Set the target pose
ik->setTargetPose(Eigen::Translation3d(-.183, 0.0175, .15) * Eigen::AngleAxisd(M_PI/4, Eigen::Vector3d::UnitX()));
ik->setTimeout(1.0);
ik->setMaxIKSolutions(100);
// Set up the clearance cost term
auto cl_cost{ std::make_unique<cost::Clearance>() };
cl_cost->cumulative = params.cumulative;
cl_cost->with_world = params.with_world;
ik->setCostTerm(std::move(cl_cost));
RCLCPP_INFO(logger, "Clearance cost term added to ComputeIK stage");
// Add the ComputeIK stage to the task
t.add(std::move(ik));
RCLCPP_INFO(logger, "ComputeIK stage added to task");
// Attempt to plan the task
try {
RCLCPP_INFO(logger, "Starting task planning");
// Plan the task
moveit::core::MoveItErrorCode error_code = t.plan(0);
// Log the planning result
if (error_code == moveit::core::MoveItErrorCode::SUCCESS) {
RCLCPP_INFO(logger, "Task planning completed successfully");
RCLCPP_INFO(logger, "Found %zu solutions", t.numSolutions());
// Use printState to log the task state
std::ostringstream state_stream;
t.printState(state_stream);
RCLCPP_INFO(logger, "Task state:\n%s", state_stream.str().c_str());
} else {
RCLCPP_ERROR(logger, "Task planning failed with error code: %d", error_code.val);
// Use explainFailure to log the reason for failure
std::ostringstream failure_stream;
t.explainFailure(failure_stream);
RCLCPP_ERROR(logger, "Failure explanation:\n%s", failure_stream.str().c_str());
}
// Log a simple summary of each stage
RCLCPP_INFO(logger, "Stage summary:");
for (size_t i = 0; i < t.stages()->numChildren(); ++i) {
const auto* stage = t.stages()->operator[](i);
RCLCPP_INFO(logger, " %s: %zu solutions, %zu failures",
stage->name().c_str(), stage->solutions().size(), stage->failures().size());
}
} catch (const InitStageException& e) {
RCLCPP_ERROR(logger, "InitStageException caught during task planning: %s", e.what());
}
RCLCPP_INFO(logger, "IK Clearance Cost Demo completed");
// Wait for the spinning thread to finish
spinning_thread.join();
return 0;
}
Save the file, and close it.
Add the Parameters
Now let’s create a parameter file in the same directory as our source code.
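This parameter file feeds generate_parameter_library, which produces the ik_clearance_cost_parameters.hpp header included by the C++ code.

gedit ik_clearance_cost_parameters.yaml

A minimal sketch of its contents, assuming the file name above (the actual file is in the GitHub repository):

ik_clearance_cost_demo:
  cumulative:
    type: bool
    default_value: false
    description: "Sum clearance over multiple robot points instead of using only the closest point."
  with_world:
    type: bool
    default_value: true
    description: "Consider all objects in the planning scene, not just self-collisions."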
The “cumulative” parameter determines how the robot measures its closeness to obstacles.
When set to false, the robot only considers its single closest point to any obstacle.
When set to true, it considers the distance of multiple points on the robot to obstacles, adding these distances together.
This “cumulative” approach provides a more thorough assessment of the robot’s overall proximity to obstacles, potentially leading to more cautious movements.
The “with_world” parameter determines what the robot considers as obstacles when planning its movements.
When set to true, the robot takes into account all known objects in its environment – this could include tables, chairs, walls, or any other obstacles that have been mapped or sensed. It’s like the robot is aware of its entire surroundings.
When set to false, the robot might only consider avoiding collisions with itself (self-collisions) or a specific subset of objects, ignoring the broader environment.
Save the file, and close it.
Build the Code
cd ~/ros2_ws/
colcon build
source ~/.bashrc
(OR type source ~/ros2_ws/install/setup.bash if you haven’t set up your ~/.bashrc file to source your workspace automatically.)
Launch
Open two terminal windows, and run the following commands to launch our standard MoveIt 2 environment:
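The specific launch and run commands depend on the hello_moveit_task_constructor package on GitHub; the names below are assumptions based on that package layout, so check the repository for the exact commands:

# Terminal 1: the robot and the MoveIt 2 / RViz environment (launch file name is an assumption)
ros2 launch hello_moveit_task_constructor demo.launch.py
# Terminal 2: the IK clearance cost demo node (executable name assumed from the source file name)
ros2 run hello_moveit_task_constructor ik_clearance_cost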
The “Motion Planning Tasks” panel in RViz displays the structure and outcomes of our IK clearance cost task. The panel shows a hierarchical view with “Motion Planning Tasks” at the root, followed by “clearance IK”.
Under “clearance IK“, two stages are visible:
“IK”: This represents the ComputeIK stage where inverse kinematics solutions are generated.
“initial state”: This corresponds to the FixedState stage that sets the initial robot configuration.
The second column shows green checkmarks and numbers indicating the quantity of successful solutions for each task component. The image reveals that 2 solutions were found for the overall “clearance IK” task, both originating from the “IK” stage.
The “time” column displays the computational time for each component. For the “IK” stage, we see a value of 1.0055 seconds, indicating the duration of the inverse kinematics calculations.
The “cost” column is particularly noteworthy in this context. For the successful IK solutions, we observe a cost value of 66.5330. This cost is directly related to the clearance cost term we incorporated into our ComputeIK stage.
The “comment” column provides additional context for the solutions. It displays the clearance distances between the obstacle and a specific robot part, “gripper_left1”. This information quantifies how the robot positions itself relative to the obstacle in the computed solutions.
Terminal Window – Planning Results
Analyzing the terminal output of our IK clearance cost demo:
The mycobot_280 robot model was loaded successfully.
A planning scene was generated, and the robot was positioned in its ‘ready’ configuration.
An obstacle, represented by a sphere, was introduced to the scene at coordinates (-0.18, -0.14, 0.15) with a 0.10 radius.
The FixedState and ComputeIK stages were established and incorporated into the task.
Analyzing the terminal output of our IK clearance cost demo, we see the following task structure:
This structure provides insights into the flow of solutions through our planning task:
clearance IK (top level):
2 solutions were propagated backward
2 solutions were propagated forward
2 solutions were ultimately generated
IK stage:
2 solutions were generated at this stage
2 solutions were propagated backward
2 solutions were propagated forward
initial state:
1 solution was generated at this stage
1 solution was propagated backward
1 solution was propagated forward
This output demonstrates the bidirectional nature of the planning process in the MoveIt Task Constructor. The initial state provides a starting point, which is then used by the IK stage to generate solutions. These solutions are propagated both forward and backward through the planning pipeline.
The fact that we see two solutions at the IK stage indicates that our ComputeIK stage, incorporating the clearance cost term, successfully found two distinct inverse kinematics solutions that satisfied our constraints. These solutions maintained sufficient clearance from the obstacle while reaching the target pose.
The propagation of these two solutions both forward and backward means they were feasible within the context of both the initial state and the overall task requirements. This bidirectional flow helps ensure that the generated solutions are consistent and achievable given the robot’s starting configuration and the task’s goals.
By examining these results in conjunction with the RViz visualization, you can gain a comprehensive understanding of how the robot’s configuration changes to maintain clearance from the obstacle while achieving the desired pose of the gripper.
Detailed Code Walkthrough
Now for the C++ part. Let’s go through each piece of this code, step by step.
cd ~/ros2_ws/src/mycobot_ros2/hello_moveit_task_constructor/src/
gedit ik_clearance_cost.cpp
File Header and Includes
The code begins with a comprehensive comment block outlining the file’s purpose: demonstrating motion planning with collision avoidance using the MoveIt Task Constructor. It describes the program’s functionality, which creates a scene with an obstacle and computes inverse kinematics (IK) solutions while considering clearance from obstacles.
The file includes necessary headers for ROS 2, MoveIt, and the Task Constructor library, establishing the foundation for our IK clearance cost demo.
Main Function
All the logic for this program is contained within the main function. Let’s break it down into its key components.
ROS 2 Initialization and Node Setup
The code initializes ROS 2 and creates a node named “ik_clearance_cost_demo”. It sets up a logger for informational output. This setup ensures proper communication within the ROS 2 ecosystem.
Parameter Handling
The code sets up a parameter listener to load and manage parameters for the IK clearance cost demo. It logs whether the clearance cost should be cumulative and whether it should consider the world.
Task Setup and Robot Model Loading
A Task object is created and named “clearance IK”. The robot model (“mycobot_280”) is loaded and verified. This step is important for accurate motion planning based on the specific robot’s characteristics.
Planning Scene Setup
The code creates a planning scene, sets the robot to its default state, and then sets the arm to its “ready” position.
Obstacle Creation
An obstacle (a sphere) is created and added to the planning scene. This obstacle will be considered during the IK calculations to ensure clearance.
FixedState Stage Setup
A FixedState stage is created to set the initial state of the robot. It uses the previously configured scene and ignores collisions at this stage.
ComputeIK Stage Setup
A ComputeIK stage is created for inverse kinematics calculations. It’s configured with the initial state, target group (“arm”), target pose, timeout, and maximum number of IK solutions to compute.
Clearance Cost Term Setup
A clearance cost term is created and added to the ComputeIK stage. This cost term will influence the IK solutions to maintain clearance from obstacles.
Task Planning and Execution
The code attempts to plan the task using the defined stages. It includes error handling for potential exceptions during planning, ensuring robustness in various scenarios.
Results Logging
The code logs the results of the planning process, including the number of solutions found, the task state, or failure explanations if the planning was unsuccessful.
Node Spinning
A separate thread is created for spinning the ROS 2 node. This allows the program to handle callbacks and events while performing its main tasks.