Sensor Fusion Using the Robot Localization Package – ROS 2

In this tutorial, I will show you how to set up the robot_localization ROS 2 package on a simulated mobile robot. We will use the robot_localization package to fuse odometry data from the /wheel/odometry topic with IMU data from the /imu/data topic to produce locally accurate, smooth odometry estimates. Wheel odometry alone drifts when the wheels slip, so fusing it with IMU data through the robot_localization package helps correct for this.

This tutorial is the third tutorial in my Ultimate Guide to the ROS 2 Navigation Stack (also known as Nav2).

You can get the entire code for this project here.

If you are using ROS Galactic or newer, you can get the code here.

Let’s get started!

Prerequisites 

You have completed the first two tutorials of this series:

  1. How to Create a Simulated Mobile Robot in ROS 2 Using URDF
  2. Set Up the Odometry for a Simulated Mobile Robot in ROS 2

About the Robot Localization Package

We will configure the robot_localization package to use an Extended Kalman Filter (ekf_node) to fuse the data from sensor inputs. These sensor inputs come from the IMU Gazebo plugin and the differential drive Gazebo plugin that are defined in our SDF file.

In a real robotics project, instead of simulated IMU and odometry data, we would use data from an IMU sensor like the BNO055 and wheel encoders, respectively.

 The ekf_node will subscribe to the following topics (ROS message types are in parentheses):

  • /wheel/odometry :  Position and velocity estimate based on the information from the differential drive Gazebo plugin (in a real robotics project this would be information drawn from wheel encoder tick counts). The orientation is in quaternion format. (nav_msgs/Odometry)
  • /imu/data : Data from the Inertial Measurement Unit (IMU) sensor Gazebo plugin (sensor_msgs/Imu)

This node will publish data to the following topics:

  • /odometry/filtered : The smoothed odometry information (nav_msgs/Odometry)
  • /tf : Coordinate transform from the odom frame (parent) to the base_footprint frame (child). To learn about coordinate frames in ROS, check out this post.
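
If you want to check the fused output from your own code rather than with the command line tools used later in this tutorial, a minimal rclpy sketch like the one below will print the filtered pose. The topic and message type match the list above; the node and file names are just illustrative.

import rclpy
from rclpy.node import Node
from nav_msgs.msg import Odometry


class FilteredOdomListener(Node):
    """Prints the pose portion of each /odometry/filtered message."""

    def __init__(self):
        super().__init__('filtered_odom_listener')
        # The ekf_node publishes nav_msgs/Odometry on /odometry/filtered.
        self.create_subscription(Odometry, '/odometry/filtered', self.callback, 10)

    def callback(self, msg):
        p = msg.pose.pose.position
        q = msg.pose.pose.orientation
        self.get_logger().info(
            'pos=({:.2f}, {:.2f}, {:.2f}) quat=({:.2f}, {:.2f}, {:.2f}, {:.2f})'.format(
                p.x, p.y, p.z, q.x, q.y, q.z, q.w))


def main():
    rclpy.init()
    node = FilteredOdomListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()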

Install the Robot Localization Package

Let’s begin by installing the robot_localization package. Open a new terminal window, and type the following command:

sudo apt install ros-foxy-robot-localization

If you are using a newer version of ROS 2 like ROS 2 Humble, type the following:

 sudo apt install ros-humble-robot-localization 

Instead of the commands above, you can also type the following command directly into the terminal. It will automatically detect your ROS 2 distribution:

sudo apt install ros-$ROS_DISTRO-robot-localization

If you are using ROS 2 Galactic and want to build from source instead of using the prebuilt packages above, you will need to download the robot_localization package to your workspace.

cd ~/dev_ws/src
git clone -b fix/galactic/load_parameters https://github.com/nobleo/robot_localization.git

You need that specific branch because the Navigation Stack might otherwise throw the following exception:

[ekf_node-1] terminate called after throwing an instance of 'rclcpp::ParameterTypeException' [ekf_node-1] what(): expected [string] got [not set]

Then build from the root of your workspace:

cd ..
colcon build

Set the Configuration Parameters

We now need to specify the configuration parameters of the ekf_node by creating a YAML file.

Open a new terminal window, and type:

colcon_cd basic_mobile_robot
cd config
gedit ekf.yaml

Copy and paste this code inside the YAML file.

Save and close the file.

You can get a complete description of all the parameters on this page. You can also check out this link on GitHub for a sample ekf.yaml file.
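
If you are curious what the configuration boils down to, the sketch below shows the kinds of parameters robot_localization expects, written as a Python dictionary that a launch file could pass straight to the ekf_node. The parameter names are standard robot_localization parameters; the specific values and fused fields are illustrative assumptions, not a copy of the ekf.yaml linked above.

# Illustrative sketch of typical ekf_node parameters (values are assumptions,
# not the tutorial's actual ekf.yaml). Each *_config list holds 15 booleans in
# the order: x, y, z, roll, pitch, yaw, vx, vy, vz, vroll, vpitch, vyaw, ax, ay, az.
ekf_params = {
    'frequency': 30.0,                    # filter output rate in Hz
    'two_d_mode': True,                   # planar robot: ignore z, roll, pitch (assumption)
    'publish_tf': True,                   # broadcast the odom -> base_footprint transform
    'map_frame': 'map',
    'odom_frame': 'odom',
    'base_link_frame': 'base_footprint',
    'world_frame': 'odom',                # fuse continuous data in the odom frame

    # Wheel odometry input.
    'odom0': 'wheel/odometry',
    'odom0_config': [True,  True,  False,
                     False, False, False,
                     False, False, False,
                     False, False, True,
                     False, False, False],

    # IMU input.
    'imu0': 'imu/data',
    'imu0_config': [False, False, False,
                    False, False, True,
                    False, False, False,
                    False, False, True,
                    True,  False, False],
}

In the YAML file, these same keys sit under the node name and a ros__parameters: entry.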

Create a Launch File

Now go to your launch folder. Open a new terminal window, and type:

colcon_cd basic_mobile_robot
cd launch
gedit basic_mobile_bot_v3.launch.py

Copy and paste this code into the file.

If you are using ROS 2 Galactic or newer, your code is here.
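
The full launch file does more than start the EKF (it also brings up Gazebo, RViz, and the rest of the robot), but the robot_localization piece of it is small. Here is a minimal sketch of just that piece, assuming the ekf.yaml from the previous step gets installed to the package's config folder and that the filter should use simulation time:

import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    # Locate the ekf.yaml that colcon installs with the basic_mobile_robot package.
    pkg_share = get_package_share_directory('basic_mobile_robot')
    ekf_config_path = os.path.join(pkg_share, 'config', 'ekf.yaml')

    # Start the Extended Kalman Filter node from the robot_localization package.
    ekf_node = Node(
        package='robot_localization',
        executable='ekf_node',
        name='ekf_filter_node',
        output='screen',
        parameters=[ekf_config_path, {'use_sim_time': True}],
    )

    return LaunchDescription([ekf_node])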

Save the file, and close it.

Move to the package.

colcon_cd basic_mobile_robot

Open the package.xml file.

gedit package.xml

Copy and paste this code into the file.

Save the file, and close it.

Open the CMakeLists.txt file.

gedit CMakeLists.txt

Copy and paste this code into the file.

Save the file, and close it.

Build the Package

Now build the package by opening a terminal window, and typing the following command:

cd ~/dev_ws
colcon build

Launch the Robot

Open a new terminal, and launch the robot.

cd ~/dev_ws/
ros2 launch basic_mobile_robot basic_mobile_bot_v3.launch.py

It might take a while for Gazebo and RViz to load, so be patient.

To see the active topics, open a terminal window, and type:

ros2 topic list

To see more information about the topics, execute:

ros2 topic info /imu/data
ros2 topic info /wheel/odometry

You should see output showing that both topics have 1 publisher and 1 subscriber.

To see the output of the robot localization package (i.e. the Extended Kalman Filter (EKF)), type:

ros2 topic echo /odometry/filtered

I will move my robot in the reverse direction using the rqt_robot_steering GUI. Open a new terminal window, and type:

rqt_robot_steering

If you are using ROS 2 Galactic or newer, install the tool first:

sudo apt-get install ros-galactic-rqt-robot-steering

where the general syntax is:

sudo apt-get install ros-<ros-distribution>-rqt-robot-steering

Then run it with:

ros2 run rqt_robot_steering rqt_robot_steering --force-discover

Move the sliders to move the robot.
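
If you would rather drive the robot from code than from the sliders, a minimal publisher like the sketch below does the same job, assuming the differential drive plugin listens for geometry_msgs/Twist messages on /cmd_vel as in the earlier tutorials (treat the topic name as an assumption and confirm it with ros2 topic list):

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist


class ReverseDriver(Node):
    """Publishes a slow reverse velocity command at 10 Hz."""

    def __init__(self):
        super().__init__('reverse_driver')
        self.pub = self.create_publisher(Twist, '/cmd_vel', 10)
        self.timer = self.create_timer(0.1, self.send_command)

    def send_command(self):
        msg = Twist()
        msg.linear.x = -0.1   # reverse slowly (m/s)
        msg.angular.z = 0.0
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = ReverseDriver()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()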


We can see the output of the odom -> base_footprint transform by typing the following command:

ros2 run tf2_ros tf2_echo odom base_footprint
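
If you want that same transform inside a Python node instead of the terminal, a minimal tf2 listener sketch looks like this (node and file names are illustrative):

import rclpy
from rclpy.node import Node
from tf2_ros.buffer import Buffer
from tf2_ros.transform_listener import TransformListener


class OdomToBaseListener(Node):
    """Periodically looks up the odom -> base_footprint transform."""

    def __init__(self):
        super().__init__('odom_to_base_listener')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.create_timer(1.0, self.lookup)

    def lookup(self):
        try:
            # Target frame first, then source frame; Time() asks for the latest transform.
            t = self.tf_buffer.lookup_transform('odom', 'base_footprint', rclpy.time.Time())
        except Exception as ex:
            # The transform may not be available right after startup.
            self.get_logger().warn('Transform not available yet: {}'.format(ex))
            return
        tr = t.transform.translation
        self.get_logger().info('odom -> base_footprint: x={:.2f}, y={:.2f}'.format(tr.x, tr.y))


def main():
    rclpy.init()
    node = OdomToBaseListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()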

Let’s see the active nodes.

ros2 node list

Let’s check out the ekf_node (named ekf_filter_node).

ros2 node info /ekf_filter_node

Let’s check out the ROS node graph.

rqt_graph

Click the blue circular arrow in the upper left to refresh the node graph. Also select “Nodes/Topics (all)”.


To see the coordinate frames, type the following command in a terminal window.

ros2 run tf2_tools view_frames.py

If you are using ROS 2 Galactic or newer, type:

ros2 run tf2_tools view_frames

In the current working directory, you will have a file called frames.pdf. Open that file.

evince frames.pdf

Here is what my coordinate transform (i.e. tf) tree looks like: the parent frame is the odom frame, which represents the initial position and orientation of the robot. Every other frame is a child of the odom frame.

Later, we will add a map frame. The map frame will be the parent frame of the odom frame.

Finally, in RViz, under Global Options, change Fixed Frame to odom.


Open the steering tool again.

rqt_robot_steering

If you move the robot around using the sliders, you will see the robot move in both RViz and Gazebo.


That’s it!

In the next tutorial, I will show you how to add LIDAR to your robot so that you can map the environment and detect obstacles.

How to Succeed in Technology: The 10,000 Experiment Rule

Introduction

In a world obsessed with mastery and success, the 10,000-hour rule has long been heralded as the gold standard for achieving expertise in any field. Popularized by Malcolm Gladwell in his book “Outliers,” this rule suggests that with 10,000 hours of dedicated practice, anyone can master a skill. However, in the fast-paced and ever-evolving landscape of technology, a new paradigm is emerging as a more effective blueprint for innovation and success: the 10,000-experiment rule.

Introduced by Michael Simmons in his thought-provoking article on Medium, “Forget The 10,000-Hour Rule; Edison, Bezos, & Zuckerberg Follow The 10,000-Experiment Rule,” this new rule shifts the focus from the quantity of time spent practicing to the number of experiments conducted. This approach champions experimentation, quick learning, and the ability to adapt and pivot as the cornerstones of technological innovation and personal growth. Through this lens, the stories of Thomas Edison’s relentless experimentation, Jeff Bezos’s innovative leadership at Amazon, and Mark Zuckerberg’s rapid iteration at Facebook take on new significance. They exemplify how embracing a culture of experimentation can lead to unprecedented achievements.

As we dive deeper into the essence of the 10,000-experiment rule, we will explore its foundations, its scientific backing, and its application across various domains of technology, with a special focus on robotics. By understanding and applying this rule, individuals and organizations alike can unlock new pathways to innovation and success in the technological realm.

The Evolution from Hours to Experiments

The traditional 10,000-hour rule, while a useful guideline for developing skill through practice, presents a linear approach to mastery that overlooks the complex, non-linear nature of innovation. In contrast, the 10,000-experiment rule represents a paradigm shift, focusing on the iterative process of trial, error, and learning. This approach is particularly resonant in the field of technology, where rapid advancements and unpredictable challenges require a more flexible and adaptive mindset.

In his 2018 CNBC interview, Bezos articulated the essence of this experimental approach:

To be innovative you have to experiment. If you want to have more inventions, you need to do more experiments per week per month per year per decade. It’s that simple. You cannot invent without experimenting. And here’s the other thing about experiments…lots of them fail. If you know it’s going to work in advance it is not an experiment.

Jeff Bezos

Also, keep in mind that when you experiment, you have to be prepared for many failures. Bezos mentions this in his 2016 Annual Letter to Shareholders:

To invent you have to experiment, and if you know in advance that it’s going to work, it’s not an experiment. Most large organizations embrace the idea of invention, but are not willing to suffer the string of failed experiments necessary to get there. Outsized returns often come from betting against conventional wisdom, and conventional wisdom is usually right. Given a ten percent chance of a 100 times payoff, you should take that bet every time. But you’re still going to be wrong nine times out of ten.

Jeff Bezos

Bezos’s emphasis on experimentation as a core strategy highlights the critical role that embracing failure and learning plays in driving innovation. Under his leadership, Amazon has become a prime example of how a culture of experimentation can lead to groundbreaking innovations, from AWS to Alexa.

Thomas Edison, often hailed as one of the greatest inventors in history, exemplifies the 10,000-experiment rule long before it was formally articulated. Edison’s approach to invention was fundamentally experimental, famously remarking, “I have not failed. I’ve just found 10,000 ways that won’t work.” His work on the electric light bulb, phonograph, and motion picture camera, among countless other inventions, showcases the power of persistence and the willingness to embrace failure as a stepping stone to success.

The shift from hours to experiments encourages a mindset of curiosity, resilience, and openness to failure. It suggests that success in technology and innovation is not merely a function of time spent, but rather the quality and quantity of experiments conducted. This approach fosters a culture of continuous learning and adaptation, essential qualities in the fast-evolving tech landscape.

By embracing the 10,000-experiment rule, individuals and organizations can unlock a more dynamic and effective pathway to innovation. This rule champions the idea that through a systematic approach to experimentation, one can navigate the complexities of technology and emerge with novel solutions and breakthroughs.

Scientific Backing: The Meta-Analysis of Deliberate Practice

The limitations of the 10,000-hour rule are further illuminated by a comprehensive meta-analysis on deliberate practice and performance across various domains, including music, games, sports, education, and professions. This study, which examined the effects of deliberate practice on performance, found that while practice is undoubtedly important, its overall contribution to performance varies significantly across disciplines.

In domains where performance is highly predictable and structured, such as classical music and chess, deliberate practice accounted for a substantial portion of variance in performance. However, in less structured and more dynamic fields, such as technology and entrepreneurship, the impact of deliberate practice was notably smaller:

“We found that deliberate practice explained 26% of the variance in performance for games, 21% for music, 18% for sports, 4% for education, and less than 1% for professions.”

This finding suggests that while honing specific skills is important, the ability to innovate, adapt, and experiment plays a crucial role in achieving success in these fields.

The implications of this research underscore the value of the 10,000-experiment rule. In the realm of technology, where the landscape is characterized by rapid change and uncertainty, the capacity to learn from experiments—not just from repeated practice—is paramount. This approach aligns with the idea that success in technology hinges on the ability to navigate ambiguity, test hypotheses, and pivot based on feedback and outcomes.

Case Studies of Success Through Experimentation

The power of the 10,000-experiment rule is vividly illustrated in the stories of tech luminaries and companies that have placed experimentation at the heart of their success. These case studies not only demonstrate the rule’s effectiveness but also inspire a culture of innovation and resilience.

Thomas Edison: The Quintessential Experimenter

Thomas Edison’s work is perhaps the most iconic example of success through experimentation. Edison’s philosophy of “genius is one percent inspiration, ninety-nine percent perspiration” reflects his commitment to the iterative process of innovation. His development of the electric light bulb involved over a thousand experiments, a journey marked by setbacks, learning, and eventual triumph. Edison’s persistence and willingness to learn from each experiment laid the groundwork for modern electric lighting and numerous other technologies, embodying the essence of the 10,000-experiment rule.

Amazon: Cultivating an Experimentation Culture

Under Jeff Bezos’s leadership, Amazon has become synonymous with innovation, largely due to its embrace of experimentation. Amazon’s foray into cloud computing with AWS, its development of the Kindle, and its exploration of AI through Alexa are outcomes of its experimental culture. Bezos’s approach—viewing every failure as an opportunity to learn and every experiment as a step toward discovery—has propelled Amazon into new markets and technologies, illustrating the transformative power of the 10,000-experiment rule in corporate innovation.

Facebook: Rapid Iteration and Growth

Mark Zuckerberg’s Facebook has similarly leveraged the power of experimentation to evolve from a college networking site into a global social media platform. Facebook’s rapid iteration on features, constant A/B testing, and openness to pivoting based on user feedback have been instrumental in its growth and adaptability. This commitment to experimentation has enabled Facebook to stay relevant amidst changing social media landscapes and user preferences.

Applying the 10,000 Experiment Rule in Robotics

Robotics is a field that epitomizes the intersection of complex hardware, sophisticated software, and intricate real-world interactions. The 10,000-experiment rule finds a fertile ground in robotics, where practical, hands-on experimentation and iterative learning are key to innovation and breakthroughs. 

Using ROS 2 (Robot Operating System 2) significantly enhances the capacity for such experimentation, providing a unified and flexible framework for robotics development.

Iterative Design and Testing with ROS 2

The iterative design process in robotics is crucial for developing effective and efficient systems. ROS 2 facilitates this process by offering advanced tools for building and testing robotic applications. Its modular architecture allows roboticists to experiment with different configurations and functionalities easily, streamlining the process of learning from each iteration. By leveraging ROS 2’s capabilities, developers can quickly prototype ideas, test hypotheses, and refine their designs based on real-world feedback.

Simulation-Based Testing Enhanced by ROS 2

ROS 2 integrates seamlessly with powerful simulation tools, such as Gazebo and NVIDIA Isaac Sim, enabling developers to conduct thousands of simulated experiments efficiently. These simulations are invaluable for exploring the behavior of robotic systems under a wide range of conditions without the time and cost associated with physical prototypes. By utilizing ROS 2 in simulation-based testing, researchers can accelerate the experimentation process, rapidly iterating on design and software algorithms to identify promising approaches before real-world implementation.

Real-World Experimentation and ROS 2

When transitioning from simulation to real-world testing, ROS 2’s robustness and flexibility become even more beneficial. Its support for diverse hardware and real-time communication allows for extensive real-world experimentation, critical for refining robot designs and ensuring they can handle the complexities of their intended environments. ROS 2’s ecosystem encourages a collaborative approach to experimentation, where developers can share insights, tools, and best practices, further accelerating innovation in robotics.

Leveraging ROS 2 for the 10,000 Experiment Rule

ROS 2 is designed to support the rapid iteration and flexibility required by the 10,000-experiment rule. Its features enable roboticists to:

  • Prototype Quickly: Developers can use ROS 2 to build and test new ideas swiftly, reducing the time from concept to experimentation.
  • Analyze and Iterate: With ROS 2, it’s easier to collect and analyze data from experiments, facilitating a deeper understanding of each trial and informing subsequent iterations.
  • Collaborate and Share: The ROS 2 community encourages sharing of software, tools, and best practices, making it easier for roboticists to learn from each other’s experiments.

Practical Tips for Implementing the 10,000 Experiment Rule

Adopting the 10,000-experiment rule requires a strategic approach to experimentation. Here are some practical tips for individuals and organizations looking to embrace this mindset:

  • Document Everything: Keep detailed records of each experiment, including the hypothesis, methodology, results, and learnings. This documentation is invaluable for tracking progress and informing future experiments.
  • Embrace Failure: View each failed experiment as a learning opportunity. Analyzing why an experiment didn’t work is often more informative than a successful outcome.
  • Foster a Culture of Curiosity: Encourage team members to ask questions, propose experiments, and explore new ideas. A supportive environment that values curiosity and risk-taking is essential for innovation.
  • Leverage Technology: Utilize software and tools designed for managing experiments. These can help organize data, track progress, and analyze results, making the experimentation process more efficient and effective.

Conclusion

The 10,000-experiment rule offers a compelling framework for achieving success in technology and beyond. By shifting the focus from sheer hours of practice to the quality and quantity of experiments, individuals and organizations can foster a culture of innovation, resilience, and continuous learning. The stories of Edison, Bezos, Zuckerberg, and countless others in the field of technology underscore the transformative power of this approach. As we look to the future, embracing the mindset of experimentation will be key to navigating the complexities of technology and unlocking new realms of possibility. Let the journey of 10,000 experiments begin.

How to Create a URDF File of the A0509 by Doosan Robotics – ROS 2

In this tutorial, I’ll show you how to use ROS 2 and RViz to visualize the A0509 robotic arm by Doosan Robotics.


Prerequisites

  • (Optional) You have completed this tutorial in which I build a URDF file from scratch for the myCobot 280 by Elephant Robotics.

Useful Links

Below are some helpful reference links in case you want to learn more about this robotic arm.

Install the dsr_description2 Package

The first thing we need to do is install the dsr_description2 package. You can see the official GitHub repository here; we will clone it into our workspace and build it from source as shown below.

cd ~/ros2_ws/src
git clone https://github.com/doosan-robotics/doosan-robot2.git

The package is currently made for ROS 2 Foxy, so we need to do a little cleaning of the repository so we don’t have any build errors.

Let’s remove some packages we don’t need right now.

rm -rf doosan-robot2/dsr_control2/
rm -rf doosan-robot2/dsr_msgs2/
rm -rf doosan-robot2/dsr_example2/
cd ~/ros2_ws/
colcon build
source ~/.bashrc
ros2 pkg list

Visualize the URDF File

To visualize the URDF file, open a terminal window, and make sure the urdf_tutorial ROS 2 package is installed.

sudo apt-get install ros-${ROS_DISTRO}-urdf-tutorial

Now visualize the URDF (adjust the path below to match your own home directory and workspace):

ros2 launch urdf_tutorial display.launch.py model:=/home/ubuntu/ros2_ws/src/doosan-robot2/dsr_description2/urdf/a0509.blue.urdf

Under Global Options on the upper left panel of RViz, change the Fixed Frame from base_link to base.

You can untick the TF option to see the URDF more clearly.


You can use the Joint State Publisher GUI pop-up window to move the links around.

If you want to look at the URDF file itself, open it in a text editor:

gedit ~/ros2_ws/src/doosan-robot2/dsr_description2/urdf/a0509.blue.urdf
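
If you just want a quick list of the joints that the URDF defines, without reading the whole file, a short Python sketch using only the standard library works (the path assumes the workspace layout from this tutorial; if the file pulls in xacro macros, run it through xacro first):

import xml.etree.ElementTree as ET
from pathlib import Path

# Path to the A0509 URDF cloned earlier in this tutorial (adjust if your workspace differs).
urdf_path = Path.home() / 'ros2_ws/src/doosan-robot2/dsr_description2/urdf/a0509.blue.urdf'

tree = ET.parse(urdf_path)
for joint in tree.getroot().findall('joint'):
    name = joint.get('name')
    joint_type = joint.get('type')
    limit = joint.find('limit')
    if limit is not None:
        print('{} ({}): lower={}, upper={}'.format(
            name, joint_type, limit.get('lower'), limit.get('upper')))
    else:
        print('{} ({})'.format(name, joint_type))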

That’s it!

As I mentioned earlier, the Doosan Robotics ROS 2 repository is outdated: it targets ROS 2 Foxy, which reached end of life in 2023.

I’m really hoping they can develop a package for the most recent versions of ROS 2, but only time will tell. 

Keep building!