How to Install and Demo the Webots Robot Simulator for ROS 2

In this tutorial, I will show you how to install, build, and demo the ROS 2 package for Webots, the free, open-source 3D robot simulator created by Cyberbotics Ltd. Installing Webots was a bit difficult for me, so it is important that you follow the steps below click-by-click, command-by-command, so that everything installs properly.

Prerequisites

ROS 2 Foxy Fitzroy installed on Ubuntu Linux 20.04 (if you are using another distribution, you will need to replace ‘foxy’ with the name of your distribution).
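
Not sure which distribution you have? Open a terminal window, and type:

echo $ROS_DISTRO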

Install

Below are the exact steps I took to install, build, and demo Webots successfully.

First, open a new terminal window, and create a new workspace. You can call it any name, but we will call it “webots”. Inside the workspace, we will create a source (i.e. src) directory. This is where your packages will go.

mkdir -p ~/webots/src

Navigate to the workspace.

cd ~/webots/src

To install the Webots ROS 2 interface, open a new terminal window, and type the following command (the Webots simulator itself will be installed later, when you first run a demo):

sudo apt-get install ros-$ROS_DISTRO-webots-ros2

Type Y and then Enter to continue.

Next, we need to clone the repository into our workspace.

git clone https://github.com/cyberbotics/webots_ros2.git

Let’s see if the package is in the source folder of our workspace.

ls
You should see the webots_ros2 folder listed.

Open a new terminal window, and try to run the demo.

ros2 launch webots_ros2_demos armed_robots.launch.py

You will be asked if you want to install the latest version of Webots.

Type Y and press Enter.

It will take some time to install.


When the installation finishes, the demo will pop up, showing robotic arms in the Webots simulation window.

Press CTRL+C in all terminal windows to shut down the demo.

Next, go to the following directory, and install urdf2webots and its dependencies:

cd ~/webots/src/webots_ros2/webots_ros2_importer/webots_ros2_importer
git clone https://github.com/cyberbotics/urdf2webots.git
cd urdf2webots
pip3 install -r requirements.txt

At this stage, I ran into errors when I tried to build the package, so here is what I did next.

Remove the webots_ros2 package. Either go to the file explorer in Linux and delete the webots_ros2 folder manually, or run the following commands to delete the folder (we use rm -rf because the folder is not empty):

cd ~/webots/src
rm -rf webots_ros2

Now, download the package again.

cd ~/webots/src
git clone --recurse-submodules -b $ROS_DISTRO https://github.com/cyberbotics/webots_ros2.git webots_ros2

Build

Go to the root of the workspace.

cd ~/webots/

Update rosdep, install the dependencies, and build the workspace.

rosdep update
rosdep install --from-paths src --ignore-src --rosdistro $ROS_DISTRO
colcon build

Add the sourcing of the workspace to the bashrc file.

gedit ~/.bashrc

At the bottom of the bash file, add the following line:

source ~/webots/install/setup.bash
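
Save the file, and close it. Then reload the bash file so the change takes effect in your current terminal:

source ~/.bashrc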

Demo

How to Run the Robotic Arm Demo

Open a new terminal window, and type the following command.

ros2 launch webots_ros2_demos armed_robots.launch.py

When you’re done watching the demo, press CTRL + C in all terminal windows.

That’s it. Keep building!

How to Install ROS 2 Navigation (Nav2)

In this tutorial, we will explore Navigation2 (Nav2), which is a collection of tools for ROS 2 that enable a robot to go from point A to point B safely. We will also take a look at a SLAM demo with a robot named Turtlebot 3.

Real-World Applications

Navigation is one of the most important tasks for a mobile robot. Navigation is about enabling a mobile robot to move from one location to another without running into any obstacles.

In order to navigate properly, a robot needs to have a map (mapping), know where it is located (localization), and have a plan for getting from point A to point B (path planning).

Prerequisites

ROS 2 Foxy Fitzroy installed on Ubuntu Linux 20.04.

Install and Build Nav2

***Note: The official instructions for installing Nav2 are here. Please check that link for the latest instructions. The steps below were valid as of the date of this blog post and may well have changed by the time you read this.***

Once you’re done with this tutorial, you can head over to my Ultimate Guide to the ROS 2 Navigation Stack.

To install Nav2, open a new terminal window, and type the following commands:

sudo apt install ros-foxy-navigation2

Type Y and then Enter to continue.

sudo apt install ros-foxy-nav2-bringup

Type Y and then Enter to continue.

Install the Turtlebot 3 example.

Open a new terminal window, and type:

sudo apt install ros-foxy-turtlebot3*
sudo apt install ros-foxy-nav2-simple-commander

If you want to build from source (i.e. get the ROS 2 navigation packages directly from GitHub), open a new terminal window, and type the following commands, one right after the other.

mkdir -p ~/nav2_ws/src
cd ~/nav2_ws/src
git clone https://github.com/ros-planning/navigation2.git --branch foxy-devel
cd ~/nav2_ws
rosdep install -y -r -q --from-paths src --ignore-src --rosdistro foxy

Your computer might say something like “executing command [sudo ……”. That is fine. Just wait, and let your system finish doing what it is doing.

colcon build --symlink-install

It took a while to install Nav2 on my machine. Just be patient.

You might have noticed that, even though we built Nav2 from source, we also installed it using the Ubuntu package manager first. The reason we had to do both is that the Ubuntu package manager installs some non-ROS dependencies that are necessary for Nav2 to build from source.

Building Nav2 from source (using the GitHub clone command we ran above) enables us to customize the packages in Nav2 to our liking (e.g. add new plugins, messages, etc.) in a way that won’t get overwritten during a system upgrade (i.e. sudo apt-get upgrade).
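
By the way, if you ever want to check which copy of a package your terminal is resolving to (the apt-installed one or the one in your workspace), you can ask ROS 2 directly:

ros2 pkg prefix nav2_bringup

Once you have sourced the workspace (see below), this should print a path inside ~/nav2_ws, which means the workspace copy is the one being used.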

When Nav2 is finished installing, open your bash file.

gedit ~/.bashrc

Add these lines to the bottom of the file. You can get the latest information on what to add at this link.

source ~/nav2_ws/install/setup.bash
export TURTLEBOT3_MODEL=waffle
export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:/opt/ros/foxy/share/turtlebot3_gazebo/models
source /usr/share/gazebo/setup.sh

Save the file, and close it.
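
Then reload the bash file so the changes take effect in your current terminal:

source ~/.bashrc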

cd ~/nav2_ws

Build it again, just to make sure everything is in order.

colcon build

Test Your Installation

Now test your installation.

Open a new terminal window.

cd ~/nav2_ws
ros2 launch nav2_bringup tb3_simulation_launch.py

rviz2 will open.

Gazebo will also open, but it may take a while.


Move the Robot From Point A to Point B

Now go to the rviz2 screen.

Set the initial pose of the robot by clicking the “2D Pose Estimate” button at the top of the rviz2 screen. Then click on the map at the estimated position where the robot is located in Gazebo.


Set a goal for the robot to move to. Click the “Navigation2 Goal” button, and choose a destination. The wheeled robot will move to the goal destination.


In the bottom left of the screen, you can Pause and Reset.
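
Instead of clicking in rviz2, you can also send a goal from a Python script using the nav2_simple_commander package we installed earlier. Below is a minimal sketch: the goal coordinates are hypothetical, and the method names reflect a recent version of the package (older releases used slightly different names), so check the documentation for your distribution.

# send_goal.py - a minimal sketch using the nav2_simple_commander API.
# Assumes Nav2 and the simulation are already running, and that you have
# set the initial pose (e.g. with the 2D Pose Estimate button in rviz2).
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator

rclpy.init()
navigator = BasicNavigator()

# Block until the Nav2 stack is up and active
navigator.waitUntilNav2Active()

# Build a goal pose in the map frame (the coordinates are hypothetical)
goal_pose = PoseStamped()
goal_pose.header.frame_id = 'map'
goal_pose.header.stamp = navigator.get_clock().now().to_msg()
goal_pose.pose.position.x = 2.0
goal_pose.pose.position.y = 0.5
goal_pose.pose.orientation.w = 1.0

# Send the goal, and wait for the robot to finish
navigator.goToPose(goal_pose)
while not navigator.isTaskComplete():
    pass

print('Result:', navigator.getResult())
rclpy.shutdown()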

Press CTRL + C on all terminal windows to close down the programs.

Install the SLAM Toolbox

Now that we know how to navigate the robot from point A to point B using a prebuilt map, let’s see how we can navigate the robot while it maps the environment at the same time. This process is known as simultaneous localization and mapping (SLAM).

Open a new terminal window. Type this command:

sudo apt install ros-foxy-slam-toolbox

Launch the SLAM launch file. Open a new terminal window, and type:

cd ~/nav2_ws
ros2 launch nav2_bringup slam_launch.py

Now launch the robot.

ros2 launch nav2_bringup tb3_simulation_launch.py

Click the 2D Pose Estimate button, and click on the rviz screen at the estimated position where the robot is in Gazebo.

Then click the Navigation2 Goal button and click on an area of rviz where you want the robot to go.

Press CTRL+C in all terminals to shut everything down.

Here is another command you can run. It launches Turtlebot 3 and the SLAM package in a single command.

ros2 launch nav2_bringup tb3_simulation_launch.py slam:=True

That’s it. Keep building!

How to Determine the Orientation of an Object Using OpenCV

In this tutorial, we will build a program that can determine the orientation of an object (i.e. rotation angle in degrees) using the popular computer vision library OpenCV.

Real-World Applications

One of the most common real-world use cases of the program we will develop in this tutorial is when you want to develop a pick and place system for robotic arms. Determining the orientation of an object on a conveyor belt is key to determining the appropriate way to grasp the object, pick it up, and place it in another location.

Let’s get started!

Prerequisites

Installation and Setup

Before we get started, let’s make sure we have all the software packages installed. Check to see if you have OpenCV installed on your machine. If you are using Anaconda, you can type:

conda install -c conda-forge opencv

Alternatively, you can type:

pip install opencv-python

Install Numpy, the scientific computing library.

pip install numpy
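
To verify that everything is installed, you can print the version numbers from Python:

import cv2 as cv
import numpy as np

print(cv.__version__)
print(np.__version__)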

Find an Image File

Find an image. My input image is 1200 pixels in width and 900 pixels in height. The filename of my input image is input_img.jpg.
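
If you don’t have a suitable image handy, here is a small sketch that generates a hypothetical test image: a few light, rotated rectangles on a dark background, saved as input_img.jpg. The centers, sizes, and angles are made up, but the shapes fall within the contour-area limits used by the code below.

import cv2 as cv
import numpy as np

# Create a dark 1200 x 900 (width x height) background
img = np.zeros((900, 1200, 3), dtype=np.uint8)

# Draw a few rotated light rectangles (hypothetical centers, sizes, angles)
rects = [((300, 300), (200, 80), 30),
         ((800, 250), (220, 90), -15),
         ((600, 650), (180, 70), 75)]

for rect in rects:
  box = cv.boxPoints(rect)  # corner points of the rotated rectangle
  cv.fillPoly(img, [np.int32(box)], (200, 200, 200))

cv.imwrite("input_img.jpg", img)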


Write the Code

Here is the code. It accepts an image named input_img.jpg and outputs an annotated image named output_img.jpg. Pieces of the code pull from the official OpenCV implementation.

import cv2 as cv
from math import atan2, cos, sin, sqrt, pi
import numpy as np

def drawAxis(img, p_, q_, color, scale):
  p = list(p_)
  q = list(q_)

  ## [visualization1]
  angle = atan2(p[1] - q[1], p[0] - q[0]) # angle in radians
  hypotenuse = sqrt((p[1] - q[1]) * (p[1] - q[1]) + (p[0] - q[0]) * (p[0] - q[0]))

  # Here we lengthen the arrow by a factor of scale
  q[0] = p[0] - scale * hypotenuse * cos(angle)
  q[1] = p[1] - scale * hypotenuse * sin(angle)
  cv.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), color, 3, cv.LINE_AA)

  # create the arrow hooks
  p[0] = q[0] + 9 * cos(angle + pi / 4)
  p[1] = q[1] + 9 * sin(angle + pi / 4)
  cv.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), color, 3, cv.LINE_AA)

  p[0] = q[0] + 9 * cos(angle - pi / 4)
  p[1] = q[1] + 9 * sin(angle - pi / 4)
  cv.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), color, 3, cv.LINE_AA)
  ## [visualization1]

def getOrientation(pts, img):
  ## [pca]
  # Construct a buffer used by the pca analysis
  sz = len(pts)
  data_pts = np.empty((sz, 2), dtype=np.float64)
  for i in range(data_pts.shape[0]):
    data_pts[i,0] = pts[i,0,0]
    data_pts[i,1] = pts[i,0,1]

  # Perform PCA analysis
  mean = np.empty((0))
  mean, eigenvectors, eigenvalues = cv.PCACompute2(data_pts, mean)

  # Store the center of the object
  cntr = (int(mean[0,0]), int(mean[0,1]))
  ## [pca]

  ## [visualization]
  # Draw the principal components
  cv.circle(img, cntr, 3, (255, 0, 255), 2)
  p1 = (cntr[0] + 0.02 * eigenvectors[0,0] * eigenvalues[0,0], cntr[1] + 0.02 * eigenvectors[0,1] * eigenvalues[0,0])
  p2 = (cntr[0] - 0.02 * eigenvectors[1,0] * eigenvalues[1,0], cntr[1] - 0.02 * eigenvectors[1,1] * eigenvalues[1,0])
  drawAxis(img, cntr, p1, (255, 255, 0), 1)
  drawAxis(img, cntr, p2, (0, 0, 255), 5)

  angle = atan2(eigenvectors[0,1], eigenvectors[0,0]) # orientation in radians
  ## [visualization]

  # Label with the rotation angle
  label = "  Rotation Angle: " + str(-int(np.rad2deg(angle)) - 90) + " degrees"
  textbox = cv.rectangle(img, (cntr[0], cntr[1]-25), (cntr[0] + 250, cntr[1] + 10), (255,255,255), -1)
  cv.putText(img, label, (cntr[0], cntr[1]), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0,0,0), 1, cv.LINE_AA)

  return angle

# Load the image
img = cv.imread("input_img.jpg")

# Was the image there?
if img is None:
  print("Error: File not found")
  exit(0)

cv.imshow('Input Image', img)

# Convert image to grayscale
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Convert image to binary
_, bw = cv.threshold(gray, 50, 255, cv.THRESH_BINARY | cv.THRESH_OTSU)

# Find all the contours in the thresholded image
contours, _ = cv.findContours(bw, cv.RETR_LIST, cv.CHAIN_APPROX_NONE)

for i, c in enumerate(contours):

  # Calculate the area of each contour
  area = cv.contourArea(c)

  # Ignore contours that are too small or too large
  if area < 3700 or 100000 < area:
    continue

  # Draw each contour only for visualisation purposes
  cv.drawContours(img, contours, i, (0, 0, 255), 2)

  # Find the orientation of each shape
  getOrientation(c, img)

cv.imshow('Output Image', img)
cv.waitKey(0)
cv.destroyAllWindows()
 
# Save the output image to the current directory
cv.imwrite("output_img.jpg", img)

Output Image

Here is the result: the annotated image is displayed in a window and saved as output_img.jpg, with each detected object outlined in red, its principal axes drawn, and its rotation angle labeled.

Understanding the Rotation Axes

The positive x-axis of each object is the red line. The positive y-axis of each object is the blue line.

The global positive x-axis goes from left to right horizontally across the image. The global positive z-axis points out of this page. The global positive y-axis points from the bottom of the image to the top of the image vertically.

Using the right-hand rule to measure rotation, you stick your four fingers out straight (index finger to pinky finger) in the direction of the global positive x-axis.


You then rotate your four fingers 90 degrees counterclockwise. Your fingertips point towards the positive y-axis, and your thumb points out of this page towards the positive z-axis.
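
To connect this back to the first program, here is a small worked example of the angle conversion used for the label (the eigenvector values are hypothetical):

from math import atan2
import numpy as np

# Hypothetical first eigenvector from the PCA step, in image coordinates
# (note that the image y-axis points down)
eigenvector = [0.866, 0.5]

# Angle of the eigenvector in radians, measured in image coordinates
angle = atan2(eigenvector[1], eigenvector[0])  # about 0.524 radians (30 degrees)

# Same conversion the code above uses for its label: negate the angle
# (because the image y-axis points down) and subtract 90 degrees
label_angle = -int(np.rad2deg(angle)) - 90
print(label_angle)  # -120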


Calculate an Orientation Between 0 and 180 Degrees

If we want to calculate the orientation of an object and make sure that the result is always between 0 and 180 degrees, we can use this code:

# This program calculates the orientation of an object.
# The input is an image, and the output is an annotated image
# with the angle of orientation for each object (0 to 180 degrees)

import cv2 as cv
from math import atan2, cos, sin, sqrt, pi
import numpy as np

# Load the image
img = cv.imread("input_img.jpg")

# Was the image there?
if img is None:
  print("Error: File not found")
  exit(0)

cv.imshow('Input Image', img)

# Convert image to grayscale
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Convert image to binary
_, bw = cv.threshold(gray, 50, 255, cv.THRESH_BINARY | cv.THRESH_OTSU)

# Find all the contours in the thresholded image
contours, _ = cv.findContours(bw, cv.RETR_LIST, cv.CHAIN_APPROX_NONE)

for i, c in enumerate(contours):

  # Calculate the area of each contour
  area = cv.contourArea(c)

  # Ignore contours that are too small or too large
  if area < 3700 or 100000 < area:
    continue

  # cv.minAreaRect returns:
  # (center(x, y), (width, height), angle of rotation) = cv2.minAreaRect(c)
  rect = cv.minAreaRect(c)
  box = cv.boxPoints(rect)
  box = np.int32(box)  # integer corner points for drawing (np.int0 is unavailable in newer NumPy)

  # Retrieve the key parameters of the rotated bounding box
  center = (int(rect[0][0]),int(rect[0][1])) 
  width = int(rect[1][0])
  height = int(rect[1][1])
  angle = int(rect[2])

  # Convert the minAreaRect angle to an orientation between 0 and 180 degrees
  if width < height:
    angle = 90 - angle
  else:
    angle = -angle
  label = "  Rotation Angle: " + str(angle) + " degrees"
  textbox = cv.rectangle(img, (center[0]-35, center[1]-25), 
    (center[0] + 295, center[1] + 10), (255,255,255), -1)
  cv.putText(img, label, (center[0]-50, center[1]), 
    cv.FONT_HERSHEY_SIMPLEX, 0.7, (0,0,0), 1, cv.LINE_AA)
  cv.drawContours(img,[box],0,(0,0,255),2)

cv.imshow('Output Image', img)
cv.waitKey(0)
cv.destroyAllWindows()
 
# Save the output image to the current directory
cv.imwrite("min_area_rec_output.jpg", img)

Here is the output: each object’s rotated bounding box is drawn in red, its rotation angle is labeled, and the annotated image is saved as min_area_rec_output.jpg.

That’s it. Keep building!