Mastering Turtlesim in ROS 2 Foxy Fitzroy

In this post, we will learn the basics of ROS 2 by working with Turtlesim, a ROS 2-based simulator. Follow each step below, one line at a time. Take your time so you do everything correctly; there is no hurry.

The official tutorial is located in the ROS 2 Foxy documentation, but we’ll run through the entire process step-by-step below.

You Will Need

In order to complete this tutorial, you will need:

  • ROS 2 Foxy Fitzroy installed and running properly on Ubuntu Linux 20.04.

Prerequisites

It is helpful if you’ve completed this tutorial on Linux fundamentals, but it is not required.

Install Turtlesim

Install Turtlesim by typing the following commands:

sudo apt update
sudo apt install ros-foxy-turtlesim

Check that Turtlesim was installed.

ros2 pkg executables turtlesim

Here is what you should see:

[Image: list of turtlesim executables]

Launch Turtlesim

To launch Turtlesim, type:

ros2 run turtlesim turtlesim_node

Here is the window that you should see:

[Image: the turtlesim window]

In the terminal, you will see the position and orientation of the turtle.

[Image: turtle pose output in the terminal]

Let’s move the turtle around the blue screen. Open a new terminal window, and type:

ros2 run turtlesim turtle_teleop_key

With this same terminal window selected, use the arrow keys to navigate the turtle around the screen.

To close turtlesim, go to all terminal windows and type:

CTRL + C

Common ROS 2 Commands

Open a new terminal. Let’s type some common ROS 2 commands.

To list the active ROS nodes, type the following command. A node in ROS is just a program (typically written in C++ or Python) that does some computation.

ros2 node list
[Image: output of ros2 node list]

Robots require a number of programs to work together to achieve a task. You can think of a node as a small single-purpose program within a larger robotic system. 

One way nodes communicate with each other is by using messages. These messages are passed via channels called topics.

To see a list of active topics in ROS 2, type the following command:

ros2 topic list
[Image: output of ros2 topic list]

Don’t worry about what all those topics mean. In a later post, I’ll dive deeper into ROS topics.
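That said, it helps to see what a node that publishes to a topic actually looks like. Below is a minimal sketch of a Python node, written with the rclpy client library, that publishes velocity messages to the /turtle1/cmd_vel topic (the same topic turtle_teleop_key uses). The file name move_turtle.py is just my own example name.

# move_turtle.py
# A minimal sketch of a ROS 2 node that publishes velocity commands
# to turtlesim on the /turtle1/cmd_vel topic.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist

class TurtleMover(Node):
    def __init__(self):
        super().__init__('turtle_mover')
        # Publisher on the /turtle1/cmd_vel topic (queue size 10)
        self.publisher = self.create_publisher(Twist, '/turtle1/cmd_vel', 10)
        # Publish a new velocity command every 0.5 seconds
        self.timer = self.create_timer(0.5, self.publish_velocity)

    def publish_velocity(self):
        msg = Twist()
        msg.linear.x = 1.0   # forward speed
        msg.angular.z = 0.5  # turning speed
        self.publisher.publish(msg)

def main():
    rclpy.init()
    node = TurtleMover()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()

If you run this script with python3 move_turtle.py while turtlesim_node is running, the turtle will drive in a circle.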

Let’s see what the active services are. A ROS Service consists of a pair of messages: one for the request and one for the reply. A service-providing ROS node (i.e. a Service Server) offers a service (e.g. reading sensor data).

ros2 service list
[Image: output of ros2 service list]

Let’s see what actions are active now. I explain the difference between a ROS Action and a ROS Service in this post.

ros2 action list
[Image: output of ros2 action list]

Here is one more command you should know:

  • ros2 node info <node_name>: Shows information about the node named <node_name>.

Install rqt

Let’s install rqt. rqt is a graphical tool for ROS 2 that lets you interact with nodes, topics, and services (we will use it to call services below).

sudo apt update
sudo apt install ~nros-foxy-rqt*

Press Y and Enter to complete the installation.

Run rqt

Launch turtlesim again.

ros2 run turtlesim turtlesim_node

In another terminal tab, type:

ros2 run turtlesim turtle_teleop_key

To run rqt, open a terminal window and type:

rqt

If this is your first time running rqt, go to Plugins > Services > Service Caller in the menu bar at the top.

Here is what my window looks like:

[Image: the rqt Service Caller window]

Click the refresh button to the left of the Service dropdown menu. This button looks like this:

[Image: the refresh button]

Here is what my window looks like now:

[Image: the Service Caller window after refreshing]

Click the dropdown list in the middle of the window, and select the /spawn service.

[Image: selecting the /spawn service]

Launch Another Turtle Using the Spawn Service

The /spawn service spawns another turtle. Let’s use this service now.

Set the values under the Request area by double-clicking the values in the ‘Expression’ column.

In this case, we want to launch a new turtle at coordinate x=1.0, y=1.0. The name of this new turtle will be turtle2.

Call this service by clicking the Call button in the upper right of the panel.

[Image: the Call button]

You should see a new turtle spawned at coordinate x=1.0, y=1.0.

[Image: the new turtle spawned in the turtlesim window]

Click the Refresh button.

You now have new services for turtle2.

[Image: the refreshed service list showing turtle2 services]
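If you prefer working in code instead of rqt, you can call the same /spawn service from a short Python script using rclpy. Here is a minimal sketch; the name turtle3 and the coordinates are just example values chosen so they don’t collide with turtle2.

# spawn_turtle.py
# A minimal sketch that calls turtlesim's /spawn service from Python.
import rclpy
from turtlesim.srv import Spawn

def main():
    rclpy.init()
    node = rclpy.create_node('spawn_client')

    # Create a client for the /spawn service and wait for it to be available
    client = node.create_client(Spawn, '/spawn')
    client.wait_for_service()

    # Fill in the request: position, heading, and the new turtle's name
    request = Spawn.Request()
    request.x = 3.0
    request.y = 3.0
    request.theta = 0.0
    request.name = 'turtle3'

    # Call the service and wait for the reply
    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)
    node.get_logger().info('Spawned: %s' % future.result().name)

    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()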

Using the Set Pen Service

Let’s change the color of the pen for the first turtle, turtle1.

Go to the Service dropdown list, and scroll down to /turtle1/set_pen.

[Image: the /turtle1/set_pen service in the Service Caller]

We want turtle1’s pen to be green, so set the g (green) color value to 255. Also set the width of the pen to 5.

Click Call to call the service.

Select the terminal window where you typed the “ros2 run turtlesim turtle_teleop_key” command, and move turtle1 around the screen with your keyboard. You will see that the pen is now green.

[Image: turtle1 drawing a green line]
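As with the /spawn service, you can also call /turtle1/set_pen from a short rclpy script instead of rqt. Here is a minimal sketch that sets the same green pen with a width of 5; the file name is my own example.

# set_pen_green.py
# A minimal sketch that calls turtlesim's /turtle1/set_pen service from Python.
import rclpy
from turtlesim.srv import SetPen

def main():
    rclpy.init()
    node = rclpy.create_node('set_pen_client')

    client = node.create_client(SetPen, '/turtle1/set_pen')
    client.wait_for_service()

    # r, g, b set the pen color; width sets the line thickness;
    # off = 1 would lift the pen so the turtle stops drawing
    request = SetPen.Request()
    request.r = 0
    request.g = 255
    request.b = 0
    request.width = 5
    request.off = 0

    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)

    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()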

Move turtle2

Let’s move turtle2. To do that, we need to perform remapping: we remap the velocity command topic from turtle1 to turtle2 so that the teleop node’s commands go to the new turtle. Open a new terminal tab, and type the following (this is all a single command):

ros2 run turtlesim turtle_teleop_key --ros-args --remap turtle1/cmd_vel:=turtle2/cmd_vel

You can use your keyboard to move turtle2 around the screen.

[Image: moving turtle2 around the screen]

To close turtlesim, go to all terminals and type:

CTRL + C

How to Install ROS 2 Foxy Fitzroy on Ubuntu Linux

In this post, we will install ROS 2 Foxy Fitzroy. As of the date of this tutorial, ROS 2 Foxy Fitzroy is the latest ROS distribution that has long term support. It will be supported until May 2023.

You Will Need

In order to complete this tutorial, you will need:
  • Ubuntu 20.04 installed and running properly (I’m running my Ubuntu in a VirtualBox on Windows 10).

Prerequisites

It is helpful if you’ve completed this tutorial on Linux fundamentals, but it is not required.

Set the Locale

The official steps for installing ROS are at this link at ROS.org, but I will walk you through the process below so that you can see what each step should look like. We will install ROS 2 Foxy via Debian Packages.

The first thing we are going to do is to set the locale. You can understand what locale means at this link.  

Type this command inside the terminal window.

locale
[Image: output of the locale command]

Now type the following command.

sudo apt update && sudo apt install locales

If you get some error that looks like this: “Waiting for cache lock: Could not get lock /var/lib/dpkg/lock. It is held by process 3944”, open a new terminal window, and kill that process:

[Image: the cache lock error message]
sudo kill -9 [process ID]

In my case, I type:

sudo kill -9 3944

Wait for everything to install, and then type:

sudo locale-gen en_US en_US.UTF-8
sudo update-locale LC_ALL=en_US.UTF-8 LANG=en_US.UTF-8
export LANG=en_US.UTF-8
locale

Add the ROS 2 Repositories

Type the following inside the terminal.

sudo apt update && sudo apt install curl gnupg2 lsb-release

At the prompt, type Y and then Enter to install the packages.

Now type the following command (this is a single command; you can copy and paste all of it into your terminal window):

curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -

Now let’s add the repository to the source list.

sudo sh -c 'echo "deb [arch=$(dpkg --print-architecture)] http://packages.ros.org/ros2/ubuntu $(lsb_release -cs) main" > /etc/apt/sources.list.d/ros2-latest.list'

Install the ROS 2 Packages

Update the apt repository cache.

sudo apt update

Install the Desktop version of ROS 2.

sudo apt install ros-foxy-desktop

Type Y and Enter to download ROS 2.

Set up the Environment Variables

Add the Foxy setup script to your .bashrc file so that it is sourced automatically every time you open a new terminal.

echo "source /opt/ros/foxy/setup.bash" >> ~/.bashrc

To check that it was added, type the following command, and scroll all the way to the bottom:

gedit ~/.bashrc
[Image: the source line at the bottom of the .bashrc file]

If you don’t have gedit installed, type:

sudo apt-get install gedit

Now close the current terminal window, and open a new one.

Type the following command to see what version of ROS you are using.

printenv ROS_DISTRO

Here is what you should see.

[Image: output of printenv ROS_DISTRO, showing foxy]

You can also type:

env | grep ROS

Install some other tools that you will work with in ROS. After you type the commands below, press Y and Enter when prompted to complete the installation.

sudo apt install -y python3-pip
pip3 install -U argcomplete
sudo apt install python3-argcomplete

Test Your Installation

Open a new terminal window, and launch the talker program. This program is a demo program written in C++ that is publishing messages (i.e. data in the form of a string) to a topic. 

ros2 run demo_nodes_cpp talker
[Image: output of the talker node]

Then type the following command in another terminal window to launch the listener program. This Python program listens (i.e. subscribes) to the topic that the talker program is publishing messages to. 

ros2 run demo_nodes_py listener
[Image: output of the listener node]
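If you’re curious what the listener is doing under the hood, here is a minimal sketch of a Python subscriber that listens to the same chatter topic the talker publishes to. The file name my_listener.py is just an example.

# my_listener.py
# A minimal sketch of a ROS 2 subscriber that listens to the chatter topic
# published by the demo talker node.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class MyListener(Node):
    def __init__(self):
        super().__init__('my_listener')
        # Run self.callback every time a message arrives on the chatter topic
        self.subscription = self.create_subscription(
            String, 'chatter', self.callback, 10)

    def callback(self, msg):
        self.get_logger().info('I heard: "%s"' % msg.data)

def main():
    rclpy.init()
    node = MyListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()

if __name__ == '__main__':
    main()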

To understand how topics, publisher nodes, and subscriber nodes work in ROS, check out this post.

If you saw the output above, everything is working properly. Yay!

That’s it!

Human Pose Estimation Using Deep Learning in OpenCV

In this tutorial, we will implement human pose estimation. Pose estimation means estimating the position and orientation of objects (in this case humans) relative to the camera. By the end of this tutorial, you will be able to generate the following output:

[Animation: the annotated human pose estimation output]

Real-World Applications

Human pose estimation has a number of real-world applications.

Let’s get started!

Prerequisites

Installation and Setup

We need to make sure we have all the software packages installed. Check to see if you have OpenCV installed on your machine. If you are using Anaconda, you can type:

conda install -c conda-forge opencv

Alternatively, you can type:

pip install opencv-python

Make sure you have NumPy, a scientific computing library for Python, installed.

If you’re using Anaconda, you can type:

conda install numpy

Alternatively, you can type:

pip install numpy
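To confirm that both packages are available, you can run a quick check from Python (the version numbers you see will depend on what you installed):

# A quick sanity check that OpenCV and NumPy are installed
import cv2
import numpy as np

print("OpenCV version:", cv2.__version__)
print("NumPy version:", np.__version__)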

Find Some Videos

The first thing we need to do is find some videos to serve as our test cases.

We want to download videos that contain humans. The video files should be in mp4 format and 1920 x 1080 in dimensions.

I found some good candidates on Pixabay.com and Dreamstime.com.

Take your videos and put them inside a directory on your computer.
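If you want to double-check that a video really is 1920 x 1080 before using it, here is a small sketch that reads its dimensions with OpenCV. The file name my_video.mp4 is just a placeholder; replace it with the name of your file.

# check_video_size.py
# A small sketch that prints the width and height of a video file.
import cv2

cap = cv2.VideoCapture('my_video.mp4')  # placeholder file name
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print('Video dimensions: %d x %d' % (width, height))
cap.release()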

Download the Protobuf File

Inside the same directory as your videos, download the protobuf file on this page. It is named graph_opt.pb. This file contains the weights of the neural network. The neural network is what we will use to determine the human’s position and orientation (i.e. pose).
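Once the file has downloaded, you can optionally do a quick sanity check that OpenCV’s DNN module is able to load it (the main script below loads it the same way):

# A quick check that the downloaded model file can be loaded by OpenCV
import cv2 as cv

net = cv.dnn.readNetFromTensorflow("graph_opt.pb")
print("Loaded", len(net.getLayerNames()), "layers from graph_opt.pb")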

Brief Description of OpenPose

We will use the OpenPose application along with OpenCV to do what we need to do in this project. OpenPose is an open source real-time 2D pose estimation application for people in video and images. It was developed by students and faculty members at Carnegie Mellon University. 

You can learn the theory and details of how OpenPose works in this paper and at GeeksforGeeks.

Write the Code

Here is the code. Make sure you put the code in the same directory on your computer where you put the other files.

The only lines you need to change are:

  • Line 14 (name of the input file in mp4 format)
  • Line 15 (input file size)
  • Line 18 (output file name)
# Project: Human Pose Estimation Using Deep Learning in OpenCV
# Author: Addison Sears-Collins
# Date created: February 25, 2021
# Description: A program that takes a video with a human as input and outputs
# an annotated version of the video with the human's position and orientation.

# Reference: https://github.com/quanhua92/human-pose-estimation-opencv

# Import the important libraries
import cv2 as cv # Computer vision library
import numpy as np # Scientific computing library

# Make sure the video file is in the same directory as your code
filename = 'dancing32.mp4'
file_size = (1920,1080) # Assumes 1920x1080 mp4 as the input video file

# We want to save the output to a video file
output_filename = 'dancing32_output.mp4'
output_frames_per_second = 20.0 

BODY_PARTS = { "Nose": 0, "Neck": 1, "RShoulder": 2, "RElbow": 3, "RWrist": 4,
               "LShoulder": 5, "LElbow": 6, "LWrist": 7, "RHip": 8, "RKnee": 9,
               "RAnkle": 10, "LHip": 11, "LKnee": 12, "LAnkle": 13, "REye": 14,
               "LEye": 15, "REar": 16, "LEar": 17, "Background": 18 }

POSE_PAIRS = [ ["Neck", "RShoulder"], ["Neck", "LShoulder"], ["RShoulder", "RElbow"],
               ["RElbow", "RWrist"], ["LShoulder", "LElbow"], ["LElbow", "LWrist"],
               ["Neck", "RHip"], ["RHip", "RKnee"], ["RKnee", "RAnkle"], ["Neck", "LHip"],
               ["LHip", "LKnee"], ["LKnee", "LAnkle"], ["Neck", "Nose"], ["Nose", "REye"],
               ["REye", "REar"], ["Nose", "LEye"], ["LEye", "LEar"] ]

# Width and height of training set
inWidth = 368
inHeight = 368

net = cv.dnn.readNetFromTensorflow("graph_opt.pb")

cap = cv.VideoCapture(filename)

# Create a VideoWriter object so we can save the video output
fourcc = cv.VideoWriter_fourcc(*'mp4v')
result = cv.VideoWriter(output_filename,  
                         fourcc, 
                         output_frames_per_second, 
                         file_size) 
# Process the video
while cap.isOpened():
    hasFrame, frame = cap.read()
    if not hasFrame:
        break

    frameWidth = frame.shape[1]
    frameHeight = frame.shape[0]
    
    net.setInput(cv.dnn.blobFromImage(frame, 1.0, (inWidth, inHeight), (127.5, 127.5, 127.5), swapRB=True, crop=False))
    out = net.forward()
    out = out[:, :19, :, :]  # MobileNet output [1, 57, -1, -1], we only need the first 19 elements

    assert(len(BODY_PARTS) == out.shape[1])

    points = []
    for i in range(len(BODY_PARTS)):
        # Slice the heatmap of the corresponding body part.
        heatMap = out[0, i, :, :]

        # Originally, we try to find all the local maximums. To simplify a sample
        # we just find a global one. However only a single pose at the same time
        # could be detected this way.
        _, conf, _, point = cv.minMaxLoc(heatMap)
        x = (frameWidth * point[0]) / out.shape[3]
        y = (frameHeight * point[1]) / out.shape[2]
        # Add a point if its confidence is higher than the threshold.
        # Feel free to adjust this confidence value.
        points.append((int(x), int(y)) if conf > 0.2 else None)

    for pair in POSE_PAIRS:
        partFrom = pair[0]
        partTo = pair[1]
        assert(partFrom in BODY_PARTS)
        assert(partTo in BODY_PARTS)

        idFrom = BODY_PARTS[partFrom]
        idTo = BODY_PARTS[partTo]

        if points[idFrom] and points[idTo]:
            cv.line(frame, points[idFrom], points[idTo], (0, 255, 0), 3)
            cv.ellipse(frame, points[idFrom], (3, 3), 0, 0, 360, (255, 0, 0), cv.FILLED)
            cv.ellipse(frame, points[idTo], (3, 3), 0, 0, 360, (255, 0, 0), cv.FILLED)

    t, _ = net.getPerfProfile()
    freq = cv.getTickFrequency() / 1000
    cv.putText(frame, '%.2fms' % (t / freq), (10, 20), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0, 0, 0))

    # Write the frame to the output video file
    result.write(frame)
		
# Release the video capture object
cap.release()

# Release the video writer object
result.release()

Run the Code

To run the code, type:

python openpose.py

Video Output

Here is the output I got:

Further Work

If you would like to do a deep dive into pose estimation, check out the official GitHub for the OpenPose project here.

That’s it. Keep building!