Robot State Publisher vs. Joint State Publisher

In this post, I will explain the difference between the Robot State Publisher and the Joint State Publisher ROS packages. 

In order to understand the difference between the packages, it is important that you first understand that every robot is made up of two types of components:

  1. Joints
  2. Links

Links and Joints in Robotics

Links are the rigid pieces of a robot. They are the “bones”. 

Links are connected to each other by joints. Joints are the pieces of the robot that move, enabling motion between connected links.

Consider the human arm below as an example. The shoulder, elbow, and wrist are joints. The upper arm, forearm, and palm of the hand are links.

[Image: links and joints of a human arm]

For a robotic arm, links and joints look like this.

[Image: links and joints of a robotic arm]

You can see that a robotic arm is made of rigid pieces (links) and non-rigid pieces (joints). Servo motors at the joints cause the links of a robotic arm to move.

For a mobile robot with LIDAR, links and joints look like this:

[Image: links and joints of a mobile robot with LIDAR]

The wheel joints are revolute joints. Revolute joints cause rotational motion. Each wheel joint in the photo connects a wheel link to the base link.

Fixed joints have no motion at all. You can see that the LIDAR is connected to the base of the robot via a fixed joint (i.e. this could be a simple screw that connects the LIDAR to the base of the robot).

You can also have prismatic joints. The SCARA robot in this post has a prismatic joint. Prismatic joints cause linear motion between links (as opposed to rotational motion).

Difference Between the Robot State Publisher and the Joint State Publisher

Whenever we want a robot to complete a specific task (e.g. move a certain distance in an environment, pick up an object, etc.), we need a way to know the position and velocity of each joint at all times. The Joint State Publisher does exactly this.

The Joint State Publisher package keeps track of the position (i.e. angle in radians for a servo motor or displacement in meters for a linear actuator) and velocity of each joint of a robot and publishes these values to the ROS system as sensor_msgs/JointState messages.
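
To make this concrete, here is a rough sketch of what one of these messages looks like when constructed in Python. The message fields are the same in ROS 1 and ROS 2; the joint names and values below are made-up examples, not from a real robot.

from sensor_msgs.msg import JointState

msg = JointState()
msg.name = ['shoulder_joint', 'elbow_joint']  # one entry per joint
msg.position = [0.52, -1.05]                  # radians for a revolute joint
msg.velocity = [0.0, 0.0]                     # radians per second
# header.stamp would normally be set from the node's clock before publishing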

The Robot State Publisher then takes two main inputs:

  1. The sensor_msgs/JointState messages from the Joint State Publisher. 
  2. A model of the robot in URDF file format.

The Robot State Publisher takes that information, calculates the position and orientation of each coordinate frame of the robot, and publishes this data to the tf2 package.
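
In ROS 2, a common way to start the Robot State Publisher is from a Python launch file that reads the URDF and passes it in as the robot_description parameter. Below is a minimal sketch, assuming ROS 2 Foxy or later; the URDF path is a placeholder that you would replace with your own robot model.

# Minimal ROS 2 launch sketch that feeds a URDF to the Robot State Publisher.
# '/path/to/my_robot.urdf' is a placeholder path.
from launch import LaunchDescription
from launch_ros.actions import Node

def generate_launch_description():
    with open('/path/to/my_robot.urdf', 'r') as f:
        robot_description = f.read()

    return LaunchDescription([
        Node(
            package='robot_state_publisher',
            executable='robot_state_publisher',
            parameters=[{'robot_description': robot_description}],
        ),
    ])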

The tf2 package is responsible for keeping track of the position and orientation of all coordinate frames of a robot over time. At any given time, you can query the tf2 package to find out the position and orientation of any coordinate frame (i.e. the “child” frame) relative to another coordinate frame (i.e. the “parent” frame).

For example, if we are using ROS 2 and want to know the position and orientation of the LIDAR link relative to the base of the robot, we would use the following command:

ros2 run tf2_ros tf2_echo base_link lidar_link

The syntax is:

ros2 run tf2_ros tf2_echo <parent frame> <child frame>
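
If you prefer to query tf2 from code rather than the command line, here is a rough sketch of a ROS 2 Python node that looks up the same transform. It is only an illustration; the node name and timer period are arbitrary choices.

# Rough sketch: query tf2 for the pose of lidar_link relative to base_link.
import rclpy
import rclpy.time
from rclpy.node import Node
from tf2_ros import Buffer, TransformListener

class FrameListener(Node):
    def __init__(self):
        super().__init__('frame_listener')
        self.tf_buffer = Buffer()
        self.tf_listener = TransformListener(self.tf_buffer, self)
        self.timer = self.create_timer(1.0, self.on_timer)

    def on_timer(self):
        try:
            # Target (parent) frame first, then source (child) frame
            t = self.tf_buffer.lookup_transform('base_link', 'lidar_link', rclpy.time.Time())
            self.get_logger().info(f'Translation: {t.transform.translation}')
        except Exception as e:
            self.get_logger().warn(f'Could not get transform: {e}')

rclpy.init()
rclpy.spin(FrameListener())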

Joint State Publisher: Simulation vs. Real World

When you are creating robots in simulation using a tool like Gazebo, you will want to use the joint state publisher Gazebo plugin to publish the position and velocity of the joints (i.e. publish the sensor_msgs/JointState messages).

In a real-world robotics project, you will want to write your own joint state publisher. You can find examples of how to do this here, here, and here.
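
As a rough starting point, a hand-written joint state publisher in ROS 2 Python might look something like the sketch below. The joint name and the read_encoder() function are placeholders for however your hardware actually reports joint angles.

# Sketch of a hand-written joint state publisher (ROS 2 Python).
# read_encoder() is a placeholder for your own hardware interface.
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import JointState

def read_encoder():
    # Placeholder: return the measured joint angle in radians
    return 0.0

class MyJointStatePublisher(Node):
    def __init__(self):
        super().__init__('my_joint_state_publisher')
        self.pub = self.create_publisher(JointState, 'joint_states', 10)
        self.timer = self.create_timer(0.05, self.publish_joint_states)  # 20 Hz

    def publish_joint_states(self):
        msg = JointState()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.name = ['wheel_left_joint']
        msg.position = [read_encoder()]
        self.pub.publish(msg)

rclpy.init()
rclpy.spin(MyJointStatePublisher())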

How to Download a ROS Package from GitHub

Let’s say you see a ROS package on GitHub that you’d like to bring into your workspace.

For example, I would like to download the rviz_plugin_tutorials package available on this page on GitHub.

Option 1: Using the Command Line

ROS 1 Package

Open a new terminal window.

Type:

cd ~/catkin_ws/src

Clone the repository:

git clone -b <branch> <address>

For example, if you are looking to download the rviz_plugin_tutorials package available here, you would type the following command (this is a single command all on one line):

git clone -b noetic-devel https://github.com/ros-visualization/visualization_tutorials.git

You could also leave out the ROS branch specification, and type:

git clone <address>

For example:

git clone https://github.com/ros-visualization/visualization_tutorials.git

Then open up a new terminal window, and type:

cd ~/catkin_ws
catkin_make

That’s it!

ROS 2 Package

For a ROS 2 package, you would type the following (assuming the name of your ROS 2 workspace is dev_ws):

cd ~/dev_ws/src
git clone -b foxy-devel https://github.com/ros-visualization/visualization_tutorials.git

Open a new terminal window, and type:

cd ~/dev_ws

Install dependencies:

rosdep install --from-paths src --ignore-src -r -y

To build the packages, type:

colcon build --symlink-install

If that command doesn’t work, try the following command:

colcon build

Option 2: Downloading Manually from GitHub

First, you need to download the files to your computer.

Go to this page and download all the files. I like to download them to my Desktop.

Click the green “Code” button to download a ZIP file of the repository.

Open up the ZIP file, and go to the rviz_plugin_tutorials folder. That is the package we need.

Right-click on that folder, and click “Extract”.

Move that folder to your catkin_ws/src folder. I just dragged and dropped it into my catkin_ws/src folder, as I would with any folder on a Mac or Windows computer.

Open up a terminal window, and type:

cd ~/catkin_ws/src
dir

You should see the rviz_plugin_tutorials package (i.e. a folder) in there.

Now you need to build the workspace so that your ROS environment knows about the rviz_plugin_tutorials package we just added. Open a new terminal window, and type:

cd ~/catkin_ws/
catkin_make

Open a new terminal window, and type:

rospack find rviz_plugin_tutorials

You should see the path of your new ROS package.

[Image: path of the new ROS package]

To check whether our package’s dependencies are satisfied, type these commands. Ignore any error messages you see in the terminal:

rosdep update
rosdep check rviz_plugin_tutorials

You should see a message that says “All system dependencies have been satisfied.”

[Image: “All system dependencies have been satisfied” message]

How to Load a TensorFlow Model Using OpenCV

In this tutorial, we will load a TensorFlow model (i.e. neural network) using the popular computer vision library known as OpenCV. To make things interesting, we will build an application to detect eating utensils (i.e. forks, knives, and spoons). Here is what the final output will look like:

[Image: final output of the eating utensil detector]

Our goal is to build an early prototype of a product that can make it easier and faster for a robotic chef arm, like the one created by Moley Robotics, to detect forks, knives, and spoons.

You Will Need

For this project, you will need Python 3 with the OpenCV (cv2) and NumPy libraries installed, as well as a webcam.

Directions

Open this page for the TensorFlow Object Detection API.

Download the weights file and the config file for one of the pretrained object detection models. I will use Inception-SSD v2. You will want to right-click and Save As.

[Image: the pretrained model page]
[Image: the config and weights files]

Use a program like 7-Zip to open the tar.gz archive file (i.e. click Open archive if using 7-Zip). 

Double click on ssd_inception_v2_coco_2017_11_17.tar.

[Image: ssd_inception_v2_coco_2017_11_17.tar inside the archive]

Double click again on the folder name.

[Image: the extracted folder containing frozen_inference_graph.pb]

Locate a file inside the folder named frozen_inference_graph.pb. Move this file to your working directory (i.e. the same directory where we will write our Python program later in this tutorial).

I also have a pbtxt file named ssd_inception_v2_coco_2017_11_17.pbtxt that came from the config link. This file is in protobuf text (pbtxt) format.

To create the ssd_inception_v2_coco_2017_11_17.pbtxt file, I right-clicked on config, clicked Save As, saved it to my Desktop, and then copied the contents of the pbtxt file on GitHub into this file using Notepad++.

Make sure the pb and pbtxt files are both in your working directory.

Alternatively, you can just download my pb file and my pbtxt file.

Now create a new Python program in your working directory called utensil_detector.py.

Add the following code:

# Project: Eating Utensil Detector Using TensorFlow and OpenCV
# Author: Addison Sears-Collins
# Date created: August 1, 2021
# Description: This program detects forks, spoons, and knives

import cv2 as cv # OpenCV computer vision library
import numpy as np # Scientific computing library 

#  classes = ['person','bicycle','car','motorcycle','airplane' ,'bus','train','truck','boat' ,'traffic light','fire hydrant',
#    'stop sign','parking meter','bench','bird','cat','dog','horse','sheep','cow','elephant','bear','zebra','giraffe' ,
#    'backpack','umbrella','handbag' ,'tie','suitcase','frisbee' ,'skis','snowboard','sports ball' ,'kite',
#    'baseball bat','baseball glove','skateboard','surfboard','tennis rack','bottle','wine glass','cup','fork','knife',
#    'spoon','bowl','banana','apple' ,'sandwich','orange','broccoli','carrot','hot dog','pizza' ,'donut' ,'cake',
#    'chair' ,'couch' ,'potted plant','bed','dining table','toilet','tv','laptop','mouse','remote','keyboard',
#    'cell phone','microwave','oven','toaster','sink','refrigerator','book','clock','vase','scissors' ,'teddy bear',
#    'hair drier','toothbrush']

# Just use a subset of the classes
classes = ["background", "person", "bicycle", "car", "motorcycle",
  "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant",
  "unknown", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse",
  "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "unknown", "backpack",
  "umbrella", "unknown", "unknown", "handbag", "tie", "suitcase", "frisbee", "skis",
  "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard",
  "surfboard", "tennis racket", "bottle", "unknown", "wine glass", "cup", "fork", "knife",
  "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog",
  "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "unknown", "dining table",
  "unknown", "unknown", "toilet", "unknown", "tv", "laptop", "mouse", "remote", "keyboard",
  "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "unknown",
  "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]

# Colors we will use for the object labels
colors = np.random.uniform(0, 255, size=(len(classes), 3))

# Open the webcam
cam = cv.VideoCapture(0)

pb  = 'frozen_inference_graph.pb'
pbt = 'ssd_inception_v2_coco_2017_11_17.pbtxt'

# Read the neural network
cvNet = cv.dnn.readNetFromTensorflow(pb,pbt)   

while True:

  # Read in the frame
  ret_val, img = cam.read()
  if not ret_val:
    break  # stop if no frame could be read from the camera
  rows = img.shape[0]
  cols = img.shape[1]
  cvNet.setInput(cv.dnn.blobFromImage(img, size=(300, 300), swapRB=True, crop=False))

  # Run object detection
  cvOut = cvNet.forward()

  # Go through each object detected and label it
  for detection in cvOut[0,0,:,:]:
    score = float(detection[2])
    if score > 0.3:

      idx = int(detection[1])   # prediction class index. 

      # If you want all classes to be labeled instead of just forks, spoons, and knives,
      # remove the if statement below
      if classes[idx] == 'fork' or classes[idx] == 'spoon' or classes[idx] == 'knife':
        left = detection[3] * cols
        top = detection[4] * rows
        right = detection[5] * cols
        bottom = detection[6] * rows
        cv.rectangle(img, (int(left), int(top)), (int(right), int(bottom)), (23, 230, 210), thickness=2)
           
        # draw the prediction on the frame
        label = "{}: {:.2f}%".format(classes[idx],score * 100)
        y = top - 15 if top - 15 > 15 else top + 15
        cv.putText(img, label, (int(left), int(y)),cv.FONT_HERSHEY_SIMPLEX, 0.5, colors[idx], 2)

  # Display the frame
  cv.imshow('my webcam', img)

  # Press ESC to quit
  if cv.waitKey(1) == 27: 
    break 

# Stop filming
cam.release()

# Close down OpenCV
cv.destroyAllWindows()

Save the code.

Here is what my working directory looks like.

[Image: the working directory]

Run the code.

python utensil_detector.py

You should see object detection running in real time. If you place a fork, knife, or spoon in front of the camera, you will see it labeled accordingly.
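
If you would rather test on a static photo instead of the webcam feed, a rough variation of the same pipeline looks like this (test_image.jpg is a placeholder filename):

# Rough sketch: run the same detector on a single image instead of the webcam.
# 'test_image.jpg' is a placeholder filename.
import cv2 as cv

cvNet = cv.dnn.readNetFromTensorflow('frozen_inference_graph.pb',
                                     'ssd_inception_v2_coco_2017_11_17.pbtxt')

img = cv.imread('test_image.jpg')
rows, cols = img.shape[:2]
cvNet.setInput(cv.dnn.blobFromImage(img, size=(300, 300), swapRB=True, crop=False))

# Draw a box around every detection above the confidence threshold
for detection in cvNet.forward()[0, 0, :, :]:
    score = float(detection[2])
    if score > 0.3:
        left, top = detection[3] * cols, detection[4] * rows
        right, bottom = detection[5] * cols, detection[6] * rows
        cv.rectangle(img, (int(left), int(top)), (int(right), int(bottom)), (23, 230, 210), 2)

cv.imshow('detections', img)
cv.waitKey(0)
cv.destroyAllWindows()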

That’s it! Keep building!