How to Determine What Torque You Need for Your Servo Motors

In this tutorial, I will show you how to calculate how much torque you need for the servo motors (i.e. joints) on a robotic arm you are trying to build. 

Why is torque so important? When you want to build a robotic arm to perform some task in the world, you have to make sure that each joint of the arm (i.e. servo motor) is strong enough to lift whatever you want it to lift. In robotics jargon, the maximum weight that a robotic arm can lift is referred to as the maximum payload.

This six degree of freedom robotic arm we built here can lift a pair of sunglasses, but the servos don’t have enough torque to lift a car, for example.

What is Torque?

Consider this wooden rod below.

2-torque-servo-motorsJPG

Let’s label its center of mass with a light blue circle. Imagine this blue circle is a nail that we have hammered into the wooden rod and a wall.

3-torque-servo-motorsJPG

If we apply a force F on the end of the rod at a distance r from the center of mass, what do you think will happen? 

Note that we are only concerned with the component of the force that is perpendicular to the position vector. The position vector (in blue below) is drawn from the axis of rotation (i.e. the blue nail) to the point where the force is applied. Only the component of the force perpendicular to this position vector generates torque.

4-torque-servo-motorsJPG

The rod will rotate in a counterclockwise direction around the axis.

5-torque-servo-motorsJPG

The turning effect of this force, applied at a distance r from the axis of rotation (which points directly out of the page), is known as torque. Torque is often represented by the Greek letter τ. The equation for torque is:

τ = r F 

It is worth repeating, but with torque we are only concerned about the component of the force that is perpendicular to the position vector. Any force component that is parallel to the position vector doesn’t generate torque.

For example, imagine we applied force to the end of the rod at an angle like this:

6-torque-servo-motorsJPG

The torque in this case only takes into account the perpendicular component of the force (the red line above). Here, the equation for the torque is:

sin(θ) = Fperpendicular / F       … using trigonometry

Fsin(θ) = Fperpendicular               

Therefore,

τ = r F sin(θ)

The official metric (SI) unit of torque is the newton-meter (Nm). However, in datasheets for servo motors, you’ll often see ounce-force-inch (oz-in) or kilogram-force centimeter (kg-cm).

In the real world, the blue nail corresponds to a servo motor’s axis of rotation. This is the robotic arm’s joint. The orange bar is a link of the robotic arm. The force F is the force of gravity acting on the object that the robotic arm is trying to lift.

7-torque-servo-motorsJPG

F = m * g

Where m is the mass the servo motor has to lift, and g is the acceleration due to gravity (9.80665 m/s2). 

The torque for this setup above is:

τ = m * g * r
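If you’d like to sanity-check these formulas numerically, here is a minimal Python sketch. The mass, link length, and angle below are made-up values for illustration only:

import math

g = 9.80665  # acceleration due to gravity (m/s^2)

def torque(r, force, theta_deg=90.0):
    # Torque (Nm) from a force applied at a distance r (m) from the axis.
    # theta_deg is the angle between the position vector and the force.
    return r * force * math.sin(math.radians(theta_deg))

# Made-up example: a 2.0 kg mass at the end of a 0.15 m link
m = 2.0     # mass being lifted (kg)
r = 0.15    # distance from the axis of rotation (m)
F = m * g   # force due to gravity (N)

print(torque(r, F))                 # ~2.94 Nm when the force is perpendicular
print(torque(r, F, theta_deg=45.0)) # less torque when it is not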

Example 1

If you see a servo with a torque of 35kg-cm, what does that mean?

8-torque-servo-motorsJPG

Typically, when you see a value like 35 kg printed on a servo motor, what they are referring to is the stall torque, which, in this case, is 35 kg-cm. Stall torque is the torque load that causes a servo motor to “stall” or stop rotating.

A stall torque of 35 kg-cm means that the servo motor will stop rotating when it is trying to move a 35 kg weight at a radial distance of 1.0 cm. 

In the example below, we will assume the orange link has a mass of 0 kg.

9-torque-servo-motorsJPG

In case you’re wondering, here is what the force F is:

F = m * g = 35 kg * 9.80665 m/s2

F = 343.2 N

Where N stands for newtons. Note that 1 kilogram-force = 9.80665 N.

Example 2

Let’s say the servo of the robotic arm rotates a bit, counterclockwise.

What is the torque in this case?

10-torque-servo-motorsJPG

τ = r F 

τ = (1.0 cm * cos(45°)) * (343.2 N)

τ = (0.01 m * cos(45°)) * (343.2 N)

τ = (0.00707 m) * (343.2 N)

τ = 2.427 Nm

1 Nm is equal to 10.197162129779 kg-cm.

Therefore,

τ = 24.75 kg-cm

So, you can see that when the arm is extended out parallel to the ground, the required torque is higher than when the arm is rotated up at an angle.

This makes a lot of sense. Imagine holding your arm stretched out horizontally and trying to hold a bucket of water in place while trying not to bend your elbow. It would be pretty difficult!
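If you want to verify the numbers in this example yourself, here is a short Python sketch. It reuses the 35 kg load and 1.0 cm link length from above; everything else is straight arithmetic:

import math

g = 9.80665                    # m/s^2
NM_TO_KGCM = 10.197162129779   # 1 Nm expressed in kg-cm

m = 35.0   # mass being lifted (kg)
r = 0.01   # link length (m), i.e. 1.0 cm

F = m * g  # force due to gravity (N)

for angle_deg in [0, 45]:  # 0 deg = arm horizontal, 45 deg = arm rotated up
    lever_arm = r * math.cos(math.radians(angle_deg))
    torque_nm = lever_arm * F
    print(angle_deg, round(torque_nm, 3), "Nm =",
          round(torque_nm * NM_TO_KGCM, 2), "kg-cm")

# Output:
# 0 deg  -> ~3.432 Nm = 35.0 kg-cm  (the stall torque)
# 45 deg -> ~2.427 Nm = 24.75 kg-cm (matches the example above)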

Accounting for the Force of Gravity

In the example above, we have accounted for the force of gravity when calculating our torque requirements. However, we assumed that the links have no weight. In reality they do. 

Consider this diagram of a robotic arm below. It has four servo motors and a gripper on the end (at right). Each servo motor (i.e. joint) is connected by a link (typically a metal piece like you see in this six degree of freedom robotic arm here).

11-robotic-armJPG

To calculate the torque requirement, we start at the end effector (i.e. gripper) of the robotic arm and then work our way to the base of the robot. We need to account for all the pieces of the robot that are impacted by gravity:

  • Weight of the link
  • Weight of the joint
  • Weight of the object being lifted (e.g. the box below)
12-robotic-armJPG

Let’s start with joint 4. Joint 4 has to be strong enough to lift the box as well as link 4. 

Box:

  • The center of mass of the box is located at a distance r4 from the axis of rotation of joint 4.
  • The force of gravity acting on the box is equal to mbox * g. (where m means mass).
  • The torque due to the box is therefore r4 * mbox * g.

Link 4:

13-robotic-armJPG
  • The center of mass of the link is that red dot above. It is located at a distance r4/2  from the axis of rotation of joint 4.
  • The force of gravity acting on link 4 is equal to mlink4 * g (note that the product of mass and the gravitational acceleration constant is also known as the “weight”).
  • The torque due to link 4 is therefore (1/2) * r4 * mlink4 * g.

Therefore:

Torque4 =  (r4 * mbox * g) + ((1/2) * r4 *  mlink4 * g)

Let’s now do one more. We’ll do joint 3. We place a red dot in the center of mass of link 3.

14-robotic-armJPG

Joint 3 has to be strong enough to lift the following components:

  • Box…………((r3 + r4) * mbox * g)
  • Link 4….…..((r3 + r4/2) * mlink4 * g)
  • Joint 4……..(r3 * mjoint4 * g)
  • Link 3………(r3/2 * mlink3 * g)

Therefore:

Torque3 = ((r3 + r4) * mbox * g) + ((r3 + r4/2) * mlink4 * g) + (r3 * mjoint4 * g) + (r3/2 * mlink3 * g)

You keep doing this for the other joints.
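Here is a minimal Python sketch of how you might chain these gravity-torque calculations. All of the masses and link lengths below are placeholder values; in a real design you would pull them from your arm’s CAD model and the servo datasheets:

g = 9.80665  # m/s^2

# Placeholder values (kg and meters) -- substitute your own
m_box, m_link4, m_link3, m_joint4 = 0.5, 0.2, 0.25, 0.06
r4, r3 = 0.10, 0.12

# Joint 4: the box acts at r4, link 4's center of mass acts at r4/2
torque4 = (r4 * m_box * g) + (0.5 * r4 * m_link4 * g)

# Joint 3: everything outboard of joint 3 contributes
torque3 = ((r3 + r4) * m_box * g            # box
           + (r3 + r4 / 2) * m_link4 * g    # link 4
           + r3 * m_joint4 * g              # joint 4 (servo)
           + (r3 / 2) * m_link3 * g)        # link 3

print(f"Joint 4: {torque4:.3f} Nm, Joint 3: {torque3:.3f} Nm")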

Accounting for Angular Acceleration

The force of gravity on the links and payload (i.e. the box) is only part of the calculation of the torque requirement for a motor. We want our robotic arm to move and do useful work in the world…not just sit there with its arm stretched out, holding a box in place.

There is also a torque required for a joint to accelerate a link (or payload) from rest, i.e. to generate angular acceleration. Therefore, the total torque requirement for a servo is:

Torque Required By a Servo Motor = (Torque Due to Force of Gravity on Links and Payload) + (Torque Due to Angular Acceleration of Links and Payload)

This torque due to angular acceleration is calculated using the following equation:

τ = Iα

where I is the rotational inertia (or moment of inertia), and α is the angular acceleration around an axis. 

The value for I will vary depending on the shape of the object that is being rotated (e.g. solid cylinder, thin rod, slab, etc.). Inertia is the “resistance an object has to any change in its velocity.” Rotational inertia, in the case of a servo motor and its link, is therefore the resistance they have to any change in their angular velocity.

The units for rotational inertia are kg-m2. The unit for angular acceleration is rad/s2.

Example

Consider this joint below that is connected to a link. In this case, the motor will rotate counterclockwise in a direction that is at a 90 degree angle to the force of gravity (which is straight down). The only torque we have here is the torque required to generate angular acceleration.

τ = Iα

15-linkJPG

I in this case will be the sum of the rotational inertia of the motor and the link. 

I = Imotor + Ilink1

You can find the rotational inertia of the motor (Imotor) in the datasheet for the motor (do a Google search for the datasheet). The rotational inertia of the link (Ilink1) can be described as the rotational inertia of a rod of some length L and mass m, rotating about one end. The equation is as follows:

Ilink1=(1/3)*m*L2

16-linkJPG

α, the angular acceleration, will be the same for both the motor and the link since they are connected to each other. 

How do we find α?

Let’s assume that we want the motor to move 90° in 1.0 second and then stop (e.g. common in pick and place tasks). We can draw a graph of angular velocity ω (radians/second) vs. time t (seconds).

17-angular-velocityJPG

We want to minimize the acceleration as much as possible so that we minimize the torque requirements for our servo motor. We could use any velocity curve we want, but for a move that starts and ends at rest, the curve that minimizes the peak acceleration is the one that increases linearly from rest, reaches a peak at the halfway point, and then decreases linearly back to zero.

The reason this curve below gives the minimum acceleration is that acceleration is the slope of the velocity curve (i.e. change in angular velocity / change in time = angular acceleration).

You can see that the magnitude of the slope is minimized on the black curve. The red and green curves get steep in some parts, which means the angular acceleration is high (and thus more torque is needed for the servo motor).

18-angular-velocityJPG

What is the required area underneath this curve below? The area under the curve (i.e. the triangle) is equal to the angular distance the servo motor needs to move.

The formula for the area of a triangle (Atriangle) is: 

Atriangle = (1/2)*(base)*(height)

19-angular-velocityJPG

The distance the servo motor needs to move is 90°, which is equal to π/2 radians.

π/2 = (1/2)*(1.0)*(ωmax)

ωmax = π radians/second

Since α is the slope of the curve, we know that it is:

α = (change in y/ change in x) = (π/0.5) = 2π rad/s2

Now, we have all the numbers we need to calculate the required torque:

τ = (Imotor + (1/3)*m*L2) * (2π)

Let’s assume:

  • Imotor = 0 kg-m2 (we’ll assume the rotational inertia of the motor is negligible compared to the link, but you would typically get this value in the motor’s datasheet)
  • L = 0.75 m
  • m = 1.2 kg

τ = ((1/3)*(1.2kg)*(0.75 meters)2) * (2π rad/s2)

τ = 1.414  kg-m2/s2 = 1.414  Nm = 14.410 kg-cm = 200 oz-in
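Here is the whole angular-acceleration calculation as a short Python sketch, using the same 90° move in 1.0 second, 0.75 m link, and 1.2 kg mass as above. The motor’s own rotational inertia is again assumed to be negligible (normally you would read it from the datasheet):

import math

NM_TO_KGCM = 10.197162129779  # 1 Nm in kg-cm
NM_TO_OZIN = 141.611932       # 1 Nm in oz-in (approximate)

move_rad = math.pi / 2  # 90 degrees
move_time = 1.0         # seconds
L = 0.75                # link length (m)
m = 1.2                 # link mass (kg)
I_motor = 0.0           # assumed negligible here

# Triangular velocity profile: peak speed and (constant) acceleration
omega_max = 2.0 * move_rad / move_time   # pi rad/s
alpha = omega_max / (move_time / 2.0)    # 2*pi rad/s^2

I_link = (1.0 / 3.0) * m * L**2          # thin rod rotating about one end
torque_nm = (I_motor + I_link) * alpha

print(round(torque_nm, 3), "Nm =",
      round(torque_nm * NM_TO_KGCM, 2), "kg-cm =",
      round(torque_nm * NM_TO_OZIN, 1), "oz-in")
# ~1.414 Nm = ~14.4 kg-cm = ~200 oz-in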

Now, that we’ve identified the torque requirements for the motor in motion (we covered the stationary motor case in the previous section of this post), we need to make sure we select a motor that will be able to exert 14.410 kg-cm of torque at all of the speeds in the curve below.

20-angular-velocityJPG

So the next step is to take a motor that has a stall torque greater than 14.410 kg-cm and plot its torque vs. speed curve. We need to make sure that the required torque at our maximum speed (i.e. ωmax) falls under this curve; otherwise, we could damage our motor.

Let’s suppose we searched around on the Internet at various electronics stores and found a motor with a no load speed of 45 RPM and a stall torque of 250 oz-in (18 kg-cm). Let’s draw the torque vs. speed curve. Note that the “no load speed” is the revolutions per minute of a motor when it is running at top speed with nothing attached to it.

21-stall-torqueJPG

From our analysis, we calculated that we need 200 oz-in of torque, so I’ll draw that on the graph as a blue line.

22-stall-torqueJPG

ωmax = π radians/second 

1 radian per second ≈ 9.5493 revolutions per minute (rpm)

ωmax = π radians/second * 9.5493 rpm/(radians/second) = 30 rpm 

Let’s draw that on the curve.

23-no-loadJPG

It looks like that pink dot is above the curve, so we might need a stronger motor. Let’s double check to see if that point is, in fact, above our torque vs. speed curve.

24-torqueJPG

Torque = (-250/45)*(30) + 250 = 83 oz-in

25-torqueJPG

We see that the motor generates 83 oz-in of torque when the speed is 30 rpm. However, we need 200 oz-in of torque, so this motor is not strong enough for our project.

Now, let’s suppose we found a motor with the following specifications.

  • No-load speed = 100 rpm
  • Stall Torque = 350 oz-in

Does this motor work for our purposes? Let’s do the math.

Torque = (-350/100)*(30) + 350 = 245 oz-in

Since 245 > 200, this motor works for our purposes.
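This check is easy to script as well. The sketch below assumes a linear torque vs. speed curve running from the stall torque (at 0 rpm) down to zero torque (at the no-load speed), which is the simplification used in the plots above:

def available_torque(stall_torque, no_load_speed, speed):
    # Torque available at a given speed, assuming a linear torque-speed curve
    return stall_torque * (1.0 - speed / no_load_speed)

required_torque = 200.0  # oz-in, from the angular acceleration calculation
required_speed = 30.0    # rpm, omega_max converted to rpm

for stall, no_load in [(250.0, 45.0), (350.0, 100.0)]:
    torque_at_speed = available_torque(stall, no_load, required_speed)
    ok = torque_at_speed >= required_torque
    print(f"stall={stall} oz-in, no-load={no_load} rpm -> "
          f"{torque_at_speed:.0f} oz-in at {required_speed} rpm "
          f"({'OK' if ok else 'too weak'})")

# 250 oz-in / 45 rpm  -> ~83 oz-in at 30 rpm (too weak)
# 350 oz-in / 100 rpm -> 245 oz-in at 30 rpm (OK)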

However, in our analysis we have not taken into account the rotational inertia of the motor, and we have assumed that there is no payload attached to the link. So, in a real-world use case, we need to account for those by incorporating them into our original calculation of I (i.e. the rotational inertia).

Accounting for Angular Acceleration and the Force of Gravity

Now let’s look at an example where we need to take both angular acceleration and the force of gravity into account in order to calculate the torque requirement.

We will reposition the joint-link combination we worked with in the previous section so that the body of the motor is now parallel to the surface. Here is how that looks:

26-gravityJPG

Here we have two torques:

  • Acceleration of the Link: τ = Iα
  • Gravity: τ = r F 

Adding the two torques we have:

τ = Iα + mass *  g * (r/2) * cos(θ)

Where r/2 is the distance from the axis of rotation to the center of mass of the link. The center of mass of the link is noted with a pink circle below.

27-mass-of-linkJPG

The mass of the link is 1.2 kg, and the link length is 0.75 meters. In the worst case (link horizontal), cos(θ) = 1. Therefore, the amount of torque the motor needs in order for the link to overcome the downward force of gravity is:

mass *  g * (r/2) * cos(θ) = (1.2 kg)(9.8 m/s2)(0.75/2) = 4.41 Nm = 624 oz-in

You can see how the torque required to overcome the force of gravity is more than 3x the required torque to accelerate a link from rest (which we calculated as 200 oz-in).

Total torque required = 200 oz-in + 624 oz-in = 824 oz-in
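Here is how that combined calculation looks as a Python sketch, using the same 1.2 kg, 0.75 m link, the 2π rad/s2 angular acceleration from the triangular velocity profile, and the worst case cos(θ) = 1:

import math

NM_TO_OZIN = 141.611932  # 1 Nm in oz-in (approximate)

g = 9.8              # m/s^2 (value used in the example above)
m = 1.2              # link mass (kg)
r = 0.75             # link length (m)
alpha = 2 * math.pi  # rad/s^2

I_link = (1.0 / 3.0) * m * r**2

torque_accel = I_link * alpha             # ~1.414 Nm (~200 oz-in)
torque_gravity = m * g * (r / 2.0) * 1.0  # cos(theta) = 1 worst case: ~4.41 Nm

total_nm = torque_accel + torque_gravity
print(round(total_nm * NM_TO_OZIN, 1), "oz-in total")
# ~824.7 oz-in, which matches the 824 oz-in above within rounding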

Adding a Payload to the Robotic Arm

Now, let’s assume the robotic arm has something at the end of it that it needs to carry. We call this a payload.

28-payloadJPG

If you go to this list at Wikipedia, you will see that this object can be considered a “point mass”. The equation for rotational inertia for a point mass is:

I = Mr2

Where M = mass of the object, and r is the distance from the rotational axis to the center of mass of the object.

Earlier we found the torque required to produce angular acceleration for the link. That was:

τ = (Imotor + (1/3)*m*L2) * (2π)

τ = ((1/3)*(1.2kg)*(0.75 meters)2) * (2π rad/s2)

τ = 1.414  kg-m2/s2 = 1.414  Nm = 14.410 kg-cm = 200 oz-in

Now we need to add the rotational inertia of the object attached to the link. Let’s assume the object has a mass of 1.2 kg (just like the mass of the link). The equation becomes:

τ = ((1/3)*(1.2kg)*(0.75 meters)2 + 1.2 kg * (0.75)2) * (2π rad/s2)

τ = 5.655  kg-m2/s2 = 5.655  Nm = 57.665 kg-cm = 800.8 oz-in

Now let’s go back here:

27-mass-of-linkJPG-1

We had:

τ = Iα + mass *  g * (r/2) * cos(θ)

τ = Iα + 4.41 Nm

We also have to add the torque of the payload due to gravity.

τ = Iα + 4.41 Nm + (1.2 kg)(9.8 m/s2)(0.75) 

τ = Iα + 4.41 Nm + 8.82 Nm

τ = Iα + 13.23 Nm

τ = Iα + 1873.5 oz-in

Therefore,

τ = Total amount of torque required to move the link and payload

  = (torque needed for angular acceleration) + (torque to overcome gravity) 

  = 800.8 oz-in + 1873.5 oz-in

  = 2674.3 oz-in
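And here is the full calculation, with the 1.2 kg point-mass payload at the end of the link, as one short Python sketch:

import math

NM_TO_OZIN = 141.611932  # 1 Nm in oz-in (approximate)

g = 9.8              # m/s^2
m_link = 1.2         # kg
m_payload = 1.2      # kg (point mass at the end of the link)
r = 0.75             # m
alpha = 2 * math.pi  # rad/s^2

# Rotational inertia: rod about one end + point mass at the end of the rod
I_total = (1.0 / 3.0) * m_link * r**2 + m_payload * r**2

torque_accel = I_total * alpha                                   # ~5.65 Nm (~801 oz-in)
torque_gravity = (m_link * g * (r / 2.0)) + (m_payload * g * r)  # ~13.23 Nm (~1874 oz-in)

total_nm = torque_accel + torque_gravity
print(round(total_nm * NM_TO_OZIN, 1), "oz-in total")  # ~2674 oz-in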

You now have the fundamentals to expand this calculation to multi-link robotic arms. That’s it for now. Keep building!

References

  • Here is a handy calculator that helps you select the appropriate servo motor for each joint of your robotic arm.
  • Khan Academy has a good comprehensive short course on torque.
  • Professor Angela Sodemann has a great tutorial on her site RoboGrok.com that covers torque requirements for manipulators.

Detect the Corners of Objects Using Harris Corner Detector

In this tutorial, we will write a program to detect corners on random objects.

Our goal is to build an early prototype of a vision system that can make it easier and faster for robots to identify potential grasp points on unknown objects. 

Real-World Applications

The first real-world application that comes to mind is the laundry-folding robot developed by Dr. Pieter Abbeel (a famous robotics professor at UC Berkeley) and his team.

Dr. Abbeel used the Harris Corner Detector algorithm as a baseline for evaluating how well the robot was able to identify potential grasp points on a variety of laundry items. You can check out his paper at this link.

laundry-folding-robot
Source: YouTube (Credit to Berkeley AI Research lab)

If the robot can identify where to grasp a towel, for example, inverse kinematics can then be calculated so the robot can move its arm so that the gripper (i.e. end effector) grasps the corner of the towel to begin the folding process.

Prerequisites

Requirements

Here are the requirements:

  • Create a program that can detect corners on an unknown object using the Harris Corner Detector algorithm.

Detecting Corners on Jeans

jeans

We’ll start with this pair of jeans above. Save that image to some folder on your computer.

Now, in the same folder you saved that image above (we’ll call the file jeans.jpg), open up a new Python program.

Name the program harris_corner_detection.py.

Write the following code:

# Project: Detect the Corners of Objects Using Harris Corner Detector
# Author: Addison Sears-Collins
# Date created: October 7, 2020
# Reference: https://stackoverflow.com/questions/7263621/how-to-find-corners-on-a-image-using-opencv/50556944

import cv2 # OpenCV library
import numpy as np # NumPy scientific computing library
import math # Mathematical functions

# The file name of your image goes here
fileName = 'jeans.jpg'

# Read the image file
img = cv2.imread(fileName)

# Convert the image to grayscale 
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)   

######## The code in this block is optional #########
## Turn the image into a black and white image and remove noise
## using opening and closing

#gray = cv2.threshold(gray, 75, 255, cv2.THRESH_BINARY)[1]

#kernel = np.ones((5,5),np.uint8)

#gray = cv2.morphologyEx(gray, cv2.MORPH_OPEN, kernel)
#gray = cv2.morphologyEx(gray, cv2.MORPH_CLOSE, kernel)

## To create a black and white image, it is also possible to use OpenCV's
## background subtraction methods to locate the object in a real-time video stream 
## and remove shadows.
## See the following links for examples 
## (e.g. Absolute Difference method, BackgroundSubtractorMOG2, etc.):
## https://automaticaddison.com/real-time-object-tracking-using-opencv-and-a-webcam/
## https://automaticaddison.com/motion-detection-using-opencv-on-raspberry-pi-4/

############### End optional block ##################

# Apply a bilateral filter. 
# This filter smooths the image, reduces noise, while preserving the edges
bi = cv2.bilateralFilter(gray, 5, 75, 75)

# Apply Harris Corner detection.
# The four parameters are:
#   The input image
#   The size of the neighborhood considered for corner detection
#   Aperture parameter of the Sobel derivative used.
#   Harris detector free parameter 
#   --You can tweak this parameter to get better results 
#   --0.02 for tshirt, 0.04 for washcloth, 0.02 for jeans, 0.05 for contour_thresh_jeans
#   Source: https://docs.opencv.org/3.4/dc/d0d/tutorial_py_features_harris.html
dst = cv2.cornerHarris(bi, 2, 3, 0.02)

# Dilate the result to mark the corners
dst = cv2.dilate(dst,None)

# Create a mask to identify corners
mask = np.zeros_like(gray)

# All pixels above a certain threshold are converted to white         
mask[dst>0.01*dst.max()] = 255

# Convert corners from white to red.
#img[dst > 0.01 * dst.max()] = [0, 0, 255]

# Create an array that lists all the pixels that are corners
coordinates = np.argwhere(mask)

# Convert array of arrays to lists of lists
coordinates_list = [l.tolist() for l in list(coordinates)]

# Convert list to tuples
coordinates_tuples = [tuple(l) for l in coordinates_list]

# Create a distance threshold
thresh = 50

# Compute the distance from each corner to every other corner. 
def distance(pt1, pt2):
    (x1, y1), (x2, y2) = pt1, pt2
    dist = math.sqrt( (x2 - x1)**2 + (y2 - y1)**2 )
    return dist

# Keep corners that satisfy the distance threshold.
# Iterate over the corners in order and keep a corner only if it is not
# too close to a corner that has already been kept. (This avoids removing
# items from a list while iterating over it.)
filtered_coordinates = []
for pt1 in coordinates_tuples:
    if all(distance(pt1, pt2) >= thresh for pt2 in filtered_coordinates):
        filtered_coordinates.append(pt1)
coordinates_tuples = filtered_coordinates

# Place the corners on a copy of the original image
img2 = img.copy()
for pt in coordinates_tuples:
    print(tuple(reversed(pt))) # Print corners to the screen
    cv2.circle(img2, tuple(reversed(pt)), 10, (0, 0, 255), -1)
cv2.imshow('Image with 4 corners', img2) 
cv2.imwrite('harris_corners_jeans.jpg', img2) 

# Exit OpenCV
if cv2.waitKey(0) & 0xff == 27:
    cv2.destroyAllWindows()

Run the code. Here is what you should see:

harris_corners_jeans

Now, if you uncomment the optional block in the code, a window will pop up showing a binary black and white image of the jeans. The purpose of the optional block of code is to remove some of the noise that is present in the input image.

harris_corners_contour_thresh_jeans

Notice how when we remove the noise, we get a lot more potential grasp points (i.e. corners).

Detecting Corners on a T-Shirt

To detect corners on a t-shirt, you’ll need to tweak the fourth parameter on this line of the code. I use 0.02 typically, but you can try another value like 0.04:

# 0.02 for tshirt, 0.04 for washcloth, 0.02 for jeans, 0.05 for contour_thresh_jeans
#   Source: https://docs.opencv.org/3.4/dc/d0d/tutorial_py_features_harris.html
dst = cv2.cornerHarris(bi, 2, 3, 0.02)

Let’s detect the corners on an ordinary t-shirt.

Here is the input image (tshirt.jpg):

tshirt

Change the fileName variable in your code so that it is assigned the name of the image (‘tshirt.jpg’).

Here is the output image:

harris_corners_tshirt

Detecting Corners on a Washcloth

Input Image (washcloth.jpg):

washcloth

Output image:

harris_corners_washcloth

That’s it. Keep building!

How To Write a ROS2 Publisher and Subscriber (Python) – Foxy

In this tutorial, we will learn how to create a publisher and a subscriber node in ROS2 (Foxy Fitzroy…the latest version of ROS2) from scratch.

Requirements

We’ll create three separate programs (two ROS2 nodes plus a helper program):

  1. A node that publishes the coordinates of an object detected by a fictitious camera (in reality, we’ll just publish random (x,y) coordinates of an object to a ROS2 topic).
  2. A program that converts the coordinates of the object from the camera reference frame to the (fictitious) robotic arm base frame.
  3. A node that publishes the coordinates of the object with respect to the robotic arm base frame.

I thought this example would be more fun and realistic than the “hello world” example commonly used to teach people how to write ROS subscriber and publisher nodes.

Real-World Applications

2-use-some-tape
Two degree of freedom robotic arm powered by Arduino with an overhead Raspberry Pi camera. With some additions and modifications, this system could be integrated with ROS2 using the code we’ll develop in this tutorial.

A real-world application of what we’ll accomplish in this tutorial would be a pick and place task where a robotic arm needs to pick up an object from one location and place it in another (e.g. factory, warehouse, etc.).

With some modifications to the code in this tutorial, you can create a complete ROS2-powered robotic system that can detect an object with an overhead camera and command a robotic arm to pick it up.

To complete such a project, you would need to write one more node (i.e. a single Arduino sketch) for your Arduino to control the motors of the robotic arm. This node would need to:

  1. Subscribe to the coordinates of the object.
  2. Convert those coordinates into servo motor angles using inverse kinematics.
  3. Move the robotic arm to the location of the object to pick it up.

Prerequisites

Source Your ROS2 Installation

Open a new terminal window.

Open your .bashrc file.

gedit ~/.bashrc

If you don’t have gedit installed, be sure to install it before you run the command above.

sudo apt-get install gedit

Add this line of code to the very bottom of the file (Note: The line might already be there. If so, please ignore these steps).

source /opt/ros/foxy/setup.bash

Save the file, and close it.

Now you don’t have to source your ROS2 installation every time you open a new terminal window.

Let’s create a workspace directory.

mkdir -p ~/camera_to_robot_base_frame_ws/src

Open your .bashrc file.

gedit ~/.bashrc

Add this line of code to the very bottom of the .bashrc file.

source ~/camera_to_robot_base_frame_ws/install/setup.bash

1_add_bashrcJPG

Save the file and close it.

Create a Package

Open a new terminal window, and type:

cd ~/camera_to_robot_base_frame_ws/src

Now let’s create a ROS package.

Type this command:

ros2 pkg create --build-type ament_python my_package

Your package named my_package has now been created.

2-package-createdJPG

Build Your Package

Return to the root of your workspace:

cd ~/camera_to_robot_base_frame_ws/

Build all packages in the workspace.

colcon build
3-build-packageJPG

Write Node(s)

Open a new terminal window.

Move to the camera_to_robot_base_frame_ws/src/my_package/my_package folder.

cd camera_to_robot_base_frame_ws/src/my_package/my_package

Write this program, and add it to this folder you’re currently in. The name of this file is camera_publisher.py.

This code generates random object coordinates (i.e. x and y location). In a real-world scenario, you would be detecting an object with a camera and then publishing those coordinates to a ROS2 topic.

gedit camera_publisher.py
''' ####################
    Detect an object in a video stream and publish its coordinates
	from the perspective of the camera (i.e. camera reference frame)
	Note: You don't need a camera to run this node. This node just demonstrates how to create
	a publisher node in ROS2 (i.e. how to publish data to a topic in ROS2).
    -------
	Publish the coordinates of the centroid of an object to a topic:
	  /pos_in_cam_frame – The position of the center of an object in centimeter coordinates
    ==================================
    Author: Addison Sears-Collins
    Date: September 28, 2020
    #################### '''

import rclpy # Import the ROS client library for Python
from rclpy.node import Node # Enables the use of rclpy's Node class
from std_msgs.msg import Float64MultiArray # Enable use of the std_msgs/Float64MultiArray message type
import numpy as np # NumPy Python library
import random # Python library to generate random numbers

class CameraPublisher(Node):
  """
  Create a CameraPublisher class, which is a subclass of the Node class.
  The class publishes the position of an object every 3 seconds.
  The position of the object are the x and y coordinates with respect to 
  the camera frame.
  """
 
  def __init__(self):
    """
	Class constructor to set up the node
    """
  
    # Initiate the Node class's constructor and give it a name
    super().__init__('camera_publisher')
	
    # Create publisher(s)  
	
    # This node publishes the position of an object every 3 seconds.
    # Maximum queue size of 10. 
    self.publisher_position_cam_frame = self.create_publisher(Float64MultiArray, '/pos_in_cam_frame', 10)
	
    # 3 seconds
    timer_period = 3.0	
    self.timer = self.create_timer(timer_period, self.get_coordinates_of_object)
    self.i = 0	# Initialize a counter variable
	
    # Centimeter to pixel conversion factor
    # Assume we measure 36.0 cm across the width of the field of view of the camera.
    # Assume camera is 640 pixels in width and 480 pixels in height
    self.CM_TO_PIXEL = 36.0 / 640
  
  def get_coordinates_of_object(self):
    """
    Callback function.
    This function gets called every 3 seconds.
	We locate an object using the camera and then publish its coordinates to ROS2 topics.
    """	
    # Center of the bounding box that encloses the detected object.
    # This is in pixel coordinates.
    # Since we don't have an actual camera and an object to detect, 
    # we generate random pixel locations.
    # Assume x (width) can go from 0 to 640 pixels, and y (height) can go from 0 to 480 pixels
    x = random.randint(250,450)   # Generate a random integer from 250 to 450 (inclusive)
    y = random.randint(250,450)   # Generate a random integer from 250 to 450 (inclusive)
    
    # Calculate the center of the object in centimeter coordinates
    # instead of pixel coordinates
    x_cm = x * self.CM_TO_PIXEL
    y_cm = y * self.CM_TO_PIXEL
	
    # Store the position of the object in a Python list [x_cm, y_cm]
    object_position = [x_cm, y_cm]	  
	
    # Publish the coordinates to the topic
    self.publish_coordinates(object_position)
	
    # Increment counter variable
    self.i += 1
   	
  def publish_coordinates(self,position):
    """
    Publish the coordinates of the object to ROS2 topics
    :param: The position of the object in centimeter coordinates [x , y] 
    """
    msg = Float64MultiArray() # Create a message of this type 
    msg.data = position # Store the x and y coordinates of the object
    self.publisher_position_cam_frame.publish(msg) # Publish the position to the topic 	  

def main(args=None):

  # Initialize the rclpy library
  rclpy.init(args=args)

  # Create the node
  camera_publisher = CameraPublisher()

  # Spin the node so the callback function is called.
  # Publish any pending messages to the topics.
  rclpy.spin(camera_publisher)

  # Destroy the node explicitly
  # (optional - otherwise it will be done automatically
  # when the garbage collector destroys the node object)
  camera_publisher.destroy_node()

  # Shutdown the ROS client library for Python
  rclpy.shutdown()

if __name__ == '__main__':
  main()

Save the file and close it.

Now change the permissions on the file.

chmod +x camera_publisher.py

Let’s add a program (this isn’t a ROS2 node…it’s just a helper program that performs calculations) that will be responsible for converting the object’s coordinates from the camera reference frame to the robot base frame. We’ll call it coordinate_transform.py.

gedit coordinate_transform.py
''' ####################
    Convert camera coordinates to robot base frame coordinates
    ==================================
    Author: Addison Sears-Collins
    Date: September 28, 2020
    #################### '''

import numpy as np
import random # Python library to generate random numbers

class CoordinateConversion(object):
  """
  Parent class for coordinate conversions
  All child classes must implement the convert function.
  Every class in Python is descended from the object class
  class CoordinateConversion == class CoordinateConversion(object)
  This class is a superclass, a general class from which 
  more specialized classes (e.g. CameraToRobotBaseConversion) can be defined.
  """
  def convert(self, frame_coordinates):
    """
    Convert between coordinate frames

    Input
      :param frame_coordinates: Coordinates of the object in a reference frame (x, y, z, 1)

    Output
      :return: new_frame_coordinates: Coordinates of the object in a new reference frame (x, y, z, 1)

    """
    # Any subclasses that inherit this superclass CoordinateConversion must implement this method.	  
    raise NotImplementedError


class CameraToRobotBaseConversion(CoordinateConversion):
  """
  Convert camera coordinates to robot base frame coordinates
  This class is a subclass that inherits the methods from the CoordinateConversion class.
  """

  def __init__(self,rot_angle,x_disp,y_disp,z_disp):
    """
    Constructor for the CameraToRobotBaseConversion class. Sets the properties.

    Input
      :param rot_angle: Angle between axes in degrees
      :param x_disp: Displacement between coordinate frames in the x direction in centimeters
      :param y_disp: Displacement between coordinate frames in the y direction in centimeters
      :param z_disp: Displacement between coordinate frames in the z direction in centimeters

    """
    self.angle = np.deg2rad(rot_angle) # Convert degrees to radians
    self.X = x_disp
    self.Y = y_disp
    self.Z = z_disp

  def convert(self, frame_coordinates):
    """
    Convert camera coordinates to robot base frame coordinates

    Input
      :param frame_coordinates: Coordinates of the object in the camera reference frame (x, y, z, 1) in centimeters

    Output
      :return: new_frame_coordinates: Coordinates of the object in the robot base reference frame (x, y, z, 1) in centimeters

    """
    # Define the rotation matrix from the robotic base frame (frame 0)
    # to the camera frame (frame c).
    rot_mat_0_c = np.array([[1, 0, 0],
                            [0, np.cos(self.angle), -np.sin(self.angle)],
                            [0, np.sin(self.angle), np.cos(self.angle)]])

    # Define the displacement vector from frame 0 to frame c
    disp_vec_0_c = np.array([[self.X],
                             [self.Y], 
                             [self.Z]])

    # Row vector for bottom of homogeneous transformation matrix
    extra_row_homgen = np.array([[0, 0, 0, 1]])

    # Create the homogeneous transformation matrix from frame 0 to frame c
    homgen_0_c = np.concatenate((rot_mat_0_c, disp_vec_0_c), axis=1) # side by side
    homgen_0_c = np.concatenate((homgen_0_c, extra_row_homgen), axis=0) # one above the other
	
    # Coordinates of the object in base reference frame
    new_frame_coordinates = homgen_0_c @ frame_coordinates
	
    return new_frame_coordinates


def main():
  """
  This code is used to test the methods implemented above.
  """

  # Define the displacement from frame base frame of robot to camera frame in centimeters
  x_disp = -17.8
  y_disp = 24.4
  z_disp = 0.0
  rot_angle = 180 # angle between axes in degrees
  
  # Create a CameraToRobotBaseConversion object
  cam_to_robo = CameraToRobotBaseConversion(rot_angle, x_disp, y_disp, z_disp)
  
  # Centimeter to pixel conversion factor
  # I measured 36.0 cm across the width of the field of view of the camera.
  CM_TO_PIXEL = 36.0 / 640

  print(f'Detecting an object for 3 seconds')
  dt = 0.1 # Time interval
  t = 0    # Set starting time
  while t<3:
    t = t + dt
  
    # This is in pixel coordinates.
	# Since we don't have an actual camera and an object to detect, 
	# we generate random pixel locations.
	# Assume x (width) can go from 0 to 640 pixels, and y (height) can go from 0 to 480 pixels
    x = random.randint(250,450)   # Generate a random integer from 250 to 450 (inclusive)
    y = random.randint(250,450)   # Generate a random integer from 250 to 450 (inclusive)

    # Calculate the center of the object in centimeter coordinates
    # instead of pixel coordinates
    x_cm = x * CM_TO_PIXEL
    y_cm = y * CM_TO_PIXEL	

    # Coordinates of the object in the camera reference frame
    cam_ref_coord = np.array([[x_cm],
                              [y_cm],
                              [0.0],
                              [1]])
    
    robot_base_frame_coord = cam_to_robo.convert(cam_ref_coord) # Return robot base frame coordinates 
   
    text = "x: " + str(robot_base_frame_coord[0][0]) + ", y: " + str(robot_base_frame_coord[1][0])
    print(f'{t}:{text}') # Display time stamp and coordinates   

if __name__ == '__main__':
  main()

Save the file and close it.

Now change the permissions on the file.

chmod +x coordinate_transform.py

Now we’ll add one more node. This node will be responsible for receiving the coordinates of an object in the camera reference frame (these coordinates are published to the ‘/pos_in_cam_frame’ topic by camera_publisher.py), using coordinate_transform.py to convert those coordinates to the robotic arm base frame, and finally publishing these transformed coordinates to the ‘/pos_in_robot_base_frame’ topic.

Name this node robotic_arm_publishing_subscriber.py.

gedit robotic_arm_publishing_subscriber.py
''' ####################
    Receive (i.e. subscribe) coordinates of an object in the camera reference frame, and
	publish those coordinates in the robotic arm base frame.
    -------
	Publish the coordinates of the centroid of an object to topic:
	  /pos_in_robot_base_frame – The x and y position of the center of an object in centimeter coordinates
    ==================================
    Author: Addison Sears-Collins
    Date: September 28, 2020
    #################### '''

import rclpy # Import the ROS client library for Python
from rclpy.node import Node # Enables the use of rclpy's Node class
from std_msgs.msg import Float64MultiArray # Enable use of the std_msgs/Float64MultiArray message type
import numpy as np # NumPy Python library
from .coordinate_transform import CameraToRobotBaseConversion

class PublishingSubscriber(Node):
  """
  Create a PublishingSubscriber class, which is a subclass of the Node class.
  This class subscribes to the x and y coordinates of an object in the camera reference frame.
  The class then publishes the x and y coordinates of the object in the robot base frame.  
  """
 
  def __init__(self):
    """
	Class constructor to set up the node
    """
  
    # Initiate the Node class's constructor and give it a name
    super().__init__('publishing_subscriber')
	
    # Create subscriber(s)    
	
    # The node subscribes to messages of type std_msgs/Float64MultiArray, over a topic named:
    #   /pos_in_cam_frame
    # The callback function is called as soon as a message is received.
    # The maximum number of queued messages is 10.
    self.subscription_1 = self.create_subscription(
      Float64MultiArray,
      '/pos_in_cam_frame',
      self.pos_received,
      10)
    self.subscription_1  # prevent unused variable warning
		
    # Create publisher(s)  
	
    # This node publishes the position in robot frame coordinates.
    # Maximum queue size of 10. 
    self.publisher_pos_robot_frame = self.create_publisher(Float64MultiArray, '/pos_in_robot_base_frame', 10)

    # Define the displacement from frame base frame of robot to camera frame in centimeters
    x_disp = -17.8
    y_disp = 24.4
    z_disp = 0.0
    rot_angle = 180 # angle between axes in degrees
	
    # Create a CameraToRobotBaseConversion object
    self.cam_to_robo = CameraToRobotBaseConversion(rot_angle, x_disp, y_disp, z_disp)
  
  def pos_received(self, msg):
    """
    Callback function.
    This function gets called as soon as the position of the object is received.
    :param: msg is of type std_msgs/Float64MultiArray 
    """
    object_position = msg.data
    
    # Coordinates of the object in the camera reference frame in centimeters
    cam_ref_coord = np.array([[object_position[0]],
                              [object_position[1]],
                              [0.0],
                              [1]])
    
    robot_base_frame_coord = self.cam_to_robo.convert(cam_ref_coord) # Return robot base frame coordinates 
	
    # Capture the object's desired position (x, y)
    object_position = [robot_base_frame_coord[0][0], robot_base_frame_coord[1][0]]
	  
    # Publish the coordinates to the topics
    self.publish_position(object_position)
   	
  def publish_position(self,object_position):
    """
    Publish the coordinates of the object with respect to the robot base frame.
    :param: object position [x, y] 
    """
    msg = Float64MultiArray() # Create a message of this type 
    msg.data = object_position # Store the object's position
    self.publisher_pos_robot_frame.publish(msg) # Publish the position to the topic 
	  
def main(args=None):

  # Initialize the rclpy library
  rclpy.init(args=args)

  # Create the node
  publishing_subscriber = PublishingSubscriber()

  # Spin the node so the callback function is called.
  # Pull messages from any topics this node is subscribed to.
  # Publish any pending messages to the topics.
  rclpy.spin(publishing_subscriber)

  # Destroy the node explicitly
  # (optional - otherwise it will be done automatically
  # when the garbage collector destroys the node object)
  publishing_subscriber.destroy_node()

  # Shutdown the ROS client library for Python
  rclpy.shutdown()

if __name__ == '__main__':
  main()

Save the file and close it.

Now change the permissions on the file.

chmod +x robotic_arm_publishing_subscriber.py

Add Dependencies

Navigate one level back.

cd ..

You are now in the camera_to_robot_base_frame_ws/src/my_package/ directory. 

Type 

ls

You should see the setup.py, setup.cfg, and package.xml files.

Open package.xml with your text editor.

gedit package.xml

Fill in the <description>, <maintainer>, and <license> tags.

  <name>my_package</name>
  <version>0.0.0</version>
  <description>Converts camera coordinates to robotic arm base frame coordinates</description>
  <maintainer email="example@automaticaddison.com">Addison Sears-Collins</maintainer>
  <license>All Rights Reserved.</license>

Add a new line after the ament_python build_type dependency, and add the following dependencies, which correspond to the other packages this package needs (these match the import statements in your nodes):

<exec_depend>rclpy</exec_depend>
<exec_depend>std_msgs</exec_depend>

This lets the system know that this package needs the rclpy and std_msgs packages when its code is executed.

Save the file and close it.

Add an Entry Point

Now we need to add the entry points for the node(s) by opening the setup.py file.

gedit setup.py

Make sure to modify the entry_points block of code so that it looks like this:

entry_points={
        'console_scripts': [
                'camera_publisher = my_package.camera_publisher:main',
                'robotic_arm_publishing_subscriber = my_package.robotic_arm_publishing_subscriber:main',
        ],
},

Save setup.py and close it.

Check for Missing Dependencies

Check for any missing dependencies before you build the package.

Move to your workspace folder.

cd ~/camera_to_robot_base_frame_ws

Run the following command:

rosdep install -i --from-path src --rosdistro foxy -y

I get a message that says:

“#All required rosdeps installed successfully”

Build and Run

Now, build your package. Make sure you are in the root of your workspace:

cd ~/camera_to_robot_base_frame_ws

Then build the package.

colcon build

You should get a message that says:

“Summary: 1 package finished [<time it took in seconds>]”

If you get errors, go to your Python programs and check for errors. Python is sensitive to indentation (tabs and spaces), so make sure the spacing of the code is correct (go one line at a time through the code).

The programs are located in this folder.

cd camera_to_robot_base_frame_ws/src/my_package/my_package

After you make changes, build the package again.

cd ~/camera_to_robot_base_frame_ws
colcon build

You should see this message when everything builds successfully:

4-build-successfulJPG

Open a new terminal tab.

Run the nodes:

ros2 run my_package camera_publisher

In another terminal tab:

ros2 run my_package robotic_arm_publishing_subscriber

Open a new terminal tab, and move to the root directory of the workspace.

cd ~/camera_to_robot_base_frame_ws

List the active topics.

ros2 topic list -t
5-ros2-topic-listJPG

To listen to any topic, type:

ros2 topic echo /topic_name

Where you replace “/topic_name” with the name of the topic. 

For example:

ros2 topic echo /pos_in_cam_frame

You should see the (x, y) coordinates of the object (in the camera reference frame) printed to the screen.

In another terminal, you can type:

ros2 topic echo /pos_in_robot_base_frame

You should see the (x, y) coordinates of the object (in the robot base frame) printed to the screen.

Here is how my terminal windows look. The camera coordinates in centimeters are on top, and the robot base frame coordinates are on the bottom.

7-camera-frame-above-robot-base-frame-belowJPG

When you’ve had enough, type Ctrl+C in each terminal tab to stop the nodes from spinning.

Zip the Workspace for Distribution

If you ever want to zip the whole workspace and send it to someone, open a new terminal window.

Move to the directory containing your workspace.

cd ~/camera_to_robot_base_frame_ws
cd ..
zip -r camera_to_robot_base_frame_ws.zip camera_to_robot_base_frame_ws

The syntax is:

zip -r <filename.zip> <foldername>