How to Download a ROS Package from GitHub

Let’s say you see a ROS package on GitHub that you’d like to bring into your workspace.

For example, I would like to download the rviz_plugin_tutorials package available on this page on GitHub.

Option 1: Using the Command Line

ROS 1 Package

Open a new terminal window.

Type:

cd ~/catkin_ws/src

Clone the repository:

git clone -b <branch> <address>

For example, if you are looking to download the rviz_plugin_tutorials package available here, you would type the following command (this is a single command all on one line):

git clone -b noetic-devel https://github.com/ros-visualization/visualization_tutorials.git

You could also leave out the ROS branch specification, and type:

git clone <address>

For example:

git clone https://github.com/ros-visualization/visualization_tutorials.git

Then open up a new terminal window, and type:

cd ~/catkin_ws
catkin_make
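
If your terminal doesn’t already source the workspace automatically (e.g. from your ~/.bashrc), source it so ROS can find the new package:

source ~/catkin_ws/devel/setup.bash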

That’s it!

ROS 2 Package

For a ROS 2 package, you would type the following (assuming the name of your ROS 2 workspace is dev_ws):

cd ~/dev_ws/src
git clone -b foxy-devel https://github.com/ros-visualization/visualization_tutorials.git

Open a new terminal window, and type:

cd ~/dev_ws

Install the dependencies (--from-paths src scans the src folder, --ignore-src skips dependencies that are already present as source packages in the workspace, and -r -y continues past errors and answers yes to any prompts):

rosdep install --from-paths src --ignore-src -r -y

To build the packages, type the following command (--symlink-install symlinks files into the install folder where possible, so you can edit Python scripts and other non-compiled files without rebuilding):

colcon build --symlink-install

If that command doesn’t work, try the following command:

colcon build
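
After the build finishes, source the workspace overlay so ROS 2 can find the new packages:

source ~/dev_ws/install/setup.bash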

Option 2: Downloading Manually from GitHub

First, you need to download the files to your computer.

Go to this page on GitHub. I like to download the repository to my Desktop.

Click the green “Code” button to download a ZIP file of the repository.

Open up the ZIP file, and go to the rviz_plugin_tutorials folder. That folder is the package we need.

Right-click on that folder, and click “Extract”.

Move that folder to your ~/catkin_ws/src folder. I just dragged and dropped it, as I would any folder on a Mac or Windows computer.

Open up a terminal window, and type:

cd ~/catkin_ws/src
dir

You should see the rviz_plugin_tutorials package (i.e. a folder) in there.

Now you need to run this command so that your ROS environment knows about the rviz_plugin_tutorials package you just added. Open a new terminal window, and type:

cd ~/catkin_ws/
catkin_make
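
Then source the workspace so the next command can find the new package (skip this step if your ~/.bashrc already sources it):

source ~/catkin_ws/devel/setup.bash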

Open a new terminal window, and type:

rospack find rviz_plugin_tutorials

You should see the path of your new ROS package.

To check whether the package’s dependencies are satisfied, type these commands. Ignore any error messages you see in the terminal:

rosdep update
rosdep check rviz_plugin_tutorials

You should see a message that says “All system dependencies have been satisfied.”

How to Load a TensorFlow Model Using OpenCV

In this tutorial, we will load a TensorFlow model (i.e. neural network) using the popular computer vision library known as OpenCV. To make things interesting, we will build an application to detect eating utensils (i.e. forks, knives, and spoons). Here is what the final output will look like:

Our goal is to build an early prototype of a product that can make it easier and faster for a robotic chef arm, like the one created by Moley Robotics, to detect forks, knives, and spoons.

You Will Need

  • Python 3 with the OpenCV and NumPy libraries installed (the code imports cv2 and numpy)
  • A webcam

Directions

Open this page for the TensorFlow Object Detection API.

Download the weights file and the config file for one of the pretrained object detection models. I will use Inception-SSD v2. You will want to right-click each link and choose Save As.

Use a program like 7-Zip to open the tar.gz archive file (i.e. click Open archive if using 7-Zip). 

Double click on ssd_inception_v2_coco_2017_11_17.tar.

Double click again on the folder name.

Locate a file inside the folder named frozen_inference_graph.pb. Move this file to your working directory (i.e. the same directory where we will write our Python program later in this tutorial).

I also have a file named ssd_inception_v2_coco_2017_11_17.pbtxt that came from the config link. This file is in protobuf text (.pbtxt) format.

To create the ssd_inception_v2_coco_2017_11_17.pbtxt file, I right-clicked on config, clicked Save As, saved it to my Desktop, and then copied the contents of the pbtxt file on GitHub into it using Notepad++.

Make sure the pb and pbtxt files are both in your working directory.

Alternatively, you can just download my pb file and my pbtxt file.
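
Before writing the full program, you can run a quick sanity check that OpenCV is able to parse both files. This is just a sketch; it assumes both files sit in your working directory:

import cv2 as cv

# readNetFromTensorflow raises an error if either file is missing or malformed
net = cv.dnn.readNetFromTensorflow('frozen_inference_graph.pb',
                                   'ssd_inception_v2_coco_2017_11_17.pbtxt')
print('Model loaded successfully')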

Now create a new Python program in your working directory called utensil_detector.py.

Add the following code:

# Project: Eating Utensil Detector Using TensorFlow and OpenCV
# Author: Addison Sears-Collins
# Date created: August 1, 2021
# Description: This program detects forks, spoons, and knives

import cv2 as cv # OpenCV computer vision library
import numpy as np # Scientific computing library 

#  classes = ['person','bicycle','car','motorcycle','airplane' ,'bus','train','truck','boat' ,'traffic light','fire hydrant',
#    'stop sign','parking meter','bench','bird','cat','dog','horse','sheep','cow','elephant','bear','zebra','giraffe' ,
#    'backpack','umbrella','handbag' ,'tie','suitcase','frisbee' ,'skis','snowboard','sports ball' ,'kite',
#    'baseball bat','baseball glove','skateboard','surfboard','tennis racket','bottle','wine glass','cup','fork','knife',
#    'spoon','bowl','banana','apple' ,'sandwich','orange','broccoli','carrot','hot dog','pizza' ,'donut' ,'cake',
#    'chair' ,'couch' ,'potted plant','bed','dining table','toilet','tv','laptop','mouse','remote','keyboard',
#    'cell phone','microwave','oven','toaster','sink','refrigerator','book','clock','vase','scissors' ,'teddy bear',
#    'hair drier','toothbrush']

# Just use a subset of the classes
classes = ["background", "person", "bicycle", "car", "motorcycle",
  "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant",
  "unknown", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse",
  "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "unknown", "backpack",
  "umbrella", "unknown", "unknown", "handbag", "tie", "suitcase", "frisbee", "skis",
  "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard",
  "surfboard", "tennis racket", "bottle", "unknown", "wine glass", "cup", "fork", "knife",
  "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog",
  "pizza", "donut", "cake", "chair", "couch", "potted plant", "bed", "unknown", "dining table",
  "unknown", "unknown", "toilet", "unknown", "tv", "laptop", "mouse", "remote", "keyboard",
  "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "unknown",
  "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush" ]

# Colors we will use for the object labels
colors = np.random.uniform(0, 255, size=(len(classes), 3))

# Open the webcam
cam = cv.VideoCapture(0)

pb  = 'frozen_inference_graph.pb'
pbt = 'ssd_inception_v2_coco_2017_11_17.pbtxt'

# Read the neural network
cvNet = cv.dnn.readNetFromTensorflow(pb,pbt)   

while True:

  # Read in the frame
  ret_val, img = cam.read()
  if not ret_val:
    break  # stop if a frame could not be read from the webcam
  rows = img.shape[0]
  cols = img.shape[1]

  # 300x300 is the input size the SSD Inception v2 model expects;
  # swapRB converts OpenCV's BGR channel order to the RGB the model was trained on
  cvNet.setInput(cv.dnn.blobFromImage(img, size=(300, 300), swapRB=True, crop=False))

  # Run object detection
  cvOut = cvNet.forward()

  # Go through each object detected and label it. Each detection has the form
  # [batchId, classId, confidence, left, top, right, bottom], with the box
  # coordinates normalized to the range [0, 1].
  for detection in cvOut[0,0,:,:]:
    score = float(detection[2])
    if score > 0.3:

      idx = int(detection[1])   # prediction class index. 

      # If you want all classes to be labeled instead of just forks, spoons, and knives,
      # remove the if statement below
      if classes[idx] == 'fork' or classes[idx] == 'spoon' or classes[idx] == 'knife':
        left = detection[3] * cols
        top = detection[4] * rows
        right = detection[5] * cols
        bottom = detection[6] * rows
        cv.rectangle(img, (int(left), int(top)), (int(right), int(bottom)), (23, 230, 210), thickness=2)
           
        # draw the prediction on the frame
        label = "{}: {:.2f}%".format(classes[idx],score * 100)
        y = top - 15 if top - 15 > 15 else top + 15
        cv.putText(img, label, (int(left), int(y)),cv.FONT_HERSHEY_SIMPLEX, 0.5, colors[idx], 2)

  # Display the frame
  cv.imshow('my webcam', img)

  # Press ESC to quit
  if cv.waitKey(1) == 27: 
    break 

# Stop filming
cam.release()

# Close down OpenCV
cv.destroyAllWindows()

Save the code.

Run the code.

python utensil_detector.py

You should see object detection running in real time. If you place a fork, knife, or spoon in the camera, you will see it labeled accordingly.
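
If you would rather test without a webcam, here is a minimal variation that runs the same detector on a single image. It is only a sketch: test_image.jpg is a placeholder for a photo of your own.

import cv2 as cv

# Load the same model files as in the main program
cvNet = cv.dnn.readNetFromTensorflow('frozen_inference_graph.pb',
                                     'ssd_inception_v2_coco_2017_11_17.pbtxt')

img = cv.imread('test_image.jpg')  # placeholder filename
rows, cols = img.shape[0], img.shape[1]

cvNet.setInput(cv.dnn.blobFromImage(img, size=(300, 300), swapRB=True, crop=False))
cvOut = cvNet.forward()

# Draw a box around every detection above the confidence threshold
for detection in cvOut[0, 0, :, :]:
    if float(detection[2]) > 0.3:
        left, top = detection[3] * cols, detection[4] * rows
        right, bottom = detection[5] * cols, detection[6] * rows
        cv.rectangle(img, (int(left), int(top)), (int(right), int(bottom)),
                     (23, 230, 210), thickness=2)

cv.imshow('detections', img)
cv.waitKey(0)
cv.destroyAllWindows()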

That’s it! Keep building!

How to Set Up and Control a Robotic Arm Using MoveIt and ROS

In this tutorial, I will show you how to set up and control a robotic arm using MoveIt and ROS. MoveIt is a motion planning framework for robotic arms.

Real-World Applications

This project has a number of real-world applications: 

  • Indoor and Outdoor Delivery Robots
  • Room Service Robots
  • Robot Vacuums
  • Order Fulfillment
  • Manufacturing
  • Factories

Our robot will exist in a world that contains a post office and three houses. The use case for this simulated robot would be picking up packages at a post office and delivering them to houses in a neighborhood. 

Let’s get started!

Prerequisites

  • You have completed this tutorial where you learned how to create a simulated mobile manipulator.
  • You have completed this tutorial where you set up and configured the ROS Navigation Stack for your robot.

All the code you need is located at this link.

Install MoveIt

Let’s install MoveIt.

Open a new terminal window, and type the following commands:

sudo apt-get install ros-noetic-moveit
sudo apt-get install ros-noetic-moveit-setup-assistant
sudo apt-get install ros-noetic-moveit-simple-controller-manager
sudo apt-get install ros-noetic-moveit-fake-controller-manager

Configure the Robotic Arm

Let’s use the MoveIt Setup Assistant to configure our robotic arm.

Open a new terminal window, and type the following command.

roslaunch moveit_setup_assistant setup_assistant.launch

Click Create New MoveIt Configuration Package.

Load the mobile_manipulator.urdf file. Click Load Files.

If the URDF file loads properly, you will see a success message.

Click the Self-Collisions tab on the left-hand side.

Click Generate Collision Matrix.

If you wish, you can manually enable pairs of links for which you want MoveIt to check for collisions during the motion planning process. I will leave the defaults as-is.

Let’s skip the Virtual Joints tab. We won’t have any virtual joints for our mobile manipulator.

Click the Planning Groups tab.

Click Add Group.

Let’s call the group arm.

In the Kinematic Solver dropdown box, select the kdl_kinematics_plugin/KDLKinematicsPlugin.

Keep the resolution and timeout as the default values.

Choose RRTstar as the Group Default Planner.

Add the five joints of the robotic arm: arm_base_joint, shoulder_joint, bottom_wrist_joint, elbow_joint, and top_wrist_joint.

Click the Robot Poses tab.

Click Add Pose.

Let’s add a pose for the home position.

Pose Name: home_position

  • arm_base_joint: 0.0
  • shoulder_joint: 0.0
  • bottom_wrist_joint: 0.0
  • elbow_joint: 0.0
  • top_wrist_joint: 0.0

Click Save.

Click Add Pose.

Pose Name: goal_position

  • arm_base_joint: 1.5794
  • shoulder_joint: -0.9893
  • bottom_wrist_joint: -0.9893
  • elbow_joint: 0.0
  • top_wrist_joint: -0.6769

Click Save.

We don’t have a gripper or suction cup on the end of our robotic arm, so we can skip the End Effectors tab.

Click the Passive Joints tab. Here is what my setup assistant looks like. I have added all the wheel joints to the passive joints list.

Click the ROS Control tab.

Click the Auto Add FollowJointsTrajectory Controllers For Each Planning Group button.

Click the Author Information tab.

Add your name and email address.

Click the Configuration Files tab.

Select the destination where you want to save the configuration files. I will choose my ~/catkin_ws/src/ folder. The name of the folder will be mobile_manipulator_moveit_config.

Click Generate Package.

Click Exit Setup Assistant.

Control the Robotic Arm Manually

Let’s install a couple more packages before we get started. Open a terminal window, and type the following commands.

sudo apt-get update
sudo apt install ros-noetic-moveit-resources-prbt-moveit-config
sudo apt install ros-noetic-pilz-industrial-motion-planner

Open your mobile_manipulator_moveit_controller_manager.launch.xml file.

cd ~/catkin_ws/src/mobile_manipulator_moveit_config/launch
gedit mobile_manipulator_moveit_controller_manager.launch.xml

Add this line just below the opening <launch> tag:

<arg name="execution_type" default="unused" />

Save the file, and close it.

Now, launch the robotic arm in Gazebo by opening a new terminal window, and typing:

roslaunch mobile_manipulator mobile_manipulator_gazebo.launch

Open a new terminal window, and launch MoveIt via the move_group.launch file:

roslaunch mobile_manipulator_moveit_config move_group.launch

Open another terminal window, and launch RViz (the command below is a single command, all on one line):

roslaunch mobile_manipulator mobile_manipulator_move_base.launch

Click Add in the bottom left, and add the Motion Planning plugin under moveit_ros_visualization. If you don’t see the Add button, go to Panels -> Displays in the top menu to make sure the Displays panel is visible.

Click the Planning tab.

For Goal State, choose goal_position.

Click Plan & Execute.

The robotic arm will move to the goal position.
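
If you want to confirm the motion from the command line, you can watch the joint values change as the arm moves (this assumes the standard /joint_states topic is being published, as it typically is when the robot is running in Gazebo):

rostopic echo /joint_states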

Next Steps

Now you know how to move the robotic arm manually using MoveIt’s RViz interface. If you want to control the robotic arm using code, you can follow this tutorial for C++ or this tutorial for Python.
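
As a preview, here is a minimal Python sketch using the moveit_commander interface. It is a sketch rather than a full program: it assumes the planning group is named arm (as configured above) and that Gazebo and move_group are already running.

#!/usr/bin/env python3
# Minimal MoveIt example: move the arm to the saved goal_position pose.
import sys
import rospy
import moveit_commander

# Initialize the moveit_commander backend and a ROS node
moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('move_to_goal_position')

# Connect to the "arm" planning group created in the Setup Assistant
arm = moveit_commander.MoveGroupCommander('arm')

# Plan to the named pose saved in the Setup Assistant and execute the motion
arm.set_named_target('goal_position')
arm.go(wait=True)
arm.stop()

moveit_commander.roscpp_shutdown()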

That’s it! Keep building!