Real-Time Object Tracking Using OpenCV and a Webcam

In this tutorial, we will create a program to track a moving object in real-time using the built-in webcam of a laptop computer. We will use Python and the OpenCV computer vision library for the code.


A real-world application of this is in robotics. Imagine you have a robot arm that needs to continuously pick up moving items from a conveyor belt inside a warehouse. In order for the robot to pick up an object, it needs to know the exact coordinates of that object. The program we will create below gives you the basic building block to do just that. It locates the coordinates of the center of the moving object (often called the “centroid”) in real-time using an ordinary webcam.
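
In case you’re wondering how a centroid is computed in general: OpenCV can derive it from a contour’s image moments. Below is a minimal, self-contained sketch using a synthetic white square (the test image and variable names are just for illustration; the full program further down simply uses the center of the object’s bounding box as the centroid):

import cv2
import numpy as np

# Synthetic test image: a filled white square on a black background
img = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(img, (50, 50), (150, 150), 255, -1)

# Find the square's contour and compute its image moments
contours, _ = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2:]
M = cv2.moments(contours[0])

# The centroid is the first-order moments divided by the area (zeroth-order moment)
cx = int(M["m10"] / M["m00"])
cy = int(M["m01"] / M["m00"])
print(cx, cy)  # Should print 100 100, the center of the square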

Let’s get started!

Prerequisites

  • Python 3.7 or higher

Requirements

Using real-time streaming video from your built-in webcam, create a program that:

  • Draws a bounding box around a moving object
  • Calculates the coordinates of the centroid of the object
  • Tracks the centroid of the object

Directions

Open up your favorite IDE or code editor.

Make sure you have the OpenCV and NumPy libraries installed. There are a number of ways to install both libraries; the most common is pip, the standard package manager for Python.

pip install opencv-python
pip install numpy

Copy and paste the code below. This is all you need to run the program.

I put detailed comments inside the code so that you know what is going on. The technique used here is background subtraction, one of the most common ways to detect moving objects in a video stream:

#!/usr/bin/env python

'''
Welcome to the Object Tracking Program!

Using real-time streaming video from your built-in webcam, this program:
  - Creates a bounding box around a moving object
  - Calculates the coordinates of the centroid of the object
  - Tracks the centroid of the object

Author:
  - Addison Sears-Collins
  - https://automaticaddison.com
'''

from __future__ import print_function # Python 2/3 compatibility
import cv2 # Import the OpenCV library
import numpy as np # Import Numpy library

# Project: Object Tracking
# Author: Addison Sears-Collins 
# Website: https://automaticaddison.com
# Date created: 06/13/2020
# Python version: 3.7

def main():
    """
    Main method of the program.
    """

    # Create a VideoCapture object
    cap = cv2.VideoCapture(0)

    # Create the background subtractor object
    # Use the last 700 video frames to build the background
    back_sub = cv2.createBackgroundSubtractorMOG2(history=700, 
        varThreshold=25, detectShadows=True)

    # Create kernel for morphological operation
    # You can tweak the dimensions of the kernel
    # e.g. instead of 20,20 you can try 30,30.
    kernel = np.ones((20,20),np.uint8)

    while(True):

        # Capture frame-by-frame
        # This method returns True/False as well
        # as the video frame.
        ret, frame = cap.read()

        # If a frame could not be read (e.g. the webcam was
        # disconnected), exit the loop
        if not ret:
            break

        # Use every frame to calculate the foreground mask and update
        # the background
        fg_mask = back_sub.apply(frame)

        # Close dark gaps in foreground object using closing
        fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_CLOSE, kernel)

        # Remove salt and pepper noise with a median filter
        fg_mask = cv2.medianBlur(fg_mask, 5) 
        
        # Threshold the image to make it either black or white
        _, fg_mask = cv2.threshold(fg_mask,127,255,cv2.THRESH_BINARY)

        # Find the index of the largest contour and draw bounding box
        fg_mask_bb = fg_mask
        contours, hierarchy = cv2.findContours(fg_mask_bb,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)[-2:]
        areas = [cv2.contourArea(c) for c in contours]

        # If there are no contours
        if len(areas) < 1:

            # Display the resulting frame
            cv2.imshow('frame',frame)

            # If "q" is pressed on the keyboard, 
            # exit this loop
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break

            # Go to the top of the while loop
            continue

        else:
            # Find the largest moving object in the image
            max_index = np.argmax(areas)

        # Draw the bounding box
        cnt = contours[max_index]
        x,y,w,h = cv2.boundingRect(cnt)
        cv2.rectangle(frame,(x,y),(x+w,y+h),(0,255,0),3)

        # Draw circle in the center of the bounding box
        x2 = x + int(w/2)
        y2 = y + int(h/2)
        cv2.circle(frame,(x2,y2),4,(0,255,0),-1)

        # Print the centroid coordinates (we'll use the center of the
        # bounding box) on the image
        text = "x: " + str(x2) + ", y: " + str(y2)
        cv2.putText(frame, text, (x2 - 10, y2 - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
        
        # Display the resulting frame
        cv2.imshow('frame',frame)

        # If "q" is pressed on the keyboard, 
        # exit this loop
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    # Close down the video stream
    cap.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    print(__doc__)
    main()
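
If the window comes up but stays black, or the program exits immediately, your webcam may be registered at an index other than 0. A quick (hypothetical) tweak is to change the capture line near the top of main(), for example:

cap = cv2.VideoCapture(1)  # Try index 1 (or 2) if index 0 does not open your webcam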

ROS Noetic Ninjemys Basics – Part 1 of 2

In this tutorial, we will explore the basics of ROS Noetic Ninjemys (ROS Noetic), the latest distribution of ROS.

ROS has a steep learning curve. I remember when I first started learning ROS, my head was spinning. There was all this new vocabulary I had to learn: nodes, packages, subscribers, publishers, etc. It was like learning some obscure foreign language for the first time.

To get the most out of ROS, I recommend going through Part 1 and Part 2 of this tutorial. My advice is not to worry if everything seems complicated and doesn’t make sense yet, or if you can’t see how these abstract concepts connect to a real-world robot.

ROS is built in such a way that you need to work through the boring basics before you can use it to develop actual robotics projects. You have to learn to walk before you can run.

After you work through the basics of ROS, you’ll start applying them to actual robotics applications. At that point, all the abstract material you learn below will come together and finally make sense.

If you start building robots with ROS without learning the basics first, you’ll get confused. It would be like going to a foreign country and trying to speak the language without ever having learned basic words and phrases.

ROS doesn’t allow you to skip steps in the learning process. You have to build your knowledge of the basics of ROS, brick by boring brick, in order to use it to build fun robots that solve real-world problems.

So be patient in the learning process, and I assure you that you’ll master ROS and will be building cool robots in no time.

Without further ado, let’s get started!

Prerequisites

  • Have ROS Noetic Ninjemys installed (the installation tutorial is included later in this post)

ROS Noetic Ninjemys Tutorial – Part 1 of 2

When you’ve finished the tutorials in this part, go on to Part 2.

How to Install ROS Noetic Ninjemys on Ubuntu Linux

In this post, we will install ROS Noetic Ninjemys. As of the date of this tutorial, ROS Noetic Ninjemys is the latest ROS distribution that has long term support. It will be supported until May 2025.

You Will Need

In order to complete this tutorial, you will need:

  • A computer running Ubuntu Linux 20.04 (Focal Fossa), the Ubuntu release that ROS Noetic targets

Directions

The official steps for installing ROS are on the ROS.org website, but I will walk you through the process below so that you can see what each step looks like.

Select Noetic Ninjemys on the ROS installation page at ROS.org.

Next, select your platform. I am using Ubuntu, so I will click the Ubuntu option.


Now we need to configure our Ubuntu repositories to allow “restricted,” “universe,” and “multiverse.”

Click the 9 white dots at the bottom left of your screen.


Search for Software & Updates. Then click on it.


Make sure main, universe, restricted, and multiverse are all checked. Then click Close.

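If you prefer the terminal over the GUI, these repositories can usually be enabled with the add-apt-repository tool instead (an optional alternative to the steps above):

sudo add-apt-repository universe
sudo add-apt-repository restricted
sudo add-apt-repository multiverse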

Now open up a new terminal window, and type (or copy and paste) the following command:

sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

The command above sets your computer up to accept software from packages.ros.org. 

Now we need to add the package signing key so that our system trusts the software we are going to download.

Type or copy & paste the following command.

sudo apt-key adv --keyserver 'hkp://keyserver.ubuntu.com:80' --recv-key C1CF6E31E6BADE8868B172B4F42ED6FBAB17C654

Now, update the package list. This command makes sure you have the most recent list of software packages that can be installed on your Linux system.

sudo apt update

Now do a full desktop install of ROS. The command below installs all the software, tools, algorithms, and robot simulators for ROS.

sudo apt install ros-noetic-desktop-full

After you type the command and press Enter, press Y and hit Enter when asked if you want to continue. It will take a while to download all this stuff, so feel free to take a break while ROS downloads to your system.

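As an aside, if you are short on disk space or don’t need the simulators and GUI tools, ROS also provides smaller metapackages such as ros-noetic-desktop and ros-noetic-ros-base, for example:

sudo apt install ros-noetic-ros-base

The rest of this tutorial assumes the full desktop install above.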

Set up the environment variables. These variables are necessary in order for ROS to work.

Type the following commands, one right after the other. We are using Bash, so this is what we type:

echo "source /opt/ros/noetic/setup.bash" >> ~/.bashrc
source ~/.bashrc

Check that the environment variables are properly set up. Type the following command:

printenv | grep ROS

You should see the ROS environment variables printed out (for example, ROS_DISTRO=noetic and ROS_ROOT=/opt/ros/noetic/share/ros).

Let’s also check our .bashrc file to see if the “source /opt/ros/noetic/setup.bash” line was added to it successfully.

gedit ~/.bashrc

You should see the source command at the bottom of the .bashrc file.


Now, let’s create a ROS workspace.

A workspace is a folder that contains other folders that store related pieces of ROS code.

The official name for workspaces in ROS is catkin workspaces. The word ‘catkin’ refers to the tail-shaped flower clusters found on willow trees, a nod to Willow Garage, the original developers of ROS.

All the ROS software packages that you create need to reside inside a catkin workspace. The name of this catkin workspace can be anything, but by convention, it is typically called catkin_ws.

Open a new terminal window, and type the following commands, one right after the other.

First create the workspace.

mkdir -p ~/catkin_ws/src

Move inside the workspace.

cd ~/catkin_ws/

Build the workspace.

catkin_make

You should now have a build, devel, and src folder. Type this command to see those:

dir

Now, source the new setup.*sh file. This is a file that makes sure your workspace is recognized in your ROS environment.

source devel/setup.bash

Let’s add this command to the .bashrc file so that we don’t have to run it every time we open a new terminal window.

echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc

We can check to see if that was set up properly by typing this command:

echo $ROS_PACKAGE_PATH
You should see your workspace’s src folder listed first, followed by /opt/ros/noetic/share.

Finally, let’s launch a program to do a final check that everything is set up properly. We will launch a simulation of a turtle.

Open a new terminal window, and type this command:

roscore

Open a new terminal tab, and type this command:

rosrun turtlesim turtlesim_node

You should see a window appear with a turtle in the middle of it.
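
Optionally, you can open one more terminal tab and drive the turtle around using the arrow keys on your keyboard:

rosrun turtlesim turtle_teleop_key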

Congratulations. You have installed ROS!

From here, you can go check out the basics of ROS Noetic Ninjemys. If you’re already familiar with ROS, it is often helpful to go through these tutorials to refresh the basics.

Keep building!