Motion Detection Using OpenCV on Raspberry Pi 4

In this tutorial, I will show you how to use background subtraction to detect moving objects. We will use the OpenCV computer vision library on a Raspberry Pi 4.

Prerequisites

What is Background Subtraction?

Background subtraction is a technique commonly used to identify moving objects in a video stream. Real-world use cases include video surveillance and detecting moving objects on a factory conveyor belt using a stationary video camera.

The idea behind background subtraction is that once you have a model of the background, you can detect objects by examining the difference between the current video frame and the background frame.
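The idea can be sketched in a few lines of NumPy. This is a toy illustration with a synthetic 4x4 frame, not the camera code we will write below:

```python
import numpy as np

# Toy grayscale "background" frame: a flat gray scene
background = np.full((4, 4), 100, dtype=np.uint8)

# Current frame: the same scene with a bright "object" in one corner
current = background.copy()
current[0:2, 0:2] = 200

# The absolute difference is nonzero only where something changed
diff = np.abs(current.astype(np.int16) - background.astype(np.int16))

# Thresholding the difference yields a foreground mask:
# changed pixels become 255 (white), unchanged pixels 0 (black)
mask = np.where(diff > 50, 255, 0).astype(np.uint8)

print(mask)
```

In the real scripts below, cv2.absdiff and cv2.threshold perform these same two steps on live camera frames.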

Let’s see background subtraction in action using a couple of OpenCV’s background subtraction algorithms (there are many more than two). I won’t go into the details and math behind each algorithm, but if you want to learn how each one works, check out this page.

If you are building a product like a robot, you don’t need to get bogged down in the details. You just need to know how to use the algorithm to detect moving objects.

Absolute Difference Method

The idea behind this algorithm is that we first take a snapshot of the background. We then identify changes by taking the absolute difference between the current video frame and that original snapshot of the background.

This algorithm runs really fast, but it is sensitive to noise, like shadows and even the smallest changes in lighting.
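The script below suppresses that noise with morphological closing and a median filter. The median filter's effect is easy to see in plain NumPy; this toy patch stands in for the cv2.medianBlur call:

```python
import numpy as np

# A tiny grayscale patch with one "salt" noise pixel in the center
patch = np.full((3, 3), 50, dtype=np.uint8)
patch[1, 1] = 255

# Replacing the center pixel with the median of its 3x3 neighborhood
# wipes out the outlier while leaving the true intensity untouched
filtered_center = int(np.median(patch))
print(filtered_center)  # 50
```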

Start your Raspberry Pi.

Go to the Python IDE in your Raspberry Pi by clicking the logo -> Programming -> Thonny Python IDE.

Write the following code. I’ll name the file absolute_difference_method.py.

# Author: Addison Sears-Collins
# Description: This algorithm detects objects in a video stream
#   using the Absolute Difference Method. The idea behind this 
#   algorithm is that we first take a snapshot of the background.
#   We then identify changes by taking the absolute difference 
#   between the current video frame and that original 
#   snapshot of the background (i.e. first frame). 

# import the necessary packages
from picamera.array import PiRGBArray # Generates a 3D RGB array
from picamera import PiCamera # Provides a Python interface for the RPi Camera Module
import time # Provides time-related functions
import cv2 # OpenCV library
import numpy as np # Import NumPy library

# Initialize the camera
camera = PiCamera()

# Set the camera resolution
camera.resolution = (640, 480)

# Set the number of frames per second
camera.framerate = 30

# Generates a 3D RGB array and stores it in raw_capture
raw_capture = PiRGBArray(camera, size=(640, 480))

# Wait a certain number of seconds to allow the camera time to warm up
time.sleep(0.1)

# Initialize the first frame of the video stream
first_frame = None

# Create kernel for morphological operation. You can tweak
# the dimensions of the kernel.
# e.g. instead of 20, 20, you can try 30, 30
kernel = np.ones((20,20),np.uint8)

# Capture frames continuously from the camera
for frame in camera.capture_continuous(raw_capture, format="bgr", use_video_port=True):
    
    # Grab the raw NumPy array representing the image
    image = frame.array

    # Convert the image to grayscale
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    
    # Close gaps using closing
    gray = cv2.morphologyEx(gray,cv2.MORPH_CLOSE,kernel)
      
    # Remove salt and pepper noise with a median filter
    gray = cv2.medianBlur(gray,5)
    
    # If this is the first frame, use it as the background snapshot.
    if first_frame is None:
        
      first_frame = gray
      
      # Clear the stream in preparation for the next frame
      raw_capture.truncate(0)
      
      # Go to top of for loop
      continue
      
    # Calculate the absolute difference between the current frame
    # and the first frame
    absolute_difference = cv2.absdiff(first_frame, gray)

    # If a pixel's difference is less than the threshold (100 here), it is
    # considered black (background). Otherwise, it is white (foreground).
    # 255 is the upper limit. Tune the threshold as you see fit.
    _, absolute_difference = cv2.threshold(absolute_difference, 100, 255, cv2.THRESH_BINARY)

    # Find the contours of the objects inside the binary image.
    # The [-2:] slice keeps this working across OpenCV versions, which
    # return different numbers of values from findContours.
    contours, hierarchy = cv2.findContours(absolute_difference,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)[-2:]
    areas = [cv2.contourArea(c) for c in contours]
 
    # If there are no contours
    if len(areas) < 1:
 
      # Display the resulting frame
      cv2.imshow('Frame',image)
 
      # Wait for keyPress for 1 millisecond
      key = cv2.waitKey(1) & 0xFF
 
      # Clear the stream in preparation for the next frame
      raw_capture.truncate(0)
    
      # If "q" is pressed on the keyboard, 
      # exit this loop
      if key == ord("q"):
        break
    
      # Go to the top of the for loop
      continue
 
    else:
        
      # Find the largest moving object in the image
      max_index = np.argmax(areas)
      
    # Draw the bounding box
    cnt = contours[max_index]
    x,y,w,h = cv2.boundingRect(cnt)
    cv2.rectangle(image,(x,y),(x+w,y+h),(0,255,0),3)
 
    # Draw circle in the center of the bounding box
    x2 = x + int(w/2)
    y2 = y + int(h/2)
    cv2.circle(image,(x2,y2),4,(0,255,0),-1)
 
    # Print the centroid coordinates (we'll use the center of the
    # bounding box) on the image
    text = "x: " + str(x2) + ", y: " + str(y2)
    cv2.putText(image, text, (x2 - 10, y2 - 10),
      cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
         
    # Display the resulting frame
    cv2.imshow("Frame",image)
    
    # Wait for keyPress for 1 millisecond
    key = cv2.waitKey(1) & 0xFF
 
    # Clear the stream in preparation for the next frame
    raw_capture.truncate(0)
    
    # If "q" is pressed on the keyboard, 
    # exit this loop
    if key == ord("q"):
      break

# Close down windows
cv2.destroyAllWindows()

Run the code.

Here is the background:

1-before

Here is what things look like after we place an object in the field of view:

2-after

Notice that we’ve drawn a bounding box. We have also labeled the center of the object (i.e. the centroid) with its pixel coordinates.
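The centroid label is just the center of the bounding box, computed from the rectangle that cv2.boundingRect returns:

```python
def bbox_center(x, y, w, h):
    """Return the integer center of a bounding box given its
    top-left corner (x, y), width w, and height h."""
    return x + w // 2, y + h // 2

# Example: a 40x20 box whose top-left corner is at (100, 50)
print(bbox_center(100, 50, 40, 20))  # (120, 60)
```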

Feel free to tweak the threshold value (100) in the cv2.threshold line to your liking.
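To get a feel for how that number changes sensitivity, here is a small NumPy sketch that mimics what cv2.threshold does with THRESH_BINARY, using made-up difference values:

```python
import numpy as np

# A synthetic absolute-difference image: mostly small noise values,
# plus one genuinely changed pixel (160)
absolute_difference = np.array([[ 5, 12,   8],
                                [30,  7, 160],
                                [ 9, 25,  11]], dtype=np.uint8)

def binary_threshold(img, thresh):
    """Mimic cv2.threshold(img, thresh, 255, cv2.THRESH_BINARY):
    pixels strictly above thresh become 255, the rest become 0."""
    return np.where(img > thresh, 255, 0).astype(np.uint8)

# A low threshold lets noise through as "foreground"...
low = binary_threshold(absolute_difference, 20)

# ...while a higher threshold keeps only the real change
high = binary_threshold(absolute_difference, 100)

print(np.count_nonzero(low), np.count_nonzero(high))  # 3 1
```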

BackgroundSubtractorMOG2

Here is another method. I named the file background_subtractor_mog2_method.py.

# Author: Addison Sears-Collins
# Description: This algorithm detects objects in a video stream
#   using the Gaussian Mixture Model background subtraction method. 

# import the necessary packages
from picamera.array import PiRGBArray # Generates a 3D RGB array
from picamera import PiCamera # Provides a Python interface for the RPi Camera Module
import time # Provides time-related functions
import cv2 # OpenCV library
import numpy as np # Import NumPy library

# Initialize the camera
camera = PiCamera()

# Set the camera resolution
camera.resolution = (640, 480)

# Set the number of frames per second
camera.framerate = 30

# Generates a 3D RGB array and stores it in raw_capture
raw_capture = PiRGBArray(camera, size=(640, 480))

# Create the background subtractor object
# Feel free to modify the history as you see fit.
back_sub = cv2.createBackgroundSubtractorMOG2(history=150,
  varThreshold=25, detectShadows=True)

# Wait a certain number of seconds to allow the camera time to warm up
time.sleep(0.1)

# Create kernel for morphological operation. You can tweak
# the dimensions of the kernel.
# e.g. instead of 20, 20, you can try 30, 30
kernel = np.ones((20,20),np.uint8)

# Capture frames continuously from the camera
for frame in camera.capture_continuous(raw_capture, format="bgr", use_video_port=True):
    
    # Grab the raw NumPy array representing the image
    image = frame.array

    # Convert to foreground mask
    fg_mask = back_sub.apply(image)
    
    # Close gaps using closing
    fg_mask = cv2.morphologyEx(fg_mask,cv2.MORPH_CLOSE,kernel)
      
    # Remove salt and pepper noise with a median filter
    fg_mask = cv2.medianBlur(fg_mask,5)
      
    # If a pixel's value is less than the threshold (127 here), it is
    # considered black (background). Otherwise, it is white (foreground).
    # 255 is the upper limit. Tune the threshold as you see fit.
    _, fg_mask = cv2.threshold(fg_mask, 127, 255, cv2.THRESH_BINARY)

    # Find the contours of the objects inside the binary image.
    # The [-2:] slice keeps this working across OpenCV versions, which
    # return different numbers of values from findContours.
    contours, hierarchy = cv2.findContours(fg_mask,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE)[-2:]
    areas = [cv2.contourArea(c) for c in contours]
 
    # If there are no contours
    if len(areas) < 1:
 
      # Display the resulting frame
      cv2.imshow('Frame',image)
 
      # Wait for keyPress for 1 millisecond
      key = cv2.waitKey(1) & 0xFF
 
      # Clear the stream in preparation for the next frame
      raw_capture.truncate(0)
    
      # If "q" is pressed on the keyboard, 
      # exit this loop
      if key == ord("q"):
        break
    
      # Go to the top of the for loop
      continue
 
    else:
        
      # Find the largest moving object in the image
      max_index = np.argmax(areas)
      
    # Draw the bounding box
    cnt = contours[max_index]
    x,y,w,h = cv2.boundingRect(cnt)
    cv2.rectangle(image,(x,y),(x+w,y+h),(0,255,0),3)
 
    # Draw circle in the center of the bounding box
    x2 = x + int(w/2)
    y2 = y + int(h/2)
    cv2.circle(image,(x2,y2),4,(0,255,0),-1)
 
    # Print the centroid coordinates (we'll use the center of the
    # bounding box) on the image
    text = "x: " + str(x2) + ", y: " + str(y2)
    cv2.putText(image, text, (x2 - 10, y2 - 10),
      cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
         
    # Display the resulting frame
    cv2.imshow("Frame",image)
    
    # Wait for keyPress for 1 millisecond
    key = cv2.waitKey(1) & 0xFF
 
    # Clear the stream in preparation for the next frame
    raw_capture.truncate(0)
    
    # If "q" is pressed on the keyboard, 
    # exit this loop
    if key == ord("q"):
      break

# Close down windows
cv2.destroyAllWindows()

This method is more computationally intensive than the previous method, but it handles shadows better. If you want to detect objects that are moving, this is a good method. If you want to detect objects that enter the field of view and then stay there, use the absolute difference method.

Here is the before:

3-mog-2-before

Here is the after:

4-mog-2-after

You can see that the algorithm detected that pen pretty well.

Unlike the absolute difference method, which uses the same initial frame as the background until the program stops executing, the MOG2 method continually updates its background model based on the number of previous frames (i.e. the history) that you specify in the code.
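MOG2 itself fits a mixture of Gaussians per pixel, but the effect of a finite history is easy to illustrate with a simpler running-average background model. This is a toy sketch of the idea, not the MOG2 internals; treating the learning rate as roughly 1/history is an assumption made for illustration:

```python
HISTORY = 150
LEARNING_RATE = 1.0 / HISTORY  # old frames fade out over ~HISTORY frames

def update_background(background, frame, lr=LEARNING_RATE):
    """Blend the new frame into the background model (running average)."""
    return (1.0 - lr) * background + lr * frame

background = 100.0  # one background pixel, for simplicity
# An object parks on this pixel, so every new frame reads 200
for _ in range(5 * HISTORY):
    background = update_background(background, 200.0)

# After several "histories" worth of frames, the parked object has
# been absorbed into the background and would no longer be detected
print(round(background))
```

This is why a parked object eventually disappears from the MOG2 foreground mask, while the absolute difference method keeps flagging it forever.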

That’s it. Keep building!

How to Set Up Real-Time Video Using OpenCV on Raspberry Pi 4

In this tutorial, I will show you how to install OpenCV on Raspberry Pi 4 and then get a real-time video stream going. OpenCV is a library that has a bunch of programming functions that enable us to do real-time computer vision.

rt_video_strea_rpi

Prerequisites

You Will Need

Install the Raspberry Pi Camera Module

Let’s install the Raspberry Pi Camera Module. Here are the official instructions, but I’ll walk through the whole process below.

Grab the Raspberry Pi Camera’s plastic clip. (optional)

Remove the ribbon cable that is currently in there. (optional)

2020-09-04-085435

Replace that small cable with the longer ribbon cable that came with the Flex Extension Cable Set. (optional)

2020-09-04-085651

Open the Camera Serial Interface on the Raspberry Pi by taking your fingers, pinching either side, and pulling up. The Camera Serial Interface is labeled “CAMERA”.

2020-09-04-085126

Push the long piece of ribbon into the interface. The silver pieces of the ribbon should be facing towards the CAMERA label on the Raspberry Pi board.

2020-09-04-090330

Hold the ribbon in place with one finger while you push down on the door. The door should snap into place.

This is how it should look at this stage (Ignore the other stuff in the photo).

2020-09-04-091651

Lightly pull on the ribbon to make sure that it is in there snugly. It shouldn’t come out when you pull on it.

Configure the Raspberry Pi

We now need to make sure the Raspberry Pi is configured properly to use the camera.

Start the Raspberry Pi.

Open a fresh terminal window, and type the following command:

sudo raspi-config

Go to Interfacing Options and press Enter.

1-interfacing-optionsJPG

Select Camera and press Enter to enable the camera.

Press Enter.

2-enable-cameraJPG

Go to Advanced Options and press Enter.

Select Resolution and press Enter.

Select a screen resolution. I’m using 1920 x 1080.

Press Enter.

3-screen-resolutionJPG

Go to Finish and press Enter.

Press Enter on <Yes> to reboot the Raspberry Pi.

Test the Raspberry Pi Camera Module

Open up a new terminal window. Let’s take a test photo by typing the following command:

raspistill -o Desktop/image.jpg

Your photo should be on your Desktop.

image

Here is how the camera looks when it is right side up. The black bar needs to be on top above the camera lens.

2020-09-04-101908

Install OpenCV

Let’s install OpenCV on our Raspberry Pi. I will largely be following this excellent tutorial. The process has a lot of steps, so go slow.

Update the packages by typing these commands:

sudo apt-get update
sudo apt-get upgrade

Install these packages that assist in the compilation of the OpenCV code:

sudo apt install cmake build-essential pkg-config git

These packages provide the support that enables OpenCV to use different formats for both images and videos.

sudo apt install libjpeg-dev libtiff-dev libjasper-dev libpng-dev libwebp-dev libopenexr-dev
sudo apt install libavcodec-dev libavformat-dev libswscale-dev libv4l-dev libxvidcore-dev libx264-dev libdc1394-22-dev libgstreamer-plugins-base1.0-dev libgstreamer1.0-dev

Install some more OpenCV dependencies. You can learn about each of these packages by searching for it on the Debian website.

sudo apt install libgtk-3-dev libqtgui4 libqtwebkit4 libqt4-test python3-pyqt5

Install these packages that help OpenCV run quickly on your Raspberry Pi.

sudo apt install libatlas-base-dev liblapacke-dev gfortran

Install the HDF5 packages that OpenCV will use to manage the data.

sudo apt install libhdf5-dev libhdf5-103

Install Python support packages.

sudo apt install python3-dev python3-pip python3-numpy

Increase the swap space. Swap space is disk space that the operating system uses when RAM has reached its limit; compiling OpenCV needs more of it than the default provides.

sudo nano /etc/dphys-swapfile

Locate this line:

CONF_SWAPSIZE=100

Change that to:

CONF_SWAPSIZE=2048

Save the file and close out using the following keystrokes, one after the other:

CTRL+X 

Y

Enter

Regenerate the swap file:

sudo systemctl restart dphys-swapfile

Get the latest version of OpenCV from GitHub.

git clone https://github.com/opencv/opencv.git
git clone https://github.com/opencv/opencv_contrib.git

Both of these commands above will take a while to finish executing, so be patient.

Now, we need to compile OpenCV. We’ll create a new directory for this purpose.

mkdir ~/opencv/build
cd ~/opencv/build

Generate the makefile. Copy and paste this entire command below into your terminal and then press Enter.

cmake -D CMAKE_BUILD_TYPE=RELEASE \
    -D CMAKE_INSTALL_PREFIX=/usr/local \
    -D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
    -D ENABLE_NEON=ON \
    -D ENABLE_VFPV3=ON \
    -D BUILD_TESTS=OFF \
    -D INSTALL_PYTHON_EXAMPLES=OFF \
    -D OPENCV_ENABLE_NONFREE=ON \
    -D CMAKE_SHARED_LINKER_FLAGS=-latomic \
    -D BUILD_EXAMPLES=OFF ..

Compile OpenCV using the command below. -j$(nproc) makes sure that all of the processors that are available to us are used for the compilation. This speeds things up:

make -j$(nproc)

Wait for OpenCV to compile. It will take a while, so you can go to lunch or do something else, and then return. My Raspberry Pi took about an hour to finish everything.

Once that compilation process has completed, you need to make sure the files get installed.

sudo make install

Run this command to refresh the linker cache so that the Raspberry Pi can find OpenCV.

sudo ldconfig

Now, let’s go back to the swapfile and reset the size to something smaller.

sudo nano /etc/dphys-swapfile

Locate this line:

CONF_SWAPSIZE=2048

Change that to:

CONF_SWAPSIZE=100

Save the file and close out using the following keystrokes:

CTRL+X 

Y

Enter

Restart swap.

sudo systemctl restart dphys-swapfile

Now, make sure that picamera is installed. picamera is a Python package that enables your camera to interface with your Python code. The second command below installs the array sub-module, which lets picamera produce the NumPy arrays that OpenCV works with.

pip3 install picamera
pip3 install "picamera[array]"

Test OpenCV

Launch Python.

python3

Import OpenCV

import cv2

Check to see which version of OpenCV you have installed.

cv2.__version__

Here is what you should see:

4-here-is-what-you-should-seeJPG

To exit the Python interpreter, type:

exit()

Capture Real-Time Video

Now, go to the Python IDE in your Raspberry Pi by clicking the logo -> Programming -> Thonny Python IDE.

Write the following code to capture a live video feed. Credit to Dr. Adrian Rosebrock for this source code. I’ll name the file test_video_capture.py.

Here is the code:

# Credit: Adrian Rosebrock
# https://www.pyimagesearch.com/2015/03/30/accessing-the-raspberry-pi-camera-with-opencv-and-python/

# import the necessary packages
from picamera.array import PiRGBArray # Generates a 3D RGB array
from picamera import PiCamera # Provides a Python interface for the RPi Camera Module
import time # Provides time-related functions
import cv2 # OpenCV library

# Initialize the camera
camera = PiCamera()

# Set the camera resolution
camera.resolution = (640, 480)

# Set the number of frames per second
camera.framerate = 32

# Generates a 3D RGB array and stores it in raw_capture
raw_capture = PiRGBArray(camera, size=(640, 480))

# Wait a certain number of seconds to allow the camera time to warm up
time.sleep(0.1)

# Capture frames continuously from the camera
for frame in camera.capture_continuous(raw_capture, format="bgr", use_video_port=True):
    
    # Grab the raw NumPy array representing the image
    image = frame.array
    
    # Display the frame using OpenCV
    cv2.imshow("Frame", image)
    
    # Wait for keyPress for 1 millisecond
    key = cv2.waitKey(1) & 0xFF
    
    # Clear the stream in preparation for the next frame
    raw_capture.truncate(0)
    
    # If the `q` key was pressed, break from the loop
    if key == ord("q"):
        break

When you’re ready, click Run. Here is what my output was:

5_camera_video_streamJPG

That’s it for this tutorial. Keep building!

How to Install and Launch ROS2 Using Docker

In this tutorial, we will install and launch ROS2 using Docker within Ubuntu Linux 20.04. To test our installation, we will launch and run the popular TurtleBot3 software program.

What is Docker?

You can think of a Docker container on your PC as a computer inside your computer.

Docker is a service that provides portable stand-alone containers that come bundled with pre-installed software packages. These containers make your life easier because they come with the right software versions and dependencies for whatever application you are trying to run.

Docker makes a lot more sense when you actually use it, so let’s get started.

Install Docker

Go to this link to download the appropriate Docker for your system.

Install Docker on your computer. Since I’m using Ubuntu 20.04 in Virtual Box on a Windows 10 machine, I’ll follow the Ubuntu instructions for “Install using the repository.”

If you’re using a Mac or Windows, follow those instructions. I actually find it easier to use Ubuntu, so if you’re using Windows, I highly suggest you do this tutorial first to get Ubuntu up and running, and then come back to this page.

Pull and Start the Docker Container With ROS2

Open a new terminal window, and create a new folder.

mkdir new_folder

Let’s pull a Docker image. The one below comes with ROS2 already installed, and it comes from this GitHub repository.

sudo docker pull tiryoh/ros2-desktop-vnc:foxy

Now we need to start the Docker container and mount the folder on it (everything below is a single command).

sudo docker run -it -p 6080:80 -v /new_folder --name ros2_new_folder tiryoh/ros2-desktop-vnc:foxy

Note that ros2_new_folder is the name of the container.

Type the following in any browser: http://127.0.0.1:6080/

Here is the screen you should see:

1-screen-you-should-seeJPG

Install TurtleBot3

Click the menu icon in the very bottom left of the container.

Go to System Tools -> LX Terminal.

Download TurtleBot3.

mkdir -p ~/turtlebot3_ws/src
cd ~/turtlebot3_ws
wget https://raw.githubusercontent.com/ROBOTIS-GIT/turtlebot3/ros2/turtlebot3.repos
vcs import src < turtlebot3.repos

Wait a while as TurtleBot3 downloads into the container.

2-wait-a-whileJPG

Compile the source code.

colcon build --symlink-install

Wait for the source code to compile.

3-source-code-compileJPG

Let’s set the environment variables. Type these commands, one right after the other.

echo 'source ~/turtlebot3_ws/install/setup.bash' >> ~/.bashrc
echo 'export GAZEBO_MODEL_PATH=$GAZEBO_MODEL_PATH:~/turtlebot3_ws/src/turtlebot3/turtlebot3_simulations/turtlebot3_gazebo/models' >> ~/.bashrc
echo 'export TURTLEBOT3_MODEL=waffle_pi' >> ~/.bashrc
source ~/.bashrc

Launch an Empty World in Gazebo in ROS2

Now we will launch the simulation using the ros2 launch command.

ros2 launch turtlebot3_gazebo empty_world.launch.py

It will take a while to launch the first time, so be patient. You might see a bunch of warning messages; just ignore those.

When everything is done launching, you should see an empty world like this.

4-empty-gazebo-worldJPG

Go back to the terminal window by clicking the icon at the bottom of the screen.

Press CTRL+C to close it out.

For more simulations, check out this link at the TurtleBot3 website.

How to Stop the Docker Container

To stop the Docker container, go back to Ubuntu Linux and open a new terminal window. Type the following command.

sudo docker stop [name_of_container]

For example:

sudo docker stop ros2_new_folder

How to Restart the Docker Container

If you ever want to restart it in the future, you type:

sudo docker restart [name_of_container]

For example:

sudo docker restart ros2_new_folder

Type the following in any browser: http://127.0.0.1:6080/

That’s it.