How to Set Up a Camera for NVIDIA Jetson Nano

In this tutorial, we will connect a camera to an NVIDIA Jetson Nano.

Prerequisites

You Will Need

This section contains the complete list of components you will need for this project (#ad).

Disclosure (#ad): As an Amazon Associate I earn from qualifying purchases.

Connect the Camera to the Jetson Nano

Make sure the Jetson Nano is completely off, and no power is connected to it.

Grab the camera.

Lift the plastic tabs of the CSI connector that is closest to the barrel jack (Camera 0). 

Slide the ribbon cable fully into the connector so that it is not tilted. The blue marking should face towards the outside of the board, away from the heat sink. The ribbon cable contacts need to face towards the heat sink.

Hold the ribbon cable still while carefully and gently pushing down on the plastic tabs to fasten the camera into place.

You should be able to pull gently on the camera's ribbon cable without it popping out of the latch.

Test Your Camera

Turn on your Jetson Nano.

Open a new terminal window, and type:

ls /dev/video0

If the terminal prints /dev/video0, your camera is connected.
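
If you prefer to check from Python, here is a minimal sketch that lists any /dev/video* device nodes. Note that this only confirms the device node exists; it does not prove the sensor is working.

import glob

# List any video device nodes created by the camera driver.
devices = glob.glob("/dev/video*")
if devices:
    print("Camera device(s) found:", devices)
else:
    print("No camera device found")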

Take a Photo

Now open a new terminal window, and move it to the edge of your desktop.

Type the following command. You can change the value of --orientation (0-3) to get the right orientation for your camera.

nvgstcapture-1.0 --orientation=2

Press CTRL+C to exit.

Test Python Setup (Optional)

Let’s see if we can run a Python script that uses the camera.

Open a new terminal window, and clone the repository created by JetsonHacksNano.

git clone https://github.com/JetsonHacksNano/CSI-Camera.git

Change to the new directory.

cd CSI-Camera

Type this command. This is all a single command:

gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! 'video/x-raw(memory:NVMM), width=3280, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=2 ! 'video/x-raw, width=816, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e

Press CTRL + C to shut it down.

Install NumPy.

sudo apt-get update
sudo apt install python3-numpy

Install libcanberra.

sudo apt install libcanberra-gtk-module

Run the face detection Python script.

python3 face_detect.py

A window should open showing the live camera feed, with rectangles drawn around any detected faces and eyes.

If the output doesn’t look the way you want (for example, the image is upside down), open the face_detect.py script.

gedit face_detect.py

Note: if you don’t have gedit installed, type:

sudo apt-get install gedit

Edit the flip_method parameter:

def gstreamer_pipeline(
    …
    framerate=21,
    flip_method=2,

Here are the options for the flip_method parameter:

  • Default: 0, “none”
  • (0): none – Identity (no rotation)
  • (1): counterclockwise – Rotate counter-clockwise 90 degrees
  • (2): rotate-180 – Rotate 180 degrees
  • (3): clockwise – Rotate clockwise 90 degrees
  • (4): horizontal-flip – Flip horizontally
  • (5): upper-right-diagonal – Flip across upper right/lower left diagonal
  • (6): vertical-flip – Flip vertically
  • (7): upper-left-diagonal – Flip across upper left/lower right diagonal
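
To see exactly what changing flip_method does, you can print the pipeline string the script builds. Here is a minimal sketch that mirrors the gstreamer_pipeline function from face_detect.py (shown in full below); only the flip-method property of the nvvidconv element changes:

# Mirror of the pipeline string built by face_detect.py, for inspection only.
def print_pipeline(flip_method=0):
    pipeline = (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), width=(int)3280, height=(int)2464, "
        "format=(string)NV12, framerate=(fraction)21/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)820, height=(int)616, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink" % flip_method
    )
    print(pipeline)

# Compare the string for no rotation (0) and a 180-degree rotation (2).
print_pipeline(0)
print_pipeline(2)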

Here is the full code for face_detect.py (credit: JetsonHacks)

# MIT License
# Copyright (c) 2019 JetsonHacks
# See LICENSE for OpenCV license and additional information

# https://docs.opencv.org/3.3.1/d7/d8b/tutorial_py_face_detection.html
# On the Jetson Nano, OpenCV comes preinstalled
# Data files are in /usr/share/opencv4
import numpy as np
import cv2

# gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
# Defaults to a 3280x2464 capture @ 21fps, displayed in an 820x616 window
# Flip the image by setting the flip_method (most common values: 0 and 2)
# display_width and display_height determine the size of the window on the screen


def gstreamer_pipeline(
    capture_width=3280,
    capture_height=2464,
    display_width=820,
    display_height=616,
    framerate=21,
    flip_method=2,
):
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), "
        "width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink"
        % (
            capture_width,
            capture_height,
            framerate,
            flip_method,
            display_width,
            display_height,
        )
    )


def face_detect():
    face_cascade = cv2.CascadeClassifier(
        "/usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml"
    )
    eye_cascade = cv2.CascadeClassifier(
        "/usr/share/opencv4/haarcascades/haarcascade_eye.xml"
    )
    cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
    if cap.isOpened():
        cv2.namedWindow("Face Detect", cv2.WINDOW_AUTOSIZE)
        while cv2.getWindowProperty("Face Detect", 0) >= 0:
            ret, img = cap.read()
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, 1.3, 5)

            for (x, y, w, h) in faces:
                cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
                roi_gray = gray[y : y + h, x : x + w]
                roi_color = img[y : y + h, x : x + w]
                eyes = eye_cascade.detectMultiScale(roi_gray)
                for (ex, ey, ew, eh) in eyes:
                    cv2.rectangle(
                        roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2
                    )

            cv2.imshow("Face Detect", img)
            keyCode = cv2.waitKey(30) & 0xFF
            # Stop the program on the ESC key
            if keyCode == 27:
                break

        cap.release()
        cv2.destroyAllWindows()
    else:
        print("Unable to open camera")


if __name__ == "__main__":
    face_detect()

Press ESC (with the video window in focus) or CTRL+C in the terminal to stop the script.
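
If you want a quick sanity check that OpenCV can pull frames through this kind of GStreamer pipeline without opening a display window, here is a minimal sketch. The pipeline string below is an assumption modeled on the one in face_detect.py, at a smaller 1280x720 capture size; adjust flip-method to match how your camera is mounted.

import cv2

# Assumed pipeline, modeled on face_detect.py but at 1280x720.
pipeline = (
    "nvarguscamerasrc ! "
    "video/x-raw(memory:NVMM), width=(int)1280, height=(int)720, "
    "format=(string)NV12, framerate=(fraction)60/1 ! "
    "nvvidconv flip-method=2 ! "
    "video/x-raw, width=(int)1280, height=(int)720, format=(string)BGRx ! "
    "videoconvert ! "
    "video/x-raw, format=(string)BGR ! appsink"
)

cap = cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER)
if cap.isOpened():
    ok, frame = cap.read()
    cap.release()
    if ok:
        cv2.imwrite("test_frame.jpg", frame)  # save one frame to disk
        print("Saved test_frame.jpg")
    else:
        print("Unable to read a frame from the camera")
else:
    print("Unable to open camera")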

Test C++ Setup (Optional)

Open simple_camera.cpp and edit the flip_method in the main function if necessary.

gedit simple_camera.cpp

int main()
{
…
    int framerate = 60 ;
    int flip_method = 2;

Save the file and close it.

Compile simple_camera.cpp. You can copy and paste this command.

g++ -std=c++11 -Wall -I/usr/include/opencv4 simple_camera.cpp -L/usr/lib/aarch64-linux-gnu -lopencv_core -lopencv_highgui -lopencv_videoio -o simple_camera

Run the compiled simple_camera program.

./simple_camera

Press ESC when you’re done.

To shutdown the computer, type:

sudo shutdown -h now

Here is the full code for simple_camera.cpp:

// simple_camera.cpp
// MIT License
// Copyright (c) 2019 JetsonHacks
// See LICENSE for OpenCV license and additional information
// Using a CSI camera (such as the Raspberry Pi Version 2) connected to a 
// NVIDIA Jetson Nano Developer Kit using OpenCV
// Drivers for the camera and OpenCV are included in the base image

// #include <iostream>
#include <opencv2/opencv.hpp>
// #include <opencv2/videoio.hpp>
// #include <opencv2/highgui.hpp>

std::string gstreamer_pipeline (int capture_width, int capture_height, int display_width, int display_height, int framerate, int flip_method) {
    return "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)" + std::to_string(capture_width) + ", height=(int)" +
           std::to_string(capture_height) + ", format=(string)NV12, framerate=(fraction)" + std::to_string(framerate) +
           "/1 ! nvvidconv flip-method=" + std::to_string(flip_method) + " ! video/x-raw, width=(int)" + std::to_string(display_width) + ", height=(int)" +
           std::to_string(display_height) + ", format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink";
}

int main()
{
    int capture_width = 1280;
    int capture_height = 720;
    int display_width = 1280;
    int display_height = 720;
    int framerate = 60;
    int flip_method = 2;

    std::string pipeline = gstreamer_pipeline(capture_width,
        capture_height,
        display_width,
        display_height,
        framerate,
        flip_method);
    std::cout << "Using pipeline: \n\t" << pipeline << "\n";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cout << "Failed to open camera." << std::endl;
        return (-1);
    }

    cv::namedWindow("CSI Camera", cv::WINDOW_AUTOSIZE);
    cv::Mat img;

    std::cout << "Hit ESC to exit" << "\n";
    while (true)
    {
        if (!cap.read(img)) {
            std::cout << "Capture read error" << std::endl;
            break;
        }

        cv::imshow("CSI Camera", img);
        int keycode = cv::waitKey(30) & 0xff;
        if (keycode == 27) break;
    }

    cap.release();
    cv::destroyAllWindows();
    return 0;
}

That’s it. Keep building!

References

This tutorial was especially helpful.

How to Install OpenCV 4.5 on NVIDIA Jetson Nano

In this tutorial, we will install OpenCV 4.5 on the NVIDIA Jetson Nano. I am installing OpenCV 4.5 because the OpenCV build that comes pre-installed on the Jetson Nano does not have CUDA support. CUDA support will let us use the GPU to run deep learning applications.

The terminal command to check which OpenCV version you have on your computer is:

python -c 'import cv2; print(cv2.__version__)'
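
You can also check whether that build was compiled with CUDA support. Here is a minimal sketch (run it with python3) that searches the OpenCV build summary for CUDA-related lines; on the stock Jetson Nano image it should show CUDA as disabled:

import cv2

print("OpenCV version:", cv2.__version__)

# Print any CUDA-related lines from the build summary.
for line in cv2.getBuildInformation().splitlines():
    if "CUDA" in line:
        print(line.strip())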

Prerequisites

Install Dependencies

Type the following command.

sudo sh -c "echo '/usr/local/cuda/lib64' >> /etc/ld.so.conf.d/nvidia-tegra.conf"

Update the links and cache for the shared libraries.

sudo ldconfig

Install the relevant third-party libraries. Pay close attention to the line wrapping below. Each command begins with "sudo apt-get install".

sudo apt-get install build-essential cmake git unzip pkg-config
sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install libgtk2.0-dev libcanberra-gtk*
sudo apt-get install python3-dev python3-numpy python3-pip
sudo apt-get install libxvidcore-dev libx264-dev libgtk-3-dev
sudo apt-get install libtbb2 libtbb-dev libdc1394-22-dev
sudo apt-get install libv4l-dev v4l-utils
sudo apt-get install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
sudo apt-get install libavresample-dev libvorbis-dev libxine2-dev
sudo apt-get install libfaac-dev libmp3lame-dev libtheora-dev
sudo apt-get install libopencore-amrnb-dev libopencore-amrwb-dev
sudo apt-get install libopenblas-dev libatlas-base-dev libblas-dev
sudo apt-get install liblapack-dev libeigen3-dev gfortran
sudo apt-get install libhdf5-dev protobuf-compiler
sudo apt-get install libprotobuf-dev libgoogle-glog-dev libgflags-dev

Download OpenCV

Now that we’ve installed the third-party libraries, let’s install OpenCV itself. You can find the latest release on the OpenCV GitHub releases page. Again, pay attention to the line wrapping; the https://github… URLs are too long to fit on one line.

cd ~
wget -O opencv.zip https://github.com/opencv/opencv/archive/4.5.1.zip
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.5.1.zip
unzip opencv.zip
unzip opencv_contrib.zip

Now rename the directories and remove the zip files. Type each command below, one after the other.

mv opencv-4.5.1 opencv
mv opencv_contrib-4.5.1 opencv_contrib
rm opencv.zip
rm opencv_contrib.zip

Build OpenCV

Let’s build the OpenCV files.

Create a directory.

cd ~/opencv
mkdir build
cd build

Set the compilation directives. You can copy and paste this entire block of commands below into your terminal.

cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D EIGEN_INCLUDE_PATH=/usr/include/eigen3 \
-D WITH_OPENCL=OFF \
-D WITH_CUDA=ON \
-D CUDA_ARCH_BIN=5.3 \
-D CUDA_ARCH_PTX="" \
-D WITH_CUDNN=ON \
-D WITH_CUBLAS=ON \
-D ENABLE_FAST_MATH=ON \
-D CUDA_FAST_MATH=ON \
-D OPENCV_DNN_CUDA=ON \
-D ENABLE_NEON=ON \
-D WITH_QT=OFF \
-D WITH_OPENMP=ON \
-D WITH_OPENGL=ON \
-D BUILD_TIFF=ON \
-D WITH_FFMPEG=ON \
-D WITH_GSTREAMER=ON \
-D WITH_TBB=ON \
-D BUILD_TBB=ON \
-D BUILD_TESTS=OFF \
-D WITH_EIGEN=ON \
-D WITH_V4L=ON \
-D WITH_LIBV4L=ON \
-D OPENCV_ENABLE_NONFREE=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D BUILD_NEW_PYTHON_SUPPORT=ON \
-D BUILD_opencv_python3=TRUE \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D BUILD_EXAMPLES=OFF ..

When the configuration step completes, cmake prints a summary of the build settings followed by "Configuring done" and "Generating done".

Build OpenCV. This command below will take a long time (1-2 hours), so you can go do something else and come back later.

make -j4

If the building process stops before it reaches 100%, repeat the cmake command I showed earlier, and run the ‘make -j4’ command again.

One other thing: the build stalled on me a lot of times. To avoid that, I moved the mouse cursor every few minutes so that the Jetson Nano’s screen saver didn’t turn on.

Finish the install.

cd ~
sudo rm -r /usr/include/opencv4/opencv2
cd ~/opencv/build
sudo make install
sudo ldconfig
make clean
sudo apt-get update

Verify Your OpenCV Installation

Let’s verify that everything is working correctly.

python3
import cv2
cv2.__version__
exit()

Final Housekeeping

Delete the original opencv and opencv_contrib folders.

sudo rm -rf ~/opencv
sudo rm -rf ~/opencv_contrib

Install jtop, a system monitoring utility for the Jetson Nano.

cd ~
sudo -H pip3 install -U jetson-stats

Reboot your machine, and then launch jtop.

sudo reboot
jtop

Press q to quit.

Verify the installation of OpenCV one last time.

python3
import cv2
cv2.__version__
exit()
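
To confirm that the new build actually enables the features we asked for, you can also check the CUDA device count and the GStreamer flag from Python. A minimal sketch:

import cv2

print("OpenCV version:", cv2.__version__)

# With a CUDA build, this should report at least one device (the Nano's GPU).
print("CUDA-enabled devices:", cv2.cuda.getCudaEnabledDeviceCount())

# The GStreamer line in the build summary should say YES.
for line in cv2.getBuildInformation().splitlines():
    if "GStreamer" in line:
        print(line.strip())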

That’s it. Keep building!

References

This article over at Q-engineering was really helpful.

How to Determine the Orientation of an Object Using OpenCV

In this tutorial, we will build a program that can determine the orientation of an object (i.e. rotation angle in degrees) using the popular computer vision library OpenCV.

Real-World Applications

One of the most common real-world use cases of the program we will develop in this tutorial is when you want to develop a pick and place system for robotic arms. Determining the orientation of an object on a conveyor belt is key to determining the appropriate way to grasp the object, pick it up, and place it in another location.

Let’s get started!

Prerequisites

Installation and Setup

Before we get started, let’s make sure we have all the software packages installed. Check to see if you have OpenCV installed on your machine. If you are using Anaconda, you can type:

conda install -c conda-forge opencv

Alternatively, you can type:

pip install opencv-python

Install NumPy, the scientific computing library.

pip install numpy

Find an Image File

Find an image. My input image is 1200 pixels in width and 900 pixels in height. The filename of my input image is input_img.jpg.

Write the Code

Here is the code. It accepts an image named input_img.jpg and outputs an annotated image named output_img.jpg. Pieces of the code are adapted from the official OpenCV PCA tutorial.

import cv2 as cv
from math import atan2, cos, sin, sqrt, pi
import numpy as np

def drawAxis(img, p_, q_, color, scale):
  p = list(p_)
  q = list(q_)

  ## [visualization1]
  angle = atan2(p[1] - q[1], p[0] - q[0]) # angle in radians
  hypotenuse = sqrt((p[1] - q[1]) * (p[1] - q[1]) + (p[0] - q[0]) * (p[0] - q[0]))

  # Here we lengthen the arrow by a factor of scale
  q[0] = p[0] - scale * hypotenuse * cos(angle)
  q[1] = p[1] - scale * hypotenuse * sin(angle)
  cv.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), color, 3, cv.LINE_AA)

  # create the arrow hooks
  p[0] = q[0] + 9 * cos(angle + pi / 4)
  p[1] = q[1] + 9 * sin(angle + pi / 4)
  cv.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), color, 3, cv.LINE_AA)

  p[0] = q[0] + 9 * cos(angle - pi / 4)
  p[1] = q[1] + 9 * sin(angle - pi / 4)
  cv.line(img, (int(p[0]), int(p[1])), (int(q[0]), int(q[1])), color, 3, cv.LINE_AA)
  ## [visualization1]

def getOrientation(pts, img):
  ## [pca]
  # Construct a buffer used by the pca analysis
  sz = len(pts)
  data_pts = np.empty((sz, 2), dtype=np.float64)
  for i in range(data_pts.shape[0]):
    data_pts[i,0] = pts[i,0,0]
    data_pts[i,1] = pts[i,0,1]

  # Perform PCA analysis
  mean = np.empty((0))
  mean, eigenvectors, eigenvalues = cv.PCACompute2(data_pts, mean)

  # Store the center of the object
  cntr = (int(mean[0,0]), int(mean[0,1]))
  ## [pca]

  ## [visualization]
  # Draw the principal components
  cv.circle(img, cntr, 3, (255, 0, 255), 2)
  p1 = (cntr[0] + 0.02 * eigenvectors[0,0] * eigenvalues[0,0], cntr[1] + 0.02 * eigenvectors[0,1] * eigenvalues[0,0])
  p2 = (cntr[0] - 0.02 * eigenvectors[1,0] * eigenvalues[1,0], cntr[1] - 0.02 * eigenvectors[1,1] * eigenvalues[1,0])
  drawAxis(img, cntr, p1, (255, 255, 0), 1)
  drawAxis(img, cntr, p2, (0, 0, 255), 5)

  angle = atan2(eigenvectors[0,1], eigenvectors[0,0]) # orientation in radians
  ## [visualization]

  # Label with the rotation angle
  label = "  Rotation Angle: " + str(-int(np.rad2deg(angle)) - 90) + " degrees"
  textbox = cv.rectangle(img, (cntr[0], cntr[1]-25), (cntr[0] + 250, cntr[1] + 10), (255,255,255), -1)
  cv.putText(img, label, (cntr[0], cntr[1]), cv.FONT_HERSHEY_SIMPLEX, 0.5, (0,0,0), 1, cv.LINE_AA)

  return angle

# Load the image
img = cv.imread("input_img.jpg")

# Was the image there?
if img is None:
  print("Error: File not found")
  exit(0)

cv.imshow('Input Image', img)

# Convert image to grayscale
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Convert image to binary
_, bw = cv.threshold(gray, 50, 255, cv.THRESH_BINARY | cv.THRESH_OTSU)

# Find all the contours in the thresholded image
contours, _ = cv.findContours(bw, cv.RETR_LIST, cv.CHAIN_APPROX_NONE)

for i, c in enumerate(contours):

  # Calculate the area of each contour
  area = cv.contourArea(c)

  # Ignore contours that are too small or too large
  if area < 3700 or 100000 < area:
    continue

  # Draw each contour only for visualisation purposes
  cv.drawContours(img, contours, i, (0, 0, 255), 2)

  # Find the orientation of each shape
  getOrientation(c, img)

cv.imshow('Output Image', img)
cv.waitKey(0)
cv.destroyAllWindows()
 
# Save the output image to the current directory
cv.imwrite("output_img.jpg", img)

Output Image

The output image shows each detected object outlined, with its principal axes and a rotation angle label drawn on top.

Understanding the Rotation Axes

The positive x-axis of each object is the red line. The positive y-axis of each object is the blue line.

The global positive x-axis goes from left to right horizontally across the image. The global positive z-axis points out of this page. The global positive y-axis points from the bottom of the image to the top of the image vertically.

Using the right-hand rule to measure rotation, you stick your four fingers out straight (index finger to pinky finger) in the direction of the global positive x-axis.

You then rotate your four fingers 90 degrees counterclockwise. Your fingertips point towards the positive y-axis, and your thumb points out of this page towards the positive z-axis.
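
If you want to use the angle in your own code (for example, to command a robotic arm) rather than just read it off the image, you can convert the radians returned by getOrientation into the same degree value the script draws in its label. A minimal sketch of that conversion:

import numpy as np

# angle_in_radians is the value returned by getOrientation(c, img)
angle_in_radians = 1.57  # example value (roughly 90 degrees)

# Same conversion the script uses when it draws the rotation angle label
angle_in_degrees = -int(np.rad2deg(angle_in_radians)) - 90
print(angle_in_degrees)  # prints -179 for this example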

Calculate an Orientation Between 0 and 180 Degrees

If we want to calculate the orientation of an object and make sure that the result is always between 0 and 180 degrees, we can use this code:

# This program calculates the orientation of an object.
# The input is an image, and the output is an annotated image
# with the angle of orientation for each object (0 to 180 degrees).

import cv2 as cv
from math import atan2, cos, sin, sqrt, pi
import numpy as np

# Load the image
img = cv.imread("input_img.jpg")

# Was the image there?
if img is None:
  print("Error: File not found")
  exit(0)

cv.imshow('Input Image', img)

# Convert image to grayscale
gray = cv.cvtColor(img, cv.COLOR_BGR2GRAY)

# Convert image to binary
_, bw = cv.threshold(gray, 50, 255, cv.THRESH_BINARY | cv.THRESH_OTSU)

# Find all the contours in the thresholded image
contours, _ = cv.findContours(bw, cv.RETR_LIST, cv.CHAIN_APPROX_NONE)

for i, c in enumerate(contours):

  # Calculate the area of each contour
  area = cv.contourArea(c)

  # Ignore contours that are too small or too large
  if area < 3700 or 100000 < area:
    continue

  # cv.minAreaRect returns:
  # (center(x, y), (width, height), angle of rotation) = cv.minAreaRect(c)
  rect = cv.minAreaRect(c)
  box = cv.boxPoints(rect)
  box = np.int0(box)

  # Retrieve the key parameters of the rotated bounding box
  center = (int(rect[0][0]),int(rect[0][1])) 
  width = int(rect[1][0])
  height = int(rect[1][1])
  angle = int(rect[2])

  if width < height:
    angle = 90 - angle
  else:
    angle = -angle

  label = "  Rotation Angle: " + str(angle) + " degrees"
  textbox = cv.rectangle(img, (center[0]-35, center[1]-25), 
    (center[0] + 295, center[1] + 10), (255,255,255), -1)
  cv.putText(img, label, (center[0]-50, center[1]), 
    cv.FONT_HERSHEY_SIMPLEX, 0.7, (0,0,0), 1, cv.LINE_AA)
  cv.drawContours(img,[box],0,(0,0,255),2)

cv.imshow('Output Image', img)
cv.waitKey(0)
cv.destroyAllWindows()
 
# Save the output image to the current directory
cv.imwrite("min_area_rec_output.jpg", img)

The output image shows the rotated bounding box drawn around each object, with its rotation angle (between 0 and 180 degrees) in a label.
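
One caveat: the angle convention used by cv.minAreaRect has changed between OpenCV versions, so the width/height correction in the loop above may need adjusting on your install. A quick way to check the convention on your version is to pass a rectangle with a known rotation back through cv.minAreaRect, as in this minimal sketch (the 30-degree test angle is arbitrary):

import cv2 as cv
import numpy as np

# Corner points of a 200x100 rectangle centered at (300, 300), rotated 30 degrees.
known_rect = ((300.0, 300.0), (200.0, 100.0), 30.0)
points = cv.boxPoints(known_rect).astype(np.float32)

# Ask minAreaRect what it reports for those corners.
recovered = cv.minAreaRect(points)
print("Reported (width, height):", recovered[1])
print("Reported angle:", recovered[2])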

That’s it. Keep building!