Calculate Pulses per Revolution for a DC Motor With Encoder

In this tutorial, we will learn how to calculate the number of pulses per 360-degree revolution for a DC motor with a built-in encoder. The motor we will work with looks like the one in the image below, but you can use any motor that looks similar to it:

1_dc_motor_encoder_wheels

Real-World Applications

When we know the number of pulses that an encoder outputs for each 360-degree turn of a motor, we can use that information to calculate the angular velocity of the wheels (in radians per second). 

When we know the angular velocity of the wheels on a robot and the radius of the wheels, we can calculate how fast the robot is moving (i.e. speed) as well as the distance a robot has traveled in a given unit of time. This information is important for helping us determine where a robot is in a particular environment.
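As a preview of that math, here is a minimal Python sketch. It is only an illustration: the 620 pulses per revolution is the number we will measure later in this tutorial, and the wheel radius is a made-up example value that you would replace with your own measurement.

import math

PULSES_PER_REVOLUTION = 620   # we measure this value later in this tutorial
WHEEL_RADIUS_M = 0.0325       # example wheel radius (3.25 cm); use your own

def wheel_motion(pulse_delta, time_delta_s):
    # Convert a change in pulse count over a time interval into
    # angular velocity (rad/s), linear speed (m/s), and distance traveled (m).
    angle_rad = (pulse_delta / PULSES_PER_REVOLUTION) * 2.0 * math.pi
    angular_velocity = angle_rad / time_delta_s        # radians per second
    linear_speed = angular_velocity * WHEEL_RADIUS_M   # meters per second
    distance = angle_rad * WHEEL_RADIUS_M              # meters traveled
    return angular_velocity, linear_speed, distance

# Example: 310 pulses (half a revolution) counted over 0.5 seconds
print(wheel_motion(310, 0.5))   # about 6.28 rad/s, 0.20 m/s, 0.10 m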

Prerequisites

  • You have the Arduino IDE (Integrated Development Environment) installed on your PC (Windows, macOS, or Linux).

You Will Need

This section is the complete list of components you will need for this project (#ad).

OR

  • Self-Balancing Car Kit (which includes everything above and more…Elegoo and Osoyoo are good brands you can find on Amazon.com)

Disclosure (#ad): As an Amazon Associate I earn from qualifying purchases.

What is a Pulse?

When a motor with a built-in encoder rotates, it generates pulses, which are alternating electrical signals of high voltage and low voltage. Each time the signal goes from low to high (i.e. rising), we count that as a single pulse.

time-intervalJPG

Our goal is to take our motor and measure the number of encoder pulses (often referred to as “ticks”) it generates in a single 360-degree turn of the motor.
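To make “counting rising edges” concrete, here is a tiny Python sketch with made-up sample values. The Arduino will do this counting for us later in this tutorial using an interrupt, so this is only to illustrate the idea.

# 0 = low voltage, 1 = high voltage, sampled over time
samples = [0, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]

pulse_count = 0
for previous, current in zip(samples, samples[1:]):
    if previous == 0 and current == 1:   # low-to-high (rising) transition
        pulse_count += 1

print(pulse_count)   # 3 rising edges, so 3 pulses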

Set Up the Hardware

The first thing we need to do is set up the hardware.

Here is the wiring diagram:

jgb37_dc_motor_with_encoder-1
  • The Ground pin of the motor connects to GND of the Arduino.
  • Encoder A (sometimes labeled C1) of the motor connects to pin 2 of the Arduino. Pin 2 of the Arduino will record every time there is a rising digital signal from Encoder A.
  • Encoder B (sometimes labeled C2) of the motor connects to pin 4 of the Arduino. The signal that is read off pin 4 on the Arduino will determine if the motor is moving forward or in reverse. We’re not going to use this pin in this tutorial, but we will use it in a future tutorial.
  • The VCC pin of the motor connects to the 5V pin of the Arduino. This pin is responsible for providing power to the encoder.
  • For this project, you don’t need to connect the motor pins (+ and – terminals) to anything since you will be turning the motor manually with your hand.

Write and Load the Code

Now we’re ready to calculate the number of encoder pulses per revolution. Open the Arduino IDE, and write the following program. The name of my program is pulses_per_revolution_counter.ino.

/*
 * Author: Automatic Addison
 * Website: https://automaticaddison.com
 * Description: Count the number of encoder pulses per revolution.  
 */

// Encoder output to Arduino Interrupt pin. Tracks the pulse count.
#define ENC_IN_RIGHT_A 2

// Keep track of the number of right wheel pulses
volatile long right_wheel_pulse_count = 0;

void setup() {

  // Open the serial port at 9600 bps
  Serial.begin(9600); 

  // Set pin states of the encoder
  pinMode(ENC_IN_RIGHT_A, INPUT_PULLUP);

  // Every time the pin goes high, this is a pulse
  attachInterrupt(digitalPinToInterrupt(ENC_IN_RIGHT_A), right_wheel_pulse, RISING);
  
}

void loop() {
 
    Serial.print(" Pulses: ");
    Serial.println(right_wheel_pulse_count);  
}

// Increment the number of pulses by 1
void right_wheel_pulse() {
  right_wheel_pulse_count++;
}

Compile the code by clicking the green checkmark in the upper-left of the IDE window.

Connect the Arduino board to your personal computer using the USB cord.

Load the code we just wrote to your Arduino board.

Now, follow the steps shown in the image below.

open-serial-monitorJPG

When you open the Serial Monitor, the pulse count should be 0.

2_starting_pulses

Using your hand, rotate the motor a complete 360-degree turn.

Here is the output. We can see that there were 620 pulses generated. 

3-ending-pulses

Thus, this motor generates 620 pulses per revolution.
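That number tells us the angular resolution of the encoder: each pulse corresponds to 360 / 620 ≈ 0.58 degrees of wheel rotation. Here is a short Python sketch of that conversion (the 930-pulse reading is just an example). Your motor may produce a different count, so substitute your own measured value.

PULSES_PER_REVOLUTION = 620

def pulses_to_degrees(pulse_count):
    # Convert a raw encoder pulse count into degrees of rotation.
    return pulse_count * 360.0 / PULSES_PER_REVOLUTION

print(pulses_to_degrees(620))   # 360.0 degrees (one full revolution)
print(pulses_to_degrees(930))   # 540.0 degrees (one and a half revolutions)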

That’s it. Keep building!

How to Set Up a Camera for NVIDIA Jetson Nano

In this tutorial, we will connect a camera to an NVIDIA Jetson Nano.

Prerequisites

You Will Need

This section is the complete list of components you will need for this project (#ad).

2021-04-07-09.40.12

Disclosure (#ad): As an Amazon Associate I earn from qualifying purchases.

Connect the Camera to the Jetson Nano

Make sure the Jetson Nano is completely off, and no power is connected to it.

Grab the camera.

2021-04-07-09.40.46

Lift the plastic tabs of the CSI connector that is closest to the barrel jack (Camera 0). 

Slide the ribbon cable fully into the connector so that it is not tilted. The blue marking should face towards the outside of the board, away from the heat sink. The ribbon cable contacts need to face towards the heat sink.

Hold the ribbon cable still while you carefully and gently push down on the plastic tabs to fasten the camera into place.

You should be able to pull gently on the camera without the camera popping out of the latch.

Test Your Camera

Turn on your Jetson Nano.

Open a new terminal window, and type:

ls /dev/video0

If you see output like this, it means your camera is connected.

2021-04-07-09.50.35

Take a Photo

Now open a new terminal window, and move it to the edge of your desktop.

Type the following command. You can change the value of --orientation (0-3) until the image from your camera is oriented correctly.

nvgstcapture-1.0 --orientation=2
2021-04-07-10.17.27

Press CTRL+C to exit.

Test Python Setup (Optional)

Let’s see if we can run a Python script that uses the camera.

Open a new terminal window, and clone the repository created by JetsonHacksNano.

git clone https://github.com/JetsonHacksNano/CSI-Camera.git

Change to the new directory.

cd CSI-Camera

Type this command. This is all a single command:

gst-launch-1.0 nvarguscamerasrc sensor_id=0 ! 'video/x-raw(memory:NVMM),width=3280, height=2464, framerate=21/1, format=NV12' ! nvvidconv flip-method=2 ! 'video/x-raw, width=816, height=616' ! nvvidconv ! nvegltransform ! nveglglessink -e

Press CTRL + C to shut it down.

Install NumPy.

sudo apt-get update
sudo apt install python3-numpy

Install libcanberra.

sudo apt install libcanberra-gtk-module

Run the face detection Python script.

python3 face_detect.py

Here is my output:

2021-04-07-10.42.11-1

If the output doesn’t turn out like you want it to, open the face_detect.py script.

gedit face_detect.py

Note: if you don’t have gedit installed, type:

sudo apt-get install gedit

Edit the flip_method parameter:

def gstreamer_pipeline(
    …
    framerate=21,
    flip_method=2,

Here are the options for the flip_method parameter:

  • Default: 0, “none”
  • (0): none – Identity (no rotation)
  • (1): counterclockwise – Rotate counter-clockwise 90 degrees
  • (2): rotate-180 – Rotate 180 degrees
  • (3): clockwise – Rotate clockwise 90 degrees
  • (4): horizontal-flip – Flip horizontally
  • (5): upper-right-diagonal – Flip across upper right/lower left diagonal
  • (6): vertical-flip – Flip vertically
  • (7): upper-left-diagonal – Flip across upper left/lower right diagonal

Here is the full code for face_detect.py (credit: JetsonHacks):

# MIT License
# Copyright (c) 2019 JetsonHacks
# See LICENSE for OpenCV license and additional information

# https://docs.opencv.org/3.3.1/d7/d8b/tutorial_py_face_detection.html
# On the Jetson Nano, OpenCV comes preinstalled
# Data files are in /usr/share/OpenCV
import numpy as np
import cv2

# gstreamer_pipeline returns a GStreamer pipeline for capturing from the CSI camera
# Defaults to 1280x720 @ 30fps
# Flip the image by setting the flip_method (most common values: 0 and 2)
# display_width and display_height determine the size of the window on the screen


def gstreamer_pipeline(
    capture_width=3280,
    capture_height=2464,
    display_width=820,
    display_height=616,
    framerate=21,
    flip_method=2,
):
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), "
        "width=(int)%d, height=(int)%d, "
        "format=(string)NV12, framerate=(fraction)%d/1 ! "
        "nvvidconv flip-method=%d ! "
        "video/x-raw, width=(int)%d, height=(int)%d, format=(string)BGRx ! "
        "videoconvert ! "
        "video/x-raw, format=(string)BGR ! appsink"
        % (
            capture_width,
            capture_height,
            framerate,
            flip_method,
            display_width,
            display_height,
        )
    )


def face_detect():
    face_cascade = cv2.CascadeClassifier(
        "/usr/share/opencv4/haarcascades/haarcascade_frontalface_default.xml"
    )
    eye_cascade = cv2.CascadeClassifier(
        "/usr/share/opencv4/haarcascades/haarcascade_eye.xml"
    )
    cap = cv2.VideoCapture(gstreamer_pipeline(), cv2.CAP_GSTREAMER)
    if cap.isOpened():
        cv2.namedWindow("Face Detect", cv2.WINDOW_AUTOSIZE)
        while cv2.getWindowProperty("Face Detect", 0) >= 0:
            ret, img = cap.read()
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, 1.3, 5)

            for (x, y, w, h) in faces:
                cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
                roi_gray = gray[y : y + h, x : x + w]
                roi_color = img[y : y + h, x : x + w]
                eyes = eye_cascade.detectMultiScale(roi_gray)
                for (ex, ey, ew, eh) in eyes:
                    cv2.rectangle(
                        roi_color, (ex, ey), (ex + ew, ey + eh), (0, 255, 0), 2
                    )

            cv2.imshow("Face Detect", img)
            keyCode = cv2.waitKey(30) & 0xFF
            # Stop the program on the ESC key
            if keyCode == 27:
                break

        cap.release()
        cv2.destroyAllWindows()
    else:
        print("Unable to open camera")


if __name__ == "__main__":
    face_detect()

Press CTRL+C to stop the script.
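If you just want to preview the camera with a different flip_method and skip the face detection, a minimal sketch like the one below should work. It reuses the gstreamer_pipeline() helper defined in face_detect.py, so run it from inside the CSI-Camera folder; flip_method=0 is only an example value.

import cv2
from face_detect import gstreamer_pipeline   # helper defined in face_detect.py

# Open the CSI camera through the same GStreamer pipeline, but with no rotation
cap = cv2.VideoCapture(gstreamer_pipeline(flip_method=0), cv2.CAP_GSTREAMER)

if cap.isOpened():
    cv2.namedWindow("CSI Camera Preview", cv2.WINDOW_AUTOSIZE)
    while True:
        ret, frame = cap.read()
        if not ret:
            break
        cv2.imshow("CSI Camera Preview", frame)
        if (cv2.waitKey(30) & 0xFF) == 27:   # press ESC to quit
            break
    cap.release()
    cv2.destroyAllWindows()
else:
    print("Unable to open camera")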

Test C++ Setup (Optional)

Open simple_camera.cpp and edit the flip_method in the main function if necessary.

gedit simple_camera.cpp

int main()
{
…
    int framerate = 60 ;
    int flip_method = 2;

Save the file and close it.

Compile simple_camera.cpp. You can copy and paste this command.

g++ -std=c++11 -Wall -I/usr/include/opencv4 simple_camera.cpp -L/usr/lib/aarch64-linux-gnu -lopencv_core -lopencv_highgui -lopencv_videoio -o simple_camera

Run the compiled simple_camera program.

./simple_camera
2021-04-07-10.50.51

Press ESC when you’re done.

To shut down the computer, type:

sudo shutdown -h now

Here is the full code for simple_camera.cpp:

// simple_camera.cpp
// MIT License
// Copyright (c) 2019 JetsonHacks
// See LICENSE for OpenCV license and additional information
// Using a CSI camera (such as the Raspberry Pi Version 2) connected to a 
// NVIDIA Jetson Nano Developer Kit using OpenCV
// Drivers for the camera and OpenCV are included in the base image

// #include <iostream>
#include <opencv2/opencv.hpp>
// #include <opencv2/videoio.hpp>
// #include <opencv2/highgui.hpp>

std::string gstreamer_pipeline (int capture_width, int capture_height, int display_width, int display_height, int framerate, int flip_method) {
    return "nvarguscamerasrc ! video/x-raw(memory:NVMM), width=(int)" + std::to_string(capture_width) + ", height=(int)" +
           std::to_string(capture_height) + ", format=(string)NV12, framerate=(fraction)" + std::to_string(framerate) +
           "/1 ! nvvidconv flip-method=" + std::to_string(flip_method) + " ! video/x-raw, width=(int)" + std::to_string(display_width) + ", height=(int)" +
           std::to_string(display_height) + ", format=(string)BGRx ! videoconvert ! video/x-raw, format=(string)BGR ! appsink";
}

int main()
{
    int capture_width = 1280 ;
    int capture_height = 720 ;
    int display_width = 1280 ;
    int display_height = 720 ;
    int framerate = 60 ;
    int flip_method = 2 ;

    std::string pipeline = gstreamer_pipeline(capture_width,
        capture_height,
        display_width,
        display_height,
        framerate,
        flip_method);
    std::cout << "Using pipeline: \n\t" << pipeline << "\n";

    cv::VideoCapture cap(pipeline, cv::CAP_GSTREAMER);
    if (!cap.isOpened()) {
        std::cout << "Failed to open camera." << std::endl;
        return (-1);
    }

    cv::namedWindow("CSI Camera", cv::WINDOW_AUTOSIZE);
    cv::Mat img;

    std::cout << "Hit ESC to exit" << "\n";
    while (true)
    {
        if (!cap.read(img)) {
            std::cout << "Capture read error" << std::endl;
            break;
        }

        cv::imshow("CSI Camera", img);
        int keycode = cv::waitKey(30) & 0xff;
        if (keycode == 27) break;
    }

    cap.release();
    cv::destroyAllWindows() ;
    return 0;
}

That’s it. Keep building!

References

This tutorial was especially helpful.

How to Install OpenCV 4.5 on NVIDIA Jetson Nano

In this tutorial, we will install OpenCV 4.5 on the NVIDIA Jetson Nano. I will install OpenCV 4.5 because the version of OpenCV that comes preinstalled on the Jetson Nano does not have CUDA support. CUDA support will enable us to use the GPU to run deep learning applications.

The terminal command to check which OpenCV version you have on your computer is:

python -c 'import cv2; print(cv2.__version__)'
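You can also check whether that build was compiled with CUDA support. A quick way, assuming cv2 imports correctly, is to search the build information from a python3 session; on the preinstalled OpenCV, the CUDA entries should show “NO”.

import cv2

# Print every line of the OpenCV build information that mentions CUDA
for line in cv2.getBuildInformation().split("\n"):
    if "CUDA" in line:
        print(line)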

Prerequisites

Install Dependencies

Type the following command.

sudo sh -c "echo '/usr/local/cuda/lib64' >> /etc/ld.so.conf.d/nvidia-tegra.conf"

Create the links and cache for the shared libraries.

sudo ldconfig

Install the relevant third-party libraries. Pay close attention to the line wrapping below. Each command begins with “sudo apt-get install”.

sudo apt-get install build-essential cmake git unzip pkg-config
sudo apt-get install libjpeg-dev libpng-dev libtiff-dev
sudo apt-get install libavcodec-dev libavformat-dev libswscale-dev
sudo apt-get install libgtk2.0-dev libcanberra-gtk*
sudo apt-get install python3-dev python3-numpy python3-pip
sudo apt-get install libxvidcore-dev libx264-dev libgtk-3-dev
sudo apt-get install libtbb2 libtbb-dev libdc1394-22-dev
sudo apt-get install libv4l-dev v4l-utils
sudo apt-get install libgstreamer1.0-dev libgstreamer-plugins-base1.0-dev
sudo apt-get install libavresample-dev libvorbis-dev libxine2-dev
sudo apt-get install libfaac-dev libmp3lame-dev libtheora-dev
sudo apt-get install libopencore-amrnb-dev libopencore-amrwb-dev
sudo apt-get install libopenblas-dev libatlas-base-dev libblas-dev
sudo apt-get install liblapack-dev libeigen3-dev gfortran
sudo apt-get install libhdf5-dev protobuf-compiler
sudo apt-get install libprotobuf-dev libgoogle-glog-dev libgflags-dev

Download OpenCV

Now that we’ve installed the third-party libraries, let’s install OpenCV itself. The latest release is listed here. Again, pay attention to the line wrapping. The https://github… URLs are too long to fit on one line.

cd ~
wget -O opencv.zip https://github.com/opencv/opencv/archive/4.5.1.zip
wget -O opencv_contrib.zip https://github.com/opencv/opencv_contrib/archive/4.5.1.zip
unzip opencv.zip
unzip opencv_contrib.zip

Now rename the directories and remove the zip files. Type each command below, one after the other.

mv opencv-4.5.1 opencv
mv opencv_contrib-4.5.1 opencv_contrib
rm opencv.zip
rm opencv_contrib.zip

Build OpenCV

Let’s build the OpenCV files.

Create a directory.

cd ~/opencv
mkdir build
cd build

Set the compilation directives. You can copy and paste the entire block below (a single cmake command) into your terminal.

cmake -D CMAKE_BUILD_TYPE=RELEASE \
-D CMAKE_INSTALL_PREFIX=/usr \
-D OPENCV_EXTRA_MODULES_PATH=~/opencv_contrib/modules \
-D EIGEN_INCLUDE_PATH=/usr/include/eigen3 \
-D WITH_OPENCL=OFF \
-D WITH_CUDA=ON \
-D CUDA_ARCH_BIN=5.3 \
-D CUDA_ARCH_PTX="" \
-D WITH_CUDNN=ON \
-D WITH_CUBLAS=ON \
-D ENABLE_FAST_MATH=ON \
-D CUDA_FAST_MATH=ON \
-D OPENCV_DNN_CUDA=ON \
-D ENABLE_NEON=ON \
-D WITH_QT=OFF \
-D WITH_OPENMP=ON \
-D WITH_OPENGL=ON \
-D BUILD_TIFF=ON \
-D WITH_FFMPEG=ON \
-D WITH_GSTREAMER=ON \
-D WITH_TBB=ON \
-D BUILD_TBB=ON \
-D BUILD_TESTS=OFF \
-D WITH_EIGEN=ON \
-D WITH_V4L=ON \
-D WITH_LIBV4L=ON \
-D OPENCV_ENABLE_NONFREE=ON \
-D INSTALL_C_EXAMPLES=OFF \
-D INSTALL_PYTHON_EXAMPLES=OFF \
-D BUILD_NEW_PYTHON_SUPPORT=ON \
-D BUILD_opencv_python3=TRUE \
-D OPENCV_GENERATE_PKGCONFIG=ON \
-D BUILD_EXAMPLES=OFF ..

I got this message when everything was done building.

done_building

Build OpenCV. The command below will take a long time (1-2 hours), so you can go do something else and come back later.

make -j4

If the building process stops before it reaches 100%, repeat the cmake command I showed earlier, and run the ‘make -j4’ command again.

One other thing: the build stalled on me a number of times. To avoid that, I moved the mouse cursor every few minutes so that the Jetson Nano’s screen saver didn’t turn on.

Finish the install.

cd ~
sudo rm -r /usr/include/opencv4/opencv2
cd ~/opencv/build
sudo make install
sudo ldconfig
make clean
sudo apt-get update

Verify Your OpenCV Installation

Let’s verify that everything is working correctly.

python3
import cv2
cv2.__version__
exit()
2021-04-04-20.43.34
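Since the whole point of this build was CUDA support, you can also confirm from the same python3 session that the new build can see the GPU. This is a minimal check; it should report version 4.5.1 and one CUDA-enabled device on the Jetson Nano.

import cv2

print(cv2.__version__)                       # should print 4.5.1
print(cv2.cuda.getCudaEnabledDeviceCount())  # should print 1 on the Jetson Nano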

Final Housekeeping

Delete the original opencv and opencv_contrib folders.

sudo rm -rf ~/opencv
sudo rm -rf ~/opencv_contrib

Install jtop, a system monitoring utility for the Jetson Nano.

cd ~
sudo -H pip3 install -U jetson-stats
2021-04-04-20.47.13

Reboot your machine.

sudo reboot
jtop
2021-04-04-20.49.11

Press q to quit.

Verify the installation of OpenCV one last time.

python3
import cv2
cv2.__version__
exit()

That’s it. Keep building!

References

This article over at Q-engineering was really helpful.