How to Set Up MQTT for a Robotics Project

MQTT (Message Queuing Telemetry Transport) is a lightweight, publish-subscribe network protocol that transports messages between devices. It is particularly useful in robotics projects, including those using ROS 2 (Robot Operating System 2), due to its efficiency in handling real-time data streams and its ability to work well in low-bandwidth, high-latency environments.

In a ROS 2 robotics project, MQTT can complement the existing communication framework by:

  • Enabling easy integration with IoT devices and sensors
  • Providing a standardized way to communicate with cloud services
  • Facilitating communication between robots and external systems
  • Offering a lightweight alternative for certain types of messages

This tutorial will guide you through setting up MQTT for your ROS 2 robotics project.

Setting Up the MQTT Broker (Mosquitto)

We’ll use Mosquitto, a popular open-source MQTT broker. Here’s how to set it up on an Intel NUC, Raspberry Pi, or Jetson board (any Debian-based system will work):

Update your system:

sudo apt-get update
sudo apt-get upgrade

Install Mosquitto broker and clients:

sudo apt-get install mosquitto
sudo apt-get install mosquitto-clients

Check the status of Mosquitto:

systemctl status mosquitto

Manage Mosquitto services:

Stop:

sudo systemctl stop mosquitto

Start:

sudo systemctl start mosquitto

Restart:

sudo systemctl restart mosquitto

(On systems without systemd, the equivalent sudo service mosquitto stop/start/restart commands also work.)

Verify that Mosquitto is listening on the default port (1883):

netstat -an | grep 1883 

Configuring Mosquitto for Network Access

By default, Mosquitto only listens on localhost. To allow connections from other devices:

Create a new configuration file:

sudo nano /etc/mosquitto/conf.d/default_listener.conf

Add the following lines:

listener 1883
allow_anonymous true

Save the file and exit the editor.

Restart Mosquitto:

sudo systemctl restart mosquitto

Verify the new settings:

sudo netstat -tuln | grep 1883

Note: Allowing anonymous access may not be secure for all use cases. Consider setting up authentication for production environments.
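
For example, to require a username and password instead (the username robot below is just an illustration), create a password file:

sudo mosquitto_passwd -c /etc/mosquitto/passwd robot

Then replace allow_anonymous true in /etc/mosquitto/conf.d/default_listener.conf with:

allow_anonymous false
password_file /etc/mosquitto/passwd

Restart Mosquitto, and pass credentials to the clients:

mosquitto_sub -h localhost -t test -u robot -P yourpassword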

Testing MQTT Communication

To test MQTT communication, open four terminals (a subscriber and a publisher for each of two example topics):

Replace localhost with the IP address of your MQTT broker if running on a different machine.

Terminal 1 (Subscriber for kitchen):

mosquitto_sub -h localhost -t /home/sensors/temp/kitchen

Terminal 2 (Publisher for kitchen):

mosquitto_pub -h localhost -t /home/sensors/temp/kitchen -m "Kitchen Temperature: 26°C"
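
To see how the broker keeps topics independent, repeat the same pattern on a second topic in the remaining two terminals (the bedroom topic below is just an example):

Terminal 3 (Subscriber for bedroom):

mosquitto_sub -h localhost -t /home/sensors/temp/bedroom

Terminal 4 (Publisher for bedroom):

mosquitto_pub -h localhost -t /home/sensors/temp/bedroom -m "Bedroom Temperature: 23°C"

Messages published to the kitchen topic appear only in Terminal 1, and messages published to the bedroom topic appear only in Terminal 3.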

Installing the Paho MQTT Python Client

To interact with MQTT from your Python scripts:

Install pip for Python 3:

sudo apt install python3-pip

Install the Paho MQTT client:

pip3 install paho-mqtt

With these steps completed, you’re now ready to integrate MQTT into your ROS 2 robotics project. You can use the Paho MQTT client in your Python scripts to publish and subscribe to topics, allowing your robot to communicate with various devices and services using the MQTT protocol.
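
As a concrete starting point, here is a minimal sketch of a ROS 2 node that subscribes to the kitchen MQTT topic from the test above and republishes each message on a ROS 2 topic. The node name, the ROS 2 topic name (mqtt_temp), and the broker address are illustrative assumptions; the sketch targets the paho-mqtt 1.x callback API (with paho-mqtt 2.x, construct the client as mqtt.Client(mqtt.CallbackAPIVersion.VERSION1) instead).

#!/usr/bin/env python3
# mqtt_to_ros2_bridge.py (illustrative name)
# Bridges messages from an MQTT topic into a ROS 2 topic.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String
import paho.mqtt.client as mqtt

MQTT_BROKER = "localhost"  # Replace with your broker's IP address
MQTT_TOPIC = "/home/sensors/temp/kitchen"

class MqttBridge(Node):
    def __init__(self):
        super().__init__('mqtt_bridge')

        # ROS 2 publisher that re-emits incoming MQTT messages
        self.publisher_ = self.create_publisher(String, 'mqtt_temp', 10)

        # MQTT client using the paho-mqtt 1.x callback API
        self.mqtt_client = mqtt.Client()
        self.mqtt_client.on_connect = self.on_connect
        self.mqtt_client.on_message = self.on_message
        self.mqtt_client.connect(MQTT_BROKER, 1883, 60)

        # Run the MQTT network loop in a background thread so it
        # does not block rclpy.spin() below
        self.mqtt_client.loop_start()

    def on_connect(self, client, userdata, flags, rc):
        self.get_logger().info('Connected to MQTT broker (rc=%d)' % rc)
        client.subscribe(MQTT_TOPIC)

    def on_message(self, client, userdata, msg):
        # Republish the raw MQTT payload as a ROS 2 String message
        ros_msg = String()
        ros_msg.data = msg.payload.decode('utf-8')
        self.publisher_.publish(ros_msg)
        self.get_logger().info('Republished: %s' % ros_msg.data)

def main():
    rclpy.init()
    node = MqttBridge()
    try:
        rclpy.spin(node)
    finally:
        node.mqtt_client.loop_stop()
        node.destroy_node()
        rclpy.shutdown()

if __name__ == '__main__':
    main()

With the node running, publish a message from another terminal using the mosquitto_pub command from the test above, then confirm it arrives on the ROS 2 side with ros2 topic echo /mqtt_temp.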

Why Investing in Robotics Skills Will Be Your Best Career Decision of the Next 50 Years

The Secret Sauce of Tech Billionaires

Ever wonder what makes tech companies like Uber, Google, and Airbnb so successful? Well, there’s a guy named Evan Williams who has a philosophy on what it takes. You might not know his name, but you’ve definitely used his creations. He’s the billionaire co-founder of Twitter and Medium, and he’s got a pretty simple recipe for success in the tech world.

Here’s Williams’ secret sauce: identify basic human desires that have existed for a long time, then use modern technology to fulfill those desires more conveniently than ever before. That’s it. 

At its core, his idea is simple yet profound. Williams argues that the Internet is “a giant machine designed to give people what they want.” The key to building successful tech companies, he suggests, is to remove steps from common activities, making them faster and cognitively easier for users.

Sounds simple, right? But this idea has made Williams and many others filthy rich. Making money in technology is all about providing speed and cognitive ease…don’t make me think, don’t make me wait.

Let’s break it down with some examples:

1. Google: Remember when finding information online meant clicking through a bunch of confusing directories? Google said, “Nah, just type what you want in this box.” Boom! Instant answers. Before Google, you’d spend ages navigating Yahoo! directories or trying to guess the right website URL. Now? You can find the capital of Uzbekistan in seconds.

2. Uber: Calling a cab used to be a pain. You’d stand on the street corner, arm raised, hoping to catch a driver’s attention. Or you’d call a dispatcher and pray they’d actually send someone. Uber made it as easy as tapping a button on your phone. No more waiting on hold or explaining where you are.

3. Airbnb: Booking a place to stay meant dealing with hotels and their prices. You’d have to call around, compare rates, and often settle for a cookie-cutter room. Airbnb lets you rent someone’s spare room or entire house with a few clicks. It opened up a whole new world of unique, often cheaper accommodations.

4. Amazon: Remember when shopping meant driving to the mall, fighting for parking, and hoping the store had what you wanted? Amazon brought the entire mall (and then some) to your fingertips. From books to electronics to groceries, it’s all just a click away.

5. Netflix: Back in the day, watching a movie meant driving to Blockbuster, hoping they had the film you wanted, and rushing to return it to avoid late fees. Netflix said, “How about we bring thousands of movies and shows right to your TV?” No more late fees, no more limited selection.

6. Spotify: Music lovers used to spend a fortune building CD collections or downloading individual songs. Spotify made virtually all the world’s music available for a small monthly fee. No more storage issues, no more buying albums for just one good song.

7. Instagram: Remember lugging around a camera, then waiting to develop film to share photos? Instagram made it possible to snap, edit, and share photos instantly. It turned everyone into a potential photographer and created a whole new way of visual communication.

8. LinkedIn: Networking used to mean attending stuffy events and exchanging business cards. LinkedIn brought professional networking online, making it easier to connect with colleagues, find jobs, and showcase your skills.

9. Zoom: Before Zoom, video conferencing was often a clunky, unreliable experience reserved for big corporations. Zoom made it so easy that grandparents could use it. It’s changed how we work, learn, and stay in touch with loved ones.

10. DoorDash: Remember when getting food delivered meant being limited to pizza or Chinese? DoorDash and similar apps brought virtually every restaurant to your doorstep. No more limited options or dealing with grumpy restaurant staff over the phone.

See the pattern? These companies all took something people were already doing and made it way faster and easier. Williams calls this “removing cognitive overhead.” In plain English, that means making things so simple that you don’t have to think about them.

I saw firsthand as the Chief Financial Officer of 21212 Digital Accelerator, the first tech startup accelerator in Brazil, how the most successful companies in our portfolio made a lot of money just by removing steps from what people were already doing.

Now, you might be wondering, “What does this have to do with robotics?” 

As we’ve seen, Evan Williams’ philosophy of simplifying existing processes has led to remarkable innovations in the digital realm, but this same principle could apply to robotics, potentially transforming our physical world in similar ways.

The Case for Robotics 

Applying Williams’ Principle to Robotics

The same core idea, simplifying existing processes, will be just as powerful in robotics:

1. Reducing Complexity: Just as Google simplified information retrieval, robots will simplify complex physical tasks.

2. Increasing Accessibility: Similar to how Uber made transportation more accessible, robots will make certain services or capabilities more widely available.

3. Enhancing Efficiency: In the same way that Amazon streamlined shopping, robots will streamline various industrial and domestic processes.

Fulfilling Age-Old Human Desires

Robots (humanoids in particular) will eventually satisfy numerous long-standing human desires:

  • Assistance: Humans have always sought help with tasks, from manual labor to cognitive work.
  • Safety: People have consistently sought ways to perform dangerous tasks without risking human lives.
  • Efficiency: Who doesn’t want to accomplish more in less time?

Roboticists will play an important role in translating these desires into reality through advanced programming and AI integration.

Removing Steps and Cognitive Load

Just as Uber removed steps from the process of getting a ride, humanoid robots programmed by skilled software engineers will simplify countless aspects of daily life:

  • Home management: Robots will eventually handle cleaning, cooking, and organizing without the need for human intervention.
  • Eldercare: Humanoids will provide round-the-clock assistance to the elderly, reducing the cognitive and physical burden on human caregivers.
  • Industrial work: Complex manufacturing processes will be streamlined, with robots handling intricate tasks that currently require extensive human training and concentration.

Investment and Market Trends

The robotics industry is attracting significant investment:

  • Major tech companies like Tesla and Amazon are investing in robotics research and development.
  • Startups focused on specific robotics applications (e.g. humanoids) are securing substantial funding…sometimes in the hundreds of millions of dollars.
  • Government and academic institutions are also contributing to robotics research.

However, it’s important to approach market projections with caution, as the path from research to widespread commercial application can be long and uncertain.

Skills for the Future

Robotics will drive massive productivity gains across industries, from manufacturing to healthcare. Software engineers who can program these robots will be instrumental in unlocking trillions of dollars in economic value.

As robots become more sophisticated and ubiquitous, the need for skilled professionals who can design, program, and maintain these systems will skyrocket. While many jobs may be at risk of automation, robotics software engineers will be the architects of this automated future, making their skills not just valuable, but essential.

In essence, robotics engineering embodies Williams’ philosophy of tech success. It has the potential to remove steps from countless processes, increase speed, reduce cognitive load, and fulfill long-standing human desires. 

As we’ve seen with Google, Airbnb, and Uber, those who can harness these principles to create user-friendly, transformative technologies often end up leading billion-dollar companies.

So, if you’re considering your career options or looking to pivot into a field with immense potential, robotics should be at the top of your list. By investing in these skills now, you’re not just preparing for the job market of tomorrow – you’re positioning yourself to be at the forefront of a technological revolution that will shape the next 50 years and beyond.

For those passionate about technology and problem-solving, robotics offers an exciting career path with the potential to make significant impacts across various sectors. As with any emerging technology, the key is to stay informed, continuously learn, and be prepared to adapt as the field evolves.

The robots are coming. The question is: will you be the one programming them?

That’s it. Keep building!

How to Detect ArUco Markers Using OpenCV and Python

In this tutorial, I will show you how to detect an ArUco marker in a real-time video stream (i.e., my webcam) using OpenCV and Python, loosely following this PyImageSearch tutorial: https://www.pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/

By the end of this tutorial, you will be able to generate output like this:

[Image: the webcam feed with a detected ArUco marker outlined in green and its ID drawn above it]

Prerequisites

  • Python 3 installed on your machine
  • A built-in or USB webcam
  • opencv-contrib-python (installation is covered below)

Create the Code

Open your favorite code editor, and write the following code. I will name my program detect_aruco_marker.py. This program detects an ArUco marker in a real-time video stream (we’ll use the built-in webcam).

#!/usr/bin/env python
 
'''
Welcome to the ArUco Marker Detector!
 
This program:
  - Detects ArUco markers using OpenCV and Python
'''
 
from __future__ import print_function # Python 2/3 compatibility
import sys # Import sys so we can call sys.exit()
import cv2 # Import the OpenCV library
import numpy as np # Import Numpy library

# Project: ArUco Marker Detector
# Date created: 12/18/2021
# Python version: 3.8
# Reference: https://www.pyimagesearch.com/2020/12/21/detecting-aruco-markers-with-opencv-and-python/

desired_aruco_dictionary = "DICT_ARUCO_ORIGINAL"

# The different ArUco dictionaries built into the OpenCV library. 
ARUCO_DICT = {
  "DICT_4X4_50": cv2.aruco.DICT_4X4_50,
  "DICT_4X4_100": cv2.aruco.DICT_4X4_100,
  "DICT_4X4_250": cv2.aruco.DICT_4X4_250,
  "DICT_4X4_1000": cv2.aruco.DICT_4X4_1000,
  "DICT_5X5_50": cv2.aruco.DICT_5X5_50,
  "DICT_5X5_100": cv2.aruco.DICT_5X5_100,
  "DICT_5X5_250": cv2.aruco.DICT_5X5_250,
  "DICT_5X5_1000": cv2.aruco.DICT_5X5_1000,
  "DICT_6X6_50": cv2.aruco.DICT_6X6_50,
  "DICT_6X6_100": cv2.aruco.DICT_6X6_100,
  "DICT_6X6_250": cv2.aruco.DICT_6X6_250,
  "DICT_6X6_1000": cv2.aruco.DICT_6X6_1000,
  "DICT_7X7_50": cv2.aruco.DICT_7X7_50,
  "DICT_7X7_100": cv2.aruco.DICT_7X7_100,
  "DICT_7X7_250": cv2.aruco.DICT_7X7_250,
  "DICT_7X7_1000": cv2.aruco.DICT_7X7_1000,
  "DICT_ARUCO_ORIGINAL": cv2.aruco.DICT_ARUCO_ORIGINAL
}
 
def main():
  """
  Main method of the program.
  """
  # Check that we have a valid ArUco marker
  if ARUCO_DICT.get(desired_aruco_dictionary, None) is None:
    print("[INFO] ArUco tag of '{}' is not supported".format(
      desired_aruco_dictionary))
    sys.exit(0)
    
  # Load the ArUco dictionary
  print("[INFO] detecting '{}' markers...".format(
	desired_aruco_dictionary))
  this_aruco_dictionary = cv2.aruco.Dictionary_get(ARUCO_DICT[desired_aruco_dictionary])
  this_aruco_parameters = cv2.aruco.DetectorParameters_create()
  
  # Start the video stream
  cap = cv2.VideoCapture(0)
  
  while True:
 
    # Capture frame-by-frame
    # This method returns True/False as well
    # as the video frame.
    ret, frame = cap.read()  
    
    # Detect ArUco markers in the video frame
    (corners, ids, rejected) = cv2.aruco.detectMarkers(
      frame, this_aruco_dictionary, parameters=this_aruco_parameters)
      
    # Check that at least one ArUco marker was detected
    if len(corners) > 0:
      # Flatten the ArUco IDs list
      ids = ids.flatten()
      
      # Loop over the detected ArUco corners
      for (marker_corner, marker_id) in zip(corners, ids):
      
        # Extract the four corner points of this marker
        # (use a new name so we don't shadow the outer corners list)
        marker_corners = marker_corner.reshape((4, 2))
        (top_left, top_right, bottom_right, bottom_left) = marker_corners
        
        # Convert the (x,y) coordinate pairs to integers
        top_right = (int(top_right[0]), int(top_right[1]))
        bottom_right = (int(bottom_right[0]), int(bottom_right[1]))
        bottom_left = (int(bottom_left[0]), int(bottom_left[1]))
        top_left = (int(top_left[0]), int(top_left[1]))
        
        # Draw the bounding box of the ArUco detection
        cv2.line(frame, top_left, top_right, (0, 255, 0), 2)
        cv2.line(frame, top_right, bottom_right, (0, 255, 0), 2)
        cv2.line(frame, bottom_right, bottom_left, (0, 255, 0), 2)
        cv2.line(frame, bottom_left, top_left, (0, 255, 0), 2)
        
        # Calculate and draw the center of the ArUco marker
        center_x = int((top_left[0] + bottom_right[0]) / 2.0)
        center_y = int((top_left[1] + bottom_right[1]) / 2.0)
        cv2.circle(frame, (center_x, center_y), 4, (0, 0, 255), -1)
        
        # Draw the ArUco marker ID on the video frame
        # The ID is always located at the top_left of the ArUco marker
        cv2.putText(frame, str(marker_id), 
          (top_left[0], top_left[1] - 15),
          cv2.FONT_HERSHEY_SIMPLEX,
          0.5, (0, 255, 0), 2)
 
    # Display the resulting frame
    cv2.imshow('frame', frame)
         
    # If "q" is pressed on the keyboard, 
    # exit this loop
    if cv2.waitKey(1) & 0xFF == ord('q'):
      break
 
  # Close down the video stream
  cap.release()
  cv2.destroyAllWindows()
  
if __name__ == '__main__':
  print(__doc__)
  main()

Save the file, and close it.

You need to have opencv-contrib-python installed, not opencv-python, because the ArUco module ships in the contrib packages. Open a terminal window, and type:

pip uninstall opencv-python
pip3 install opencv-contrib-python==4.6.0.66
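
To confirm that the contrib build is active, you can check that the aruco module is available; this one-liner should print the version number and True:

python3 -c "import cv2; print(cv2.__version__, hasattr(cv2, 'aruco'))"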

To run the program in Linux for example, type the following command:

python3 detect_aruco_marker.py

If you want to restore OpenCV to the previous version after you’re finished detecting ArUco markers, type:

pip uninstall opencv-contrib-python
pip install opencv-python

To apply the changes, I recommend rebooting your computer.

That’s it. Keep building!