Queues vs Mailboxes vs Pipes for RTOSs

In this post, I will discuss the differences between message queues, mailboxes, and pipes. All three are services provided by an RTOS that enable tasks to communicate with each other, which they must do in order to coordinate activities and share data.

What is a Mailbox?

[Image: urgent/not-urgent priority matrix. Source: Vegpuff/Wikipedia]

If you need strong control over prioritization, mailboxes might be a good choice. You can easily prioritize mailbox messages no matter when they entered the mailbox.

This characteristic provides a definite advantage over other inter-task communication options such as queues, which deliver messages strictly in the order they were added, regardless of urgency.

The other benefit of mailboxes is that the RTOS typically imposes no inherent limit on message size; the size is fixed by the programmer when the mailbox is created.
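
To make the prioritization concrete, here is a minimal sketch of a prioritized mailbox in C. This is not any particular RTOS's API; the Msg and Mailbox types and the post/pend functions are hypothetical, and a real implementation would guard the list with a critical section:

    /* Toy prioritized mailbox: messages are kept sorted by priority, so an
       urgent message is retrieved first no matter when it was posted. */
    #include <stddef.h>

    typedef struct Msg {
        int priority;              /* higher value = more urgent */
        void *payload;
        struct Msg *next;
    } Msg;

    typedef struct {
        Msg *head;                 /* list kept sorted, highest priority first */
    } Mailbox;

    void mailbox_post(Mailbox *mb, Msg *m)
    {
        Msg **p = &mb->head;
        while (*p != NULL && (*p)->priority >= m->priority)
            p = &(*p)->next;       /* FIFO among equal priorities */
        m->next = *p;
        *p = m;
    }

    Msg *mailbox_pend(Mailbox *mb)
    {
        Msg *m = mb->head;         /* most urgent message, whenever it arrived */
        if (m != NULL)
            mb->head = m->next;
        return m;                  /* NULL if the mailbox is empty */
    }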

What is a Queue?

[Image: data queue diagram. Source: Davidjcmorris]

If you have an implementation that requires first-in, first-out (FIFO) ordering, queues are a great choice. They are flexible and relatively easy to implement, making them a common choice in RTOS implementations.

The downside of queues is that, unlike mailboxes, you are limited in the amount of data you can write to the queue in any given call: the message size is typically fixed when the queue is created, and many RTOSs offer little flexibility beyond that.
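
As one concrete example, here is a sketch using FreeRTOS queues (your RTOS's API will differ). The item size is fixed when the queue is created, and each send copies exactly one item of that size; the SensorMsg type and the producer/consumer functions are purely illustrative:

    #include "FreeRTOS.h"
    #include "queue.h"

    typedef struct {
        int sensor_id;
        int reading;
    } SensorMsg;

    static QueueHandle_t xSensorQueue;

    void queue_init(void)
    {
        /* 8 slots, each exactly sizeof(SensorMsg) bytes */
        xSensorQueue = xQueueCreate(8, sizeof(SensorMsg));
    }

    void producer_task(void *pvParameters)
    {
        SensorMsg msg = { .sensor_id = 1, .reading = 42 };
        /* Copies one fixed-size item; fails if the queue is full */
        xQueueSend(xSensorQueue, &msg, 0);
    }

    void consumer_task(void *pvParameters)
    {
        SensorMsg msg;
        /* Blocks until the oldest item is available (FIFO order) */
        if (xQueueReceive(xSensorQueue, &msg, portMAX_DELAY) == pdTRUE) {
            /* process msg here */
        }
    }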

What is a Pipe?

If you need to be able to write messages of varying lengths, pipes are the best choice. Pipes are like queues except that they are byte-oriented: they allow you to write messages of any length, whereas queue messages are fixed in size.
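
FreeRTOS, for example, provides byte-oriented stream buffers that behave like pipes. A minimal sketch (the 256-byte capacity and the writer/reader functions are arbitrary choices for illustration):

    #include <string.h>
    #include "FreeRTOS.h"
    #include "stream_buffer.h"

    static StreamBufferHandle_t xPipe;

    void pipe_init(void)
    {
        /* 256-byte buffer; trigger level of 1 wakes a reader on the first byte */
        xPipe = xStreamBufferCreate(256, 1);
    }

    void writer_task(void *pvParameters)
    {
        const char *msg = "a message of any length";
        /* Any number of bytes per call, unlike a fixed-item-size queue */
        xStreamBufferSend(xPipe, msg, strlen(msg), portMAX_DELAY);
    }

    void reader_task(void *pvParameters)
    {
        char buf[64];
        /* Returns however many bytes are available, up to sizeof(buf) */
        size_t n = xStreamBufferReceive(xPipe, buf, sizeof(buf), portMAX_DELAY);
        (void)n;
    }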

Mutex vs Semaphore Using a Gas Station Bathroom Analogy

In this post, I will discuss the difference between a mutex and a semaphore using a gas station bathroom analogy.

What is a Mutex?

A mutex (mutual exclusion object) grants exclusive access to shared data or computer hardware. It is like a bathroom key at a gas station. If you have the key, nobody else can enter the bathroom while you are using it. Only one person (task) can use the bathroom (shared resource) at a time.

If a person (task) wants to use the bathroom, he or she must first get the bathroom key (mutex) since there is only one copy. The gas station owner (operating system) is responsible for managing the bathroom key (mutex).
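
In code, the analogy maps directly onto an RTOS mutex API. Here is a sketch using FreeRTOS as one example; use_bathroom() is a hypothetical stand-in for accessing the shared resource:

    #include "FreeRTOS.h"
    #include "semphr.h"

    void use_bathroom(void);                   /* hypothetical shared-resource access */

    static SemaphoreHandle_t xBathroomKey;     /* the single key */

    void mutex_init(void)
    {
        xBathroomKey = xSemaphoreCreateMutex();
    }

    void person_task(void *pvParameters)
    {
        for (;;) {
            /* Wait for the key; only one task can hold it at a time */
            if (xSemaphoreTake(xBathroomKey, portMAX_DELAY) == pdTRUE) {
                use_bathroom();                /* exclusive access */
                xSemaphoreGive(xBathroomKey);  /* hand the key back */
            }
        }
    }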

What is a Semaphore?

Continuing from our example in the previous section, imagine that there are two bathrooms (shared resources) instead of one, and two bathroom keys instead of one. Since bathroom entry is no longer exclusive, this is not a mutex scenario. Instead, the keys are called semaphores.

A semaphore enables two or more (two in this example) tasks (people) to use a shared resource (gas station bathroom) simultaneously.

  • If two keys (semaphores) are available, the value of the semaphore is 2.
  • If one key is available, the value of the semaphore is 1.
  • If no keys are available, that means that two tasks (people) are currently working (in the bathroom). The value of the semaphore is 0. The next task (person) must wait until a semaphore becomes available (i.e. a task finishes, and the semaphore is incremented by 1).
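
Here is the same analogy as a counting semaphore, again sketched with FreeRTOS as one example; xSemaphoreCreateCounting(2, 2) creates the two keys, and use_bathroom() is hypothetical:

    #include "FreeRTOS.h"
    #include "semphr.h"

    void use_bathroom(void);                     /* hypothetical */

    static SemaphoreHandle_t xBathroomKeys;

    void sem_init(void)
    {
        /* max count 2, initial count 2: both keys on the hook */
        xBathroomKeys = xSemaphoreCreateCounting(2, 2);
    }

    void person_task(void *pvParameters)
    {
        for (;;) {
            /* Blocks while the count is 0 (both bathrooms occupied) */
            if (xSemaphoreTake(xBathroomKeys, portMAX_DELAY) == pdTRUE) {
                use_bathroom();
                xSemaphoreGive(xBathroomKeys);   /* count increments by 1 */
            }
        }
    }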

What Technique of Protecting Shared Data Requires Less Overhead?

Answer: Semaphore

Overhead includes things like memory, bandwidth, and task execution time. At first glance, letting tasks work on their own copies of the shared data looks cheaper: each individual task performs its function with less overhead, since there is no waiting in line for a semaphore to be released.

However, the operating system (gas station owner) then has more overhead. It must spend resources managing the different copies of the data, and, relative to a mutex or semaphore implementation, copying data and enabling concurrent processing of it requires more memory and processing power. The semaphore therefore carries less overhead overall.

Round-Robin vs Function-Queue-Scheduling | Embedded Software Architecture

In this post, I will discuss the tradeoffs of using the Round Robin, Round Robin with Interrupts, and Function Queue Scheduling approaches when building an embedded system. Suppose we use an Arduino to perform tasks such as capturing sensor data and sending that data to a host machine (e.g. your personal laptop computer).

Round Robin

Definition

The Round Robin architecture is the simplest architecture for embedded systems. The main program consists of a loop that runs over and over, checking each of the I/O devices on each pass to see if it needs service. No fancy interrupts, no fear of shared data…just a plain single execution thread that runs again and again.
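
A minimal sketch of that loop in plain C (the *_needs_service() and service_*() helpers are hypothetical device-polling routines):

    #include <stdbool.h>

    bool sensor_needs_service(void);
    void service_sensor(void);
    bool serial_needs_service(void);
    void service_serial(void);

    int main(void)
    {
        for (;;) {                     /* the loop runs forever */
            if (sensor_needs_service())
                service_sensor();      /* each device waits its turn */
            if (serial_needs_service())
                service_serial();
        }
    }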

Pros

  • Simplest of all the architectures
  • No interrupts
  • No shared data
  • Works well when there are no latency concerns or tight response requirements

Cons

  • A sensor connected to the Arduino that urgently needs service must wait its turn.
  • Fragile. Only as strong as the weakest link. If a sensor breaks or something else breaks, everything breaks.
  • Response time has low stability in the event of changes to the code

Round Robin with Interrupts

Definition

The Round Robin with Interrupts architecture is similar to the Round Robin architecture, except that it adds interrupts. When an interrupt is triggered, the main program is put on hold and control shifts to the interrupt service routine (ISR). Code inside the interrupt service routines has a higher priority than the task code.
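
A common pattern is for the ISR to do only the urgent part (setting a flag) and leave the rest to the main loop. A sketch in plain C (sensor_isr() and process_sensor_data() are hypothetical, and how the ISR gets registered is hardware-specific); note that the flag itself is data shared with an ISR:

    #include <stdbool.h>

    void process_sensor_data(void);

    static volatile bool sensor_data_ready = false;

    void sensor_isr(void)
    {
        sensor_data_ready = true;      /* urgent: note the event and return */
    }

    int main(void)
    {
        for (;;) {
            if (sensor_data_ready) {   /* deferred, lower-priority work */
                sensor_data_ready = false;
                process_sensor_data();
            }
            /* ...poll the other devices round-robin style... */
        }
    }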

Pros

  • Greater control over the priority levels
  • Flexible
  • Fast response time to I/O signals
  • Great for managing sensors that need to be read at prespecified time intervals

Cons

  • Shared data
  • All of the interrupts could fire at nearly the same time, leaving the task code waiting

Function Queue Scheduling

Definition

In the Function Queue Scheduling architecture, interrupt routines add function pointers to a queue. The main program then works through the queue, calling the functions one at a time based on their priority.
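
A minimal sketch of such a function queue in plain C. All of the names here are illustrative, and a real implementation would disable interrupts while touching the queue, since it is shared with ISRs:

    #include <stddef.h>

    typedef void (*TaskFn)(void);

    typedef struct {
        TaskFn fn;
        int priority;                     /* higher value = run sooner */
    } Entry;

    #define FQ_SIZE 8
    static Entry fq[FQ_SIZE];
    static volatile size_t fq_count = 0;

    void fq_post(TaskFn fn, int priority) /* called from an ISR */
    {
        if (fq_count == FQ_SIZE)
            return;                       /* queue full: drop (or flag an error) */
        size_t i = fq_count;
        while (i > 0 && fq[i - 1].priority < priority) {
            fq[i] = fq[i - 1];            /* keep sorted, highest priority first */
            i--;
        }
        fq[i].fn = fn;
        fq[i].priority = priority;
        fq_count++;
    }

    int main(void)
    {
        for (;;) {
            if (fq_count > 0) {
                TaskFn fn = fq[0].fn;     /* highest-priority function */
                for (size_t i = 1; i < fq_count; i++)
                    fq[i - 1] = fq[i];    /* shift the rest forward */
                fq_count--;
                fn();                     /* low-priority entries can starve */
            }
        }
    }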

Pros

  • Great control over priority
  • Reduces the worst-case response for the high-priority task code
  • Response time has good stability in the event of changes to the code

Cons

  • Shared data
  • Low-priority tasks might never execute (a.k.a. starvation)
