How to Draw the Letter ‘E’ on an Image Using Scikit-Image

Requirements

Develop a program in Python to draw an E at the center of an input image.

  • Program must be developed using Python 3.x.
  • Program must use scikit-image library — a simple and popular open source library for image processing in Python.
  • The input image must be a color image.
  • The letter must be at the center of the image and must be created by updating pixels directly, not by using any of the drawing functions (see the short sketch after this list).
  • The final output must be a side-by-side image created using matplotlib.
  • Must test the same code on two different images or two different image sizes.
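To make the "update pixels" requirement concrete, here is a minimal stand-alone sketch (a toy example, not part of the assignment code below): it "draws" a rectangle by assigning a color to a block of pixels with NumPy slicing rather than calling a drawing function.

import numpy as np

# Create a blank 100 x 100 black color image
image = np.zeros((100, 100, 3), dtype=np.uint8)

# "Draw" a red rectangle by updating rows 40-59 and columns 30-69 in place
image[40:60, 30:70, :] = [255, 0, 0]

The program below uses this same slicing idea, with the slice boundaries computed from the critical points of the E.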

You Will Need

  • Python 3.x (the notebook below was written with Python 3.7)
  • The scikit-image, NumPy, and matplotlib libraries
  • Jupyter Notebook
  • Two input color images

Directions

Find any two images/photos. 

Create a new Jupyter Notebook. 

Here are the critical reference points for the letter E. These points mark the corners of the four rectangles that make up the letter E.

[Figure: letter_e - critical points of the letter E]

Here is the pdf of my Jupyter notebook.

Here is the raw code for the program in Python:

#!/usr/bin/env python
# coding: utf-8

# # Project 1 – Introduction to Python scikit-image
# 
# ## Author
# Addison Sears-Collins
# ## Date Created
# 9/4/2019
# ## Python Version
# 3.7
# ## Description
# This program draws an E at the center of an input image.
# ## Purpose
# The purpose of this assignment is to introduce the basic functions of the Python scikit-image
# library -- a simple and popular open source library for image processing in Python. The scikit-image
# library extends scipy.ndimage to provide a set of image processing routines including I/O, color
# and geometric transformations, segmentation, and other basic features.
# ## File Path

# In[1]:


# Move to the directory where the input images are located
get_ipython().run_line_magic('cd', 'D:\\Dropbox\\work')

# List the files in that directory
get_ipython().run_line_magic('ls', '')


# ## Code

# In[2]:


# Import scikit-image
import skimage

# Import module to read and write images in various formats
from skimage import io

# Import matplotlib functionality
import matplotlib.pyplot as plt

# Import numpy
import numpy as np

# Set the color of the E
# [red, green, blue]
COLOR_OF_E = [255, 0, 0]


# In[3]:


# Show the critical points of E
from IPython.display import Image
Image(filename = "e_critical_points.PNG", width = 200, height = 200)


# In[4]:


def e_generator(y_dim, x_dim):
    """
    Generates the coordinates of the critical points of the E
    :param int y_dim: The y dimension (height) of the input image
    :param int x_dim: The x dimension (width) of the input image
    :return: The critical [y, x] coordinates A through N
    :rtype: tuple
    """
    # Set all the critical points
    A =  [int(0.407 * y_dim), int(0.423 *  x_dim)]
    B =  [int(0.407 * y_dim), int(0.589 *  x_dim)]
    C =  [int(0.488 * y_dim), int(0.423 *  x_dim)]
    D =  [int(0.488 * y_dim), int(0.589 *  x_dim)]
    E =  [int(0.572 * y_dim), int(0.423 *  x_dim)]
    F =  [int(0.572 * y_dim), int(0.581 *  x_dim)]
    G =  [int(0.657 * y_dim), int(0.423 *  x_dim)]
    H =  [int(0.657 * y_dim), int(0.581 *  x_dim)]
    I =  [int(0.735 * y_dim), int(0.423 *  x_dim)]
    J =  [int(0.735 * y_dim), int(0.589 *  x_dim)]
    K =  [int(0.819 * y_dim), int(0.423 *  x_dim)]
    L =  [int(0.819 * y_dim), int(0.589 *  x_dim)]
    M =  [int(0.407 * y_dim), int(0.47 *  x_dim)]
    N =  [int(0.819 * y_dim), int(0.47 *  x_dim)]
    
    return A,B,C,D,E,F,G,H,I,J,K,L,M,N


# In[5]:


def plot_image_with_e(image, A, B, C, D, E, F, G, H, I, J, K, L, M, N):
    """
    Plots an E on an input image
    :param image: The input image
    :param A, B, ..., N: The [y, x] coordinates of the critical points
    :return: A copy of the input image with the E drawn on it
    :rtype: numpy.ndarray
    """
    # Copy the image
    image_with_e = np.copy(image)

    # Top horizontal rectangle
    image_with_e[A[0]:C[0], A[1]:B[1], :] = COLOR_OF_E 

    # Middle horizontal rectangle
    image_with_e[E[0]:G[0], E[1]:F[1], :] = COLOR_OF_E

    # Bottom horizontal rectangle
    image_with_e[I[0]:K[0], I[1]:J[1], :] = COLOR_OF_E

    # Vertical connector rectangle
    image_with_e[A[0]:K[0], A[1]:M[1], :] = COLOR_OF_E

    # Display image
    plt.imshow(image_with_e);

    return image_with_e


# In[6]:


def print_image_details(image):
    """
    Prints the details of an input image
    :param image: The input image
    """
    print("Size: ", image.size)
    print("Shape: ", image.shape)
    print("Type: ", image.dtype)
    print("Max: ", image.max())
    print("Min: ", image.min())


# In[7]:


def compare(original_image, annotated_image):
    """
    Compares two images side-by-side
    :param original_image: The original input image
    :param annotated_image: The annotated version of the original input image
    """
    # Compare the two images side-by-side
    f, (ax0, ax1) = plt.subplots(1, 2, figsize=(20,10))

    ax0.imshow(original_image)
    ax0.set_title('Original', fontsize = 18)
    ax0.axis('off')

    ax1.imshow(annotated_image)
    ax1.set_title('Annotated', fontsize = 18)
    ax1.axis('off')


# In[8]:


# Load the test image
image = io.imread("test_image.jpg")

# Store the y and x dimensions of the input image
y_dimensions = image.shape[0]
x_dimensions = image.shape[1]

# Print the image details
print_image_details(image)

# Display the image
plt.imshow(image);


# In[9]:


# Set all the critical points of the image
A,B,C,D,E,F,G,H,I,J,K,L,M,N = e_generator(y_dimensions, x_dimensions)

# Plot the image with E and store it
image_with_e = plot_image_with_e(image, A, B, C, D, E, F, G, H, I, J, K, L, M, N)

# Save the output image
plt.imsave('test_image_annotated.jpg', image_with_e)


# In[10]:


compare(image, image_with_e)


# In[11]:


# Load the first image
image = io.imread("architecture_roof_buildings_baked.jpg")

# Store the y and x dimensions of the input image
y_dimensions = image.shape[0]
x_dimensions = image.shape[1]

# Print the image details
print_image_details(image)

# Display the image
plt.imshow(image);


# In[12]:


# Set all the critical points of the image
A,B,C,D,E,F,G,H,I,J,K,L,M,N = e_generator(y_dimensions, x_dimensions)

# Plot the image with E and store it
image_with_e = plot_image_with_e(image, A, B, C, D, E, F, G, H, I, J, K, L, M, N)

# Save the output image
plt.imsave('architecture_roof_buildings_baked_annotated.jpg', image_with_e)


# In[13]:


compare(image, image_with_e)


# In[14]:


# Load the second image
image = io.imread("statue.jpg")

# Store the y and x dimensions of the input image
y_dimensions = image.shape[0]
x_dimensions = image.shape[1]

# Print the image details
print_image_details(image)

# Display the image
plt.imshow(image);


# In[15]:


# Set all the critical points of the image
A,B,C,D,E,F,G,H,I,J,K,L,M,N = e_generator(y_dimensions, x_dimensions)

# Plot the image with E and store it
image_with_e = plot_image_with_e(image, A, B, C, D, E, F, G, H, I, J, K, L, M, N)

# Save the output image
plt.imsave('statue_annotated.jpg', image_with_e)


# In[16]:


compare(image, image_with_e)


# In[ ]:

Example

Before

[Image: statue - the original input image]

After

[Image: statue_annotated - the image with the letter E drawn at the center]

Biometric Fingerprint Scanner | Image Processing Applications

In this post, I will discuss an application that relies heavily on image processing.

Biometric Fingerprint Scanner

Description

I currently live in an apartment complex that has no doorman. Instead, to enter the property, you scan your fingerprint on the scanner next to the entry gate on the main walkway into the complex.

How It Works

To use the fingerprint scanner, I first had to go to the apartment complex's administration office and have all of my fingerprints scanned. The receptionist took my hand and scanned each finger on both hands individually until a clear digital image of every finger was produced. Several times the scanner failed to read a fingerprint, and I had to rotate my finger to the left and right until the scanner beeped, indicating that it had registered a clear image. The whole process took about 20 minutes.

After registering all of my fingerprints, I went to the front entry door to test whether I could enter using my finger. I typically use my thumb since it provides the largest fingerprint image and is easiest for the machine to read.

Once everything was set up, I was able to enter the building freely, using only my finger.

Strengths

The main strength of the fingerprint scanning system is that entry is completely keyless. I do not need to carry multiple keys to enter the building, the swimming pool, and the gym. Previously, I needed a separate key for each door into the common areas of the community. Now, with the biometric fingerprint scanner, all I have to do is scan my fingerprint at any of the doors, and I can access any area of the complex.

Keyless entry also comes in handy because I often lose my keys or forget them inside the house. Your fingers, fortunately, go wherever you go.

Another strength of a biometric fingerprint scanner is that it is more environmentally friendly. Creating a physical key requires metal that has to be mined from the Earth.

One final strength of the biometric fingerprint scanner is that it makes things easy when guests come to town. I do not need to make a spare key or give them a copy of mine. All I have to do is take them down to the administration office and have their fingerprints registered.

Weaknesses

One of the main weaknesses of this keyless entry system is that it is not sanitary. Not everybody has the cleanest hands, and when everyone in the complex is touching the same fingerprint scanner, bacteria can really build up. Facial recognition would be a good alternative because I wouldn't have to touch anything at all upon entry.

I'm also not sure how secure the fingerprint scanner is. Imagine somebody who has been evicted from their apartment for failing to pay rent. Management needs an ironclad process to make sure that as soon as somebody is evicted, his or her fingerprints are removed from the system.

Another weakness is that the fingerprint scanner is not flawless. I often have to try five or six times, using different angles of my thumb and forefinger, before it registers a successful reading and opens the door. The scanner is highly sensitive to how you place your finger on it; a slight twist to the left or right might not register at all.

Also, when I return from a long vacation, the fingerprint scanner no longer reads my fingerprints accurately. This happens because the administration resets the scanner every so often, and when it does, I have to go back to the administration office and re-register my fingerprints.

While I like the biometric fingerprint scanner, other techniques are far more reliable. For example, typing in a PIN code works virtually 100% of the time I try to open the door, whereas scanning my fingerprint opens the door at most 80 to 90% of the time on the first try.

Hierarchical Actions and Reinforcement Learning

One of the open issues in reinforcement learning is how to handle hierarchical actions.

What are Hierarchical Actions?

To explain hierarchical actions, let us look at a real-world example: the task of baking a sweet potato pie. The high-level action of making a sweet potato pie can be broken down into numerous low-level sub-steps: cut the sweet potatoes, cook the sweet potatoes, add sugar, add flour, etc.

You will also notice that each of the low-level sub-steps mentioned above can be broken down into even smaller steps. For example, the task of cutting a sweet potato can be broken down into: move the right arm to the right, orient the right arm above the sweet potato, bring the arm down, etc.

Each of those sub-steps of sub-steps can then be broken down into even smaller steps. For example, "moving the right arm to the right" might involve thousands of different muscle contractions. Can you see where we are going here?

Reinforcement learning involves training a software agent to learn from experience through trial and error. A basic reinforcement learning algorithm would need to search over thousands of low-level actions just to execute the task of making a sweet potato pie. Reinforcement learning methods therefore quickly become inefficient for tasks that require a large number of low-level actions.

How to Solve the Hierarchical Action Problem

One way to solve the hierarchical action problem is to represent a high-level behavior (e.g. making a sweet potato pie) as a small sequence of high-level actions. 

For example, where the solution of making a sweet potato pie might entail 1,000 low-level actions, we might condense them into 10 high-level actions. We could then have a single master policy that switches between 10 sub-policies (one for each high-level action) every N timesteps. This approach is known as meta-learning shared hierarchies (MLSH) and is explained in more detail at OpenAI.com.
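Here is a minimal sketch of the switching idea, not the official MLSH implementation: a master policy re-selects one of a fixed set of sub-policies every N timesteps and delegates the low-level action to it. The MasterPolicy class, its random selection rule, and the dummy sub-policies are all illustrative assumptions.

import numpy as np

class MasterPolicy:
    """Picks one of `num_sub_policies` sub-policies every `switch_every` timesteps."""
    def __init__(self, num_sub_policies, switch_every):
        self.num_sub_policies = num_sub_policies
        self.switch_every = switch_every
        self.active = 0   # index of the currently active sub-policy
        self.steps = 0    # timesteps taken since the start of the episode

    def choose_sub_policy(self, state):
        # Placeholder decision rule: pick a sub-policy at random.
        # A trained master policy would condition this choice on the state.
        return np.random.randint(self.num_sub_policies)

    def act(self, state, sub_policies):
        # Re-select the active sub-policy every `switch_every` timesteps
        if self.steps % self.switch_every == 0:
            self.active = self.choose_sub_policy(state)
        self.steps += 1
        # Delegate the low-level action to the active sub-policy
        return sub_policies[self.active](state)

# Example usage: 10 dummy sub-policies, each mapping a state to a low-level action id
sub_policies = [lambda state, i=i: i for i in range(10)]
master = MasterPolicy(num_sub_policies=10, switch_every=5)
action = master.act(state=None, sub_policies=sub_policies)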

We could also integrate supervised learning techniques such as ID3 decision trees. Each sub-policy would be represented as a decision tree whose output is the action to take, and whose input is a transformed version of the state and reward received from the environment. In essence, you would have decisions taken within decisions.
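As a rough illustration, here is a minimal sketch of a sub-policy represented as a decision tree, assuming scikit-learn is available. The training data is made up purely for illustration, and scikit-learn's trees use the CART algorithm with an entropy criterion, which approximates the information-gain splitting that ID3 uses.

import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: each row is a transformed (state, reward) pair
# observed from the environment, and y holds the action that was taken.
X = np.array([[0.1, 1.0],
              [0.9, 0.0],
              [0.4, 0.5],
              [0.8, 0.2]])
y = np.array([0, 1, 0, 1])

# Fit the decision-tree sub-policy (entropy criterion as a stand-in for ID3)
sub_policy = DecisionTreeClassifier(criterion="entropy").fit(X, y)

# The sub-policy outputs an action for a new (state, reward) observation
action = sub_policy.predict([[0.3, 0.6]])[0]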