Logistic Regression Algorithm From Scratch

In this post, I will walk you through the Logistic Regression algorithm step-by-step.

  • We will develop the code for the algorithm from scratch using Python.
  • We will run the algorithm on real-world data sets from the UCI Machine Learning Repository.

Table of Contents

What is Logistic Regression?

Logistic regression, contrary to the name, is a classification algorithm. Unlike linear regression, which outputs a continuous value (e.g. house price) for the prediction, Logistic Regression transforms the output into a probability value (i.e. a number between 0 and 1) using what is known as the logistic sigmoid function. This function is also known as the squashing function since it maps any real number, which can range from negative infinity to positive infinity, to a number between 0 and 1.

Here is what the graph of the sigmoid function looks like:

sigmoid-curve
Source: (Kelleher, Namee, & Arcy, 2015)

The function is called the sigmoid function because it is s-shaped. Here is what the sigmoid function looks like in mathematical notation:

h(z) = 1 / (1 + e^(-z))

where:

  • h(z) is the predicted probability of a given instance (i.e. example) being in the positive class…that is the class represented as 1 in a data set. For example, in an e-mail classification data set, this would be the probability that a given e-mail instance is spam (If h(z) = 0.73, for example, that would mean that the instance has a 73% chance of being spam).
  • 1 - h(z) is the probability of an instance being in the negative class, the class represented as 0 (e.g. not spam). h(z) is always a number between 0 and 1. Going back to the example in the bullet point above, this would mean that the instance has a 27% chance of being not spam.
  • z is the input (e.g. a weighted sum of the attributes of a given instance)
  • e is Euler’s number

z is commonly expressed as the dot product, w · x, where w is a 1-dimensional vector containing the weights for each attribute, and x is a vector containing the values of each attribute for a specific instance of the data set (i.e. example).

Often the dot product, w · x, is written as a matrix multiplication. In that case, z = wᵀx, where ᵀ denotes the transpose of the single-dimensional weight vector w. The symbol Ɵ is often used in place of w.

So substituting w · x into the sigmoid equation, we get the following equation:

hw(x) = 1 / (1 + e^(-w · x))

where

  • w is a 1-dimensional vector containing the weights for each attribute.
  • The subscript w on hw means the attributes x are weighted by the weight vector w.
  • hw(x) is the probability (a value between 0 and 1) that an instance is a member of the positive class (i.e. probability an e-mail is spam).
  • x is a vector containing the values of each attribute for a specific instance of the data set.
  • w · x = w0x0 + w1x1 + w2x2 + … + wdxd (analogous to the equation of a line y = mx + b from grade school)
    • d is the number of attributes in the data set
    • x0 = 1 by convention, for all instances. This attribute has to be added by the programmer for all instances. It is known formally as the “bias” term.

As is the case for many machine learning algorithms, the starting point for Logistic Regression is to create a trained model. Once we have a trained model, we can use it to make predictions on new, unseen instances.

Training Phase

Creating a trained model entails determining the weight vector w. Once we have the weights, we can make predictions on new unseen examples. All we need are the values of the attributes of those examples (i.e. the x values), and we can weight the x values with the values of w to compute the probabilities h(x) for that example using the sigmoid function.

The rule for making predictions using the sigmoid function is as follows:

  • If hw(x) ≥ 0.5, class = 1 (positive class, e.g. spam)
  • If hw(x) < 0.5, class = 0 (negative class, e.g. not spam)
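
Here is a minimal sketch of this prediction rule in Python (the weights and the instance below are made-up numbers for illustration, not values from any of the data sets used later in this post):

import numpy as np

def sigmoid(z):
    # Logistic sigmoid: squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights and one instance (x[0] = 1 is the bias term)
w = np.array([-1.5, 0.8, 2.1])
x = np.array([1.0, 0.0, 1.0])

probability = sigmoid(np.dot(w, x))               # hw(x), probability of the positive class
predicted_class = 1 if probability >= 0.5 else 0
print(probability, predicted_class)               # about 0.65, so class = 1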

To determine the weights in linear regression, the sum of the squared errors was used as the cost function (where error = actual value – value predicted by the line). The cost function represents how wrong a prediction is. In linear regression, it represents how wrong a line of best fit is on a set of observed training instances. The lower the sum of the squared errors, the better a line fits the training data, and, in theory, the better the line will predict new, unseen instances.

In Logistic Regression, the cost function is called cross-entropy. Without getting too deep into the mathematics and notation of this particular equation, the cross-entropy equation is the one that we want to minimize. Minimizing it yields a sigmoid curve that best fits the training data and enables us to make the best classification predictions possible for new, unseen test instances. A minimum of the cost function is attained when the gradient of the cost function is close to zero (i.e. the calculated weights stop changing). Getting the gradient of the cost function close to zero is known formally as convergence.

In order to minimize the cost function, we need to find its gradient (i.e. derivative, slope, etc.) and determine the values for the weight vector w that make the gradient as close to 0 as possible. We cannot simply set the gradient to 0 and solve for the weights directly. Instead, we have to use a method called gradient descent to find the weights.

In the gradient descent algorithm for Logistic Regression, we:

  1. Start with a weight vector initialized to small random values between -0.01 and 0.01. The size of the vector is equal to the number of attributes in the data set.
  2. Initialize a weight change vector to all zeros. The size of the vector is equal to the number of attributes in the data set.
  3. For each training instance, one at a time:
    • a. Make a probability prediction by calculating the weighted sum of the attribute values and running that value through the sigmoid function.
    • b. Evaluate the gradient of the cost function by plugging in the actual (i.e. observed) class value and the predicted probability from step 3a.
    • c. The gradient value from 3b gets added to the weight change vector.
  4. After we finish with the last training instance from 3, we multiply each value in the weight change vector by a learning rate (commonly 0.01).
  5. The vector from 4 gets added to the current weight vector to update the weights.
  6. We then ask two questions
    • a. Have the weights continued to change? That is, is the norm (i.e. magnitude) of the weight change vector still greater than or equal to a threshold like 0.001?
    • b. Have we been through the data set less than 10,000 (or whatever we set the maximum iterations to) times?
    • c. If the answer is yes to both 6a and 6b, go back to step 2. Otherwise, we return the final weight vector, exiting the algorithm.
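
Here is a minimal NumPy sketch of this batch gradient descent loop (a simplified illustration of the steps above, not the full implementation given later in this post; the function and variable names are my own):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic_regression(x, y, learning_rate=0.01, max_iter=10000, tolerance=0.001):
    # x: 2d array of instances (bias column of 1s already added); y: 1d array of 0/1 labels
    weights = np.random.uniform(-0.01, 0.01, x.shape[1])      # step 1
    for _ in range(max_iter):                                 # step 6b
        changes_in_weights = np.zeros(x.shape[1])             # step 2
        for inst in range(x.shape[0]):                        # step 3
            y_hat = sigmoid(np.dot(weights, x[inst]))         # step 3a
            changes_in_weights += (y[inst] - y_hat) * x[inst] # steps 3b and 3c
        weights += learning_rate * changes_in_weights         # steps 4 and 5
        if np.linalg.norm(changes_in_weights) < tolerance:    # step 6a
            break
    return weights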

The gradient descent pseudocode for Logistic Regression is provided in Figure 10.6 of Introduction to Machine Learning by Ethem Alpaydin (Alpaydin, 2014).

Testing Phase

Once training is completed, we have the weights and can use these weights, attribute values, and the sigmoid function to make predictions for the set of test instances.

Predictions for a given test instance are made using the aforementioned sigmoid function:

hw(x) = 1 / (1 + e^(-w · x))

Where the rule for making predictions using the sigmoid function is as follows:

  • If hw(x) ≥ 0.5, class = 1 (positive class, e.g. spam)
  • If hw(x) < 0.5, class = 0 (negative class, e.g. not spam)

Multi-class Logistic Regression

Multi-class Logistic Regression is an extension of the binary Logistic Regression method presented above. It can make predictions on both binary and multi-class classification problems.

In order to make predictions for multi-class datasets, we take the training set and create multiple separate binary classification problems (one for each class in the data set). For each of those training sets that we generated, we set the class values for one class to 1 (representing the positive class), and we set all other classes to 0 (i.e. the negative class).

In other words, if there are k classes in a data set, k separate training sets are generated. In each of those k separate training sets, one class is set to 1 and all other classes are set to 0.

In Multi-class Logistic Regression, the training phase entails creating k different weight vectors, one for each class, rather than just a single weight vector (which was the case in binary Logistic Regression). Each weight vector helps to predict the probability of an instance being a member of that class. Thus, in the testing phase, when there is a new, unseen instance, k different predictions need to be made, one per class. This method is called the one-vs-all strategy, sometimes called one-vs-rest.

The rule for making predictions for a given instance is as follows:

  • For each new test instance,
    • Make k separate probability predictions.
    • Pick the class that has the highest probability (i.e. the class that is the most enthusiastic about that instance being a member of its class)
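
As a rough sketch (reusing the sigmoid and train_logistic_regression functions from the earlier sketches; the helper names are my own, and the full one-vs-all implementation appears later in this post), the training and prediction steps could look like this:

import numpy as np

def train_one_vs_all(x, y, classes):
    # Train one binary logistic regression model per class (y holds the class labels)
    weights_per_class = []
    for cl in classes:
        binary_y = (y == cl).astype(float)   # this class becomes 1, every other class becomes 0
        weights_per_class.append(train_logistic_regression(x, binary_y))
    return weights_per_class

def predict_one_vs_all(x_instance, weights_per_class, classes):
    # Return the class whose model assigns the highest probability to this instance
    probabilities = [sigmoid(np.dot(w, x_instance)) for w in weights_per_class]
    return classes[int(np.argmax(probabilities))]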

Other multi-class Logistic Regression algorithms include Softmax Regression and the one-vs-one strategy. The one-vs-all strategy was selected because it is the default strategy used in practice by many of the well-known machine learning libraries for Python (Rebala, Ravi, & Churiwala, 2019).

Video

Here is an excellent video on logistic regression that explains the whole process I described above, step-by-step.

Return to Table of Contents

Logistic Regression Algorithm Design

The Logistic Regression algorithm was implemented from scratch. The Breast Cancer, Glass, Iris, Soybean (small), and Vote data sets were preprocessed to meet the input requirements of the algorithms. I used five-fold stratified cross-validation to evaluate the performance of the models.

Required Data Set Format for Logistic Regression

Columns (0 through N)

  • 0: Instance ID
  • 1: Attribute 1
  • 2: Attribute 2
  • 3: Attribute 3
  • ...
  • N: Actual Class

The program then adds two additional columns for the testing set.

  • N + 1: Predicted Class
  • N + 2: Prediction Correct? (1 if yes, 0 if no)

Breast Cancer Data Set

This breast cancer data set contains 699 instances, 10 attributes, and a class – malignant or benign (Wolberg, 1992).

Modification of Attribute Values

The actual class value was changed to “Benign” or “Malignant.”

I transformed the attributes into binary numbers so that the algorithms could process the data properly and efficiently. If an attribute value was greater than 5, it was changed to 1; otherwise it was set to 0.
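
Here is a small pandas sketch of that thresholding step (the column names and values below are made up to match the required data set format; this is not the actual preprocessed file):

import pandas as pd

# Tiny made-up example in the required format (Instance ID, attributes, Actual Class)
df = pd.DataFrame({
    "Instance ID": [1, 2],
    "Attribute 1": [5, 8],
    "Attribute 2": [1, 10],
    "Actual Class": ["Benign", "Malignant"],
})

# Binarize every attribute column: values greater than 5 become 1, otherwise 0
attribute_columns = df.columns[1:-1]           # skip Instance ID and Actual Class
df[attribute_columns] = (df[attribute_columns] > 5).astype(int)
print(df)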

Missing Data

There were 16 missing attribute values, each denoted with a “?”. I chose a random number between 1 and 10 (inclusive) to fill in the data.
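
A sketch of that imputation step (the column name and values are made up for illustration):

import numpy as np
import pandas as pd

rng = np.random.default_rng()
df = pd.DataFrame({"Attribute 1": ["1", "?", "10", "?"]})   # made-up column with missing values

# Replace each "?" with a random integer between 1 and 10 (inclusive)
mask = df["Attribute 1"] == "?"
df.loc[mask, "Attribute 1"] = rng.integers(1, 11, size=mask.sum())
df["Attribute 1"] = df["Attribute 1"].astype(int)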

Glass Data Set

This glass data set contains 214 instances, 10 attributes, and 7 classes (German, 1987). The purpose of the data set is to identify the type of glass.

Modification of Attribute Values

If an attribute value was greater than the median of that attribute, it was changed to 1; otherwise it was set to 0.

Missing Data

There are no missing values in this data set.

Iris Data Set

This data set contains 3 classes of 50 instances each (150 instances in total), where each class refers to a different type of iris plant (Fisher, 1988).

Modification of Attribute Values

If an attribute value was greater than the median of that attribute, it was changed to 1; otherwise it was set to 0.

Missing Data

There were no missing attribute values.

Soybean Data Set (small)

This soybean (small) data set contains 47 instances, 35 attributes, and 4 classes (Michalski, 1980). The purpose of the data set is to determine the disease type.

Modification of Attribute Values

If an attribute value was greater than the median of that attribute, it was changed to 1; otherwise it was set to 0.

Missing Data

There are no missing values in this data set.

Vote Data Set

This data set includes votes for each of the U.S. House of Representatives Congressmen (435 instances) on the 16 key votes identified by the Congressional Quarterly Almanac (Schlimmer, 1987). The purpose of the data set is to identify the representative as either a Democrat or Republican.

  • 267 Democrats
  • 168 Republicans

Modification of Attribute Values

I did the following modifications:

  • Changed all “y” to 1 and all “n” to 0.

Missing Data

Missing values were denoted as “?”. To fill in each missing value, I chose a random number, either 0 (“No”) or 1 (“Yes”).

Description of Any Tuning Process Applied

Some tuning was performed in this project. The learning rate was set to 0.01 by convention. A higher learning rate (0.5) produced a large norm of the gradient (>1), indicating that the weights were not converging.

The stopping criteria for gradient descent were as follows:

  • Maximum iterations = 10,000
  • Euclidean norm of weight change vector < 0.001

When I set the maximum iterations to 100, the Euclidean norm of the weight change vector returned high values (>0.2), which indicated that I needed a higher maximum iterations value in order to have a better chance of convergence (i.e. the weights stop changing) based on the norm stopping criterion.

Return to Table of Contents

Logistic Regression Algorithm in Python, Coded From Scratch

Here are the preprocessed data sets:

Here is the driver code. This is where the main method is located:

import pandas as pd # Import Pandas library 
import numpy as np # Import Numpy library
import five_fold_stratified_cv
import logistic_regression

# File name: logistic_regression_driver.py
# Author: Addison Sears-Collins
# Date created: 7/19/2019
# Python version: 3.7
# Description: Driver of the logistic_regression.py program

# Required Data Set Format for Discrete Class Values
# Columns (0 through N)
# 0: Instance ID
# 1: Attribute 1 
# 2: Attribute 2
# 3: Attribute 3 
# ...
# N: Actual Class

# The logistic_regression.py program then adds 2 additional columns 
# for the test set.
# N + 1: Predicted Class
# N + 2: Prediction Correct? (1 if yes, 0 if no)

ALGORITHM_NAME = "Logistic Regression"
SEPARATOR = ","  # Separator for the data set (e.g. "\t" for tab data)

def main():

    print("Welcome to the " +  ALGORITHM_NAME + " Program!")
    print()

    # Directory where data set is located
    data_path = input("Enter the path to your input file: ") 
    #data_path = "iris.txt"

    # Read the full text file and store records in a Pandas dataframe
    pd_data_set = pd.read_csv(data_path, sep=SEPARATOR)

    # Show functioning of the program
    trace_runs_file = input("Enter the name of your trace runs file: ") 
    #trace_runs_file = "iris_logistic_regression_trace_runs.txt"

    # Open a new file to save trace runs
    outfile_tr = open(trace_runs_file,"w") 

    # Testing statistics
    test_stats_file = input("Enter the name of your test statistics file: ") 
    #test_stats_file = "iris_logistic_regression_test_stats.txt"

    # Open a test_stats_file 
    outfile_ts = open(test_stats_file,"w")

    # The number of folds in the cross-validation
    NO_OF_FOLDS = 5 

    # Generate the five stratified folds
    fold0, fold1, fold2, fold3, fold4 = five_fold_stratified_cv.get_five_folds(
        pd_data_set)

    training_dataset = None
    test_dataset = None

    # Create an empty array of length 5 to store the accuracy_statistics 
    # (classification accuracy)
    accuracy_statistics = np.zeros(NO_OF_FOLDS)

    # Run Logistic Regression the designated number of times as indicated by the 
    # number of folds
    for experiment in range(0, NO_OF_FOLDS):

        print()
        print("Running Experiment " + str(experiment + 1) + " ...")
        print()
        outfile_tr.write("Running Experiment " + str(experiment + 1) + " ...\n")
        outfile_tr.write("\n")

        # Each fold will have a chance to be the test data set
        if experiment == 0:
            test_dataset = fold0
            training_dataset = pd.concat([
               fold1, fold2, fold3, fold4], ignore_index=True, sort=False)                
        elif experiment == 1:
            test_dataset = fold1
            training_dataset = pd.concat([
               fold0, fold2, fold3, fold4], ignore_index=True, sort=False) 
        elif experiment == 2:
            test_dataset = fold2
            training_dataset = pd.concat([
               fold0, fold1, fold3, fold4], ignore_index=True, sort=False) 
        elif experiment == 3:
            test_dataset = fold3
            training_dataset = pd.concat([
               fold0, fold1, fold2, fold4], ignore_index=True, sort=False) 
        else:
            test_dataset = fold4
            training_dataset = pd.concat([
               fold0, fold1, fold2, fold3], ignore_index=True, sort=False) 
        
        accuracy, predictions, weights_for_each_class, no_of_instances_test = (
        logistic_regression.logistic_regression(training_dataset,test_dataset))

        # Print the trace runs of each experiment
        print("Accuracy:")
        print(str(accuracy * 100) + "%")
        print()
        print("Classifications:")
        print(predictions)
        print()
        print("Learned Model:")
        print(weights_for_each_class)
        print()
        print("Number of Test Instances:")
        print(str(no_of_instances_test))
        print() 

        outfile_tr.write("Accuracy:")
        outfile_tr.write(str(accuracy * 100) + "%\n\n")
        outfile_tr.write("Classifications:\n")
        outfile_tr.write(str(predictions) + "\n\n")
        outfile_tr.write("Learned Model:\n")
        outfile_tr.write(str(weights_for_each_class) + "\n\n")
        outfile_tr.write("Number of Test Instances:")
        outfile_tr.write(str(no_of_instances_test) + "\n\n")

        # Store the accuracy in the accuracy_statistics array
        accuracy_statistics[experiment] = accuracy

    outfile_tr.write("Experiments Completed.\n")
    print("Experiments Completed.\n")

    # Write to a file
    outfile_ts.write("----------------------------------------------------------\n")
    outfile_ts.write(ALGORITHM_NAME + " Summary Statistics\n")
    outfile_ts.write("----------------------------------------------------------\n")
    outfile_ts.write("Data Set : " + data_path + "\n")
    outfile_ts.write("\n")
    outfile_ts.write("Accuracy Statistics for All 5 Experiments:")
    outfile_ts.write(np.array2string(
        accuracy_statistics, precision=2, separator=',',
        suppress_small=True))
    outfile_ts.write("\n")
    outfile_ts.write("\n")
    accuracy = np.mean(accuracy_statistics)
    accuracy *= 100
    outfile_ts.write("Classification Accuracy : " + str(accuracy) + "%\n")
   
    # Print to the console
    print()
    print("----------------------------------------------------------")
    print(ALGORITHM_NAME + " Summary Statistics")
    print("----------------------------------------------------------")
    print("Data Set : " + data_path)
    print()
    print()
    print("Accuracy Statistics for All 5 Experiments:")
    print(accuracy_statistics)
    print()
    print()
    print("Classification Accuracy : " + str(accuracy) + "%")
    print()

    # Close the files
    outfile_tr.close()
    outfile_ts.close()

main()

Here is the code for logistic regression:

import pandas as pd # Import Pandas library 
import numpy as np # Import Numpy library
 
# File name: logistic_regression.py
# Author: Addison Sears-Collins
# Date created: 7/19/2019
# Python version: 3.7
# Description: Multi-class logistic regression using one-vs-all. 
 
# Required Data Set Format for Discrete Class Values
# Columns (0 through N)
# 0: Instance ID
# 1: Attribute 1 
# 2: Attribute 2
# 3: Attribute 3 
# ...
# N: Actual Class
 
# This program then adds 2 additional columns for the test set.
# N + 1: Predicted Class
# N + 2: Prediction Correct? (1 if yes, 0 if no)

def sigmoid(z):
    """
    Parameters:
        z: A real number
    Returns: 
        1.0/(1 + np.exp(-z))
    """
    return 1.0/(1 + np.exp(-z))

def gradient_descent(training_set):
    """
    Gradient descent for logistic regression. Follows method presented
    in the textbook Introduction to Machine Learning 3rd Edition by
    Ethem Alpaydin (pg. 252)

    Parameters:
      training_set: The training instances as a Numpy array
    Returns:
      weights: The vector of weights, commonly called w or THETA
    """   

    no_of_columns_training_set = training_set.shape[1]
    no_of_rows_training_set = training_set.shape[0]

    # Extract the attributes from the training set.
    # x is still a 2d array
    x = training_set[:,:(no_of_columns_training_set - 1)]
    no_of_attributes = x.shape[1]

    # Extract the classes from the training set.
    # actual_class is a 1d array.
    actual_class = training_set[:,(no_of_columns_training_set - 1)]

    # Set a learning rate
    LEARNING_RATE = 0.01

    # Set the maximum number of iterations
    MAX_ITER = 10000

    # Set the iteration variable to 0
    iter = 0

    # Set a flag to determine if we have exceeded the maximum number of
    # iterations
    exceeded_max_iter = False

    # Set the tolerance. When the euclidean norm of the gradient vector 
    # (i.e. magnitude of the changes in the weights) gets below this value, 
    # stop iterating through the while loop
    GRAD_TOLERANCE = 0.001
    norm_of_gradient = None

    # Set a flag to determine if we have reached the minimum of the 
    # cost (i.e. error) function.
    converged = False

    # Create the weights vector with random floats between -0.01 and 0.01
    # The number of weights is equal to the number of attributes
    weights = np.random.uniform(-0.01,0.01,(no_of_attributes))
    changes_in_weights = None

    # Keep running the loop below until convergence on the minimum of the 
    # cost function or we exceed the max number of iterations
    while(not(converged) and not(exceeded_max_iter)):
        
        # Initialize a weight change vector that stores the changes in 
        # the weights at each iteration
        changes_in_weights = np.zeros(no_of_attributes)

        # For each training instance
        for inst in range(0, no_of_rows_training_set):

            # Calculate weighted sum of the attributes for
            # this instance
            output = np.dot(weights, x[inst,:])
                
            # Calculate the sigmoid of the weighted sum
            # This y is the probability that this instance belongs
            # to the positive class
            y =  sigmoid(output)

            # Calculate difference
            difference = (actual_class[inst] - y)

            # Multiply the difference by the attribute vector
            product = np.multiply(x[inst,:], difference)

            # For each attribute, update the weight changes 
            # i.e. the gradient vector
            changes_in_weights = np.add(changes_in_weights,product)
        
        # Calculate the step size
        step_size = np.multiply(changes_in_weights, LEARNING_RATE)

        # Update the weights vector
        weights = np.add(weights, step_size)

        # Test to see if we have converged on the minimum of the error
        # function
        norm_of_gradient = np.linalg.norm(changes_in_weights)

        if (norm_of_gradient < GRAD_TOLERANCE):
            converged = True

        # Update the number of iterations
        iter += 1

        # If we have exceeded the maximum number of iterations
        if (iter > MAX_ITER):
            exceeded_max_iter = True

    #For debugging purposes
    #print("Number of Iterations: " + str(iter - 1))
    #print("Norm of the gradient: " + str(norm_of_gradient))
    #print(changes_in_weights)
    #print()
    return weights


def logistic_regression(training_set, test_set):
    """
    Multi-class one-vs-all logistic regression
    Parameters:
      training_set: The training instances as a Pandas dataframe
      test_set: The test instances as a Pandas dataframe
    Returns:
      accuracy: Classification accuracy as a decimal
      predictions: Classifications of all the test instances as a 
        Pandas dataframe
      weights_for_each_class: The weight vectors for each class (one-vs-all)
      no_of_instances_test: The number of test instances
    """   

    # Remove the instance ID column
    training_set = training_set.drop(
        training_set.columns[[0]], axis=1)
    test_set = test_set.drop(
        test_set.columns[[0]], axis=1)

    # Make a list of the unique classes
    list_of_unique_classes = pd.unique(training_set["Actual Class"])

    # Replace all the class values with numbers, starting from 0
    # in both the test and training sets.
    for cl in range(0, len(list_of_unique_classes)):
        training_set["Actual Class"].replace(
            list_of_unique_classes[cl], cl ,inplace=True)
        test_set["Actual Class"].replace(
            list_of_unique_classes[cl], cl ,inplace=True)

    # Insert a column of 1s in column 0 of both the training
    # and test sets. This is the bias and helps with gradient
    # descent. (i.e. X0 = 1 for all instances)
    training_set.insert(0, "Bias", 1)
    test_set.insert(0, "Bias", 1)

    # Convert dataframes to numpy arrays
    np_training_set = training_set.values
    np_test_set = test_set.values

    # Add 2 additional columns to the testing dataframe
    test_set = test_set.reindex(
        columns=[*test_set.columns.tolist(
        ), 'Predicted Class', 'Prediction Correct?'])

    ############################# Training Phase ##############################

    no_of_columns_training_set = np_training_set.shape[1]
    no_of_rows_training_set = np_training_set.shape[0]

    # Create and store a training set for each unique class
    # to create separate binary classification
    # problems
    trainingsets = []
    for cl in range(0, len(list_of_unique_classes)):

        # Create a copy of the training set
        temp = np.copy(np_training_set)

        # This class becomes the positive class 1
        # and all other classes become the negative class 0
        for row in range(0, no_of_rows_training_set):
            if (temp[row, (no_of_columns_training_set - 1)]) == cl:
                temp[row, (no_of_columns_training_set - 1)] = 1
            else:
                temp[row, (no_of_columns_training_set - 1)] = 0
        
        # Add the new training set to the trainingsets list
        trainingsets.append(temp)

    # Calculate and store the weights for the training set
    # of each class. Execute gradient descent on each training set
    # in order to calculate the weights
    weights_for_each_class = []

    for cl in range(0, len(list_of_unique_classes)):
        weights_for_this_class = gradient_descent(trainingsets[cl])
        weights_for_each_class.append(weights_for_this_class)

    # Used for debugging
    #print(weights_for_each_class[0])
    #print()
    #print(weights_for_each_class[1])
    #print()
    #print(weights_for_each_class[2])

    ########################### End of Training Phase #########################

    ############################# Testing Phase ###############################

    no_of_columns_test_set = np_test_set.shape[1]
    no_of_rows_test_set = np_test_set.shape[0]

    # Extract the attributes from the test set.
    # x is still a 2d array
    x = np_test_set[:,:(no_of_columns_test_set - 1)]
    no_of_attributes = x.shape[1]

    # Extract the classes from the test set.
    # actual_class is a 1d array.
    actual_class = np_test_set[:,(no_of_columns_test_set - 1)]

    # Go through each row (instance) of the test data
    for inst in range(0,  no_of_rows_test_set):

        # Create a scorecard that keeps track of the probabilities of this
        # instance being a part of each class
        scorecard = []

        # Calculate and store the probability for each class in the scorecard
        for cl in range(0, len(list_of_unique_classes)):

            # Calculate weighted sum of the attributes for
            # this instance
            output = np.dot(weights_for_each_class[cl], x[inst,:])

            # Calculate the sigmoid of the weighted sum
            # This is the probability that this instance belongs
            # to the positive class
            this_probability = sigmoid(output)

            scorecard.append(this_probability)

        most_likely_class = scorecard.index(max(scorecard))

        # Store the value of the most likely class in the "Predicted Class" 
        # column of the test_set data frame
        test_set.loc[inst, "Predicted Class"] = most_likely_class

        # Update the 'Prediction Correct?' column of the test_set data frame
        # 1 if correct, else 0
        if test_set.loc[inst, "Actual Class"] == test_set.loc[
            inst, "Predicted Class"]:
            test_set.loc[inst, "Prediction Correct?"] = 1
        else:
            test_set.loc[inst, "Prediction Correct?"] = 0

    # accuracy = (total correct predictions)/(total number of predictions)
    accuracy = (test_set["Prediction Correct?"].sum())/(len(test_set.index))

    # Store the revamped dataframe
    predictions = test_set

    # Replace all the class values with the name of the class
    for cl in range(0, len(list_of_unique_classes)):
        predictions["Actual Class"].replace(
            cl, list_of_unique_classes[cl] ,inplace=True)
        predictions["Predicted Class"].replace(
            cl, list_of_unique_classes[cl] ,inplace=True)

    # Replace 1 with Yes and 0 with No in the 'Prediction 
    # Correct?' column
    predictions['Prediction Correct?'] = predictions[
        'Prediction Correct?'].map({1: "Yes", 0: "No"})

    # Reformat the weights_for_each_class list of arrays
    weights_for_each_class = pd.DataFrame(np.row_stack(weights_for_each_class))
 
    # Rename the row names
    for cl in range(0, len(list_of_unique_classes)):
        row_name = str(list_of_unique_classes[cl] + " weights")        
        weights_for_each_class.rename(index={cl:row_name}, inplace=True)

    # Get a list of the names of the attributes
    training_set_names = list(training_set.columns.values)
    training_set_names.pop() # Remove 'Actual Class'

    # Rename the column names
    for col in range(0, len(training_set_names)):
        col_name = str(training_set_names[col])        
        weights_for_each_class.rename(columns={col:col_name}, inplace=True)

    # Record the number of test instances
    no_of_instances_test = len(test_set.index)

    # Return statement
    return accuracy, predictions, weights_for_each_class, no_of_instances_test

Here is the code for five-fold stratified cross-validation:

import pandas as pd # Import Pandas library 
import numpy as np # Import Numpy library

# File name: five_fold_stratified_cv.py
# Author: Addison Sears-Collins
# Date created: 7/17/2019
# Python version: 3.7
# Description: Implementation of five-fold stratified cross-validation
# Divide the data set into five random groups. Make sure 
# that the proportion of each class in each group is roughly equal to its 
# proportion in the entire data set.

# Required Data Set Format for Discrete Class Values
# Columns (0 through N)
# 0: Instance ID
# 1: Attribute 1 
# 2: Attribute 2
# 3: Attribute 3 
# ...
# N: Actual Class

def get_five_folds(instances):
    """
    Parameters:
        instances: A Pandas data frame containing the instances
    Returns: 
        fold0, fold1, fold2, fold3, fold4
        Five folds whose class frequency distributions are 
        each representative of the entire original data set (i.e. Five-Fold 
        Stratified Cross Validation)
    """
    # Shuffle the data set randomly
    instances = instances.sample(frac=1).reset_index(drop=True)

    # Record the number of columns in the data set
    no_of_columns = len(instances.columns) # number of columns

    # Record the number of rows in the data set
    no_of_rows = len(instances.index) # number of rows

    # Create five empty folds (i.e. Panda Dataframes: fold0 through fold4)
    fold0 = pd.DataFrame(columns=(instances.columns))
    fold1 = pd.DataFrame(columns=(instances.columns))
    fold2 = pd.DataFrame(columns=(instances.columns))
    fold3 = pd.DataFrame(columns=(instances.columns))
    fold4 = pd.DataFrame(columns=(instances.columns))

    # Record the column of the Actual Class
    actual_class_column = no_of_columns - 1

    # Generate an array containing the unique 
    # Actual Class values
    unique_class_list_df = instances.iloc[:,actual_class_column]
    unique_class_list_df = unique_class_list_df.sort_values()
    unique_class_list_np = unique_class_list_df.unique() #Numpy array
    unique_class_list_df = unique_class_list_df.drop_duplicates()#Pandas df

    unique_class_list_np_size = unique_class_list_np.size

    # For each unique class in the unique Actual Class array
    for unique_class_list_np_idx in range(0, unique_class_list_np_size):

        # Initialize the counter to 0
        counter = 0

        # Go through each row of the data set and find instances that
        # are part of this unique class. Distribute them among one
        # of five folds
        for row in range(0, no_of_rows):

            # If the value of the unique class is equal to the actual
            # class in the original data set on this row
            if unique_class_list_np[unique_class_list_np_idx] == (
                instances.iloc[row,actual_class_column]):

                    # Allocate instance to fold0
                    if counter == 0:

                        # Extract data for the new row
                        new_row = instances.iloc[row,:]

                        # Append that entire instance to fold
                        fold0.loc[len(fold0)] = new_row
                                    
                        # Increase the counter by 1
                        counter += 1

                    # Allocate instance to fold1
                    elif counter == 1:

                        # Extract data for the new row
                        new_row = instances.iloc[row,:]

                        # Append that entire instance to fold
                        fold1.loc[len(fold1)] = new_row
                                    
                        # Increase the counter by 1
                        counter += 1

                    # Allocate instance to fold2
                    elif counter == 2:

                        # Extract data for the new row
                        new_row = instances.iloc[row,:]

                        # Append that entire instance to fold
                        fold2.loc[len(fold2)] = new_row
                                    
                        # Increase the counter by 1
                        counter += 1

                    # Allocate instance to fold3
                    elif counter == 3:

                        # Extract data for the new row
                        new_row = instances.iloc[row,:]

                        # Append that entire instance to fold
                        fold3.loc[len(fold3)] = new_row
                                    
                        # Increase the counter by 1
                        counter += 1

                    # Allocate instance to fold4
                    else:

                        # Extract data for the new row
                        new_row = instances.iloc[row,:]

                        # Append that entire instance to fold
                        fold4.loc[len(fold4)] = new_row
                                    
                        # Reset counter to 0
                        counter = 0
        
    return fold0, fold1, fold2, fold3, fold4

Return to Table of Contents

Logistic Regression Output

Here are the trace runs:

Here are the results:

logistic-regression-results

Here are the test statistics for each data set:

Analysis

Breast Cancer Data Set

I hypothesize that performance was high on this data set because of the large number of instances (699 in total). This data set had the highest number of instances out of all the data sets.

These results also suggest that the amount of training data has a direct impact on performance. Higher amounts of data can lead to better learning and better classification accuracy on new, unseen instances.

Glass Data Set

I hypothesize that the poor performance on the glass data set is due to the high numbers of classes combined with a relatively smaller data set.

Iris Data Set

Classification accuracy on the iris data set was satisfactory. This data set was small, and more training data would be needed to see if accuracy could be improved by giving the algorithm more data to learn the underlying relationship between the attributes and the flower types.

Soybean Data Set (small)

I hypothesize that the large numbers of attributes in the soybean data set (35) helped balance the relatively small number of training instances. These results suggest that large numbers of relevant attributes can help a machine learning algorithm create more accurate classifications.

Vote Data Set

The results show that classification algorithms like Logistic Regression can have outstanding performance on large data sets that are binary classification problems.

Summary and Conclusions

  • Higher amounts of data can lead to better learning and better classification accuracy on new, unseen instances.
  • Large numbers of relevant attributes can help a machine learning algorithm create more accurate classifications.
  • Classification algorithms like Logistic Regression can achieve excellent classification accuracy on binary classification problems, but performance on multi-class classification problems can be mixed.

Return to Table of Contents

References

Alpaydin, E. (2014). Introduction to Machine Learning. Cambridge, Massachusetts: The MIT Press.

Fisher, R. (1988, July 01). Iris Data Set. Retrieved from UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/iris

German, B. (1987, September 1). Glass Identification Data Set. Retrieved from UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Glass+Identification

Kelleher, J. D., Namee, B., & Arcy, A. (2015). Fundamentals of Machine Learning for Predictive Data Analytics. Cambridge, Massachusetts: The MIT Press.

Michalski, R. (1980). Learning by being told and learning from examples: an experimental comparison of the two methods of knowledge acquisition in the context of developing an expert system for soybean disease diagnosis. International Journal of Policy Analysis and Information Systems, 4(2), 125-161.

Rebala, G., Ravi, A., & Churiwala, S. (2019). An Introduction to Machine Learning. Switzerland: Springer.

Schlimmer, J. (1987, April 27). Congressional Voting Records Data Set. Retrieved from UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records

Wolberg, W. (1992, July 15). Breast Cancer Wisconsin (Original) Data Set. Retrieved from UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Original%29

Ng, A. Y., & Jordan, M. (2001). On Discriminative vs. Generative Classifiers: A Comparison of Logistic Regression and Naive Bayes. NIPS’01 Proceedings of the 14th International Conference on Neural Information Processing Systems: Natural and Synthetic, 841-848.

Return to Table of Contents

How to Set Up Anaconda for Windows 10

In this post, I will show you how to set up Anaconda. Anaconda is a free, open-source distribution of Python (and R). The goal of Anaconda is to be a free “one-stop-shop” for all your Python data science and machine learning needs. It contains the key packages you need to build cool machine learning projects.

Requirements

Here are the requirements:

  • Set up Anaconda.
  • Set up Jupyter Notebook.
  • Install important libraries.
  • Learn basic Anaconda commands.

Directions

Install Anaconda

Go to the Anaconda website and click “Download.”

setting-up-anaconda-1

Choose the latest version of Python. In my case, that is Python 3.7. Click “Download” to download the Anaconda installer for your operating system type (i.e. Windows, macOS, or Linux). 

setting-up-anaconda-2

Follow the instructions to install the program:

setting-up-anaconda-3
setting-up-anaconda-4
setting-up-anaconda-5
setting-up-anaconda-6

Verify Anaconda is installed by searching for “Anaconda Navigator” on your computer.

Open Anaconda Navigator.

setting-up-anaconda-7

Follow the instructions here for creating a “Hello World” program. You can use Spyder, Jupyter Notebooks, or the Anaconda Prompt (terminal). If you use Jupyter Notebooks, you will need to open the notebooks in Firefox, Google Chrome or another web browser.

Check to make sure that you have IPython installed. Use the following command (in an Anaconda Prompt window) to check:

where ipython
setting-up-anaconda-8

Make sure that you have pip installed. Pip is the package management system for Python.

where pip
setting-up-anaconda-9

Make sure that you have conda installed. Conda is Anaconda’s package management system.

where conda
setting-up-anaconda-10

Install Some Libraries

Install OpenCV

To install OpenCV, use the following command in the Anaconda Prompt:

pip install opencv-contrib-python

Type the following command to get a list of packages and make sure opencv-contrib-python is installed.

conda list

Install dlib

Install cmake.

pip install cmake

Install the C++ library called dlib.

pip install dlib
setting-up-anaconda-11

Type the following command and take a look at the list to see if dlib is successfully installed:

conda list

Install Tesseract

Go to Tesseract at UB Mannheim.

Download the Tesseract for your system.

Set it up by following the prompts.

setting-up-anaconda-12

Once Tesseract OCR is downloaded, find it on your system.

Copy the path of the folder it is located in. In my case, that is:

C:\Program Files\Tesseract-OCR

Search for “Environment Variables” on your computer.

setting-up-anaconda-13

Under “System Variables,” click “Path,” and then click Edit.

Add the path: C:\Program Files\Tesseract-OCR

setting-up-anaconda-14

Click OK a few times to close all windows.

Open up the Anaconda Prompt.

Type this command to see if tesseract is installed on your system.

where tesseract

Now, install the Python bindings for Tesseract using the following commands:

pip install tesseract
pip install pytesseract

Install TensorFlow

Type the following command in the Anaconda Prompt:

pip install tensorflow

Install TensorFlow hub using this command:

pip install tensorflow-hub

Now install tflearn.

pip install tflearn

Now, install the Keras neural network library.

pip install keras

Install the Rest of the Libraries

Type the following commands to install the rest of the libraries:

pip install pillow
pip install SimpleITK

Learn Basic Anaconda Commands

Changing Directories

If you ever want to change directories to the D drive instead of the C drive, open Anaconda Prompt on your computer and type the following commands, in order:

D:
cd D:\XXXX\XXXX\XXXX\XXXX

where D:\XXXX\XXXX\XXXX\XXXX is the file path.

Listing Directory Contents

Type the dir command to list the contents of a directory.

dir

Creating a Jupyter Notebook

Open Anaconda Prompt on your computer and type the following command:

jupyter notebook

Converting a Jupyter Notebook into Python Files

If you want to convert a Jupyter Notebook into Python files, change to that directory and type the following command:

jupyter nbconvert --to script *.ipynb

Congratulations if you made it this far! You have all the libraries installed that you need to do fundamental image processing and computer vision work in Python.

Iterative Dichotomiser 3 (ID3) Algorithm From Scratch

In this post, I will walk you through the Iterative Dichotomiser 3 (ID3) decision tree algorithm step-by-step. We will develop the code for the algorithm from scratch using Python. We will also run the algorithm on real-world data sets from the UCI Machine Learning Repository.

Table of Contents

What is the Iterative Dichotomiser 3 Algorithm?

Iterative Dichotomiser 3 (ID3)

Unpruned

The Iterative Dichotomiser 3 (ID3) algorithm, invented by John Ross Quinlan, is used to create decision trees (Quinlan, 1986). In the unpruned version of ID3, the decision tree is grown to completion. The decision trees in ID3 are used for classification, and the goal is to create the shallowest decision trees possible.

For example, consider a decision tree to help us determine if we should play tennis or not based on the weather:

  • The interior nodes of the tree are the decision variables that are based on the attributes in the data set (e.g. weather outlook [attribute]: sunny, overcast, rainy [attribute values])
  • The leaves of the tree store the classifications (e.g. play tennis or don’t play tennis)

Before we get into the details of ID3, let us examine how decision trees work.

How decision trees work

We start with a data set in which each row of the data set corresponds to a single instance. The columns of the data set are the attributes, where the rightmost column is the class label. 

Each attribute can take on a set of values. During training of a decision tree, we split the instances in the data set into subsets (i.e. groups) based on the values of the attributes. For example, the weather outlook attribute can be used to create a subset of ‘sunny’ instances, a subset of ‘overcast’ instances, and a subset of ‘rainy’ instances. We then split each of those subsets even further based on another attribute (e.g. temperature).

The objective behind building a decision tree is to use the attribute values to keep splitting the data into smaller and smaller subsets (recursively) until each subset consists of a single class label (e.g. play tennis or don’t play tennis). We can then classify fresh test instances based on the rules defined by the decision tree.

How ID3 Works

Starting from the root of the tree, ID3 builds the decision tree one interior node at a time. At each node, we select the attribute that provides the most information gain if we were to split the instances into subsets based on the values of that attribute. How is “most information” determined? It is determined using the idea of entropy reduction, which is part of Shannon’s Information Theory.

Entropy is a measure of the amount of disorder or uncertainty (units of entropy are bits). A data set with a lot of disorder or uncertainty does not provide us a lot of information. 

A good way to think about entropy is how certain we would feel if we were to guess the class of a random training instance. A data set in which there is only one class has 0 entropy (high information here because we know with 100% certainty what the class is given an instance). However, if there is a data set in which each instance is a different class, entropy would be high because you have no certainty when trying to predict the class of a random instance. 

Entropy is thus a measure of how heterogeneous a data set is. If there are many different class types in a data set, and each class type has the same probability of occurring, the entropy (also impurity) of the data set will be high. The greater the entropy is, the higher the uncertainty, and the less information gained. 

Getting back to the running example, in a binary classification problem with positive instances p (play tennis) and negative instances n (don’t play tennis), the entropy contained in a data set is defined mathematically as follows (base 2 log is used as convention in Shannon’s Information Theory).

I(p,n) = entropy of a data set 

= weighted sum of the logs of the probabilities of each possible outcome 

= -[(p/(p+n))log2(p/(p+n)) + (n/(p+n))log2(n/(p+n))]

= -(p/(p+n))log2(p/(p+n)) + -(n/(p+n))log2(n/(p+n))

Where:

  • I = entropy
  • p = number of positive instances (e.g. number of play tennis instances)
  • n = number of negative instances (e.g. number of don’t play tennis instances)

The negative sign at the beginning of the equation is used to change the negative values returned by the log function to positive values (Kelleher, Namee, & Arcy, 2015). This negation ensures that the logarithmic term returns small numbers for high probabilities (low entropy; greater certainty) and large numbers for small probabilities (higher entropy; less certainty).

Notice that the weights in the summation are the actual probabilities themselves. These weights make sure that classes that appear most frequently in a data set make the biggest impact on the entropy of the data set.

Also notice that if there are only instances with class p (i.e. play tennis) and none with class n (i.e. don’t play tennis) in the data set, the entropy will be 0.

I(p,n) = -(p/(p+n))log2(p/(p+n)) + -(n/(p+n))log2(n/(p+n))

= -(p/(p+0))log2(p/(p+0)) + -(0/(p+0))log2(0/(p+0))

= -(p/p)log2(1) + 0     (the second term is 0 by the convention that 0 · log2(0) = 0)

= 0 + 0

= 0
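
Here is a small Python helper that performs this entropy calculation (written for illustration; it is not part of the ID3 implementation described later in this post):

import math

def entropy(p, n):
    # I(p, n): entropy of a data set with p positive and n negative instances
    total = p + n
    result = 0.0
    for count in (p, n):
        if count > 0:                          # use the convention 0 * log2(0) = 0
            proportion = count / total
            result -= proportion * math.log2(proportion)
    return result

print(entropy(9, 5))   # about 0.9403, which matches the prior entropy computed in Step 1 below
print(entropy(4, 0))   # 0.0, a pure subset has no entropy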

It is easiest to explain the full ID3 algorithm using actual numbers, so below I will demonstrate how the ID3 algorithm works using an example.

ID3 Example

Continuing from the example in the previous section, we want to create a decision tree that will help us determine if we should play tennis or not.

We have four attributes in our data set:

  • Weather outlook: sunny, overcast, rain
  • Temperature: hot, mild, cool
  • Humidity: high, normal
  • Wind: weak, strong

We have two classes:

  • p = play tennis class (“Yes”)
  • n = don’t play tennis class (“No”)

Here is the data set to train the tree:

id3-1

Source: (Mitchell, 1997)

Step 1: Calculate the Prior Entropy of the Data Set

We have 14 total instances: 9 instances of p and 5 instances of n. With the frequency counts of each unique class, we can calculate the prior entropy of this data set where:

  • p = number of positive instances (e.g. number of play tennis instances)
  • n = number of negative instances (e.g. number of don’t play tennis instances)

Prior Entropy of Data Set = I(p,n)

= weighted sum of the logs of the probabilities of each possible outcome 

= -[(p/(p+n))log2(p/(p+n)) + (n/(p+n))log2(n/(p+n))]

= -(p/(p+n))log2(p/(p+n)) + -(n/(p+n))log2(n/(p+n)) 

= -(9/(9+5))log2(9/(9+5)) + -(5/(9+5))log2(5/(9+5))

= 0.9403

This value above is indicative of how much entropy is remaining prior to doing any splitting.

Step 2: Calculate the Information Gain for Each Attribute

For each attribute in the data set (e.g. outlook, temperature, humidity, and wind): 

  • Calculate the total number of instances in the data set
  • Initialize a running weighted entropy score sum variable to 0.
  • Partition the data set instances into subsets based on the attribute values. For example, weather outlook is an attribute. We create the “sunny” subset, “overcast” subset, and “rainy” subset. Therefore, this attribute would create three partitions (or subsets).
  • For each attribute value subset:
    • Calculate the number of instances in this subset
    • Calculate the entropy score of this subset using the frequency counts of each unique class.
    • Calculate the weighting of this attribute value by dividing the number of instances in this subset by the total number of instances in the data set
    • Add the weighted entropy score for this attribute value to the running weighted entropy score sum.
  • The final running weighted entropy score sum is indicative of how much entropy would remain if we were to split the current data set using this attribute. It is the remaining entropy for this attribute.
  • Calculate the information gain for this attribute where:
    • Information gain = prior entropy of the data set from Step 1 – remaining entropy for this attribute from Step 2
  • Record the information gain for this attribute and repeat the process above for the other attributes

For example, consider the first attribute (e.g. weather outlook). The remaining entropy for outlook can be calculated as follows:

Remaining Entropy for Weather Outlook  =

= (# of sunny/# of instances)*Isunny(p,n) + (# of overcast/# of instances)*Iovercast(p,n)  + (# of rainy/# of instances)*Irainy(p,n) 

= (5/14)*Isunny(p,n) + (4/14)*Iovercast(p,n)  + (5/14)*Irainy(p,n) 

Where:

Isunny(p,n) = I(number of sunny AND Yes, number of sunny AND No) 

= I(2,3)

= -(p/(p+n))log2(p/(p+n)) + -(n/(p+n))log2(n/(p+n))  

= -(2/(2+3))log2(2/(2+3)) + -(3/(2+3))log2(3/(2+3)) 

= 0.9710

Iovercast(p,n) = I(number of overcast AND Yes, number of overcast AND No) 

= I(4,0)

= -(p/(p+n))log2(p/(p+n)) + -(n/(p+n))log2(n/(p+n))  

= -(4/(4+0))log2(4/(4+0)) + -(0/(4+0))log2(0/(4+0))  

= 0

Irainy(p,n) = I(number of rainy AND Yes, number of rainy AND No) 

= I(3,2)

= -(p/(p+n))log2(p/(p+n)) + -(n/(p+n))log2(n/(p+n))  

= -(3/(3+2))log2(3/(3+2)) + -(2/(3+2))log2(2/(3+2))   

= 0.9710

Remaining Entropy for Weather Outlook =

= Weighted sum of the entropy scores of each attribute value subset 

= (5/14)*Isunny(p,n) + (4/14)*Iovercast(p,n)  + (5/14)*Irainy(p,n) 

= (5/14) * (0.9710) + (4/14) * 0 + (5/14) * (0.9710)

= 0.6936

Info Gain for Weather Outlook   = Prior entropy of the data set – remaining entropy if we split using this attribute

= 0.9403 – 0.6936

= 0.246

Now that we have the information gain for weather outlook, we need to find the remaining entropy and information gain for the other attributes using the same procedure. The remaining entropies for the other attributes are as follows:

  • Remaining Entropy for Temperature = 0.911
  • Remaining Entropy for Humidity = 0.789
  • Remaining Entropy for Wind = 0.892

The information gains are as follows:

  • Information Gain for Temperature = 0.9403 – 0.911 = 0.0293
  • Information Gain for Humidity = 0.9403 – 0.789 = 0.1513
  • Information Gain for Wind = 0.9403 – 0.892 = 0.0483
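
These numbers can be verified with a short Python sketch that reuses the entropy helper defined earlier (the counts come from the play-tennis table):

# (positive, negative) counts inside the sunny, overcast, and rainy subsets of the outlook attribute
outlook_subsets = [(2, 3), (4, 0), (3, 2)]
total_instances = 14

prior_entropy = entropy(9, 5)
remaining_entropy = sum(
    ((p + n) / total_instances) * entropy(p, n) for p, n in outlook_subsets
)
information_gain = prior_entropy - remaining_entropy
print(remaining_entropy, information_gain)   # about 0.6935 and 0.2467 (reported as 0.246 above)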

Step 3: Calculate the Attribute that Had the Maximum Information Gain

The information gain for weather outlook is 0.246, the highest of the four attributes, so it provides the most information and becomes the label for the root node.

Step 4: Partition the Data Set Based on the Values of the Attribute that Had the Maximum Information Gain

The data set is then partitioned based on the values this maximum information gain attribute can take (e.g. sunny, overcast, and rain). The partitions become the outgoing branches from the root node. These branches terminate in new unlabeled nodes. The maximum information gain attribute is removed from these partitions and is not used again to make any more splits from these new unlabeled nodes. An attribute can only be used as a decision variable once on a given path in a tree but can be used multiple times in an entire tree.

Step 5: Repeat Steps 1-4 for Each Branch of the Tree Using the Relevant Partition

We stop when:

  1. Every instance in the partition is part of the same class. Return a decision tree that is made up of a leaf node and labeled with the class of the instances.
  2. There are no more attributes left. Return a decision tree that is made up of a leaf node labeled with the class that comprised the majority of the instances.
  3. There are no instances in the partition. Return a decision tree that is made up of a leaf node labeled with the most common class in the parent node.
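
Putting Steps 1 through 5 and these stopping rules together, a compact recursive sketch of ID3 might look like the following (this is an illustrative version with assumed data structures, not the implementation used for the experiments in this project):

import math
from collections import Counter

def entropy_of_labels(labels):
    # Entropy of a list of class labels
    total = len(labels)
    return -sum((count / total) * math.log2(count / total)
                for count in Counter(labels).values())

def id3(instances, labels, attributes, parent_majority=None):
    # instances: list of dicts mapping attribute name -> attribute value
    # labels: class labels aligned with instances
    # attributes: attribute names still available for splitting
    if not labels:                                     # stopping rule 3: empty partition
        return parent_majority
    majority = Counter(labels).most_common(1)[0][0]
    if len(set(labels)) == 1:                          # stopping rule 1: pure partition
        return labels[0]
    if not attributes:                                 # stopping rule 2: no attributes left
        return majority

    def remaining_entropy(attribute):
        # Weighted entropy left over after splitting on this attribute
        rem = 0.0
        for value in set(inst[attribute] for inst in instances):
            subset = [lab for inst, lab in zip(instances, labels) if inst[attribute] == value]
            rem += (len(subset) / len(labels)) * entropy_of_labels(subset)
        return rem

    # Maximum information gain is the same as minimum remaining entropy
    best = min(attributes, key=remaining_entropy)

    tree = {best: {}}
    for value in set(inst[best] for inst in instances):
        subset_instances = [inst for inst in instances if inst[best] == value]
        subset_labels = [lab for inst, lab in zip(instances, labels) if inst[best] == value]
        remaining_attributes = [a for a in attributes if a != best]
        tree[best][value] = id3(subset_instances, subset_labels, remaining_attributes, majority)
    return tree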

Here is the final decision tree:

id3-2

Source: (Mitchell, 1997)

Information Gain Ratio

An issue with using information gain alone as the selection criterion is that it has a strong bias towards selecting attributes that contain a large number of different values (which can lead to trees that overfit the data). One way to rectify this is to use what is called the gain ratio. It acts as a kind of normalization factor.

The gain ratio defines an additional term that penalizes attributes that create a large number of partitions. The gain ratio chooses attributes based on the ratio between their gain and their intrinsic information content. It represents the proportion of information generated that is useful for classification. The attribute with the highest gain ratio is selected as the attribute that splits the data at that node.

Gain Ratio of an Attribute = Information Gain of the Attribute / Split Information of the Attribute

The clearest way to demonstrate the gain ratio is to show an example. Going back to our original data set, suppose we partition the data set based on the outlook attribute. This partition will result in three subsets: the “sunny” subset (frequency count of 5), the “overcast” subset (frequency count of 4), and the “rainy” subset (frequency count of 5).

Split Information of Outlook = -[(5/14) * log2(5/14) + (4/14) * log2(4/14) + (5/14) * log2(5/14)] = 1.577

The gain was calculated as 0.246, so the gain ratio is 0.246 / 1.577 = 0.156.

For humidity, we can also calculate the gain ratio.

Split Information of Humidity = -[(7/14) * log2(7/14) + (7/14) * log2(7/14)] = 1

The gain was calculated as 0.1513, so the gain ratio is 0.1513 / 1 = 0.1513.
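
Here is a short sketch that reproduces the split information and gain ratio calculations above. The frequency counts (5/4/5 for outlook, 7/7 for humidity) and the information gains (0.246 and 0.1513) are the values already computed in this section.

from math import log

def split_information(value_counts):
    """Split information in bits, given the frequency count of each attribute value."""
    total = sum(value_counts)
    return -sum((count / total) * log(count / total, 2) for count in value_counts)

outlook_split_info = split_information([5, 4, 5])   # ~1.577
humidity_split_info = split_information([7, 7])     # 1.0

print(0.246 / outlook_split_info)    # gain ratio for outlook, ~0.156
print(0.1513 / humidity_split_info)  # gain ratio for humidity, ~0.1513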

Pruning

One common way to resolve the issue of overfitting is to do reduced error pruning (Mitchell, 1997). 

  • Training, validation, and test sets are used.
  • A portion of the original data set is withheld as a validation set. This validation set is not used during the training phase of the tree (i.e. while the decision tree is getting built).
  • Once training is finished and the decision tree is built, the tree is tested on the validation set.
  • Each node is considered as a candidate for pruning. Pruning means removing the subtree at that node, making it a leaf, and assigning the most common class at the node.
  • A node is removed if the resulting pruned tree has a classification accuracy (as tested against the validation set) that is at least as good as the accuracy of the original decision tree.
    • If replacing the subtree by a leaf results in improved classification accuracy, replace the subtree with that leaf. 
  • Pruning continues until no further improvement in the classification accuracy occurs.
  • This process happens in a depth-first manner, starting from the leaves.

Return to Table of Contents

Iterative Dichotomiser 3 Algorithm Design

The ID3 algorithm was implemented from scratch with and without reduced error pruning. The abalone, car evaluation, and image segmentation data sets were then preprocessed to meet the input requirements of the algorithms.

Five-Fold Cross-Validation

In this project, for the pruned version of ID3, 10% of the data was pulled out of the original data set to be used as a validation set. The remaining 90% of the data was used for five-fold stratified cross-validation to evaluate the performance of the models. For the unpruned version, the entire data set was used for five-fold stratified cross-validation.

validationset

Required Data Set Format for ID3

Required Data Set Format for ID3-Unpruned

  • Columns (0 through N)
    • 0: Class
    • 1: Attribute 1
    • 2: Attribute 2
    • 3: Attribute 3
    • ...
    • N: Attribute N

Required Data Set Format for ID3-Pruned

  • Columns (0 through N)
    • 0: Class
    • 1: Attribute 1
    • 2: Attribute 2
    • 3: Attribute 3
    • ...
    • N: Attribute N

Abalone Data Set

The original abalone data set contains 4177 instances, 8 attributes, and 29 classes (Waugh, 1995). The purpose of this data set is to predict the age of an abalone from physical measurements.

Modification of Attribute Values

All attribute values except for the sex attribute were discretized by binning each continuous value according to the quartile it fell into. ID3 splits on categorical values, and coarse quartile bins keep the number of branches per attribute small, in the spirit of Occam’s Razor.

Some of the classes only had one or two instances, so the class values (ages of the abalones) were converted into three classes of approximately equal size:

  • Young = 0
  • Middle-Aged = 1
  • Old = 2
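
Here is a minimal sketch of how this kind of preprocessing could be done with pandas. The file name, the assumption that the file has a header row with 'Class' and 'Sex' columns, and the use of pd.qcut are all illustrative; this is not the exact script used to generate the preprocessed file for this project. The sex attribute is left as-is.

import pandas as pd

# Assumed file layout: header row, class (ring count) in a column named 'Class'
df = pd.read_csv('abalone.txt')

# Discretize every continuous attribute into quartile bins
# (0 = lowest 25%, ..., 3 = highest 25%)
for column in df.columns:
    if column not in ('Class', 'Sex'):
        df[column] = pd.qcut(df[column], q=4, labels=False, duplicates='drop')

# Convert the ring counts (ages) into three classes of roughly equal size:
# 0 = Young, 1 = Middle-Aged, 2 = Old
df['Class'] = pd.qcut(df['Class'], q=3, labels=False)

df.to_csv('abalone_preprocessed.txt', index=False)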

Missing Data

There were no missing attribute values.

Car Evaluation Data Set

The original car evaluation data set contains 6 attributes and 1728 instances (Bohanec, 1997). The purpose of the data set is to predict car acceptability based on price, comfort, and technical specifications.

Modification of Attribute Values

No modification of attribute values was necessary since all attributes were discrete.

Missing Data

There were no missing attribute values.

Image Segmentation Data Set

This data set is used for classification. It contains 19 attributes, 210 instances, and 7 classes (Brodley, 1990). This data set was created from a database of seven outdoor images.

Modification of Attribute Values

All attribute values were made discrete by putting the continuous values into bins based on the quartile the value fell into.

The class values were already discrete and were left as-is.

Missing Data

There were no missing attribute values.

Return to Table of Contents

Iterative Dichotomiser 3 Algorithm in Python, Coded From Scratch

Here are the preprocessed data sets:

Here is the code that parses the input file. Don’t be scared by how long all this code is. I include a lot of comments so that you know what is going on (just copy and paste all these programs into your favorite IDE, and you are good to go!):

import csv # Library to handle csv-formatted files

# File name: parse.py
# Author: Addison Sears-Collins
# Date created: 7/6/2019
# Python version: 3.7
# Description: Used for parsing the input data file

def parse(filename):
    """
    Parameters: 
        filename: Name of a file
    Returns: 
        data: Information on the attributes and the data as a list of 
              dictionaries. Each instance is a different dictionary.
    """

    # Initialize an empty list named 'data'
    data = []

    # Open the file in READ mode.
    # The file object is named 'file'
    with open(filename, 'r') as file:

        # Convert the file object named file to a csv.reader object. Save the
        # csv.reader object as csv_file
        csv_file = csv.reader(file)

        # Return the current row (first row) and advance the iterator to the
        # next row. Since the first row contains the attribute names (headers),
        # save them in a list called headers
        headers = next(csv_file)

        # Extract each of the remaining data rows one row at a time
        for row in csv_file:
            # append method appends an element to the end of the list
            # The element that is appended is a dictionary.
            # A dictionary contains search key-value pairs, analogous to
            # word-definition in a regular dictionary.
            # In this case, each instance is a separate dictionary.
            # The zip method joins two lists together so that we have
            # attributename(header)-value(row) pairs for each instance
            # in the data set
            data.append(dict(zip(headers, row)))

    return data

##Used for debugging
#name_of_file =  "abalone.txt" 
#data = parse(name_of_file)
#print(*data, sep = "\n")
#print()
#print(str(len(data)))

Here is the code for five-fold stratified cross-validation:

import random
from collections import Counter # Used for counting

# File name: five_fold_stratified_cv.py
# Author: Addison Sears-Collins
# Date created: 7/7/2019
# Python version: 3.7
# Description: Implementation of five-fold stratified cross-validation
# Divide the data set into five random groups. Make sure 
# that the proportion of each class in each group is roughly equal to its 
# proportion in the entire data set.

# Required Data Set Format for Classification Problems:
# Columns (0 through N)
# 0: Class
# 1: Attribute 1 
# 2: Attribute 2
# 3: Attribute 3 
# ...
# N: Attribute N

def get_five_folds(instances):
    """
    Parameters:
        instances: A list of dictionaries where each dictionary is an instance. 
            Each dictionary contains attribute:value pairs 
    Returns: 
        fold0, fold1, fold2, fold3, fold4
        Five folds whose class frequency distributions are 
        each representative of the entire original data set (i.e. Five-Fold 
        Stratified Cross Validation)
    """
    # Create five empty folds
    fold0 = []
    fold1 = []
    fold2 = []
    fold3 = []
    fold4 = []

    # Shuffle the data randomly
    random.shuffle(instances)

    # Generate a list of the unique class values and their counts
    classes = []  # Create an empty list named 'classes'

    # For each instance in the list of instances, append the value of the class
    # to the end of the classes list
    for instance in instances:
        classes.append(instance['Class'])

    # Create a list of the unique classes
    unique_classes = list(Counter(classes).keys())

    # For each unique class in the unique class list
    for uniqueclass in unique_classes:

        # Initialize the counter to 0
        counter = 0
        
        # Go through each instance of the data set and find instances that
        # are part of this unique class. Distribute them among one
        # of five folds
        for instance in instances:

            # If we have a match
            if uniqueclass == instance['Class']:

                # Allocate instance to fold0
                if counter == 0:

                    # Append this instance to the fold
                    fold0.append(instance)

                    # Increase the counter by 1
                    counter += 1

                # Allocate instance to fold1
                elif counter == 1:

                    # Append this instance to the fold
                    fold1.append(instance)

                    # Increase the counter by 1
                    counter += 1

                # Allocate instance to fold2
                elif counter == 2:

                    # Append this instance to the fold
                    fold2.append(instance)

                    # Increase the counter by 1
                    counter += 1

                # Allocate instance to fold3
                elif counter == 3:

                    # Append this instance to the fold
                    fold3.append(instance)

                    # Increase the counter by 1
                    counter += 1

                # Allocate instance to fold4
                else:

                    # Append this instance to the fold
                    fold4.append(instance)

                    # Reset the counter to 0
                    counter = 0

    # Shuffle the folds
    random.shuffle(fold0)
    random.shuffle(fold1)
    random.shuffle(fold2)
    random.shuffle(fold3)
    random.shuffle(fold4)

    # Return the folds
    return  fold0, fold1, fold2, fold3, fold4

Here is the ID3 code:

from node import Node # Import the Node class from the node.py file
from math import log # We need this to compute log base 2
from collections import Counter # Used for counting

# File name: id3.py
# Author: Addison Sears-Collins
# Date created: 7/5/2019
# Python version: 3.7
# Description: Iterative Dichotomiser 3 algorithm

# Required Data Set Format for Classification Problems:
# Columns (0 through N)
# 0: Class
# 1: Attribute 1 
# 2: Attribute 2
# 3: Attribute 3 
# ...
# N: Attribute N

def ID3(instances, default):
    """
    Parameters:
      instances: A list of dictionaries where each dictionary is an instance. 
                 Each dictionary contains attribute:value pairs 
                 e.g.: instances =
                   {'Class':'Play','Outlook':'Sunny','Temperature':'Hot'}
                   {'Class':'Don't Play','Outlook':'Rain','Temperature':'Cold'}
                   {'Class':'Play','Outlook':'Overcast','Temperature':'Hot'}
                   ...
                   etc.
                 The first attribute:value pair is the 
                 target variable (i.e. the class of that instance)
      default: The default class label (e.g. 'Play')
    Returns:
      tree: An object of the Node class
    """    
    # The len method returns the number of items in the list
    # If there are no more instances left, return a leaf that is labeled with 
    # the default class
    if len(instances) == 0:
        return Node(default)

    classes = []  # Create an empty list named 'classes'

    # For each instance in the list of instances, append the value of the class
    # to the end of the classes list
    for instance in instances:
        classes.append(instance['Class'])

    # If all instances have the same class label or there is only one instance
    # remaining, create a leaf node labeled with that class. 
    # Counter(list) creates a tally of each element in the list. This tally is 
    # represented as an element:tally pair.
    if len(Counter(classes)) == 1 or len(classes) == 1:
        tree = Node(mode_class(instances))
        return tree

    # Otherwise, find the best attribute, the attribute that maximizes the gain 
    # ratio of the data set, to be the next decision node.
    else:
        # Find the name of the most informative attribute of the data set
        # e.g. "Outlook"
        best_attribute = most_informative_attribute(instances)

        # Initialize a tree with the most common class
        # e.g. "Play"
        tree = Node(mode_class(instances))

        # The most informative attribute becomes this decision node
        # e.g. "Outlook" becomes this node
        tree.attribute = best_attribute

        best_attribute_values = []

        # The branches of the node are the values of the best_attribute
        # e.g. "Sunny", "Overcast", "Rainy"
        # Go through each instance and create a list of the values of 
        # best_attribute
        for instance in instances:
            # Skip any instance that does not have a value for the best attribute
            if best_attribute in instance:
                best_attribute_values.append(instance[best_attribute])
        # Create a list of the unique best attribute values
        # Set is like a list except it extracts nonduplicate (unique) 
        # items from a list. 
        # In short, we create a list of the set of unique
        # best attribute values.
        tree.attribute_values = list(set(best_attribute_values))

        # Now we need to split the instances. We will create separate subsets
        # for each best attribute value. These become the child nodes
        # i.e. "Sunny", "Overcast", "Rainy" subsets
        for best_attr_value_i in tree.attribute_values:

            # Generate the subset of instances
            instances_i = []
            # Go through one instance at a time
            for instance in instances:
                # e.g. If this instance has "Sunny" as its best attribute value
                if instance[best_attribute] == best_attr_value_i:
                    instances_i.append(instance) #Add this instance to the list

            # Create a subtree recursively
            subtree = ID3(instances_i, mode_class(instances))

            # Initialize the values of the subtree
            subtree.instances_labeled = instances_i

            # Keep track of the state of the subtree's parent (i.e. tree)
            subtree.parent_attribute = best_attribute # parent node
            subtree.parent_attribute_value = best_attr_value_i # branch name

            # Assign the subtree to the appropriate branch
            tree.children[best_attr_value_i] = subtree

        # Return the decision tree
        return tree


def mode_class(instances):
    """
    Parameters: 
      instances: A list of dictionaries where each dictionary is an instance. 
        Each dictionary contains attribute:value pairs 
    Returns:
      Name of the most common class (e.g. 'Don't Play')
    """

    classes = []  # Create an empty list named 'classes'

    # For each instance in the list of instances, append the value of the class
    # to the end of the classes list
    for instance in instances:
        classes.append(instance['Class'])

    # The 1 ensures that we get the top most common class
    # The [0][0] ensures we get the name of the class label and not the tally
    # Return the name of the most common class of the instances
    return Counter(classes).most_common(1)[0][0]

def prior_entropy(instances):
    """
    Calculate the entropy of the data set with respect to the actual class
    prior to splitting the data.
    Parameters:
      instances: A list of dictionaries where each dictionary is an instance. 
        Each dictionary contains attribute:value pairs 
    Returns:
      Entropy value in bits
    """
    # For each instance in the list of instances, append the value of the class
    # to the end of the classes list    
    classes = []  # Create an empty list named 'classes'

    for instance in instances:
        classes.append(instance['Class'])
    counter = Counter(classes)

    # If all instances have the same class, the entropy is 0
    if len(counter) == 1:
        return 0
    else:
    # Compute the weighted sum of the logs of the probabilities of each 
    # possible outcome 
        entropy = 0
        for c, count_of_c in counter.items():
            probability = count_of_c / len(classes)
            entropy += probability * (log(probability, 2))
        return -entropy

def entropy(instances, attribute, attribute_value):
    """
    Calculate the entropy for a subset of the data filtered by attribute value
    Parameters:
      instances: A list of dictionaries where each dictionary is an instance. 
        Each dictionary contains attribute:value pairs 
      attribute: The name of the attribute (e.g. 'Outlook')
      attribute_value: The value of the attribute (e.g. 'Sunny')
    Returns:
      Entropy value in bits
    """
    # For each instance in the list of instances, append the value of the class
    # to the end of the classes list    
    classes = []  # Create an empty list named 'classes'

    for instance in instances:
        if instance[attribute] == attribute_value:
            classes.append(instance['Class'])
    counter = Counter(classes)

    # If all instances have the same class, the entropy is 0
    if len(counter) == 1:
        return 0
    else:
    # Compute the weighted sum of the logs of the probabilities of each 
    # possible outcome 
        entropy = 0
        for c, count_of_c in counter.items():
            probability = count_of_c / len(classes)
            entropy += probability * (log(probability, 2))
        return -entropy

def gain_ratio(instances, attribute):
    """
    Calculate the gain ratio if we were to split the data set based on the values
    of this attribute.
    Parameters:
      instances: A list of dictionaries where each dictionary is an instance. 
        Each dictionary contains attribute:value pairs 
      attribute: The name of the attribute (e.g. 'Outlook')
    Returns:
      The gain ratio
    """
    # Record the entropy of the combined set of instances
    priorentropy = prior_entropy(instances)

    values = []

    # Create a list of the attribute values for each instance
    for instance in instances:
        values.append(instance[attribute])
    counter = Counter(values) # Store the frequency counts of each attribute value

    # The remaining entropy if we were to split the instances based on this attribute
    # This is a weighted entropy score sum
    remaining_entropy = 0

    # This variable is used for the gain ratio calculation
    split_information = 0

    # items() method returns a list of all dictionary key-value pairs
    for attribute_value, attribute_value_count in counter.items():
        probability = attribute_value_count/len(values)
        remaining_entropy += (probability * entropy(
            instances, attribute, attribute_value))
        split_information += probability * (log(probability, 2))

    information_gain = priorentropy - remaining_entropy

    split_information = -split_information

    gainratio = None

    if split_information != 0:
        gainratio = information_gain / split_information
    else:
        gainratio = -1000

    return gainratio

def most_informative_attribute(instances):
    """
    Choose the attribute that provides the most information if you were to
    split the data set based on that attribute's values. This attribute is the 
    one that has the highest gain ratio.
    Parameters:
      instances: A list of dictionaries where each dictionary is an instance. 
        Each dictionary contains attribute:value pairs 
      attribute: The name of the attribute (e.g. 'Outlook')
    Returns:
      The name of the most informative attribute
    """
    selected_attribute = None
    max_gain_ratio = -1000

    # instances[0].items() extracts the first instance in instances
    # for key, value iterates through each key-value pair in the first
    # instance in instances
    # In short, this code creates a list of the attribute names
    attributes = [key for key, value in instances[0].items()]
    # Remove the "Class" attribute name from the list
    attributes.remove('Class')

    # For every attribute in the list of attributes
    for attribute in attributes:
        # Calculate the gain ratio and store that value
        gain = gain_ratio(instances, attribute)

        # If we have a new most informative attribute
        if gain > max_gain_ratio:
            max_gain_ratio = gain
            selected_attribute = attribute

    return selected_attribute

def accuracy(trained_tree, test_instances):
    """
    Parameters:
        trained_tree: A tree that has already been trained
        test_instances: A set of test instances
    Returns:
        Classification accuracy (# of correct predictions/# of predictions) 
    """
    # Set the counter to 0
    no_of_correct_predictions = 0

    for test_instance in test_instances:
        if predict(trained_tree, test_instance) == test_instance['Class']:
            no_of_correct_predictions += 1

    return no_of_correct_predictions / len(test_instances)

def predict(node, test_instance):
    '''
    Parameters:
        node: A trained tree node
        test_instance: A single test instance
    Returns:
        Class value (e.g. "Play")
    '''
    # Stopping case for the recursive call.
    # If this is a leaf node (i.e. has no children)
    if len(node.children) == 0:
        return node.label
    # Otherwise, we are not yet on a leaf node.
    # Call predict method recursively until we get to a leaf node.
    else:
        # Extract the attribute name (e.g. "Outlook") from the node. 
        # Record the value of the attribute for this test instance into 
        # attribute_value (e.g. "Sunny")
        attribute_value = test_instance[node.attribute]

        # Follow the branch for this attribute value assuming we have 
        # an unpruned tree.
        if attribute_value in node.children and node.children[
            attribute_value].pruned == False:
            # Recursive call
            return predict(node.children[attribute_value], test_instance)

        # Otherwise, return the most common class
        # return the mode label of examples with other attribute values for the current attribute
        else:
            instances = []
            for attr_value in node.attribute_values:
                instances += node.children[attr_value].instances_labeled
            return mode_class(instances)

TREE = None
def prune(node, val_instances):
    """
    Prune the tree recursively, starting from the leaves
    Parameters:
        node: A tree that has already been trained
        val_instances: The validation set        
    """
    global TREE
    TREE = node

    def prune_node(node, val_instances):
        # If this is a leaf node
        if len(node.children) == 0:
            accuracy_before_pruning = accuracy(TREE, val_instances)
            node.pruned = True

            # If no improvement in accuracy, no pruning
            if accuracy_before_pruning >= accuracy(TREE, val_instances):
                node.pruned = False
            return

        for value, child_node in node.children.items():
            prune_node(child_node, val_instances)

        # Prune when we reach the end of the recursion
        accuracy_before_pruning = accuracy(TREE, val_instances)
        node.pruned = True

        if accuracy_before_pruning >= accuracy(TREE, val_instances):
            node.pruned = False

    prune_node(TREE, val_instances)

Here is the code for the nodes. This is needed in order to run ID3 (above).

# File name: node.py
# Author: Addison Sears-Collins
# Date created: 7/6/2019
# Python version: 3.7
# Description: Used for constructing nodes for a tree

class Node:
  
  # Method used to initialize a new node's data fields with initial values
  def __init__(self, label):

    # Declaring variables specific to this node
    self.attribute = None  # Attribute (e.g. 'Outlook')
    self.attribute_values = []  # Values (e.g. 'Sunny')
    self.label = label   # Class label for the node (e.g. 'Play')
    self.children = {}   # Keeps track of the node's children
    
    # References to the parent node
    self.parent_attribute = None
    self.parent_attribute_value = None

    # Used for pruned trees
    self.pruned = False  # Is this tree pruned? 
    self.instances_labeled = []

Here is the code that displays the results. This is the driver program:

import id3
import parse
import random
import five_fold_stratified_cv
from matplotlib import pyplot as plt


# File name: results.py
# Author: Addison Sears-Collins
# Date created: 7/6/2019
# Python version: 3.7
# Description: Results of the Iterative Dichotomiser 3 runs
# This source code is the driver for the entire program

# Required Data Set Format for Classification Problems:
# Columns (0 through N)
# 0: Class
# 1: Attribute 1 
# 2: Attribute 2
# 3: Attribute 3 
# ...
# N: Attribute N

ALGORITHM_NAME = "Iterative Dichotomiser 3"

def main():

    print("Welcome to the " +  ALGORITHM_NAME + " Program!")
    print()

    # Enter the name of your input file
    #file_name = 'car.txt'
    file_name = input("Enter the name of your input file (e.g. car.txt): ") 
    

    # Show functioning of the program
    #trace_runs_file = 'car_id3_trace_runs.txt'
    trace_runs_file = input(
       "Enter the name of your trace runs file (e.g. car_id3_trace_runs.txt): ")     

    # Save the output graph of the results
    #imagefile = 'car_id3_results.png'
    imagefile = input(
        "Enter the name of the graphed results file (e.g. foo.png): ")     

    # Open a new file to save trace runs
    outfile_tr = open(trace_runs_file,"w") 

    outfile_tr.write("Welcome to the " +  ALGORITHM_NAME + " Program!" + "\n")
    outfile_tr.write("\n")

    data = parse.parse(file_name)
    pruned_accuracies_avgs = []
    unpruned_accuracies_avgs = []

    # Shuffle the data randomly
    random.shuffle(data)

    # This variable is used for the final graph. Places
    # upper limit on the x-axis.
    # 10% of the data is pulled out for the validation set
    # 20% of that set is used for testing in the five-fold
    # stratified cross-validation
    # Round up to the nearest value of 10
    upper_limit = (round(len(data) * 0.9 * 0.8) - round(
        len(data) * 0.9 * 0.8) % 10) + 10
    #print(str(upper_limit)) # Use for debugging
    if upper_limit <= 10:
        upper_limit = 50

    # Get the most common class in the data set.
    default = id3.mode_class(data)

    # Pull out 10% of the data to be used as a validation set
    # The remaining 90% of the data is used for cross validation.
    validation_set = data[: 1*len(data)//10]
    data = data[1*len(data)//10 : len(data)]

    # Generate the five stratified folds
    fold0, fold1, fold2, fold3, fold4 = five_fold_stratified_cv.get_five_folds(
        data)

    # Generate lists to hold the training and test sets for each experiment
    testset = []
    trainset = []

    # Create the training and test sets for each experiment
    # Experiment 0
    testset.append(fold0)
    trainset.append(fold1 + fold2 + fold3 + fold4)

    # Experiment 1
    testset.append(fold1)
    trainset.append(fold0 + fold2 + fold3 + fold4)

    # Experiment 2
    testset.append(fold2)
    trainset.append(fold0 + fold1 + fold3 + fold4)

    # Experiment 3
    testset.append(fold3)
    trainset.append(fold0 + fold1 + fold2 + fold4)
    
    # Experiment 4
    testset.append(fold4)
    trainset.append(fold0 + fold1 + fold2 + fold3)

    step_size = len(trainset[0])//20

    for length in range(10, upper_limit, step_size):
        print('Number of Training Instances:', length)
        outfile_tr.write('Number of Training Instances:' + str(length) +"\n")
        pruned_accuracies = []
        unpruned_accuracies = []

        # Run all 5 experiments for 5-fold stratified cross-validation
        for experiment in range(5):

            # Each experiment has a training and testing set that have been 
            # preassigned.
            train = trainset[experiment][: length]
            test = testset[experiment]

            # Pruned
            tree = id3.ID3(train, default)
            id3.prune(tree, validation_set)
            acc = id3.accuracy(tree, test)
            pruned_accuracies.append(acc)

            # Unpruned
            tree = id3.ID3(train, default)
            acc = id3.accuracy(tree, test)
            unpruned_accuracies.append(acc) 
        
        # Calculate and store the average classification 
        # accuracies for each experiment
        avg_pruned_accuracies = sum(pruned_accuracies) / len(pruned_accuracies)
        avg_unpruned_accuracies = sum(unpruned_accuracies) / len(unpruned_accuracies)

        print("Classification Accuracy for Pruned Tree:", avg_pruned_accuracies) 
        print("Classification Accuracy for Unpruned Tree:", avg_unpruned_accuracies)
        print()
        outfile_tr.write("Classification Accuracy for Pruned Tree:" + str(
            avg_pruned_accuracies) + "\n") 
        outfile_tr.write("Classification Accuracy for Unpruned Tree:" + str(
                avg_unpruned_accuracies) +"\n\n")

        # Record the accuracies, so we can plot them later
        pruned_accuracies_avgs.append(avg_pruned_accuracies)
        unpruned_accuracies_avgs.append(avg_unpruned_accuracies) 
    
    # Close the file
    outfile_tr.close()

    plt.plot(range(10, upper_limit, step_size), pruned_accuracies_avgs, label='pruned tree')
    plt.plot(range(10, upper_limit, step_size), unpruned_accuracies_avgs, label='unpruned tree')
    plt.xlabel('Number of Training Instances')
    plt.ylabel('Classification Accuracy on Test Instances')
    plt.grid(True)
    plt.title("Learning Curve for " +  str(file_name))
    plt.legend()
    plt.savefig(imagefile) 
    plt.show()
    

main()

Return to Table of Contents

Iterative Dichotomiser 3 Output

Here are the trace runs:

Here are the results:

Abalone Data Set

abalone_id3_results

Car Evaluation Data Set

car_id3_results

Image Segmentation Data Set

segmentation_id3_results

Analysis

Abalone Data Set

While the overall classification accuracies on the data set were only ~60%, the data show the impact that pruning a decision tree can have on improving prediction performance. The pruned tree performed better than the unpruned tree on the same set of test instances, irrespective of how many training instances the decision trees were trained on. These results suggest that smaller decision trees can generalize better than larger trees and can more accurately classify new test instances. This pruning impact was more pronounced on this data set than any of the other data sets evaluated in this project.

Car Evaluation Data Set

Classification accuracy for both the pruned and unpruned trees was high on this data set, plateauing at ~92%. This performance was the best of any of the data sets examined in this project.

When the decision trees were trained on a relatively small number of training instances, the pruned trees outperformed the unpruned trees. However, beyond 600 training instances, unpruned trees began to have higher classification accuracy. 

It is not clear if the superior accuracy of the unpruned tree algorithm on large numbers of training instances was due to overfitting on the training data or if there was actually an improvement in performance. More training data would be needed to examine if this effect remains consistent for larger numbers of training instances.

Image Segmentation Data Set

For the segmentation data set, neither the unpruned nor the pruned trees outperformed the other consistently. Overall classification accuracy quickly improved for both ID3 algorithms when the decision trees were trained on more training instances. Performance plateaued at ~72%, which was not as good as the car evaluation data set but was better than the abalone data set.

The results show that, in some cases, creating smaller decision trees by pruning the branches does not lead to improved classification performance on new, unseen test instances.

Summary and Conclusions

We can conclude that decision trees can create a convenient set of if-then-else decision rules that can be used to classify new unseen test instances. Classification accuracy for both the pruned and unpruned ID3 algorithms was >60% for all data sets, reaching a peak of ~92% on the car evaluation data set.

Neither type of tree, unpruned or pruned, consistently outperformed the other in terms of classification accuracy. The results suggest the impact of pruning depends highly on the data set being evaluated. On the abalone data set, pruning the tree reduced overfitting and led to improved performance. On the car evaluation data set, the performance was mixed. On the image segmentation data set, performance was relatively the same for both ID3-unpruned and ID3-pruned.

In a real-world setting, the overhead associated with reduced error pruning would need to be balanced against the increased speed and simplicity with which new unseen test instances could be classified. Future work would need to compare the performance of the ID3-pruned and ID3-unpruned algorithms on other classification data sets.

Return to Table of Contents

References

Alpaydin, E. (2014). Introduction to Machine Learning. Cambridge, Massachusetts: The MIT Press.

Bohanec, M. (1997, June 1). Car Evaluation Data Set. Retrieved from UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Car+Evaluation

Brodley, C. (1990, November 1). Image Segmentation Data Set. Retrieved from UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Image+Segmentation

James, G. (2013). An Introduction to Statistical Learning: With Applications in R. New York: Springer.

Kelleher, J. D., Namee, B., & Arcy, A. (2015). Fundamentals of Machine Learning for Predictive Data Analytics. Cambridge, Massachusetts: The MIT Press.

Mitchell, T. M. (1997). Machine Learning. New York: McGraw-Hill.

Quinlan, J. R. (1986). Induction of Decision Trees. Machine Learning, 1(1), 81-106.

Waugh, S. (1995, December 1). Abalone Data Set. Retrieved from UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Abalone

Return to Table of Contents