## How to Choose an Optimal Learning Rate for Gradient Descent

One of the challenges of gradient descent is choosing the optimal value for the learning rate, eta (η). The learning rate is perhaps the most important hyperparameter (i.e. a parameter that must be chosen by the programmer before training begins) to tune (Goodfellow 2016).

If you choose a learning rate that is too small, the gradient descent algorithm might take a very long time to find the minimum value of the error function. This defeats the purpose of gradient descent, which is to provide a computationally efficient method for finding the optimal solution.

On the other hand, if you choose a learning rate that is too large, you might overshoot the minimum value of the error function and may never reach the optimal solution.
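To see both failure modes concretely, here is a minimal sketch of gradient descent on the one-dimensional error function f(w) = w², run with a too-small, a reasonable, and a too-large learning rate (the specific values are illustrative, not recommendations):

```python
# Gradient descent on f(w) = w**2, whose gradient is f'(w) = 2*w
# and whose minimum is at w = 0.

def gradient_descent(eta, w=10.0, steps=50):
    for _ in range(steps):
        w = w - eta * 2 * w  # update rule: w := w - eta * f'(w)
    return w

print(gradient_descent(0.001))  # too small: after 50 steps, still far from 0
print(gradient_descent(0.1))    # reasonable: converges very close to 0
print(gradient_descent(1.1))    # too large: each step overshoots and diverges
```

With eta = 0.001 the iterate barely moves; with eta = 1.1 the update multiplies w by -1.2 each step, so the error grows without bound.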

Before we get into how to choose an optimal learning rate, it should be emphasized that no value of the learning rate will guarantee convergence to the minimum value (assumed global) of the error function.

Algorithms like logistic regression are based on gradient descent and are therefore what are known as “hill climbers.” Consequently, on non-convex problems (where the graph of the error function contains local minima in addition to the global minimum, as opposed to convex problems, where every local minimum is the global minimum), they will most likely get stuck in a local minimum.

So with that background, we are ready to take a look at some options for selecting the optimal learning rate (eta) for gradient descent.

## Choose a Fixed Learning Rate

The standard gradient descent procedure uses a fixed learning rate (e.g. 0.01) that is determined by trial and error. For example:

“Typical values for a neural network with standardized inputs (or inputs mapped to the (0,1) interval) are less than 1 and greater than 10⁻⁶ but these should not be taken as strict ranges and greatly depend on the parametrization of the model. A default value of 0.01 typically works for standard multi-layer neural networks but it would be foolish to rely exclusively on this default value. If there is only time to optimize one hyper-parameter and one uses stochastic gradient descent, then this is the hyper-parameter that is worth tuning.”

Bengio (2013)

## Use Learning Rate Annealing

Learning rate annealing entails starting with a high learning rate and then gradually reducing the learning rate linearly during training. The learning rate can decrease to a value close to 0.

The idea behind this method is to quickly descend to a range of acceptable weights, and then do a deeper dive within this acceptable range.
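A minimal sketch of such a linear annealing schedule (the start and end values here are assumed for illustration):

```python
def annealed_eta(step, total_steps, eta_start=0.1, eta_end=1e-4):
    """Linearly interpolate the learning rate from eta_start down to eta_end
    over the course of training."""
    frac = step / total_steps
    return eta_start + frac * (eta_end - eta_start)

# Early in training the rate is high; by the end it is close to 0.
print(annealed_eta(0, 100))    # start of training: 0.1
print(annealed_eta(50, 100))   # halfway: roughly 0.05
print(annealed_eta(100, 100))  # end of training: near 0.0001
```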

## Use Cyclical Learning Rates

Cyclical learning rates have been proposed:

“Instead of monotonically decreasing the learning rate, this method lets the learning rate cyclically vary between reasonable boundary values.”

Smith (2017)
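A minimal sketch of the triangular cyclical schedule from Smith’s paper (the boundary values and step size here are illustrative defaults, not values from the paper):

```python
def cyclical_eta(step, base_lr=0.001, max_lr=0.006, step_size=2000):
    """Triangular cyclical learning rate: eta ramps linearly from base_lr
    up to max_lr and back down, completing one cycle every 2 * step_size
    iterations."""
    cycle = step // (2 * step_size)
    x = abs(step / step_size - 2 * cycle - 1)  # 1 at cycle edges, 0 at peak
    return base_lr + (max_lr - base_lr) * max(0.0, 1 - x)

print(cyclical_eta(0))     # cycle start: base_lr
print(cyclical_eta(2000))  # mid-cycle peak: max_lr
print(cyclical_eta(4000))  # cycle end: back to base_lr
```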

## Use an Adaptive Learning Rate

Another option is to use a learning rate that adapts based on the error output of the model. Here is what the experts say about adaptive learning rates.

Reed (1999) notes on page 72 of his book Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks that an adaptable learning rate is preferred over a fixed learning rate:

“The point is that, in general, it is not possible to calculate the best learning rate a priori. The same learning rate may not even be appropriate in all parts of the network. The general fact that the error is more sensitive to changes in some weights than in others makes it useful to assign different learning rates to each weight”

Reed (1999)

Melin et al. (2010) writes:

“First, the initial network output and error are calculated. At each epoch new weights and biases are calculated using the current learning rate. New outputs and errors are then calculated.

As with momentum, if the new error exceeds the old error by more than a predefined ratio (typically 1.04), the new weights and biases are discarded. In addition, the learning rate is decreased. Otherwise, the new weights, etc. are kept. If the new error is less than the old error, the learning rate is increased (typically by multiplying by 1.05).”

Melin et al. (2010)
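The rule Melin et al. describe can be sketched as a small update function. The 1.04 rejection ratio and 1.05 growth factor come from the quote above; the shrink factor applied when a step is rejected is an assumed value, since the quote does not specify one:

```python
def adapt_eta(eta, new_error, old_error, ratio=1.04, up=1.05, down=0.7):
    """Adaptive learning rate rule: returns (new_eta, keep_weights).

    If the new error exceeds the old error by more than `ratio`, the new
    weights are discarded and eta shrinks (the factor `down` is assumed).
    If the error decreased, the weights are kept and eta grows by `up`.
    """
    if new_error > ratio * old_error:
        return eta * down, False  # discard new weights, decrease eta
    if new_error < old_error:
        return eta * up, True     # keep new weights, increase eta
    return eta, True              # error roughly unchanged: keep eta as-is

eta, keep = adapt_eta(0.1, new_error=2.0, old_error=1.0)
print(eta, keep)   # eta shrinks, step rejected
eta, keep = adapt_eta(0.1, new_error=0.5, old_error=1.0)
print(eta, keep)   # eta grows, step kept
```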

## References

Bengio, Y. Practical Recommendations for Gradient-based Training of Deep Architectures. In K.-R. Muller, G. Montavon, and G. B. Orr, editors, Neural Networks: Tricks of the Trade. Springer, 2013.

Goodfellow, Ian, Yoshua Bengio, and Aaron Courville. Deep Learning. Cambridge, Massachusetts: The MIT Press, 2016. Print.

Melin, Patricia, Janusz Kacprzyk, and Witold Pedrycz. Soft Computing for Recognition Based on Biometrics. Berlin Heidelberg: Springer, 2010. Print.

Reed, Russell D., and Robert J. Marks. Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks. Cambridge, Mass: MIT Press, 1999. Print.

Smith, Leslie N. Cyclical Learning Rates for Training Neural Networks. In 2017 IEEE Winter Conference on Applications of Computer Vision (WACV), pages 464–472. IEEE, 2017.

## Propositional Rules, Logical Decision Trees, and the Satisfiability Problem

Here is a question I received the other day:

Can the propositional rules generated by logical decision trees be recast as a satisfiability problem, making decision trees NP-complete?

I’m going to break my answer down into segments.

# What are Propositional Rules?

In order to understand what propositional rules are, it is important to understand what a proposition is. A proposition is the fundamental building block of logic.

A proposition is a sentence that is either true or false, but not both. If a proposition is true, it is denoted by the capital letter T; if it is false, it is denoted by the capital letter F. Below are some examples of propositions:

• The sky is blue. (T)
• 10 + 10 = 20. (T)
• The letter “p” is a vowel. (F)

Given two propositions p and q, we can create an “if p then q” statement that is known as an implication. Implications can be written as p → q. Here is an example of an implication:

• p = The relative humidity is greater than 70%
• q = It is raining tonight
• p → q = “If the relative humidity is greater than 70% then it is raining tonight.”

As seen in the real-world example above, propositional rules can be created by combining propositions in different ways.
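To make the semantics concrete, here is a minimal Python sketch of the truth table for an implication; `implies` is just an illustrative helper name:

```python
# An implication p -> q is false only when the premise p is true and the
# conclusion q is false; it is logically equivalent to (not p) or q.
def implies(p, q):
    return (not p) or q

for p in (True, False):
    for q in (True, False):
        print(p, q, implies(p, q))
```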

# What is the Satisfiability Problem?

Given a propositional formula over a set of boolean variables, for example x = (x1, x2, x3, x4, x5), can we find an assignment of truth values to these variables that makes the formula evaluate to true? This question is the satisfiability problem.

Consider, for example, the following formula:

• F = x1 AND x2 AND x3 AND x4 AND x5

The formula denoted as F above will evaluate to TRUE if all the variables in the vector x are assigned true. Therefore, the formula is satisfiable.
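A brute-force satisfiability check can be sketched in a few lines of Python; `find_satisfying_assignment` is a hypothetical helper name, not a library function:

```python
from itertools import product

def find_satisfying_assignment(formula, n):
    """Try all 2**n truth assignments for n boolean variables; return the
    first assignment that satisfies the formula, or None if none exists."""
    for assignment in product([False, True], repeat=n):
        if formula(assignment):
            return assignment
    return None

# F = x1 AND x2 AND x3 AND x4 AND x5, satisfied only by the all-True assignment.
print(find_satisfying_assignment(all, 5))  # (True, True, True, True, True)

# A contradiction like x1 AND NOT x1 has no satisfying assignment.
print(find_satisfying_assignment(lambda x: x[0] and not x[0], 2))  # None
```

Note the exponential cost: the loop may examine all 2^n assignments, which is exactly why brute force is infeasible for large n.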

This satisfiability problem was proven to be NP-complete by Stephen A. Cook.

# What does NP-Complete Mean?

Computational problems that belong to the class of NP-complete problems are those that have no known efficient (polynomial-time) solution algorithms. You can beat your head against a wall trying to find a fast solution to these sorts of problems. However, given a solution to one of these problems, you can verify that solution in polynomial time (i.e. quickly).

An example of an NP-complete problem is the Vertex Cover Problem some of you might have seen in an algorithms or data structures class.

# Can the Propositional Rules Generated by Logical Decision Trees Be Recast as a Satisfiability Problem, Making Decision Trees NP-Complete?

Yes, they can. Once we have the propositional rules, we then need to either find a variable assignment that satisfies them or report that no such assignment exists.

Let’s suppose we are dealing with binary decision trees, and we have three variables, a, b, and c. There are 2^n possible truth assignments that would need to be verified; with n = 3 variables, that is 2^3 = 8.
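A quick sketch of enumerating those assignments with Python’s standard library:

```python
from itertools import product

# All truth assignments for the three variables a, b, and c.
assignments = list(product([False, True], repeat=3))
print(len(assignments))  # 8, i.e. 2**3
```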

By the way, there is a famous paper by Hyafil and Rivest proving that optimal construction of decision trees is NP-complete.

# References

Cook, S. A. and Mitchell, D. G. “Finding Hard Instances of the Satisfiability Problem: A Survey.” In Satisfiability Problem: Theory and Applications (Piscataway, NJ, 1996) (Ed. D. Du, J. Gu, and P. M. Pardalos). Providence, RI: Amer. Math. Soc., pp. 1-17, 1997.

## My Master’s Thesis

Straight from the vault, my Master’s thesis: An Analysis of the Robustness and Relevance of Meteorological Triggers for Catastrophe Bonds.

# Abstract

Each year weather-related catastrophes account for an estimated United States dollars (USD) \$40 billion in damage across the world, although only a fraction of this risk of loss is insured. Losses from hurricanes in the United States have increased over the past several years to the extent that many insurance companies have become increasingly reluctant to insure in certain locations along the coast. Several insurance companies have become insolvent as a result of the active hurricane seasons of 2004 and 2005. In order to cope with this hurricane risk, some insurance and reinsurance firms have shifted part of their risk to the capital markets in the form of catastrophe bonds.

Two problems are observed with catastrophe bonds based on parametric triggers (e.g. Saffir-Simpson scale rating of a hurricane at landfall). First, the trigger mechanisms are measured imprecisely, with the degree of imprecision depending on the choice of trigger mechanism, the available sensor systems, and the methods by which meteorologists analyze the resulting observations. Second, the trigger mechanisms might not relate well to the economic harm caused by the weather phenomena, suggesting that they were not selected on the basis of adequate understanding of relevant meteorology and its relationship to storm damage. Both problems are documented, and perhaps ameliorated in part, by a thorough study of the relevant meteorology and meteorological practices.

Development of a set of robust and relevant triggers for catastrophe bonds for hurricanes is the objective of this study. The real-time and post-landfall accuracy of measured hurricane parameters such as minimum central pressure and maximum sustained surface wind speed were analyzed. Linear regression and neural networks were then employed in order to determine the predictability of storm damage from these measurable hurricane parameters or combination thereof. The damage data set consisted of normalized economic losses for hurricane landfalls along the United States Gulf and Atlantic coasts from 1900 to 2005. The results reveal that single hurricane parameters and combinations of hurricane parameters can be poor indicators of the amount of storm damage.

The results suggest that modeled-loss type catastrophe bonds may be a potentially superior alternative to parametric-type bonds, which are highly sensitive to the accuracy of the measurements of the underlying storm parameters and to the coastal bathymetry, topography, and economic exposure. A procedure for determining the robustness of a risk model for use in modeled-loss type catastrophe bonds is also presented.

## How to Determine What Machine Learning Model to Use

With all of the different machine learning models out there (unsupervised learning, supervised learning, and reinforcement learning), how do you go about deciding which model to use for a particular problem?

One approach is to try every possible machine learning model, and then to examine which model yields the best results. The problem with this approach is it could take a VERY long time. There are dozens of machine learning algorithms, and each one has different run times. Depending on the data set, some algorithms may take hours or even days to complete.

Another risk of doing the “try-all-models” approach is that you also might end up using a machine learning algorithm on a type of problem that is not really a good fit for that particular algorithm. An analogy would be like using a hammer to tighten a screw. Sure, a hammer is a useful tool but only when used for its intended purpose. If you want to tighten a screw, use a screwdriver, not a hammer.

When deciding on what type of machine learning algorithm to use, you have to first understand the problem thoroughly and then decide what you want to achieve. Here is a helpful framework that can be used for algorithm selection:

Are you trying to divide an unlabeled data set into groups such that each group has similar attributes (e.g. customer segmentation)?

If yes, use a clustering algorithm (unsupervised learning) like k-means, hierarchical clustering, or Gaussian Mixture Models.

Are you trying to predict a continuous value given a set of attributes (e.g. house price prediction)?

If yes, use a regression algorithm (supervised learning), like linear regression.

Are we trying to predict discrete classes (e.g. spam/not spam)? Do we have a data set that is already labeled with the classes?

If yes to both questions, use a classification algorithm (supervised learning) like Naive Bayes, K-Nearest Neighbors, logistic regression, ID3, neural networks, or support vector machines.

Are you trying to reduce a large number of attributes to a smaller number of attributes?

Use a dimensionality reduction algorithm, like stepwise forward selection or principal components analysis.

Do you need an algorithm that reacts to its environment, continuously learning from experience, the way humans do (e.g. autonomous vehicles and robots)?

If yes, use reinforcement learning methods.

For each of the questions above, you can ask follow-up questions to hone in on the appropriate algorithm to use on that type.

For example:

• Do we need an algorithm that can be built, trained, and tested quickly?
• Do we need a model that can make fast predictions?
• How accurate does the model need to be?
• Is the number of attributes greater than the number of instances?
• Do we need a model that is easy to interpret?
• How scalable a model do we need?
• What evaluation criteria is important in order to meet business needs?
• How much data preprocessing do we want to do?

Here is a really useful flowchart from Microsoft that presents different ways to help one decide what algorithm to use when.

Here is another useful flowchart from SciKit Learn.

Slide 11 of this link shows the interpretability vs. accuracy tradeoffs for the different machine learning models.

This link provides a quick rundown of the different types of machine learning models.

## Advantages of K-Means Clustering

The K-means clustering algorithm is used to group unlabeled data set instances into clusters based on similar attributes. It has a number of advantages over other types of machine learning models, including the linear models, such as logistic regression and Naive Bayes.

Here are the advantages:

# Unlabeled Data Sets

A lot of real-world data comes unlabeled, without any particular class. The benefit of using an algorithm like K-means clustering is that we often do not know how instances in a data set should be grouped.

For example, consider the problem of trying to group viewers of Netflix into clusters based on similar viewing behavior. We know that there are clusters, but we do not know what those clusters are. Linear models will not help us at all with these sorts of issues.

# Nonlinearly Separable Data

Consider the data set below containing a set of three concentric circles. It is nonlinearly separable. In other words, there is no straight line or plane that we could draw on the graph below that can easily discriminate the colored classes red, blue, and green. Using K-means clustering and converting the coordinate system below from Cartesian coordinates to Polar coordinates, we could use the information about the radius to create concentric clusters.
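As a minimal sketch of that coordinate conversion idea (the point values below are made up for illustration):

```python
import math

def to_polar(x, y):
    """Convert a Cartesian point (x, y) to polar coordinates (radius, angle)."""
    return math.hypot(x, y), math.atan2(y, x)

# Points on two concentric circles are not linearly separable in (x, y),
# but their radii separate them perfectly, so clustering on the radius
# alone (e.g. with K-means) recovers the rings.
inner = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
outer = [(5.0, 0.0), (0.0, 5.0), (-5.0, 0.0)]
inner_radii = [to_polar(x, y)[0] for x, y in inner]
outer_radii = [to_polar(x, y)[0] for x, y in outer]
print(inner_radii, outer_radii)  # [1.0, 1.0, 1.0] [5.0, 5.0, 5.0]
```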

# Simplicity

The meat of the K-means clustering algorithm is just two steps, the cluster assignment step and the move centroid step. If we’re looking for an unsupervised learning algorithm that is easy to implement and can handle large data sets, K-means clustering is a good starting point.

# Availability

Most of the popular machine learning packages contain an implementation of K-means clustering.

# Speed

Based on my experience using K-means clustering, the algorithm does its work quickly, even for really big data sets.

## Advantages of Decision Trees

Decision tree algorithms such as ID3 provide a convenient way to show all of the possible outcomes of a decision. Decision trees can be used for either classification or regression. Below are some of the advantages of using decision trees as opposed to other types of machine learning models.

# Simplicity

One of the things that I like about decision trees is that you can easily explain the model to somebody who has a non-technical background. Decision trees create straightforward if-then-else rules which could be communicated to a boss, project manager, product manager, or outside stakeholder.

Contrast decision trees with other more black box-like machine learning algorithms such as logistic regression, neural networks, or reinforcement learning methods, and you can see that decision trees provide a refreshing level of transparency not always common in machine learning.

# No Large Data Requirement

If you take a look at an algorithm like the k nearest neighbors algorithm, which classifies an unseen instance based on instances that are most similar to that instance, you need a lot of data in order to get accurate results. The more data you have, the better.

However, there may be certain instances or certain problems when a lot of data is not available. The benefit of using decision trees is that you do not need a lot of data in order to create something useful.

# Best and Worst Case

In some settings, you want to be able to determine a worst case, a best case, and a management case. With a decision tree, you can easily see all of the possible outcomes. Each test instance gets put into one of the outcomes, so you pretty much know what to expect ahead of time. Even outliers don’t faze a decision tree.

# Continuous and Discrete Data

Decision trees can handle both continuous and discrete data depending on which decision tree model you use. In contrast, many machine learning algorithms can only handle either continuous or discrete data, but not both.

# Non-Linearity

Decision trees can capture nonlinear relationships.

# Fast

Classifying a test instance is fast and just depends on the depth of the tree.

# Irrelevant Attributes

Because decision trees split on the attribute with the highest information gain, irrelevant attributes are handled with ease.

## Iterative Dichotomiser 3 (ID3) Algorithm From Scratch

In this post, I will walk you through the Iterative Dichotomiser 3 (ID3) decision tree algorithm step-by-step. We will develop the code for the algorithm from scratch using Python. We will also run the algorithm on real-world data sets from the UCI Machine Learning Repository.

# What is the Iterative Dichotomiser 3 Algorithm?

## Iterative Dichotomiser 3 (ID3)

### Unpruned

The Iterative Dichotomiser 3 (ID3) algorithm, invented by John Ross Quinlan, is used to create decision trees. The decision trees in ID3 are used for classification, and the goal is to create the shallowest decision trees possible. In the unpruned version of the algorithm, the decision tree is grown to completion (Quinlan, 1986).

For example, consider a decision tree to help us determine if we should play tennis or not based on the weather:

• The interior nodes of the tree are the decision variables that are based on the attributes in the data set (e.g. weather outlook [attribute]: sunny, overcast, rainy [attribute values])
• The leaves of the tree store the classifications (e.g. play tennis or don’t play tennis)

Before we get into the details of ID3, let us examine how decision trees work.

#### How decision trees work

We start with a data set in which each row of the data set corresponds to a single instance. The columns of the data set are the attributes, where the rightmost column is the class label.

Each attribute can take on a set of values. During training of a decision tree, we split the instances in the data set into subsets (i.e. groups) based on the values of the attributes. For example, the weather outlook attribute can be used to split the data into ‘sunny’, ‘overcast’, and ‘rainy’ subsets. We then split each of those subsets even further based on another attribute (e.g. temperature).

The objective behind building a decision tree is to use the attribute values to keep splitting the data into smaller and smaller subsets (recursively) until each subset consists of a single class label (e.g. play tennis or don’t play tennis). We can then classify fresh test instances based on the rules defined by the decision tree.

#### How ID3 Works

Starting from the root of the tree, ID3 builds the decision tree one interior node at a time where at each node we select the attribute which provides the most information gain if we were to split the instances into subsets based on the values of that attribute. How is “most information” determined? It is determined using the idea of entropy reduction which is part of Shannon’s Information Theory.

Entropy is a measure of the amount of disorder or uncertainty (units of entropy are bits). A data set with a lot of disorder or uncertainty does not provide us a lot of information.

A good way to think about entropy is how certain we would feel if we were to guess the class of a random training instance. A data set in which there is only one class has 0 entropy (high information here because we know with 100% certainty what the class is given an instance). However, if there is a data set in which each instance is a different class, entropy would be high because you have no certainty when trying to predict the class of a random instance.

Entropy is thus a measure of how heterogeneous a data set is. If there are many different class types in a data set, and each class type has the same probability of occurring, the entropy (also impurity) of the data set will be high. The greater the entropy is, the higher the uncertainty, and the less information gained.

Getting back to the running example, in a binary classification problem with positive instances p (play tennis) and negative instances n (don’t play tennis), the entropy contained in a data set is defined mathematically as follows (base 2 log is used as convention in Shannon’s Information Theory).

I(p,n) = entropy of a data set

= weighted sum of the logs of the probabilities of each possible outcome

= -[(p/(p+n))log2(p/(p+n)) + (n/(p+n))log2(n/(p+n))]

= -(p/(p+n))log2(p/(p+n)) + -(n/(p+n))log2(n/(p+n))

Where:

• I = entropy
• p = number of positive instances (e.g. number of play tennis instances)
• n = number of negative instances (e.g. number of don’t play tennis instances)

The negative sign at the beginning of the equation is used to change the negative values returned by the log function to positive values (Kelleher, Namee, & Arcy, 2015). This negation ensures that the logarithmic term contributes small numbers for high probabilities (low entropy; greater certainty) and large numbers for small probabilities (high entropy; less certainty).

Notice that the weights in the summation are the actual probabilities themselves. These weights make sure that classes that appear most frequently in a data set make the biggest impact on the entropy of the data set.

Also notice that if there are only instances with class p (i.e. play tennis) and none with class n (i.e. don’t play tennis) in the data set, the entropy will be 0.

I(p,n) = -(p/(p+n))log2(p/(p+n)) +  -(n/(p+n))log2(n/(p+n))

= -(p/(p+0))log2(p/(p+0)) +  -(0/(p+0))log2(0/(p+0))

= -(p/p)log2(1) + 0

= 0 + 0

= 0
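The entropy formula above can be sketched as a small Python function (a minimal illustration, with the 0·log₂0 = 0 convention handled explicitly):

```python
import math

def entropy(p, n):
    """I(p, n): entropy in bits of a data set with p positive and n
    negative instances."""
    total = p + n
    result = 0.0
    for count in (p, n):
        if count:  # the term 0 * log2(0) is taken to be 0
            prob = count / total
            result -= prob * math.log2(prob)
    return result

print(entropy(4, 0))  # 0.0: a pure data set has no uncertainty
print(entropy(1, 1))  # 1.0: a 50/50 split has maximum uncertainty
```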

It is easiest to explain the full ID3 algorithm using actual numbers, so below I will demonstrate how the ID3 algorithm works using an example.

#### ID3 Example

Continuing from the example in the previous section, we want to create a decision tree that will help us determine if we should play tennis or not.

We have four attributes in our data set:

• Weather outlook: sunny, overcast, rain
• Temperature: hot, mild, cool
• Humidity: high, normal
• Wind: weak, strong

We have two classes:

• p = play tennis class (“Yes”)
• n = don’t play tennis class (“No”)

Here is the data set to train the tree:

Source: (Mitchell, 1997)

Step 1: Calculate the Prior Entropy of the Data Set

We have 14 total instances: 9 instances of p and 5 instances of n. With the frequency counts of each unique class, we can calculate the prior entropy of this data set where:

• p = number of positive instances (e.g. number of play tennis instances)
• n = number of negative instances (e.g. number of don’t play tennis instances)

Prior Entropy of Data Set = I(p,n)

= weighted sum of the logs of the probabilities of each possible outcome

= -[(p/(p+n))log2(p/(p+n)) + (n/(p+n))log2(n/(p+n))]

= -(p/(p+n))log2(p/(p+n)) + -(n/(p+n))log2(n/(p+n))

= -(9/(9+5))log2(9/(9+5)) + -(5/(9+5))log2(5/(9+5))

= 0.9403

This value indicates how much entropy is present in the data set prior to doing any splitting.

Step 2: Calculate the Information Gain for Each Attribute

For each attribute in the data set (e.g. outlook, temperature, humidity, and wind):

• Calculate the total number of instances in the data set.
• Initialize a running weighted entropy score sum variable to 0.
• Partition the data set instances into subsets based on the attribute values. For example, the weather outlook attribute creates the “sunny” subset, the “overcast” subset, and the “rainy” subset; this attribute therefore creates three partitions (or subsets).
• For each attribute value subset:
  • Calculate the number of instances in this subset.
  • Calculate the entropy score of this subset using the frequency counts of each unique class.
  • Calculate the weighting of this attribute value by dividing the number of instances in this subset by the total number of instances in the data set.
  • Add the weighted entropy score for this attribute value to the running weighted entropy score sum.
• The final running weighted entropy score sum indicates how much entropy would remain if we were to split the current data set using this attribute. It is the remaining entropy for this attribute.
• Calculate the information gain for this attribute, where information gain = prior entropy of the data set from Step 1 – remaining entropy for this attribute from Step 2.
• Record the information gain for this attribute and repeat the process above for the other attributes.
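The per-attribute loop above can be sketched in Python; the (p, n) subset counts below are the frequency counts for the weather outlook attribute in the 14-instance play-tennis data set:

```python
import math

def entropy(p, n):
    """I(p, n): entropy in bits of a set with p positive and n negative instances."""
    total = p + n
    return -sum((c / total) * math.log2(c / total) for c in (p, n) if c)

# (play, don't-play) counts for each value of the weather outlook attribute:
subsets = {"sunny": (2, 3), "overcast": (4, 0), "rainy": (3, 2)}
total = sum(p + n for p, n in subsets.values())  # 14 instances

prior = entropy(9, 5)  # prior entropy of the whole data set (Step 1)
remaining = sum(((p + n) / total) * entropy(p, n)
                for p, n in subsets.values())  # weighted entropy after the split
gain = prior - remaining
print(round(gain, 3))  # 0.247 (quoted as 0.246 in the text, which truncates)
```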

For example, consider the first attribute (e.g. weather outlook). The remaining entropy for outlook can be calculated as follows:

Remaining Entropy for Weather Outlook  =

= (# of sunny/# of instances)*Isunny(p,n) + (# of overcast/# of instances)*Iovercast(p,n)  + (# of rainy/# of instances)*Irainy(p,n)

= (5/14)*Isunny(p,n) + (4/14)*Iovercast(p,n)  + (5/14)*Irainy(p,n)

Where:

Isunny(p,n) = I(number of sunny AND Yes, number of sunny AND No)

= I(2,3)

= -(p/(p+n))log2(p/(p+n)) + -(n/(p+n))log2(n/(p+n))

= -(2/(2+3))log2(2/(2+3)) + -(3/(2+3))log2(3/(2+3))

= 0.9710

Iovercast(p,n) = I(number of overcast AND Yes, number of overcast AND No)

= I(4,0)

= -(p/(p+n))log2(p/(p+n)) + -(n/(p+n))log2(n/(p+n))

= -(4/(4+0))log2(4/(4+0)) + -(0/(4+0))log2(0/(4+0))

= 0

Irainy(p,n) = I(number of rainy AND Yes, number of rainy AND No)

= I(3,2)

= -(p/(p+n))log2(p/(p+n)) + -(n/(p+n))log2(n/(p+n))

= -(3/(3+2))log2(3/(3+2)) + -(2/(3+2))log2(2/(3+2))

= 0.9710

Remaining Entropy for Weather Outlook =

= Weighted sum of the entropy scores of each attribute value subset

= (5/14)*Isunny(p,n) + (4/14)*Iovercast(p,n)  + (5/14)*Irainy(p,n)

= (5/14) * (0.9710) + (4/14) * 0 + (5/14) * (0.9710)

= 0.6936

Info Gain for Weather Outlook   = Prior entropy of the data set – remaining entropy if we split using this attribute

= 0.9403 – 0.6936

= 0.246

Now that we have the information gain for weather outlook, we need to find the remaining entropy and information gain for the other attributes using the same procedure. The remaining entropies for the other attributes are as follows:

• Remaining Entropy for Temperature = 0.911
• Remaining Entropy for Humidity = 0.789
• Remaining Entropy for Wind = 0.892

The information gains are as follows:

• Information Gain for Temperature = 0.9403 – 0.911 = 0.0293
• Information Gain for Humidity = 0.9403 – 0.789 = 0.1513
• Information Gain for Wind = 0.9403 – 0.892 = 0.0483

Step 3: Calculate the Attribute that Had the Maximum Information Gain

The information gain for weather outlook is 0.246, so it provides the most information and becomes the label for the root node.

Step 4: Partition the Data Set Based on the Values of the Attribute that Had the Maximum Information Gain

The data set is then partitioned based on the values this maximum information gain attribute can take (e.g. sunny, overcast, and rain). The partitions become the outgoing branches from the root node. These branches terminate in new unlabeled nodes. The maximum information gain attribute is removed from these partitions and is not used again to make any more splits from these new unlabeled nodes. An attribute can only be used as a decision variable once on a given path in a tree but can be used multiple times in an entire tree.

Step 5: Repeat Steps 1-4 for Each Branch of the Tree Using the Relevant Partition

We stop when:

1. Every instance in the partition is part of the same class. Return a decision tree that is made up of a leaf node and labeled with the class of the instances.
2. There are no more attributes left. Return a decision tree that is made up of a leaf node labeled with the class that comprised the majority of the instances.
3. There are no instances in the partition. Return a decision tree that is made up of a leaf node labeled with the most common class in the parent node.

Here is the final decision tree:

Source: (Mitchell, 1997)

#### Information Gain Ratio

An issue with just using information gain as the selection criteria for the root is that it has a strong bias towards selecting attributes that contain a large number of different values (which could lead to trees that overfit to the data). One way to rectify this is to use what is called the gain ratio. It acts as a kind of normalization factor.

The gain ratio defines an additional term that penalizes attributes that create a large number of partitions. The gain ratio chooses attributes based on the ratio between their gain and their intrinsic information content. It represents the proportion of information generated that is useful for classification. The attribute with the highest gain ratio is selected as the attribute that splits the data at that node.

Gain Ratio of an Attribute = Information Gain of the Attribute / Split Information of the Attribute

The clearest way to demonstrate the gain ratio is to show an example. Going back to our original data set, suppose we partition the data set based on the outlook attribute. This partition will result in three subsets: the “sunny” subset (frequency count of 5), the “overcast” subset (frequency count of 4), and the “rainy” subset (frequency count of 5).

Split Information of Outlook = -[(5/14) * log2(5/14) + (4/14) * log2(4/14) + (5/14) * log2(5/14)] = 1.577

The gain was calculated as 0.246, so the gain ratio is 0.246 / 1.577 = 0.156.

For humidity, we can also calculate the gain ratio.

Split Information of Humidity = -[(7/14) * log2(7/14) + (7/14) * log2(7/14)] = 1

The gain was calculated as 0.1513, so the gain ratio is 0.1513 / 1 = 0.1513.
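Both split-information calculations can be reproduced with a short Python sketch; note that split information is just the entropy of the partition sizes themselves:

```python
import math

def split_information(counts):
    """Entropy of the partition sizes: the penalty term in the gain ratio."""
    total = sum(counts)
    return -sum((c / total) * math.log2(c / total) for c in counts)

split_info_outlook = split_information([5, 4, 5])   # sunny, overcast, rainy
split_info_humidity = split_information([7, 7])     # high, normal
print(round(split_info_outlook, 3))                 # 1.577
print(round(split_info_humidity, 3))                # 1.0
print(round(0.2467 / split_info_outlook, 3))        # gain ratio for outlook: 0.156
```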

### Pruned

One common way to resolve the issue of overfitting is to do reduced error pruning (Mitchell, 1997).

• Training, validation, and test sets are used.
• A portion of the original data set is withheld as a validation set. This validation set is not used during the training phase of the tree (i.e. while the decision tree is getting built).
• Once training is finished and the decision tree is built, the tree is tested on the validation set.
• Each node is considered as a candidate for pruning. Pruning means removing the subtree at that node, making it a leaf, and assigning the most common class at the node.
• A node is removed if the resulting pruned tree has a classification accuracy (as tested against the validation set) that is at least as good as the accuracy of the original decision tree.
• If replacing the subtree by a leaf results in improved classification accuracy, replace the subtree with that leaf.
• Pruning continues until no further improvement in the classification accuracy occurs.
• This process happens in a depth-first manner, starting from the leaves.


# Iterative Dichotomiser 3 Algorithm Design

The ID3 algorithm was implemented from scratch with and without reduced error pruning. The abalone, car evaluation, and image segmentation data sets were then preprocessed to meet the input requirements of the algorithms.

## Five-Fold Cross-Validation

In this project, for the pruned version of ID3, 10% of the data was pulled out of the original data set to be used as a validation set. The remaining 90% of the data was used for five-fold stratified cross-validation to evaluate the performance of the models. For the unpruned version, the entire data set was used for five-fold stratified cross-validation.

## Required Data Set Format for ID3

### Required Data Set Format for ID3-Unpruned

• Columns (0 through N)
• 0: Class
• 1: Attribute 1
• 2: Attribute 2
• 3: Attribute 3
• N: Attribute N

### Required Data Set Format for ID3-Pruned

• Columns (0 through N)
• 0: Class
• 1: Attribute 1
• 2: Attribute 2
• 3: Attribute 3
• N: Attribute N

## Abalone Data Set

The original abalone data set contains 4177 instances, 8 attributes, and 29 classes (Waugh, 1995). The purpose of this data set is to predict the age of an abalone from physical measurements.

### Modification of Attribute Values

All attribute values except for the sex attribute were made discrete by putting the continuous values into bins based on the quartile the value fell into. Discretizing in this way keeps the resulting trees small and simple, in the spirit of Occam’s Razor.
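Quartile-based binning of this sort can be sketched in a few lines of Python. The code below illustrates the general idea and is not the exact preprocessing script used in the project:

```python
def quartile_bins(values):
    """Map each continuous value to the quartile (0-3) it falls into."""
    ordered = sorted(values)
    n = len(ordered)
    # Approximate quartile boundaries: the values at the 25th, 50th,
    # and 75th percentile positions of the sorted data
    q1, q2, q3 = ordered[n // 4], ordered[n // 2], ordered[(3 * n) // 4]

    bins = []
    for v in values:
        if v < q1:
            bins.append(0)
        elif v < q2:
            bins.append(1)
        elif v < q3:
            bins.append(2)
        else:
            bins.append(3)
    return bins

# Hypothetical abalone shell lengths
lengths = [0.455, 0.35, 0.53, 0.44, 0.33, 0.425, 0.53, 0.545]
print(quartile_bins(lengths))  # [2, 0, 3, 1, 0, 1, 3, 3]
```

Each continuous column is replaced by its bin index, which gives ID3 a small, fixed set of attribute values to branch on.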

Some of the classes only had one or two instances, so the class values (ages of the abalones) were converted into three classes of approximately equal size:

• Young = 0
• Middle-Aged = 1
• Old = 2

### Missing Data

There were no missing attribute values.

## Car Evaluation Data Set

The original car evaluation data set contains 6 attributes and 1728 instances (Bohanec, 1997). The purpose of the data set is to predict car acceptability based on price, comfort, and technical specifications.

### Modification of Attribute Values

No modification of attribute values was necessary since all attributes were discrete.

### Missing Data

There were no missing attribute values.

## Image Segmentation Data Set

This data set is used for classification. It contains 19 attributes, 210 instances, and 7 classes (Brodley, 1990). This data set was created from a database of seven outdoor images.

### Modification of Attribute Values

All attribute values were made discrete by putting the continuous values into bins based on the quartile the value fell into.

The class values were already discrete and were left as-is.

### Missing Data

There were no missing attribute values.


# Iterative Dichotomiser 3 Algorithm in Python, Coded From Scratch

Here are the preprocessed data sets:

Here is the code that parses the input file:

```
import csv # Library to handle csv-formatted files

# File name: parse.py
# Author: Addison Sears-Collins
# Date created: 7/6/2019
# Python version: 3.7
# Description: Used for parsing the input data file

def parse(filename):
    """
    Parameters:
        filename: Name of a file
    Returns:
        data: Information on the attributes and the data as a list of
        dictionaries. Each instance is a different dictionary.
    """
    # Initialize an empty list named 'data'
    data = []

    # Open the file in READ mode. The file object is named 'file'
    with open(filename, 'r') as file:

        # Convert the file object to a csv.reader object
        csv_file = csv.reader(file)

        # Return the current row (first row) and advance the iterator to the
        # next row. Since the first row contains the attribute names
        # (headers), save them in a list called headers
        headers = next(csv_file)

        # Extract each of the remaining data rows one row at a time.
        # zip joins the headers and the row values so that each instance
        # becomes its own attribute_name:value dictionary.
        for row in csv_file:
            data.append(dict(zip(headers, row)))

    return data

## Used for debugging
#name_of_file = "abalone.txt"
#data = parse(name_of_file)
#print(*data, sep="\n")
#print()
#print(str(len(data)))
```

Here is the code for five-fold stratified cross-validation:

```
import random
from collections import Counter # Used for counting

# File name: five_fold_stratified_cv.py
# Author: Addison Sears-Collins
# Date created: 7/7/2019
# Python version: 3.7
# Description: Implementation of five-fold stratified cross-validation
# Divide the data set into five random groups. Make sure
# that the proportion of each class in each group is roughly equal to its
# proportion in the entire data set.

# Required Data Set Format for Classification Problems:
# Columns (0 through N)
# 0: Class
# 1: Attribute 1
# 2: Attribute 2
# 3: Attribute 3
# ...
# N: Attribute N

def get_five_folds(instances):
    """
    Parameters:
        instances: A list of dictionaries where each dictionary is an
            instance. Each dictionary contains attribute:value pairs
    Returns:
        fold0, fold1, fold2, fold3, fold4
        Five folds whose class frequency distributions are
        each representative of the entire original data set (i.e. Five-Fold
        Stratified Cross Validation)
    """
    # Create five empty folds
    fold0 = []
    fold1 = []
    fold2 = []
    fold3 = []
    fold4 = []

    # Shuffle the data randomly
    random.shuffle(instances)

    # Build a list of the class value of each instance
    classes = []
    for instance in instances:
        classes.append(instance['Class'])

    # Create a list of the unique classes
    unique_classes = list(Counter(classes).keys())

    # For each unique class, distribute the instances of that class
    # among the five folds in round-robin fashion
    for uniqueclass in unique_classes:

        # Initialize the counter to 0
        counter = 0

        # Go through each instance of the data set and find instances that
        # are part of this unique class. Distribute them among the five folds.
        for instance in instances:

            # If we have a match
            if uniqueclass == instance['Class']:

                # Allocate the instance to the fold given by the counter
                if counter == 0:
                    fold0.append(instance)
                    counter += 1
                elif counter == 1:
                    fold1.append(instance)
                    counter += 1
                elif counter == 2:
                    fold2.append(instance)
                    counter += 1
                elif counter == 3:
                    fold3.append(instance)
                    counter += 1
                else:
                    fold4.append(instance)
                    # Reset the counter to 0
                    counter = 0

    # Shuffle the folds
    random.shuffle(fold0)
    random.shuffle(fold1)
    random.shuffle(fold2)
    random.shuffle(fold3)
    random.shuffle(fold4)

    # Return the folds
    return fold0, fold1, fold2, fold3, fold4
```

Here is the ID3 code:

```
from node import Node # Import the Node class from the node.py file
from math import log # We need this to compute log base 2
from collections import Counter # Used for counting

# File name: id3.py
# Author: Addison Sears-Collins
# Date created: 7/5/2019
# Python version: 3.7
# Description: Iterative Dichotomiser 3 algorithm

# Required Data Set Format for Classification Problems:
# Columns (0 through N)
# 0: Class
# 1: Attribute 1
# 2: Attribute 2
# 3: Attribute 3
# ...
# N: Attribute N

def ID3(instances, default):
    """
    Parameters:
        instances: A list of dictionaries where each dictionary is an
            instance. Each dictionary contains attribute:value pairs, e.g.:
            instances = [
                {'Class':'Play','Outlook':'Sunny','Temperature':'Hot'},
                {'Class':"Don't Play",'Outlook':'Rain','Temperature':'Cold'},
                {'Class':'Play','Outlook':'Overcast','Temperature':'Hot'},
                ...]
            The first attribute:value pair is the
            target variable (i.e. the class of that instance)
        default: The default class label (e.g. 'Play')
    Returns:
        tree: An object of the Node class
    """
    # If there are no more instances left, return a leaf that is labeled with
    # the default class
    if len(instances) == 0:
        return Node(default)

    # Build a list of the class value of each instance
    classes = []
    for instance in instances:
        classes.append(instance['Class'])

    # If all instances have the same class label or there is only one instance
    # remaining, create a leaf node labeled with that class.
    # Counter(list) creates a tally of each element in the list,
    # represented as element:tally pairs.
    if len(Counter(classes)) == 1 or len(classes) == 1:
        tree = Node(mode_class(instances))
        return tree

    # Otherwise, find the best attribute, the attribute that maximizes the
    # gain ratio of the data set, to be the next decision node.
    else:
        # Find the name of the most informative attribute of the data set
        # e.g. "Outlook"
        best_attribute = most_informative_attribute(instances)

        # Initialize a tree with the most common class
        # e.g. "Play"
        tree = Node(mode_class(instances))

        # The most informative attribute becomes this decision node
        # e.g. "Outlook" becomes this node
        tree.attribute = best_attribute

        # The branches of the node are the values of the best_attribute
        # e.g. "Sunny", "Overcast", "Rainy"
        best_attribute_values = []
        for instance in instances:
            try:
                best_attribute_values.append(instance[best_attribute])
            except KeyError:
                pass  # Skip instances that lack a value for this attribute

        # Create a list of the set of unique best attribute values
        tree.attribute_values = list(set(best_attribute_values))

        # Now we need to split the instances. We will create separate subsets
        # for each best attribute value. These become the child nodes,
        # i.e. the "Sunny", "Overcast", and "Rainy" subsets
        for best_attr_value_i in tree.attribute_values:

            # Generate the subset of instances with this attribute value
            instances_i = []
            for instance in instances:
                if instance[best_attribute] == best_attr_value_i:
                    instances_i.append(instance)

            # Create a subtree recursively
            subtree = ID3(instances_i, mode_class(instances))

            # Initialize the values of the subtree
            subtree.instances_labeled = instances_i

            # Keep track of the state of the subtree's parent (i.e. tree)
            subtree.parent_attribute = best_attribute # parent node
            subtree.parent_attribute_value = best_attr_value_i # branch name

            # Assign the subtree to the appropriate branch
            tree.children[best_attr_value_i] = subtree

        # Return the decision tree
        return tree

def mode_class(instances):
    """
    Parameters:
        instances: A list of dictionaries where each dictionary is an
            instance. Each dictionary contains attribute:value pairs
    Returns:
        Name of the most common class (e.g. "Don't Play")
    """
    classes = []
    for instance in instances:
        classes.append(instance['Class'])

    # most_common(1) returns a list containing the single most common
    # (class, tally) pair; [0][0] extracts the name of that class
    # rather than the tally
    return Counter(classes).most_common(1)[0][0]

def prior_entropy(instances):
    """
    Calculate the entropy of the data set with respect to the actual class
    prior to splitting the data.
    Parameters:
        instances: A list of dictionaries where each dictionary is an
            instance. Each dictionary contains attribute:value pairs
    Returns:
        Entropy value in bits
    """
    classes = []
    for instance in instances:
        classes.append(instance['Class'])
    counter = Counter(classes)

    # If all instances have the same class, the entropy is 0
    if len(counter) == 1:
        return 0
    else:
        # Compute the weighted sum of the logs of the probabilities of each
        # possible outcome
        entropy = 0
        for c, count_of_c in counter.items():
            probability = count_of_c / len(classes)
            entropy += probability * (log(probability, 2))
        return -entropy

def entropy(instances, attribute, attribute_value):
    """
    Calculate the entropy for a subset of the data filtered by attribute
    value.
    Parameters:
        instances: A list of dictionaries where each dictionary is an
            instance. Each dictionary contains attribute:value pairs
        attribute: The name of the attribute (e.g. 'Outlook')
        attribute_value: The value of the attribute (e.g. 'Sunny')
    Returns:
        Entropy value in bits
    """
    classes = []
    for instance in instances:
        if instance[attribute] == attribute_value:
            classes.append(instance['Class'])
    counter = Counter(classes)

    # If all instances have the same class, the entropy is 0
    if len(counter) == 1:
        return 0
    else:
        # Compute the weighted sum of the logs of the probabilities of each
        # possible outcome
        entropy = 0
        for c, count_of_c in counter.items():
            probability = count_of_c / len(classes)
            entropy += probability * (log(probability, 2))
        return -entropy

def gain_ratio(instances, attribute):
    """
    Calculate the gain ratio if we were to split the data set based on the
    values of this attribute.
    Parameters:
        instances: A list of dictionaries where each dictionary is an
            instance. Each dictionary contains attribute:value pairs
        attribute: The name of the attribute (e.g. 'Outlook')
    Returns:
        The gain ratio
    """
    # Record the entropy of the combined set of instances
    priorentropy = prior_entropy(instances)

    # Create a list of the attribute values for each instance
    values = []
    for instance in instances:
        values.append(instance[attribute])
    counter = Counter(values) # Frequency counts of each attribute value

    # The remaining entropy if we were to split the instances based on this
    # attribute. This is a weighted entropy score sum.
    remaining_entropy = 0

    # This variable is used for the gain ratio calculation
    split_information = 0

    # items() returns the dictionary's key-value pairs
    for attribute_value, attribute_value_count in counter.items():
        probability = attribute_value_count / len(values)
        remaining_entropy += (probability * entropy(
            instances, attribute, attribute_value))
        split_information += probability * (log(probability, 2))

    information_gain = priorentropy - remaining_entropy

    split_information = -split_information

    if split_information != 0:
        gainratio = information_gain / split_information
    else:
        # All instances share one attribute value, so make this
        # attribute maximally unattractive
        gainratio = -1000

    return gainratio

def most_informative_attribute(instances):
    """
    Choose the attribute that provides the most information if you were to
    split the data set based on that attribute's values. This attribute is
    the one that has the highest gain ratio.
    Parameters:
        instances: A list of dictionaries where each dictionary is an
            instance. Each dictionary contains attribute:value pairs
    Returns:
        The name of the most informative attribute
    """
    selected_attribute = None
    max_gain_ratio = -1000

    # The keys of the first instance form the list of attribute names
    attributes = [key for key, value in instances[0].items()]
    # Remove the "Class" attribute name from the list
    attributes.remove('Class')

    # For every attribute in the list of attributes
    for attribute in attributes:
        # Calculate the gain ratio and store that value
        gain = gain_ratio(instances, attribute)

        # If we have a new most informative attribute
        if gain > max_gain_ratio:
            max_gain_ratio = gain
            selected_attribute = attribute

    return selected_attribute

def accuracy(trained_tree, test_instances):
    """
    Parameters:
        trained_tree: A tree that has already been trained
        test_instances: A set of test instances
    Returns:
        Classification accuracy (# of correct predictions/# of predictions)
    """
    no_of_correct_predictions = 0

    for test_instance in test_instances:
        if predict(trained_tree, test_instance) == test_instance['Class']:
            no_of_correct_predictions += 1

    return no_of_correct_predictions / len(test_instances)

def predict(node, test_instance):
    """
    Parameters:
        node: A trained tree node
        test_instance: A single test instance
    Returns:
        Class value (e.g. "Play")
    """
    # Stopping case for the recursive call.
    # If this is a leaf node (i.e. has no children)
    if len(node.children) == 0:
        return node.label
    # Otherwise, call the predict method recursively until we reach a leaf
    else:
        # Record the value of this node's attribute for this test instance
        # (e.g. "Sunny" for the "Outlook" attribute)
        attribute_value = test_instance[node.attribute]

        # Follow the branch for this attribute value as long as the child
        # node has not been pruned
        if attribute_value in node.children and node.children[
            attribute_value].pruned == False:
            # Recursive call
            return predict(node.children[attribute_value], test_instance)

        # Otherwise, return the most common class among the instances
        # that reached this node's children
        else:
            instances = []
            for attr_value in node.attribute_values:
                instances += node.children[attr_value].instances_labeled
            return mode_class(instances)

TREE = None
def prune(node, val_instances):
    """
    Prune the tree recursively, starting from the leaves
    Parameters:
        node: A tree that has already been trained
        val_instances: The validation set
    """
    global TREE
    TREE = node

    def prune_node(node, val_instances):
        # If this is a leaf node
        if len(node.children) == 0:
            accuracy_before_pruning = accuracy(TREE, val_instances)
            node.pruned = True

            # If no improvement in accuracy, no pruning
            if accuracy_before_pruning >= accuracy(TREE, val_instances):
                node.pruned = False
            return

        for value, child_node in node.children.items():
            prune_node(child_node, val_instances)

        # Prune when we reach the end of the recursion
        accuracy_before_pruning = accuracy(TREE, val_instances)
        node.pruned = True

        if accuracy_before_pruning >= accuracy(TREE, val_instances):
            node.pruned = False

    prune_node(TREE, val_instances)
```

Here is the code for the nodes. This is needed in order to run ID3 (above).

```
# File name: node.py
# Author: Addison Sears-Collins
# Date created: 7/6/2019
# Python version: 3.7
# Description: Used for constructing nodes for a tree

class Node:

    # Initialize a new node's data fields with initial values
    def __init__(self, label):

        # Variables specific to this node
        self.attribute = None       # Attribute (e.g. 'Outlook')
        self.attribute_values = []  # Values (e.g. 'Sunny')
        self.label = label          # Class label for the node (e.g. 'Play')
        self.children = {}          # Keeps track of the node's children

        # References to the parent node
        self.parent_attribute = None
        self.parent_attribute_value = None

        # Used for pruned trees
        self.pruned = False         # Has this node been pruned?
        self.instances_labeled = []
```

Here is the code that displays the results. This is the driver program:

```
import id3
import parse
import random
import five_fold_stratified_cv
from matplotlib import pyplot as plt

# File name: results.py
# Author: Addison Sears-Collins
# Date created: 7/6/2019
# Python version: 3.7
# Description: Results of the Iterative Dichotomiser 3 runs
# This source code is the driver for the entire program

# Required Data Set Format for Classification Problems:
# Columns (0 through N)
# 0: Class
# 1: Attribute 1
# 2: Attribute 2
# 3: Attribute 3
# ...
# N: Attribute N

ALGORITHM_NAME = "Iterative Dichotomiser 3"

def main():

    print("Welcome to the " + ALGORITHM_NAME + " Program!")
    print()

    # Enter the name of your input file
    #file_name = 'car.txt'
    file_name = input("Enter the name of your input file (e.g. car.txt): ")

    # Show functioning of the program
    #trace_runs_file = 'car_id3_trace_runs.txt'
    trace_runs_file = input(
        "Enter the name of your trace runs file (e.g. car_id3_trace_runs.txt): ")

    # Save the output graph of the results
    #imagefile = 'car_id3_results.png'
    imagefile = input(
        "Enter the name of the graphed results file (e.g. foo.png): ")

    # Open a new file to save trace runs
    outfile_tr = open(trace_runs_file, "w")

    outfile_tr.write("Welcome to the " + ALGORITHM_NAME + " Program!" + "\n")
    outfile_tr.write("\n")

    data = parse.parse(file_name)
    pruned_accuracies_avgs = []
    unpruned_accuracies_avgs = []

    # Shuffle the data randomly
    random.shuffle(data)

    # This variable places an upper limit on the x-axis of the final graph.
    # 10% of the data is pulled out for the validation set, and 20% of the
    # remainder is used for testing in each experiment of the five-fold
    # stratified cross-validation. Round up to the nearest multiple of 10.
    upper_limit = (round(len(data) * 0.9 * 0.8) - round(
        len(data) * 0.9 * 0.8) % 10) + 10
    #print(str(upper_limit)) # Use for debugging
    if upper_limit <= 10:
        upper_limit = 50

    # Get the most common class in the data set.
    default = id3.mode_class(data)

    # Pull out 10% of the data to be used as a validation set
    # The remaining 90% of the data is used for cross validation.
    validation_set = data[: len(data)//10]
    data = data[len(data)//10 :]

    # Generate the five stratified folds
    fold0, fold1, fold2, fold3, fold4 = five_fold_stratified_cv.get_five_folds(
        data)

    # Generate lists to hold the training and test sets for each experiment
    testset = []
    trainset = []

    # Create the training and test sets for each experiment
    # Experiment 0
    testset.append(fold0)
    trainset.append(fold1 + fold2 + fold3 + fold4)

    # Experiment 1
    testset.append(fold1)
    trainset.append(fold0 + fold2 + fold3 + fold4)

    # Experiment 2
    testset.append(fold2)
    trainset.append(fold0 + fold1 + fold3 + fold4)

    # Experiment 3
    testset.append(fold3)
    trainset.append(fold0 + fold1 + fold2 + fold4)

    # Experiment 4
    testset.append(fold4)
    trainset.append(fold0 + fold1 + fold2 + fold3)

    # Step size for the learning curve: 1/20 of one training set.
    # (len(trainset) is just the number of experiments, 5, so we use the
    # size of a single training set instead, and never let the step be 0.)
    step_size = max(1, len(trainset[0]) // 20)

    for length in range(10, upper_limit, step_size):
        print('Number of Training Instances:', length)
        outfile_tr.write('Number of Training Instances:' + str(length) + "\n")
        pruned_accuracies = []
        unpruned_accuracies = []

        # Run all 5 experiments for 5-fold stratified cross-validation
        for experiment in range(5):

            # Each experiment has a training and testing set that have been
            # preassigned.
            train = trainset[experiment][: length]
            test = testset[experiment]

            # Pruned
            tree = id3.ID3(train, default)
            id3.prune(tree, validation_set)
            acc = id3.accuracy(tree, test)
            pruned_accuracies.append(acc)

            # Unpruned
            tree = id3.ID3(train, default)
            acc = id3.accuracy(tree, test)
            unpruned_accuracies.append(acc)

        # Calculate and store the average classification
        # accuracies across the five experiments
        avg_pruned_accuracies = sum(pruned_accuracies) / len(pruned_accuracies)
        avg_unpruned_accuracies = sum(unpruned_accuracies) / len(unpruned_accuracies)

        print("Classification Accuracy for Pruned Tree:", avg_pruned_accuracies)
        print("Classification Accuracy for Unpruned Tree:", avg_unpruned_accuracies)
        print()
        outfile_tr.write("Classification Accuracy for Pruned Tree:" + str(
            avg_pruned_accuracies) + "\n")
        outfile_tr.write("Classification Accuracy for Unpruned Tree:" + str(
            avg_unpruned_accuracies) + "\n\n")

        # Record the accuracies so we can plot them later
        pruned_accuracies_avgs.append(avg_pruned_accuracies)
        unpruned_accuracies_avgs.append(avg_unpruned_accuracies)

    # Close the file
    outfile_tr.close()

    plt.plot(range(10, upper_limit, step_size), pruned_accuracies_avgs,
        label='pruned tree')
    plt.plot(range(10, upper_limit, step_size), unpruned_accuracies_avgs,
        label='unpruned tree')
    plt.xlabel('Number of Training Instances')
    plt.ylabel('Classification Accuracy on Test Instances')
    plt.grid(True)
    plt.title("Learning Curve for " + str(file_name))
    plt.legend()
    plt.savefig(imagefile)
    plt.show()

main()
```


# Iterative Dichotomiser 3 Output

Here are the trace runs:

Here are the results:

# Analysis

## Abalone Data Set

While the overall classification accuracies on the data set were only ~60%, the data show the impact that pruning a decision tree can have on improving prediction performance. The pruned tree performed better than the unpruned tree on the same set of test instances, irrespective of how many training instances the decision trees were trained on. These results suggest that smaller decision trees can generalize better than larger trees and can more accurately classify new test instances. This pruning impact was more pronounced on this data set than any of the other data sets evaluated in this project.

## Car Evaluation Data Set

Classification accuracy for both the pruned and unpruned trees was high on this data set, plateauing at ~92%. This performance was the best of any of the data sets examined in this project.

When the decision trees were trained on a relatively small number of training instances, the pruned trees outperformed the unpruned trees. However, beyond 600 training instances, unpruned trees began to have higher classification accuracy.

It is not clear if the superior accuracy of the unpruned tree algorithm on large numbers of training instances was due to overfitting on the training data or if there was actually an improvement in performance. More training data would be needed to examine if this effect remains consistent for larger numbers of training instances.

## Image Segmentation Data Set

For the segmentation data set, neither the unpruned nor the pruned trees outperformed the other consistently. Overall classification accuracy quickly improved for both ID3 algorithms when the decision trees were trained on more training instances. Performance plateaued at ~72%, which was not as good as the car evaluation data set but was better than the abalone data set.

The results show that creating smaller decision trees by pruning does not always lead to improved classification performance on new, unseen test instances.

# Summary and Conclusions

We can conclude that decision trees can create a convenient set of if-then-else decision rules that can be used to classify new unseen test instances. Classification accuracy for both the pruned and unpruned ID3 algorithms was >60% for all data sets, reaching a peak of ~92% on the car evaluation data set.

Neither the unpruned nor the pruned trees consistently outperformed the other in terms of classification accuracy. The results suggest that the impact of pruning depends highly on the data set being evaluated. On the abalone data set, pruning the tree reduced overfitting and led to improved performance. On the car evaluation data set, the results were mixed. On the image segmentation data set, performance was about the same for ID3-unpruned and ID3-pruned.

In a real-world setting, the overhead associated with reduced error pruning would need to be balanced against the increased speed and simplicity with which new unseen test instances could be classified. Future work would need to compare the performance of the ID3-pruned and ID3-unpruned algorithms on other classification data sets.


# References

Alpaydin, E. (2014). Introduction to Machine Learning. Cambridge, Massachusetts: The MIT Press.

Bohanec, M. (1997, June 1). Car Evaluation Data Set. Retrieved from UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Car+Evaluation

Brodley, C. (1990, November 1). Image Segmentation Data Set. Retrieved from UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Image+Segmentation

James, G. (2013). An Introduction to Statistical Learning: With Applications in R. New York: Springer.

Kelleher, J. D., Mac Namee, B., & D’Arcy, A. (2015). Fundamentals of Machine Learning for Predictive Data Analytics. Cambridge, Massachusetts: The MIT Press.

Mitchell, T. M. (1997). Machine Learning. New York: McGraw-Hill.

Quinlan, J. R. (1986). Induction of Decision Trees. Machine Learning, 1(1), 81-106.

Waugh, S. (1995, December 1). Abalone Data Set. Retrieved from UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Abalone


## Advantages of the K-Nearest Neighbors Algorithm

For the k-nearest neighbors algorithm (k-NN), we store a set of instances and then classify each new instance based on the most common class among that instance’s k neighboring instances, where k is some positive integer.

Here are some reasons why we would consider using k-nearest neighbors instead of another model type:

### Simplicity

k-NN is straightforward and easy to understand. If you need an algorithm that you can easily explain to a non-technical boss or project/product manager, k-NN is a good starting point.

### Non-parametric

Instead of making a bunch of assumptions about the data (e.g. linearity, conditional independence, etc.), you let the data speak for itself rather than fitting a fixed distribution to it. The only parameter you need to set is k, the number of neighbors to consider when classifying a new, unseen instance. The only assumption is that neighboring instances are similar to each other.

### Robustness to Noisy Data

With a reasonably large value of k, the k-NN algorithm is relatively robust to noisy data, because each classification is decided by a vote among several neighbors rather than by any single (possibly mislabeled) instance.

### No Training Needed

k-NN is a lazy learning algorithm: there is no explicit training step, so you can start classifying new instances right away.

### Flexibility

If you need an algorithm that can do both regression and classification, k-NN can do that for you.

### Multi-class Problems

If you need to classify data in which there are multiple classes instead of just two classes, k-NN can handle that.

### Continuous Improvement

To update k-NN, all you need to do is add the new training instance to the stored set. Since there is no training step, k-NN does not need to build a completely new model each time data is added. The more instances you have, the better k-NN can classify unseen instances.
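The whole algorithm is small enough to sketch in a few lines. Below is a minimal nearest-neighbors classifier for numeric attributes; it is illustrative only (the distance metric and tie-breaking are deliberately simplistic):

```python
from collections import Counter
from math import dist  # Euclidean distance (Python 3.8+)

def knn_predict(training_set, new_point, k):
    """training_set is a list of (attributes, class) pairs."""
    # Sort the stored instances by distance to the new point
    neighbors = sorted(training_set, key=lambda pair: dist(pair[0], new_point))
    # Vote among the k closest neighbors
    votes = Counter(label for _, label in neighbors[:k])
    return votes.most_common(1)[0][0]

# "Training" is just storing the instances
data = [((1.0, 1.0), 'o'), ((1.2, 0.8), 'o'),
        ((4.0, 4.2), 'x'), ((4.1, 3.9), 'x')]
print(knn_predict(data, (1.1, 0.9), k=3))  # 'o'

# Continuous improvement: just append another labeled instance
data.append(((3.8, 4.0), 'x'))
print(knn_predict(data, (4.0, 4.0), k=3))  # 'x'
```

Note how adding a new instance is a single `append`; no model has to be rebuilt, which is exactly the lazy-learning property described above.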

## Linear Separability and the XOR Problem

Classification algorithms like Logistic Regression and Naive Bayes produce linear decision boundaries, so they only work well on linearly separable problems.

### What Does Linearly Separable Mean?

Consider a data set with two attributes x1 and x2 and two classes 0 and 1. Let class 0 = o and class 1 = x.

If the classes are linearly separable, a single straight line (or, in higher dimensions, a plane or hyperplane) can separate the two classes (i.e. the x’s from the o’s). In other words, a single linear decision surface can serve as the boundary between both classes.

When the data is nonlinearly separable, we cannot use linear models. We must instead use models that can handle nonlinearly separable data.

# What Does Nonlinearly Separable Mean?

Here is an example. Consider the XOR function acting on training data that has two attributes x1 and x2 and a class (0 or 1).

The function x1 XOR x2 has the following truth table:

| x1 | x2 | x1 XOR x2 |
|----|----|-----------|
| 0  | 0  | 0         |
| 0  | 1  | 1         |
| 1  | 0  | 1         |
| 1  | 1  | 0         |

Now let us graph this function. For the class, black circles = 1 and white circles = 0.

We could try all day to place a straight line on the graph that separates the two classes, and we still would not succeed. This problem is known as the XOR problem and is a clear example of when linear classifiers like Naive Bayes and Logistic Regression would not work, and we would need to consider other types of machine learning models that can handle nonlinearly separable data.
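We can make the "try all day" claim concrete with a small Java sketch. A linear classifier predicts class 1 exactly when w1*x1 + w2*x2 + b > 0, so the snippet below brute-forces a grid of candidate lines and checks whether any of them labels all four XOR points correctly. The grid bounds and step size are arbitrary choices for the demonstration; no grid will ever succeed, since XOR is provably not linearly separable.

```
public class XorNotSeparable {

    // The four XOR training points and their classes.
    static int[][] X = {{0, 0}, {0, 1}, {1, 0}, {1, 1}};
    static int[] Y =   {0,      1,      1,      0};

    // Does the line w1*x1 + w2*x2 + b = 0 classify all four
    // points correctly (class 1 iff the weighted sum is positive)?
    static boolean separates(double w1, double w2, double b) {
        for (int i = 0; i < 4; i++) {
            double s = w1 * X[i][0] + w2 * X[i][1] + b;
            if ((s > 0) != (Y[i] == 1)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        boolean found = false;
        // Scan a coarse grid of candidate lines.
        for (double w1 = -2; w1 <= 2; w1 += 0.25)
            for (double w2 = -2; w2 <= 2; w2 += 0.25)
                for (double b = -2; b <= 2; b += 0.25)
                    if (separates(w1, w2, b)) found = true;
        System.out.println("Separating line found: " + found); // prints false
    }
}
```

By contrast, the same scan run against the AND function (class 1 only at (1,1)) finds separating lines immediately, which is why AND poses no problem for linear classifiers.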

## Difference Between Recursion and Iteration

If you are learning algorithms or data structures, you might have come across the terms recursion and iteration. Knowing the difference between the two can be confusing, so I will explain both terms using a real-world example.

# Eating My Morning Bowl of Oatmeal

Every day before I go to work, I take a shower, get dressed, and then head over to the kitchen to fix myself a big bowl of oatmeal. Breakfast is my favorite meal of the day and the most important. Many days I feel like skipping breakfast so that I can get on with my day. However, on those rare occasions when I decide to skip breakfast, I have usually regretted it. I spend the morning hungry and cannot fully concentrate on the tasks at hand.

The problem of eating my morning bowl of oatmeal can be implemented both iteratively and recursively. Iteration is the act of repeating a process. So, in the case of oatmeal, an iterative way to solve the problem of eating my bowl of oatmeal would be to do the following:

1. Take one bite of the oatmeal
2. Repeat Step 1 until the bowl is empty.

Recursion, on the other hand, is when a method calls itself. It involves breaking a problem into smaller versions of the same problem until you reach a trivial stopping case.

Let’s consider the problem of eating a bowl of oatmeal. I can solve the problem of eating the entire bowl of oatmeal recursively by breaking it into two subproblems as follows:

• Take one bite of oatmeal
• Eat the rest of the bowl of oatmeal

Each step of this recursive procedure gets smaller and smaller until all I am left with is one more spoonful of oatmeal. Once I finish that last spoonful, I am done, and I can go on about my day.

Here is how I would implement the iterative and recursive procedures for eating a bowl of oatmeal in Java pseudocode:

## Iterative Implementation

```
public static void main(String[] args) {
    Bowl bowl = new Bowl(); // a bowl of oatmeal

    while (!bowl.empty()) {
        bowl.takeOneBite();
        // Keep taking bites over and over
        // again until the bowl of oatmeal
        // is empty
    }
}
```

## Recursive Implementation

```
public static void eatOatmeal(Bowl bowl) {

    if (bowl.empty()) {
        // This is our stopping case.
        // We stop taking bites once
        // the bowl is empty
        return;
    }
    else {
        bowl.takeOneBite();
        eatOatmeal(bowl); // Recursive call
    }
}
```