Significance of the Misunderstanding of Deep Learning

In my previous post on deep learning, I discussed some of the potential fallout that could occur from misunderstanding what deep learning really learns. I mentioned some possible significant consequences:

  • Wasted resources as venture capitalists throw money at anything that has to do with deep learning. 
  • Wasted resources as non-expert government agencies fund any research project that has the term “deep learning” in it.
  • A boatload of computer science graduates around the world who have all of a sudden found their “passion” in deep learning.
  • Disappointed companies as deep learning does not have the expected impact on their bottom line.
  • Another AI winter.

Let’s look at the last bullet point. Rodney Brooks, former MIT professor and co-founder of iRobot and Rethink Robotics, predicts that we will enter a new AI winter in 2020.

An AI winter is a period of reduced funding and interest in artificial intelligence research that comes at the tail end of an AI hype cycle. Each AI hype cycle begins with some major breakthrough. For the next 5-10 years after that breakthrough, all sorts of papers get written on AI, companies doing “X + [insert some new hot AI technology]” get funded, computer science students around the world change their career paths, and the media goes into a feeding frenzy about how the new breakthrough will change the world.

Executives at big companies around the world then shout out quotations like this: “AI is more profound than … electricity or fire” –  Sundar Pichai (CEO of Google). 

Experts in AI then chime in, “This time is different!” 

When you hear comments like this, ask yourself, “What are they selling?”

Deep learning is a tool like any other tool…like a wrench for a car mechanic or a serrated knife for a master chef. It can currently solve some specific problems really well, but others not so well. 

Machine learning, the field that encompasses deep learning, is about automating the process of finding relationships based on empirical data. It is a powerful tool that has an enormous amount of potential, but it is not a panacea and is still a long way away from replacing the human brain. 

I do agree with Mr. Pichai that when we achieve true artificial general intelligence, such a breakthrough will be as profound as electricity or fire. We are not there yet. Much more work needs to be done (and that is a great thing for us scientists and engineers). The future is bright.

How Can We Help Others Gain a Better Understanding of What These Models Are Learning?

The example that is often marketed to explain deep learning is a neural network that first takes the inputs and learns lines, curves, and other shapes. Each successive layer abstracts and combines the data more and more until we see letters and fully formed images. This way of explaining deep learning is a good starting point for understanding what exactly these algorithms actually learn.

I think that one reason people might not trust deep learning is that they don’t understand how it works, and even when they do, we cannot directly see what is happening inside the network. When we look at a deep neural network, we have multiple layers, including hidden layers composed of neurons whose learned representations are not visible to us.

[Figure: Neural Network]

With deep learning, we allow the program to learn and distinguish the key features from our sample data set. The problem is that we may not be able to easily understand which features the model has distinguished, or how those features relate to each other within the hidden layers of the algorithm’s learning model.

I think that one strategy for improving, or at least understanding, what deep learning is doing is to unpack these abstracted layers in order to hand-tune the results into something more relevant.
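
As a first step toward that unpacking, here is a minimal sketch (in Keras, on synthetic data) of reading out a network’s hidden-layer activations so they can be inspected. The layer names and sizes are hypothetical, purely for illustration.

```python
import numpy as np
from tensorflow import keras

# A tiny fully connected network with two hidden layers.
inputs = keras.Input(shape=(784,))                       # e.g., a flattened 28x28 image
h1 = keras.layers.Dense(64, activation="relu", name="hidden_1")(inputs)
h2 = keras.layers.Dense(32, activation="relu", name="hidden_2")(h1)
outputs = keras.layers.Dense(10, activation="softmax")(h2)
model = keras.Model(inputs, outputs)

# A second model that exposes each hidden layer's output for inspection.
inspector = keras.Model(
    inputs=model.input,
    outputs=[model.get_layer("hidden_1").output,
             model.get_layer("hidden_2").output],
)

x = np.random.rand(1, 784).astype("float32")             # one fake "image"
act1, act2 = inspector.predict(x, verbose=0)
print(act1.shape, act2.shape)                            # (1, 64) (1, 32)
```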

What are Deep Learning Methods Really Learning?

It is not exactly clear what deep learning methods are really learning. Sure, they are highly effective and are learning something, but I’m still trying to get my head around exactly what they are learning.

Consider your run-of-the-mill deep neural network. “Learning” is nothing more than an optimization procedure. We are trying to produce an optimized mathematical formula that takes in a set of training examples and then can, as accurately as possible, map the inputs (i.e. attributes, features, etc.) of those examples to the outputs (i.e. class, target variable, etc.). We then use this formula to classify a new set of examples.

[Figure: Gradient Descent. Is this really all there is to learning?]
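
To make that concrete, here is a bare-bones sketch (in plain NumPy, on synthetic data) of the optimization procedure at the heart of all of this: fitting a line y = w*x + b by gradient descent. The “learning” is nothing more than repeatedly nudging two parameters downhill on the error.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=100)   # "training examples"

w, b = 0.0, 0.0                                    # initial guesses
lr = 0.1                                           # learning rate

for step in range(500):
    y_hat = w * x + b                              # map inputs to outputs
    error = y_hat - y
    grad_w = 2 * np.mean(error * x)                # gradient of mean squared error w.r.t. w
    grad_b = 2 * np.mean(error)                    # gradient of mean squared error w.r.t. b
    w -= lr * grad_w                               # take one small step downhill
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")             # approximately 3.00 and 0.50
```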

At its core, deep learning is about input-process-output. It is not learning in the true sense of the word (the way we humans learn). True learning entails understanding, and understanding is nonexistent in deep learning. 

You can memorize a book, chapter by chapter, word for word, but that doesn’t mean you are learning. You still would not understand the plot. Similarly, in deep learning there is no understanding. Deep learning “memorizes” a mapping between inputs and outputs without any real understanding of the why behind those relationships. And in my view, the why is a huge part of learning. True learning (in the human sense of the word) without understanding is not learning. Perhaps then we should call deep learning something different. Deep optimization, perhaps? I guess that doesn’t sound as marketable and sexy as deep learning.

If you look out in nature — the human brain or the brain of any living organism — nothing out there learns in a way that even remotely resembles backpropagation. Neural networks are about minimizing classification error, but real learning — the way humans learn — is deeper than that (pun intended). 

A neural network, for example, has a completely different concept of what it is to be a dog. That concept could involve where certain groups of pixels are placed and may have nothing to do with the actual structure of the animal. Where a human would see legs, arms, torso, etc., a deep learning algorithm may abstract a completely different set of things. This has led to the rise of adversarial attacks, in which an attacker works out which input changes push the network toward a given classification and then inserts carefully crafted noise that causes an image to be misclassified.
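
To illustrate, here is a hedged sketch (in PyTorch) of the best-known such attack, the fast gradient sign method (FGSM). The stand-in model below is untrained and the numbers are illustrative; in a real attack the target would be a trained network.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)   # the original "image"
true_label = torch.tensor([3])

# Compute the loss gradient with respect to the *input*, not the weights.
loss = loss_fn(model(x), true_label)
loss.backward()

# Nudge every pixel a tiny amount in the direction that increases the loss.
epsilon = 0.05
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

# To a human, x_adv looks nearly identical to x, yet it can flip the prediction.
print(model(x_adv).argmax(dim=1))
```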

Another point to consider is that neural networks generate something. It may be a relationship that we did not previously understand, but it may also just be nonsense that happens to work. The abstractions may result in a representational form that is, at its core, complete nonsense. If there is no real understanding of the abstractions that the algorithm makes, then it is hard to confirm that it is actually doing anything meaningful.

[Figure: A Basic Neural Network]

The significance of this lack of understanding of what deep learning really is remains to be seen, but here are just a few of the consequences if the hype goes unchecked:

  • Wasted resources as venture capitalists throw money at anything that has to do with deep learning. 
  • Wasted resources as non-expert government agencies fund any research project that has the term “deep learning” in it.
  • A boatload of computer science graduates around the world who have all of a sudden found their “passion” in deep learning.
  • Disappointed companies as deep learning does not have the expected impact on their bottom line.
  • Another AI winter.

Remember, there is the marketing element in there too. Using anthropomorphic terms like machine “learning” and deep “learning” is a much better sell to a general audience than machine mathematical optimization or deep optimization. Researchers gotta sell their ideas too!

Bottom Line: Artificial intelligence is not yet intelligent, and deep learning is not yet deep (yay! we still have work to do!)…nor is it learning in the true sense of the word. Deep learning certainly will continue to have an enormous impact on the world, but there needs to be more awareness and discussion of not just the enormous potential of deep learning but also its limitations so non-technical stakeholders can make more informed decisions.

Why Deep Learning Has Received So Much Attention Lately

Deep learning has been receiving an enormous amount of interest over the last seven years in the academic and business communities. Let’s take a look at the definition of deep learning, and then we will examine how this field became so popular so quickly.

What is Deep Learning?

Deep learning is a machine learning technique in which we teach a computer how to make predictions. Predictions are made by mapping a set of inputs to a set of outputs. 

Input Data -> Deep Learning Algorithm (i.e. Process) -> Output Data

For example, let’s say the input data into our deep learning algorithm is a set of photos. We want to automatically tag each photo as containing either dogs or elephants.

[Figure: Dogs]
[Figure: Elephants]

Input Data (lots of images containing dogs and elephants) -> Deep Learning Algorithm -> Classification of Each Image (i.e. Dogs or Elephants)

The “learning” part of the term deep learning entails looking at a bunch (hundreds, thousands, even millions+) of photos of elephants and dogs to develop a mathematical model of what both animals look like. Once the deep learning algorithm has been trained to recognize dogs and elephants, it can then be used to classify new photos as either dogs or elephants.

Most deep learning algorithms use neural network architectures as the structure of the underlying mathematical model. For this reason, deep learning methods are commonly called deep neural networks. 

Neural networks consist of layers of interconnected nodes. The first layer is the input layer. This layer might consist of, for example, thousands of matrices of pixels that represent photos of dogs or elephants. Each layer after the input layer transforms the data slightly, so that the data is more abstract and composite than it was in the previous layer. 

The layer after the input layer (i.e. the second layer), for example, might contain nodes that recognize simple shapes like circles and edges (which at this point look nothing like a dog or an elephant). The third layer contains nodes that recognize more complex shapes that look like a dog’s body parts (e.g. nose, eye, ear). Then the final layer, the output layer, outputs the classification of a photo as either a dog or an elephant.

[Figure: A basic multi-layer neural network architecture. The first layer on the left is the input layer. The two inner layers of nodes (neurons) are the hidden layers. The fourth layer on the right is the output layer that outputs the classification. In this case, the network expects four different classes in the data set (e.g. dogs, elephants, cows, horses).]
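
To tie the figure to something concrete, here is a minimal sketch (in Keras) of roughly that architecture: an input layer, two hidden layers, and a four-class output layer, trained and then used to classify a new example. The input size, layer widths, and data below are made up purely for illustration.

```python
import numpy as np
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(4096,)),                    # input layer: a flattened 64x64 image
    keras.layers.Dense(128, activation="relu"),    # hidden layer 1
    keras.layers.Dense(64, activation="relu"),     # hidden layer 2
    # Stacking many more hidden layers here is what would make the network "deep".
    keras.layers.Dense(4, activation="softmax"),   # output: dogs, elephants, cows, horses
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Training phase: show the network labeled examples (random stand-ins here)...
x_train = np.random.rand(32, 4096).astype("float32")
y_train = np.random.randint(0, 4, size=32)
model.fit(x_train, y_train, epochs=2, verbose=0)

# ...then the inference phase: classify a brand-new example.
x_new = np.random.rand(1, 4096).astype("float32")
print(model.predict(x_new, verbose=0).argmax())    # 0-3, the predicted class
```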

Forbes Magazine has a good image showing the basic deep neural network structure I described above.

The “deep” part of deep learning refers to the number of hidden layers in the neural network. Standard neural networks have two or three hidden layers (like in my example above), but deep neural networks can have 100+ layers. 

In short, a deep neural network is one that has several hidden layers, with the idea that these layers learn different levels of abstraction of the input attributes, thereby allowing the network to solve more complex problems, such as face recognition, object tracking, and so on.

Origin of the Deep Learning Revolution: AlexNet

This post at Medium.com shows graphs of the percentage of selected arXiv publications with either “deep”, “adversarial”, or “convolutional” in the title. Note how the curves were virtually flat at zero prior to 2010 and then took off like a rocket in 2012. What happened in 2012?

In 2010 and 2011, Fei-Fei Li held the ImageNet competition, an annual machine learning contest. Contest participants were given millions of images to use to train their models. These images were pre-labeled with one of ~1,000 different categories (e.g. leopard, cherry, mushroom, etc.). The objective of the contest was to correctly classify examples that were not in the training set. 

During those first two years of the competition, the winning teams achieved a classification accuracy of about 72%, and none of those winners used deep learning methods. Then in 2012, a team from the University of Toronto led by Alex Krizhevsky won the competition with a classification accuracy of 84%; the second-place contestant managed only 74%. The Toronto team used deep learning methods combined with the computational power of graphics processing units (GPUs) to completely blow the competition out of the water.

The results were remarkable and gave birth to the deep learning era that continues to this day.

Why Deep Learning Has Received So Much Attention Lately

With traditional machine learning approaches, you would have to design a feature extraction algorithm, which generally involves a lot of heavy mathematics (complex design), may not be very efficient, and may not perform well (i.e. the accuracy may not be suitable for real-world applications). After doing all of that, you would also have to design a whole classification model to classify your input given the extracted features (i.e. attributes).

That’s a lot of work!

Enter Deep Learning…

  • With deep neural networks, we can perform feature extraction and classification in one shot, which means we only need to design one model (see the sketch after this list).
  • The availability of large amounts of labeled data, as well as GPUs that can process data in parallel at high speeds, allows these models to be trained far more quickly than was previously possible.
  • Using the backpropagation algorithm, a well-designed loss function, and millions of parameters, these deep networks are able to learn highly complex features (which traditionally had to be hand-designed)…i.e. no more complex design!
  • Deep neural networks have become fairly easy to implement with high-level open source libraries such as Keras, PyTorch, and TensorFlow.
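
Here is a hedged sketch (in Keras) of what “feature extraction and classification in one shot” looks like in practice: convolutional layers learn the features, dense layers do the classifying, and backpropagation trains the whole stack end to end through a single loss function. The shapes and sizes are illustrative.

```python
from tensorflow import keras

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),                  # raw pixels in; no hand-built features
    keras.layers.Conv2D(16, 3, activation="relu"),   # learned low-level features (edges, blobs)
    keras.layers.MaxPooling2D(),
    keras.layers.Conv2D(32, 3, activation="relu"),   # learned higher-level features
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(64, activation="relu"),       # classifier head
    keras.layers.Dense(2, activation="softmax"),     # e.g., dog vs. elephant
])

# One loss function and one call to fit() would adjust every layer at once,
# feature extractor and classifier alike, via backpropagation.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```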

Deep learning has made many new applications practically feasible. We wouldn’t have been able to build good language translators pre-deep learning, because we simply had no technique at the time that performed well enough, or fast enough, for a real-world application. Deep learning techniques have been applied not just to image recognition but also to automatic speech recognition, natural language processing, drug discovery, customer relationship management, robotics, self-driving cars, and more.