Practical Deep Learning Part 2 - Integrating Recent Advances and Classic Machine Learning
17 Jan 2017 · Jeremy Howard & Rachel Thomas
With part 2 of our in-person SF course starting in six weeks, and applications having just opened, we figured we’d better tell you a bit about what to expect! So here’s an overview of what we’re planning to cover.
The main theme of this part of the course will be tackling more complex problems that require integrating a number of techniques. This includes both combining multiple deep learning techniques (such as RNNs and CNNs for attentional models) and combining classic machine learning techniques with deep learning (such as using clustering and nearest neighbors for semi-supervised and zero-shot learning). As always, we’ll introduce each method in the context of solving an end-to-end real-world modeling problem, using Kaggle datasets where possible (so that we have a clear best-practice goal to aim for).
Since we have no prerequisites for the course other than a year of coding experience and completion of part 1, we’ll be fully explaining all the classic ML techniques we use as well.
In addition, we’ll be covering some more sophisticated extensions of the DL methods we’ve seen, such as adding memory to RNNs (e.g. for building question answering systems / “chat bots”), and multi-object segmentation and detection methods.
Some of the methods we’ll examine will be very recent research directions, including unpublished research we’ve done at fast.ai. So we’ll be looking at journal articles much more frequently in this part of the course—a key teaching goal for us is that you come away from the course feeling much more comfortable reading, understanding, and implementing research papers. We’ll be sharing some simple tricks that make it much easier to quickly scan and get the key insights from a paper.
Python 3 and Tensorflow
This part of the course will use Python 3 and Tensorflow, instead of Python 2 and Theano as used in part 1. We’ll explain our reasoning in more detail in a future post; we hope that you will come away from the course feeling confident in both of these tools, and able to identify the strengths and weaknesses of both, to help you decide what to use in your own projects.
We’ve found developing the course materials in Python 3 quite a bit more pleasant than in Python 2. Whilst version 3 of the language has offered incremental improvements for many years, until recently the lack of Python 3 support in scientific computing libraries made it a very frustrating experience. The good news is that that’s all changed now; furthermore, recent developments in Python 3.4 and 3.5 have greatly improved the productivity of the language.
Our view of Tensorflow is that buried inside its rather verbose and complex API there’s a very nice piece of software. We’ll be showing how to write custom GPU accelerated algorithms from scratch in Tensorflow, staying within a small subset of the API where things remain simple and elegant.
Structured data, time series analysis, and clustering
One area where deep learning has been almost entirely ignored is structured data analysis (i.e. analyzing data where each column represents a distinct feature, such as from a database table). We had wondered whether this was because deep learning is simply less well suited to the task than the very popular decision tree ensembles (such as random forests and XGBoost, of which we’re big fans), but we’ve recently done some research showing that deep learning can be both simpler and more effective than these techniques. Getting it to work well, however, requires getting a lot of little details right: details that, to the best of our knowledge, have never been fully understood or documented elsewhere.
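To give a flavor of the kind of detail involved (an illustrative sketch with made-up data, not the full recipe we’ll cover): one common trick is to map each categorical column to a learned embedding vector rather than a one-hot encoding, and feed the embeddings to the network alongside the continuous columns.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical table: one categorical column (a store id with 4 levels)
# and one continuous column.
store_ids = np.array([0, 2, 1, 3])
continuous = np.array([[1.2], [0.7], [3.1], [2.4]])

# Each categorical level gets a dense vector. Here the vectors are
# random; in a real model they are weights trained by backprop.
embedding_dim = 3
embeddings = rng.normal(size=(4, embedding_dim))

# The network's input is the looked-up vectors concatenated with the
# continuous features.
x = np.concatenate([embeddings[store_ids], continuous], axis=1)
print(x.shape)  # (4, 4): 3 embedding dimensions + 1 continuous column
```

The payoff is that related categorical levels can end up close together in embedding space, which one-hot encodings can’t express.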
We’ll be showing how to get state of the art results in structured data analysis, including showing how to use the wonderful XGBoost, and comparing these techniques. We’ll also take a brief detour into looking at R, where structured data analysis is still quite a bit more straightforward than Python.
Most of the structured data sets we’ll investigate will have a significant time series component, so we’ll also be discussing the best ways to deal with this kind of data. Time series pop up everywhere: fraud and credit models (using time series of transactions), maintenance and operations (using time series of sensor readings), finance (technical indicators), medicine (medical sensors and EMR data), and so forth.
We will also begin our investigation of cluster analysis, showing how it can be combined with a softmax layer to create more accurate models. We will show how to implement this analysis from scratch in Tensorflow, creating a novel GPU accelerated algorithm.
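For readers who’d like a concrete picture of the clustering step, here’s a minimal plain-numpy sketch of k-means; the course version implements these same steps as GPU ops in Tensorflow.

```python
import numpy as np

def kmeans(x, k, n_iter=10):
    """Plain k-means on the CPU."""
    # Initialize centers from evenly spaced points (k-means++ would be
    # a better choice in practice).
    centers = x[:: max(1, len(x) // k)][:k].copy()
    for _ in range(n_iter):
        # Assign each point to its nearest center.
        dists = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centers[j] = x[labels == j].mean(0)
    return centers, labels

# Two well-separated blobs should come out as two clusters.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 0.1, (20, 2)),
                    rng.normal(5, 0.1, (20, 2))])
centers, labels = kmeans(x, k=2)
```

Every step above (pairwise distances, argmin, per-cluster means) is a simple array operation, which is exactly why it maps so naturally onto a GPU.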
Deep dive into computer vision
We will continue our investigation into computer vision applications from part 1, getting into some new techniques and new problem areas. We’ll study resnet and inception architectures in more detail, with a focus on how these architectures can be used for transfer learning. We’ll also look at more data augmentation techniques, such as test-time augmentation and occlusion.
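As a taste of test-time augmentation: the idea is simply to run the model on several augmented copies of each input and average the predictions. A minimal sketch, using Gaussian jitter and a random stand-in model in place of real image transforms and a trained CNN:

```python
import numpy as np

def predict(batch):
    """Stand-in for a trained CNN: a fixed random projection followed
    by a softmax over 3 hypothetical classes."""
    rng = np.random.default_rng(42)
    w = rng.normal(size=(batch.shape[1], 3))
    logits = batch @ w
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def tta_predict(x, n_aug=8, noise=0.05, seed=0):
    """Test-time augmentation: predict on several randomly perturbed
    copies of the input and average the resulting probabilities."""
    rng = np.random.default_rng(seed)
    preds = [predict(x + rng.normal(0, noise, x.shape))
             for _ in range(n_aug)]
    return np.mean(preds, axis=0)

x = np.ones((4, 10))  # a dummy batch of 4 flattened "images"
probs = tta_predict(x)
```

In practice the perturbations would be the same flips, crops, and shifts used during training; averaging over them usually buys a small but reliable accuracy improvement at no training cost.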
We’ll learn about the K nearest neighbors algorithm, and use it in conjunction with CNNs to get state of the art results on multi-frame image sequence analysis (such as videos or photo sequences). From there, we will look at other ways of grouping objects using deep learning, such as siamese and triplet networks, which we will use to get state of the art results for image comparisons.
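The k nearest neighbors step itself is pleasantly simple; here’s a sketch with hand-made 2-d features standing in for the CNN features we’ll actually use:

```python
import numpy as np

def knn_predict(train_feats, train_labels, query_feats, k=3):
    """Classify each query by majority vote among its k nearest
    training examples in feature space (squared Euclidean distance)."""
    dists = ((query_feats[:, None, :] - train_feats[None, :, :]) ** 2).sum(-1)
    nearest = dists.argsort(axis=1)[:, :k]
    votes = train_labels[nearest]
    # Majority vote per query.
    return np.array([np.bincount(v).argmax() for v in votes])

# In practice the features would come from a CNN's penultimate layer;
# here they are hand-made 2-d points for two classes.
train = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
labels = np.array([0, 0, 1, 1])
query = np.array([[0., 0.5], [5., 5.5]])
pred = knn_predict(train, labels, query, k=3)
print(pred)  # [0 1]
```

The appeal of combining this with a CNN is that the network provides a feature space in which simple distances are meaningful, and KNN then needs no further training to handle new examples.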
Unsupervised and semi-supervised learning, and productionizing models
In part 1 we studied pseudo-labeling and knowledge distillation for semi-supervised learning. In part 2 we’ll learn more techniques, including Bayesian-inspired techniques such as variational autoencoders and variational ladder networks. We will also look at the role of generative models in semi-supervised learning.
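As a quick refresher on the pseudo-labeling idea from part 1 (shown here in a common thresholded variant, with made-up model outputs): predict on the unlabeled data, keep the confident predictions, and treat them as labels for further training.

```python
import numpy as np

def pseudo_label(model_probs, threshold=0.9):
    """Keep only the unlabeled examples the model is confident about,
    and treat its predicted class as their label."""
    conf = model_probs.max(axis=1)
    keep = conf >= threshold
    return np.flatnonzero(keep), model_probs.argmax(axis=1)[keep]

# Hypothetical predicted probabilities on 4 unlabeled examples.
probs = np.array([[0.95, 0.05],
                  [0.55, 0.45],
                  [0.10, 0.90],
                  [0.60, 0.40]])
idx, labels = pseudo_label(probs)
print(idx, labels)  # [0 2] [0 1]
```

The selected examples are then mixed into subsequent training batches alongside the genuinely labeled data.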
We will show how to use unsupervised learning to build a useful photo fixing tool, which we’ll then turn into a simple web app in order to show how you can put deep learning models into production.
Zero-shot learning will be a particular focus, especially the recently developed problem of generalized zero-shot learning. Solving this problem allows us to build models on a subset of the full dataset, and apply those models to whole new classes that we haven’t seen before. This is important for real-world applications, where things can change and new types of data can appear any time, and where labeling can be expensive, slow, and/or hard to come by.
And don’t worry, we haven’t forgotten NLP! NLP is a great area to apply unsupervised and semi-supervised learning, and we will look at a number of interesting problems and techniques in this space, including how to use siamese and triplet networks for text analysis.
Segmentation, detection, and handling large datasets
Handling large datasets requires careful management of resources, and doing it in a reasonable time frame requires being thoughtful about the full modeling process. We will show how to build models on the well-known Imagenet dataset, and will show that analysing such a large dataset can readily be done on a single machine fairly quickly. We will discuss how to use your GPU, CPUs, RAM, SSD, and HDD together, taking advantage of each part most effectively.
Whereas most of our focus on computer vision so far has been on classification, we’ll now move our focus to localization; that is, finding the objects in an image (or, in NLP, finding the relevant parts of a document). We looked at some simple heatmap and bounding box approaches in part 1 already; in part 2 we’ll build on that to look at more complete segmentation systems, and at methods for finding multiple objects in an image. We will look at the results of the recent COCO competition to understand the best approaches to these problems.
Neural machine translation
As recently covered by the New York Times, Google has totally revamped their Translate tool using deep learning. We will learn about what’s behind this system, and similar state of the art systems—including some more recent advances that haven’t yet found their way into Google’s tool.
We’ll start with looking at the original encoder-decoder model that neural machine translation is based on, and will discuss the various potential applications of this kind of sequence to sequence algorithm. We’ll then look at attentional models, including applications in computer vision (where they are useful for large and complex images). In addition, we will investigate stacking layers, both in the form of bidirectional layers, and deep RNN architectures.
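The attentional step at the heart of these models can be sketched in a few lines. Here’s dot-product attention in plain numpy (an illustration of the core mechanism, not any particular production system):

```python
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def attend(decoder_state, encoder_states):
    """Dot-product attention: score each encoder state against the
    current decoder state, normalize the scores to weights, and return
    the weighted sum (the 'context' the decoder reads from)."""
    scores = encoder_states @ decoder_state
    weights = softmax(scores)
    context = weights @ encoder_states
    return weights, context

# 5 encoder states and one decoder state, each 4-dimensional.
rng = np.random.default_rng(0)
enc = rng.normal(size=(5, 4))
dec = rng.normal(size=4)
weights, context = attend(dec, enc)
```

The weights form a probability distribution over the source positions, which is why attentional translation models can effectively learn a soft alignment between source and target words.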
Question answering and multi-modal models
Recently there has been a lot of hype about chatbots. Although in our opinion they’re not quite ready for prime time (which is why pretty much all production chatbots still have a large human element), it’s instructive to see how they’re built. In general, question answering systems are built using architectures that have an explicit memory; we will look at ways of representing that in a neural network, and see the impact it has on learning.
We will also look at building visual Q&A systems, where you allow the user to ask questions about an image. This will build on top of the work we did earlier on zero-shot learning.
Reinforcement learning has become very popular recently, with Google showing promising results in training robots to complete complex grasping actions, and DeepMind showing impressive results in playing computer games. We will survey the reinforcement learning field and attempt to identify the most promising application areas, including looking beyond the main academic areas of study (robots and games) to opportunities for reinforcement learning of more general use.
We hope to see you at the course! Part 1 was full, and part 2 is likely to be even more popular, so get your application in soon!