What We Will Cover in the First Deep Learning Certificate

For those of you considering joining our deep learning certificate, I’m sure you’d like to hear more about what we will be covering. This first course is part 1 of a two-part series.

Here’s what we’re planning to cover in part 1 of the course:

  1. The opportunities and constraints in applying deep learning to solving a wide range of problems, including how deep learning is being applied today
  2. How to quickly get up and running using popular deep learning libraries such as Keras
  3. How to test that a model is working correctly
  4. Just enough linear algebra, probability theory, and calculus to understand how deep learning works
  5. The role of each key component of deep learning: input, architecture, output, loss function, optimization, regularization, and testing
  6. The key techniques used for each of these components, why they are used, and how to apply them using popular deep learning libraries
  7. How each of these techniques is applied to achieve state-of-the-art results in computer vision and natural language processing
  8. Recent advances in deep learning for improving model training outcomes
  9. Techniques for getting good results even with smaller datasets

We’ll be covering these topics in a very different way from what you may be used to if you’ve taken university-level math or CS courses in the past. We’ll be telling you all about our teaching philosophy in our next post. Our approach will be code-heavy and math-light, so we do ask that participants already have at least a year or two of solid coding experience. We’ll be using Python (via the wonderful Jupyter Notebook) for our examples, so if you’re not already familiar with Python, we’d strongly suggest going through a quick introduction to Python and to Jupyter (formerly known as IPython).

More Details

Here’s some more detail on the topics we will be covering. For convolutional neural networks (CNNs), primarily used for image classification, we will teach the key concepts and how to apply them in practice.

To learn more, you may be interested in this great visual explanation of image kernels.
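To make the idea of an image kernel concrete, here is a minimal sketch of a 2D convolution in plain numpy; the helper name, kernel, and toy image are our own illustration, not course code:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide `kernel` over `image` (valid mode), producing a feature map.
    This is the core operation a CNN layer learns to perform."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detection kernel, like those in the linked visual explanation
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

# A tiny image: dark on the left, bright on the right
image = np.array([[0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.],
                  [0., 0., 1., 1.]])

feature_map = convolve2d(image, edge_kernel)
print(feature_map)  # strong (negative) responses where the edge sits
```

A CNN learns kernels like `edge_kernel` automatically from data, rather than having them hand-designed.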

For recurrent neural networks (RNNs), used for natural language processing (NLP) and time series data, we will cover the key ideas and techniques.

To find out more now, you can read this excellent post by Andrej Karpathy.
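As a taste of what makes a network "recurrent", here is a single step of a vanilla recurrent cell in plain numpy; the weight shapes and names are our own illustrative choices, not course code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny recurrent cell: the hidden state is carried from step to step,
# which is what lets an RNN model sequences (text, time series).
W_xh = rng.normal(scale=0.1, size=(3, 4))   # input -> hidden (sizes are arbitrary)
W_hh = rng.normal(scale=0.1, size=(4, 4))   # hidden -> hidden
b_h = np.zeros(4)

def rnn_step(x, h):
    """One step of a vanilla RNN: new state from current input + previous state."""
    return np.tanh(x @ W_xh + h @ W_hh + b_h)

h = np.zeros(4)                              # start with an empty "memory"
sequence = rng.normal(size=(5, 3))           # 5 time steps of 3-dim inputs
for x in sequence:
    h = rnn_step(x, h)
print(h.shape)  # (4,) -- a summary of the whole sequence so far
```

Because the same `rnn_step` is applied at every position, the network can handle sequences of any length.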

One of our primary goals for this course is to teach you practical techniques for training better models.

Check out this helpful advice on babysitting your learning process and Chris Olah’s illuminating visualizations of language representations.
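The heart of that "babysitting" advice is simply watching the loss fall as training proceeds. Here is a toy gradient-descent loop (our own example, not course code) that shows what a healthy run looks like:

```python
import numpy as np

# Fit y = 2x with gradient descent, printing the loss each epoch;
# watching this curve fall is the "babysitting" the linked advice describes.
rng = np.random.default_rng(42)
x = rng.normal(size=100)
y = 2.0 * x + rng.normal(scale=0.1, size=100)

w, lr = 0.0, 0.1                          # start from w=0, modest learning rate
losses = []
for epoch in range(20):
    pred = w * x
    loss = np.mean((pred - y) ** 2)       # mean squared error
    losses.append(loss)
    grad = np.mean(2 * (pred - y) * x)    # dLoss/dw
    w -= lr * grad
    print(epoch, round(loss, 4))          # a steadily falling curve = healthy training

print("learned w:", round(w, 2))          # should land near the true value, 2
```

If the loss curve explodes or plateaus instead, the usual first suspect is the learning rate.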

There is a dangerous myth that you need huge datasets to use deep learning effectively. This is false, and we will teach you how to deal with data shortages.
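One widely used way to stretch a small dataset is data augmentation: generating extra training examples by transforming the ones you have. This is just one illustrative example, not the course’s full list of techniques, and the code is our own sketch:

```python
import numpy as np

def augment_flips(images):
    """Double a dataset of images by adding horizontal mirrors.
    For most photos, a mirrored image is an equally valid training example."""
    flipped = images[:, :, ::-1]          # flip each image left-to-right
    return np.concatenate([images, flipped], axis=0)

# A tiny stand-in "dataset": 2 images of 4x4 pixels
small_batch = np.arange(2 * 4 * 4).reshape(2, 4, 4).astype(float)
bigger_batch = augment_flips(small_batch)
print(bigger_batch.shape)  # (4, 4, 4): twice as many training examples
```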

Background & Preparation

To participate, you should either have some familiarity with matrix multiplication, basic differentiation, and the chain rule, or be willing to study them before the course starts. If you need a refresher on these concepts, we recommend the Khan Academy videos on matrix multiplication and the chain rule.
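If you want to check your understanding of these two prerequisites, here is a quick self-test in numpy (our own example): a small matrix product worked by hand, and a numerical check of the chain rule.

```python
import numpy as np

# Matrix multiplication: the workhorse of every neural network layer.
A = np.array([[1., 2.],
              [3., 4.]])
B = np.array([[5., 6.],
              [7., 8.]])
print(A @ B)  # [[19. 22.], [43. 50.]] -- e.g. top-left is 1*5 + 2*7 = 19

# Chain rule: d/dx f(g(x)) = f'(g(x)) * g'(x).
# Check it numerically for f(u) = u**2, g(x) = 3*x + 1 at x = 2.
x = 2.0
analytic = 2 * (3 * x + 1) * 3   # f'(g(x)) * g'(x) = 2*7*3 = 42
eps = 1e-6
numeric = (((3 * (x + eps) + 1) ** 2) - ((3 * (x - eps) + 1) ** 2)) / (2 * eps)
print(analytic, round(numeric, 3))  # the two derivatives agree
```

If both results make sense to you, you have all the math background the course assumes.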

We will make significant use of list comprehensions in Python - here is a useful introduction. It would also be very helpful to know your way around the basic Python data science tools: numpy, scipy, scikit-learn, pandas, Jupyter Notebook, and matplotlib. The best guide I know of to these tools is Python for Data Analysis. If you have no Python experience, you may want to prepare by reading Learn Python the Hard Way.
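For those who haven’t met list comprehensions before, here is the flavor of what we mean (our own quick examples):

```python
# List comprehensions build a list in one readable expression --
# we use them constantly in the course notebooks.
squares = [n ** 2 for n in range(5)]                     # [0, 1, 4, 9, 16]
evens = [n for n in range(10) if n % 2 == 0]             # keep only even numbers
pairs = [(i, j) for i in range(2) for j in range(2)]     # nested loops

print(squares, evens, pairs)
```

If these three lines read naturally to you, you’re ready for the style of code used in the course.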

Read the official USF Data Institute description of our upcoming deep learning course on Monday evenings and send your resume to datainstitute@usfca.edu by Oct 12 to apply.