Deep Learning: Not Just for Silicon Valley

Recent American news events range from horrifying to dystopian, but reading the applications of our fast.ai international fellows brought me joy and optimism. I was blown away by how many bright, creative, resourceful folks from all over the world are applying deep learning to tackle a variety of meaningful and interesting problems. Their passions include ending illegal logging, diagnosing malaria in rural Uganda, translating Japanese manga, reducing farmer suicides in India through better loans, making Nigerian fashion recommendations, monitoring patients with Parkinson’s disease, and more. Our mission at fast.ai is to make deep learning accessible to people from varied backgrounds outside of elite institutions: people who are tackling problems in meaningful but low-resource areas, far from mainstream deep learning research.

Our group of selected fellows for Deep Learning Part 2 includes people from Nigeria, Ivory Coast, South Africa, Pakistan, Bangladesh, India, Singapore, Israel, Canada, Spain, Germany, France, Poland, Russia, and Turkey. We wanted to introduce just a few of our international fellows to you today.

Tahsin Mayeesha is a Bangladeshi student who created a network visualization project analyzing data from a prominent Bangladeshi newspaper to explore the media coverage of violence against women. She wrote up her methodology and findings here, and hopes that the project can increase knowledge and empathy. While working on her Udacity Machine Learning Nanodegree, she overcame the challenges of a broken generator and intermittent electricity during Ramadan to successfully complete her projects. Mayeesha is a fan of Naruto, a popular Japanese manga series, and would like to use deep learning to translate it into English. Naruto characters use different stylized hand signs for different fight moves, and she is interested in trying to recognize these with a CNN. On a broader scale, Mayeesha wants to explore how machine learning will affect places like Bangladesh, with its semi-broken infrastructure.

Karthik Mahadevan, an industrial designer in Amsterdam, previously created an interactive medical toy for children with cancer. More recently, he helped develop smart diagnosis solutions for rural health centres in Uganda. His team developed a smartphone-based device that captures magnified images of blood smears from malaria patients. The images are processed by AI-based software that highlights potential parasites for lab technicians to check. The long-term aim, however, is to create a fully automated diagnosis system to compensate for the shortage of lab technicians in rural Uganda (84% of Uganda’s population lives in rural areas).

After being selected as our first international fellow and completing part 1 of our course, language researcher Samar Haider of Pakistan collected the largest-ever dataset of his native language, Urdu. He says he was inspired by Lesson 5 of Part 1 to acquire, clean, and segment into sentences an Urdu corpus of over 150 million tokens. Haider trained a model to learn vector representations of the words, which captured useful semantic relationships and lexical variations. Haider writes, “this marks the first time such word representations have been trained for Urdu, and, while they are themselves an incredibly valuable resource, it is exciting to think of ways in which they can be used to advance the state of natural language processing for Urdu in applications ranging from text classification to sentiment analysis to machine translation.” Haider will be joining us again for Part 2 and says, “In the long run, I hope to use deep learning techniques to bridge gaps in human communication (an especially important duty in these polarizing times) by helping computers better process and understand regional languages and helping unlock a world of information for people who don’t speak English or other popular languages.”
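
For readers curious what training word vectors on a corpus like this looks like in practice, here is a minimal sketch using gensim’s Word2Vec. It is purely illustrative: the file name and hyperparameters are hypothetical, and this is not necessarily the approach Haider took.

```python
# A minimal sketch of training word vectors on a plain-text Urdu corpus with
# gensim's Word2Vec (gensim >= 4.0). Illustrative only: the file name and
# hyperparameters are hypothetical, not necessarily what Haider used.
from gensim.models import Word2Vec

# Assume one pre-segmented sentence per line, tokens separated by spaces.
with open("urdu_corpus.txt", encoding="utf-8") as f:
    sentences = [line.split() for line in f]

model = Word2Vec(sentences, vector_size=300, window=5, min_count=5, workers=4)

# Words that appear in similar contexts should end up close together.
print(model.wv.most_similar("لاہور", topn=10))
```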

Xinxin Li previously developed carbon management technologies as an environmental research engineer, and built a Python app to diagnose plant diseases from photos of leaves. She is now working with a wearable technology company to develop a system for Parkinson’s patient therapy management, the core of which is a machine learning model to be trained with clinical trial data. This new system would enable a doctor to gauge patients’ symptoms, such as tremors and dyskinesia, via sensor data collected outside the clinic, rather than relying on written diaries or interviews with patient caregivers.

Sahil Singla works at a social impact startup in India, using deep learning on satellite imagery to help the Indian government identify which villages have problems of landlessness or crop failure. Singla plans to use deep learning to build better crop insurance and agriculture lending models, thus reducing farmer suicides (exorbitant interest rates on predatory loans contribute to the high suicide rate of Indian farmers).

Amy Xiao, an undergraduate at the University of Toronto, plans to create tools such as browser extensions to help people distinguish between fact and fiction in online information. Her goal is to rate the legitimacy of online content with a deep learning model that integrates sentiment analysis of the comments, the legitimacy of the news source, and the content itself, trained on labeled articles with a “predetermined” score. She is also interested in exploring how to discern legitimate vs. fake reviews on online sites.

Prabu Ravindran is developing a deep learning system for automated wood identification at the Center for Wood Anatomy Research, the Forest Products Laboratory, and the University of Wisconsin Botany Department. This system will be deployed to combat illegal logging and identify wood products. Orlando Adeyemi, a Nigerian currently working in Malaysia, has already begun scraping Nigerian fashion websites for data to which he plans to apply deep learning. Previously, he created an iOS app for Malaysian cinema times and won an award for his Arduino wheelchair stabilizer.

Gurumoorthy C is excited about the “Clean India” initiative launched by Prime Minister Modi. Together with a group of friends, Gurumoorthy plans to create a small robot to pick up trash in the street and correctly identify waste. Karthik Kannan is currently working on an idea that incorporates deep learning and wearable cameras to help the visually impaired navigate closed spaces in India.

Alexis Fortin-Cote is a PhD student in robotics at Université Laval in French-speaking Quebec. He plans to create a model capable of inferring how much fun players are having in video games, using biosensor data and self-reported emotional state. Together with a team from the school of psychology, he has already collected over 400 hours of data from 200 players.

We welcome the above fellows, along with the rest of our international fellows and our in-person participants, to Deep Learning Part 2. We are very excited about this community and what we can build together!

Deep Learning For Coders - Full notes and transcripts now available

We’ve been so excited to watch the thousands of people working their way through part 1 of Practical Deep Learning For Coders, and the buzzing community that has formed around the course’s Deep Learning Discussion Forums. But we have heard from those for whom English is not their first language that a major impediment to understanding the content is the lack of written transcripts or course notes. As a student of Chinese I very much empathize - I find it far easier to understand Chinese videos when they have subtitles, especially for more technical material.

So I’m very happy to announce that the course now has complete transcripts (available directly as captions within YouTube) and course notes for every lesson.

notes example

This is thanks to the hard work of our intern, Brad Kenstler (course notes), and part 1 international fellow, Lin Crampton (transcripts). We are very grateful to both of them for their wonderful contributions, and we expect that they will significantly advance our mission to make the power of deep learning accessible to all.

caption example

Diversity and International Fellowships for Deep Learning Part 2

Applications are now open for Deep Learning Part 2, to be offered at the University of San Francisco Data Institute on Monday evenings, Feb 27-April 10. The course will cover integrating multiple cutting-edge deep learning techniques, as well as combining classic machine learning techniques with deep learning.

In part 1, we worked hard to curate a diverse group of participants, because we’d observed that artificial intelligence is missing out because of its lack of diversity. A study of 366 companies found that ethnically diverse companies are 35% more likely to perform well financially, and teams with more women perform better on collective intelligence tests. Scientific papers written by diverse teams receive more citations and have higher impact factors.

Everyone benefited from having a class full of curious coders from a variety of backgrounds. We had a number of students interested in using deep learning for social good, including Sara Hooker, founder and executive director of Delta Analytics, which partners non-profits with data scientists. She is now working on a project to use audio data streamed from recycled cell phones in endangered forests to track harmful human activity. Another student was a former Literature PhD student interested in analyzing gender and language in GitHub commits. Several students connected over their shared interest in Alzheimer’s research. International fellow Samar Haider is a researcher applying natural language processing to his native language of Urdu, one of the 70 different languages spoken in Pakistan, many of which have not been well studied and are in need of the additional resources deep learning can provide. Another international fellow said he never expected to be using so many command line tools (we provide scripts and guidance to walk you through the setup), and he ended up creating a memory-saving Amazon Machine Image to share with the rest of the class.

One of the ways we achieved this great outcome was by, together with USF Data Institute, sponsoring diversity fellowships and international fellowships. It was such a success that we’ve decided to do it again.

I am saddened and angered that President Trump is banning immigrants from certain countries from entering the US, even when they have visas and green cards. The deep learning community is suffering from its lack of diversity already, and we are trying to fight that. We can’t change government policy at fast.ai, but we can do our little bit: we will again offer free remote international fellowships for those selected outside San Francisco to attend classes virtually, have access to all the same online resources, and be a part of our community. People of all religions and from all countries, including Iran, Iraq, Libya, Somalia, Syria, Sudan, and Yemen, are welcome and encouraged to apply.

Diversity fellowships are full or partial tuition waivers to attend the in-person course in San Francisco, for women, people of color, LGBTQ people, and veterans. We are looking for applicants who have shown the ability to follow through on projects and a significant level of intellectual curiosity.

International fellowships allow those who cannot get to San Francisco to attend virtual classes for free during the same time period as the in-person class, and provide access to all the same online resources. (Note that international fellowships do not provide an official completion certificate through USF.) Our international fellows from part 1 contributed greatly to the community.

Both fellowships require completion of part 1. When applying, please let us know about any way that you have contributed to the student community (such as forum posts, pull requests, or open source projects). To apply, email your resume to rachel@fast.ai and datainstitute@usfca.edu, along with a note of whether you are interested in the diversity or international fellowships and a brief paragraph on how you want to use deep learning. Note that to be eligible, you must have completed Deep Learning Part 1, either in person, or through our MOOC. Deep Learning Part 1 involves approximately 70 hours of work, so if you haven’t finished yet, you should get studying. The deadline to apply is 2/13.

Practical Deep Learning Part 2 - Integrating Recent Advances and Classic Machine Learning

With part 2 of our in-person SF course starting in 6 weeks, and applications having just opened, we figured we’d better tell you a bit about what to expect! So here’s an overview of what we’re planning to cover.

Overall approach

The main theme of this part of the course will be tackling more complex problems that require integrating a number of techniques. This includes both integrating multiple deep learning techniques (such as combining RNNs and CNNs for attentional models), and combining classic machine learning techniques with deep learning (such as using clustering and nearest neighbors for semi-supervised and zero-shot learning). As always, we’ll be introducing all methods in the context of solving end-to-end, real-world modeling problems, using Kaggle datasets where possible (so that we have a clear best-practice goal to aim for).

Since the only prerequisites for the course are a year of coding experience and completion of part 1, we’ll be fully explaining all the classic ML techniques we use as well.

In addition, we’ll be covering some more sophisticated extensions of the DL methods we’ve seen, such as adding memory to RNNs (e.g. for building question answering systems / “chat bots”), and multi-object segmentation and detection methods.

Some of the methods we’ll examine will be very recent research directions, including unpublished research we’ve done at fast.ai. So we’ll be looking at journal articles much more frequently in this part of the course—a key teaching goal for us is that you come away from the course feeling much more comfortable reading, understanding, and implementing research papers. We’ll be sharing some simple tricks that make it much easier to quickly scan and get the key insights from a paper.

Python 3 and Tensorflow

This part of the course will use Python 3 and TensorFlow, instead of the Python 2 and Theano used in part 1. We’ll explain our reasoning in more detail in a future post; we hope that you will come away from the course feeling confident in both of these tools, and able to identify the strengths and weaknesses of each, to help you decide what to use in your own projects.

We’ve found using Python 3 to develop the course materials quite a bit more pleasant than Python 2. Whilst version 3 of the language has provided incremental improvements for many years, until recently the lack of support for Python 3 in scientific computing libraries made it a very frustrating experience. The good news is that that’s all changed now; furthermore, recent developments in Python 3.4 and 3.5 have greatly improved the productivity of the language.

Our view of TensorFlow is that, buried in a rather verbose and complex API, there’s a very nice piece of software. We’ll be showing how to write custom GPU-accelerated algorithms from scratch in TensorFlow, staying within a small subset of the API where things remain simple and elegant.

Structured data, time series analysis, and clustering

One area where deep learning has been almost entirely ignored is structured data analysis (i.e. analyzing data where each column represents a distinct feature, such as from a database table). We had wondered whether this is because deep learning is simply less well suited to this task than the very popular decision tree ensembles (such as random forests and XGBoost, which we’re big fans of), but we’ve recently done some research showing that deep learning can be both simpler and more effective than these techniques. But getting it to work well requires getting a lot of little details right—details that, to the best of our knowledge, have never been fully understood or documented elsewhere.

We’ll be showing how to get state of the art results in structured data analysis, including showing how to use the wonderful XGBoost, and comparing these techniques. We’ll also take a brief detour into looking at R, where structured data analysis is still quite a bit more straightforward than Python.
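
To give a taste of the kind of baseline we’ll be comparing against, here is a minimal gradient boosted trees example using XGBoost’s scikit-learn wrapper. The dataset and column names below are made up for illustration; in the course we’ll work with real Kaggle data.

```python
# A minimal XGBoost baseline on structured (tabular) data. The file and column
# names here are hypothetical placeholders for a real dataset.
import pandas as pd
from sklearn.model_selection import train_test_split
from xgboost import XGBRegressor

df = pd.read_csv("store_sales.csv")                  # hypothetical table
X = pd.get_dummies(df.drop(columns=["sales"]))       # one-hot encode categoricals
y = df["sales"]

X_train, X_valid, y_train, y_valid = train_test_split(X, y, test_size=0.2,
                                                      random_state=42)

model = XGBRegressor(n_estimators=500, learning_rate=0.05, max_depth=6)
model.fit(X_train, y_train)
print("validation R^2:", model.score(X_valid, y_valid))
```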

Most of the structured data sets we’ll investigate will have a significant time series component, so we’ll also be discussing the best ways to deal with this kind of data. Time series pop up everywhere, such as fraud and credit models (using time series of transactions), maintenance and operations (using time series of sensor readings), finance (technical indicators), medicine (medical sensors and EMR data), and so forth.

We will also begin our investigation of cluster analysis, showing how it can be combined with a softmax layer to create more accurate models. We will show how to implement this analysis from scratch in TensorFlow, creating a novel GPU-accelerated algorithm.
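
To give a flavor of what “from scratch in TensorFlow” means, here is a minimal sketch of one k-means step written directly with TensorFlow ops. This is not the course’s actual code, just an illustration of the pattern: expressing an algorithm as tensor operations so the GPU can do the heavy lifting.

```python
# A minimal sketch of one k-means step written directly with TensorFlow ops
# (TF 2.x eager style). Illustrative only, not the course's actual code; empty
# clusters would produce NaN centroids and are ignored here for brevity.
import tensorflow as tf

def kmeans_step(points, centroids):
    """points: (n, d) float tensor; centroids: (k, d) float tensor."""
    # Pairwise squared distances between every point and every centroid: (n, k).
    dists = tf.reduce_sum(
        tf.square(points[:, None, :] - centroids[None, :, :]), axis=-1)
    assignments = tf.argmin(dists, axis=1)
    # Recompute each centroid as the mean of the points assigned to it.
    new_centroids = tf.stack([
        tf.reduce_mean(tf.boolean_mask(points, tf.equal(assignments, c)), axis=0)
        for c in range(centroids.shape[0])])
    return assignments, new_centroids

points = tf.random.normal((1000, 2))
centroids = tf.random.normal((5, 2))
for _ in range(10):
    assignments, centroids = kmeans_step(points, centroids)
```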

Deep dive into computer vision

We will continue our investigation into computer vision applications from part 1, getting into some new techniques and new problem areas. We’ll study ResNet and Inception architectures in more detail, with a focus on how these architectures can be used for transfer learning. We’ll also look at more data augmentation techniques, such as test-time augmentation and occlusion.
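
As a quick preview, test-time augmentation simply means averaging a model’s predictions over several augmented copies of each test image. Here is a minimal Keras-style sketch; the model and images are stand-ins for your own, and exact behavior varies a little across Keras versions.

```python
# A minimal sketch of test-time augmentation (TTA): average a model's predictions
# over several randomly augmented copies of each test image. Keras-style and
# purely illustrative; `model` and `test_images` stand in for your own.
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

augmenter = ImageDataGenerator(horizontal_flip=True,
                               width_shift_range=0.1,
                               height_shift_range=0.1)

def predict_with_tta(model, test_images, n_aug=4):
    preds = [model.predict(test_images)]              # predictions on originals
    for _ in range(n_aug):
        batch = next(augmenter.flow(test_images, batch_size=len(test_images),
                                    shuffle=False))   # one augmented copy of all
        preds.append(model.predict(batch))
    return np.mean(preds, axis=0)                     # average over all versions
```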

We’ll learn about the k-nearest neighbors algorithm, and use it in conjunction with CNNs to get state of the art results on multi-frame image sequence analysis (such as videos or photo sequences). From there, we will look at other ways of grouping objects using deep learning, such as Siamese and triplet networks, which we will use to get state of the art results for image comparison.
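
The basic idea of pairing a CNN with k-nearest neighbors is simple: embed images as feature vectors with a pretrained network, then classify by looking up nearest neighbors in that feature space. Below is an illustrative sketch only; the course may use different architectures and feature layers, and the exact Keras API varies slightly across versions.

```python
# A minimal sketch of pairing a pretrained CNN with k-nearest neighbors: embed
# images as feature vectors, then classify by nearest neighbors in that space.
# Illustrative only; behavior varies slightly across Keras versions.
import numpy as np
from keras.applications.resnet50 import ResNet50, preprocess_input
from sklearn.neighbors import KNeighborsClassifier

# Pretrained CNN with the classification head removed; outputs a 2048-d vector.
embedder = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def embed(images):
    """images: float array of shape (n, 224, 224, 3), RGB, values in [0, 255]."""
    return embedder.predict(preprocess_input(images.copy()))

# Random arrays standing in for real labeled images (replace with your dataset).
train_images = np.random.rand(20, 224, 224, 3).astype("float32") * 255
train_labels = np.random.randint(0, 2, size=20)
test_images = np.random.rand(5, 224, 224, 3).astype("float32") * 255

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(embed(train_images), train_labels)
print(knn.predict(embed(test_images)))
```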

Unsupervised and semi-supervised learning, and productionizing models

In part 1 we studied pseudo-labeling and knowledge distillation for semi-supervised learning. In part 2 we’ll learn more techniques, including Bayesian-inspired approaches such as variational autoencoders and variational ladder networks. We will also look at the role of generative models in semi-supervised learning.
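
For anyone who’d like a refresher, pseudo-labeling boils down to a few lines: train on the labeled data, use the model’s own predictions as labels for the unlabeled data, and keep training on the combined set. Here is a minimal Keras-style sketch on toy data (in practice you would usually down-weight the pseudo-labeled portion):

```python
# A minimal sketch of pseudo-labeling on toy data: train on the labeled set,
# predict labels for the unlabeled set, then keep training on the combined data.
# In practice the pseudo-labeled portion is usually down-weighted; omitted here.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

x_labeled = np.random.rand(100, 20)
y_labeled = np.random.randint(0, 2, size=(100, 1))
x_unlabeled = np.random.rand(1000, 20)

model = Sequential([Dense(32, activation="relu", input_shape=(20,)),
                    Dense(1, activation="sigmoid")])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x_labeled, y_labeled, epochs=5, verbose=0)

# Use the model's own predictions as labels for the unlabeled examples.
pseudo_labels = (model.predict(x_unlabeled) > 0.5).astype("float32")
x_all = np.concatenate([x_labeled, x_unlabeled])
y_all = np.concatenate([y_labeled, pseudo_labels])
model.fit(x_all, y_all, epochs=5, verbose=0)
```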

We will show how to use unsupervised learning to build a useful photo fixing tool, which we’ll then turn into a simple web app in order to show how you can put deep learning models into production.

Zero-shot learning will be a particular focus, especially the recently developed problem of generalized zero-shot learning. Solving this problem allows us to build models on a subset of the full dataset, and apply those models to whole new classes that we haven’t seen before. This is important for real-world applications, where things can change and new types of data can appear any time, and where labeling can be expensive, slow, and/or hard to come by.

And don’t worry, we haven’t forgotten NLP! NLP is a great area for unsupervised and semi-supervised learning, and we will look at a number of interesting problems and techniques in this space, including how to use Siamese and triplet networks for text analysis.

Segmentation, detection, and handling large datasets

Handling large datasets requires careful management of resources, and doing it in a reasonable time frame requires being thoughtful about the full modeling process. We will show how to build models on the well-known ImageNet dataset, and will show that analyzing such a large dataset can readily be done on a single machine fairly quickly. We will discuss how to use your GPU, CPUs, RAM, SSD, and HDD together, taking advantage of each most effectively.

Whereas most of our computer vision focus so far has been on classification, we’ll now move to localization—that is, finding the objects in an image (or, in NLP, finding the relevant parts of a document). We looked at some simple heatmap and bounding box approaches in part 1; in part 2 we’ll build on that to look at more complete segmentation systems, and at methods for finding multiple objects in an image. We will look at the results of the recent COCO competition to understand the best approaches to these problems.

Neural machine translation

As recently covered by the New York Times, Google has totally revamped their Translate tool using deep learning. We will learn about what’s behind this system, and similar state of the art systems—including some more recent advances that haven’t yet found their way into Google’s tool.

We’ll start by looking at the original encoder-decoder model that neural machine translation is based on, and will discuss the various potential applications of this kind of sequence-to-sequence algorithm. We’ll then look at attentional models, including applications in computer vision (where they are useful for large and complex images). In addition, we will investigate stacking layers, both in the form of bidirectional layers and deep RNN architectures.
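
As a rough sketch of that encoder-decoder idea: one RNN reads the source sentence and hands its final state to a second RNN, which generates the target sentence one token at a time. The Keras-style snippet below is illustrative only; vocabulary sizes and dimensions are made up, and real systems add attention, beam search, and a separate inference-time decoding loop.

```python
# A rough sketch of the encoder-decoder idea behind neural machine translation,
# in Keras (functional API; requires a version with return_state support).
# Vocabulary sizes and dimensions are made up; real systems add attention,
# beam search, and separate inference-time decoding.
from keras.models import Model
from keras.layers import Input, LSTM, Dense, Embedding

src_vocab, tgt_vocab, hidden = 5000, 5000, 256

# Encoder: read the source sequence and keep only its final hidden/cell state.
enc_in = Input(shape=(None,))
enc_emb = Embedding(src_vocab, hidden)(enc_in)
_, state_h, state_c = LSTM(hidden, return_state=True)(enc_emb)

# Decoder: generate the target sequence, initialized from the encoder's state.
dec_in = Input(shape=(None,))
dec_emb = Embedding(tgt_vocab, hidden)(dec_in)
dec_out, _, _ = LSTM(hidden, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])
dec_probs = Dense(tgt_vocab, activation="softmax")(dec_out)

model = Model([enc_in, dec_in], dec_probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```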

Question answering and multi-modal models

Recently there has been a lot of hype about chatbots. Although in our opinion they’re not quite ready for prime time (which is why pretty much all production chatbots still have a large human element), it’s instructive to see how they’re built. In general, question answering systems are built using architectures that have an explicit memory; we will look at ways of representing that in a neural network, and see the impact it has on learning.

We will also look at building visual Q&A systems, where you allow the user to ask questions about an image. This will build on top of the work we did earlier on zero-shot learning.

Reinforcement learning

Reinforcement learning has become very popular recently, with Google showing promising results in training robots to complete complex grasping actions, and DeepMind showing impressive results in playing computer games. We will survey the reinforcement learning field and attempt to identify the most promising application areas, including looking beyond the main academic areas of study (robots and games) to opportunities for reinforcement learning of more general use.

We hope to see you at the course! Part 1 was full, and part 2 is likely to be even more popular, so get your application in soon!

Big deep learning news: Google Tensorflow chooses Keras

Buried in a Reddit comment, Francois Chollet, author of Keras and AI researcher at Google, made an exciting announcement: Keras will be the first high-level library added to core TensorFlow at Google, which will effectively make it TensorFlow’s default API. This is excellent news for a number of reasons!

As background, Keras is a high-level Python neural networks library that runs on top of either TensorFlow or Theano. There are other high level Python neural networks libraries that can be used on top of TensorFlow, such as TF-Slim, although these are less developed and not part of core TensorFlow.

Using TensorFlow makes me feel like I’m not smart enough to use TensorFlow, whereas using Keras makes me feel like neural networks are easier than I realized. This is because TensorFlow’s API is verbose and confusing, and because Keras has the most thoughtfully designed, expressive API I’ve ever experienced. I was too embarrassed to publicly criticize TensorFlow after my first few frustrating interactions with it. It felt so clunky and unnatural, but surely this was my failing. However, Keras and Theano confirm my suspicion that tensors and neural networks don’t have to be so painful. (In addition, in part 2 of our deep learning course Jeremy will be showing some tricks to make it easier to write custom code in TensorFlow.)

For a college assignment, I once used a hardware description language to code division by adding and shifting bits in the CPU’s registers. It was an interesting exercise, but I certainly wouldn’t want to code a neural network this way. There are a number of advantages to using a higher-level language: quicker coding, fewer bugs, and less pain. The benefits of Keras go beyond this: it is so well suited to the concepts of neural networks that it has improved how Jeremy and I think about them and facilitated new discoveries. Keras makes me better at neural networks, because its abstractions match up so well with neural network concepts.
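
To make that concrete, here is about the smallest possible example: a Keras model for classifying flattened 28x28 images, where each line corresponds directly to a layer you would sketch on a whiteboard. (Purely illustrative; any simple classifier looks much the same.)

```python
# About the smallest possible Keras example: a classifier for flattened 28x28
# images, where each line corresponds directly to a conceptual layer.
from keras.models import Sequential
from keras.layers import Dense, Dropout

model = Sequential([
    Dense(128, activation="relu", input_shape=(784,)),  # hidden layer
    Dropout(0.5),                                        # regularization
    Dense(10, activation="softmax"),                     # class probabilities
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```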

Writing programs in the same conceptual language that I’m thinking in allows me to focus my attention on the problems I’m trying to solve, rather than on artifacts of the programming language. When most of my mental energy is spent converting between the abstractions in my head and the abstractions of the language, my thinking becomes slower and fuzzier. TensorFlow affects my productivity in the same way that having to code in Assembly would.

As Chollet wrote, “If you want a high-level object-oriented TF API to use for the long term, Keras is the way to go.” And I am thrilled about this news.

Note: For our Practical Deep Learning for Coders course, we used Keras and Theano. For Practical Deep Learning for Coders Part 2, we plan to use Keras and TensorFlow. We prefer Theano over TensorFlow, because Theano is more elegant and doesn’t make scope super annoying. Unfortunately, only TensorFlow supports some of the things we want to teach in part 2.

UPDATE: I drafted this post last week. After publishing, I saw on Twitter that Francois Chollet had announced the integration of Keras into TensorFlow a few hours earlier.