Advice to Medical Experts Interested in AI

This week’s Ask-A-Data-Scientist column is from a medical doctor. Email your data science advice questions to rachel@fast.ai.

Q: I’m a medical doctor (MD-PhD). I do a mix of clinical work and basic science research. My research primarily involves small scale animal studies for hypothesis testing, although others in my lab do some statistical clinical studies, such as paired cohort analysis. I’m interested in AI and wondering if and how it can be applied to my field?

A: AI is being applied to several fields of medicine, including:

  • Diabetic retinopathy is the fastest growing cause of blindness. The first step in the screening process is for ophthalmologists to examine a picture of the back of the eye, yet in many parts of the world there are not enough specialists available to do so. Researchers at Google and Stanford have used deep learning to create computer models that are as accurate as human ophthalmologists. This technology could help doctors screen a larger number of patients faster, helping to alleviate the global shortage of doctors.

  • In 2012, Merck sponsored a drug discovery competition where participants were given a dataset describing the chemical structure of thousands of molecules and asked to predict which were most likely to make for effective drugs. Remarkably, the winning team had only decided to enter the competition at the last minute and had no specific knowledge of biochemistry. They used deep learning.

  • A 2016 New York Times article reported that medical start-up Enlitic, founded by fast.ai’s Jeremy Howard, was 50 percent more accurate than human radiologists in making lung cancer diagnoses.

Jeremy Howard, photographed by Jason Henry for the New York Times

  • Fast.ai remote fellow Xinxin Li is working with Ikaishe and Xeed to develop wearable devices for patients with Parkinson’s Disease. Traditionally, doctors assess disease progression by observing the patient walking; wearable devices will allow far more, and far more precise, data to be collected.

  • Deep learning can classify skin cancer with dermatologist-level accuracy, as published in Nature earlier this year.

  • Cardiogram is an app for Apple Watch that screens users’ cardio health and is able to detect atrial fibrillation, a common form of heart irregularity, with 97 percent accuracy.

Does this mean I need “big data”? No.

Currently, when news articles talk about “AI”, they are often referring to deep learning, one particular family of algorithms.

Although the above examples involve relatively large datasets, deep learning is being effectively applied to smaller and smaller datasets all the time. Here are a few examples that I listed in a previous blog post:

  • Francois Chollet, creator of the popular deep learning library Keras and now at Google Brain, has an excellent tutorial entitled Building powerful image classification models using very little data, in which he trains an image classifier on only 2,000 training examples.

  • At Enlitic, Jeremy Howard led a team that used just 1,000 examples of lung CT scans with cancer to build an algorithm that was more accurate at diagnosing lung cancer than a panel of 4 expert radiologists.

  • The C++ library Dlib has an example in which a face detector is accurately trained using only 4 images, containing just 18 faces!

Face Recognition with Dlib

Fast.ai student Ben Bowles wrote a post on how data platform Quid uses deep learning with small data to improve the quality of one of their data sets.
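
To make the small-data approach concrete, below is a minimal sketch in the spirit of Chollet’s tutorial (not his exact code): Keras’s ImageDataGenerator applies random transformations every epoch, so a small labeled set effectively goes much further. The directory layout, image size, and epoch count are hypothetical placeholders.

```python
# A small-data sketch (assumed layout: "data/train" holds one subfolder
# per class). Random augmentations mean the model rarely sees exactly
# the same image twice, stretching a few thousand labeled examples.
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,       # scale pixels to [0, 1]
    rotation_range=20,       # random rotations up to 20 degrees
    width_shift_range=0.1,   # random horizontal shifts
    height_shift_range=0.1,  # random vertical shifts
    horizontal_flip=True,    # random left-right flips
)
train_generator = train_datagen.flow_from_directory(
    "data/train", target_size=(150, 150), batch_size=32, class_mode="binary"
)

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu", input_shape=(150, 150, 3)),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # binary classifier
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_generator, epochs=10)
```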

Transfer learning is a powerful technique in which models trained on larger data sets (by teams with more computational resources) can be fine-tuned to fit other problems, with less data and less computation. For instance, models originally trained on ImageNet (images in a set of 1,000 categories) are a good starting point for other computer vision problems (such as analyzing a CT scan for lung cancer). Transfer learning is a major focus of our Practical Deep Learning for Coders course.
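
In code, the transfer learning pattern is only a few lines. This is a generic sketch (not the course’s exact recipe): load an ImageNet-trained network without its final classifier, freeze it, and train a small new head on your own labels; the choice of VGG16 and the binary head are illustrative assumptions.

```python
# A sketch of transfer learning with tf.keras: reuse an ImageNet-trained
# convolutional base as a fixed feature extractor and learn only a small
# new classification head on the target problem.
import tensorflow as tf

base = tf.keras.applications.VGG16(
    weights="imagenet",        # features learned on ~1.2M ImageNet images
    include_top=False,         # drop the original 1,000-way classifier
    input_shape=(224, 224, 3),
)
base.trainable = False         # freeze the pre-trained weights

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. cancer / no cancer
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(...) on a modest labeled set is often enough here, because
# only the small head is being learned from scratch.
```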

How does machine learning compare to hypothesis testing or paired cohort analysis?

Machine learning shines at handling messy data. Techniques such as controlled studies or paired cohort analysis rely on carefully controlling for different variables in your experiment set-up (or in finding the pairings), whereas machine learning is an excellent choice when this isn’t possible.

Random Forests

Deep learning is just one kind of machine learning. Another machine learning algorithm is the random forest, which is great for observational studies.

The original random forest paper successfully tested the algorithm on a number of small data sets, including 569 images of fine needle aspirates of breast masses, data from 345 men with liver disorders, and 336 E. coli samples.
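
Conveniently, scikit-learn ships what appears to be the same 569-sample Wisconsin diagnostic breast cancer dataset (features computed from digitized fine needle aspirate images), so a random forest baseline takes only a few lines. A sketch:

```python
# Train and cross-validate a random forest on the 569-sample
# Wisconsin diagnostic breast cancer dataset bundled with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # 569 samples, 30 features

forest = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(forest, X, y, cv=5)  # 5-fold cross-validation
print(f"accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

This example is in Python; the R equivalent, relevant to the recommendation below, is the randomForest package.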

Getting started

If you are at a university, seek out collaborators or interns. Note that any student who knows how to code can learn deep learning. If you are looking for a collaborator, you do not need to find a deep learning expert: all you need is someone with a year or two of coding experience who is interested in your project and wants to learn deep learning. It’s even better if they are already familiar with your research (for instance, a student who is already working in your lab and knows how to code).

I recommend that you learn to code. Even if it’s not your area of focus and you will be collaborating with programmers, knowing some code will help you better understand what’s possible and have a better sense of what the programmers you’re collaborating with are doing.

For doctors, I think the best way to start coding is by learning R: it has the easiest-to-use implementation of random forests (a great general-purpose machine learning algorithm), and R is commonly used by statisticians, so you will most likely meet biostatisticians who use it. RStudio is a relatively user-friendly, free environment in which to use R (although it still requires writing code). This free Coursera class, taught by biostatisticians from Johns Hopkins University, is one good way to get started.

Folks who know me may be surprised by this recommendation: in general, I recommend that people interested in becoming data scientists learn Python; I recommend that teenagers or anyone who likes art or games learn JavaScript; and now I’m recommending that doctors learn R. You will need to learn Python if you move on to deep learning, but random forests in R are a great place to get started with machine learning (and random forests still produce top-quality results in many areas; they are not just for beginners!)

Sponsor a Deep Learning Diversity Scholarship

Last year’s diversity fellowships (funded by the University of San Francisco and fast.ai), open to women, people of Color, LGBTQ people, and veterans, played a role in helping us create a diverse community. However, we need your help to be able to offer additional scholarships this year. If your company, firm, or organization would be willing to sponsor diversity scholarships ($1,500 each), please email rachel@fast.ai.

Deep learning is incredibly powerful and is being used to diagnose cancer, stop deforestation of endangered rainforests, provide better crop insurance to farmers in India, provide better language translation than humans, improve energy efficiency, and more. To find out why diversity in AI is a crucial issue, read this post on the AI diversity crisis.

While many in tech are bemoaning the “skills gap” or “talent shortage” in trying to hire AI practitioners, we at fast.ai set out 1 year ago with a novel experiment: could we teach deep learning to coders with no pre-requisites beyond just 1 year of coding experience? Other deep learning materials often assume an advanced math background, yet we were able to get our students to the state of the art, through a practical, hands-on approach in our part-time course, without the advanced math requirements. Our students have been incredibly successful and their stories include the following:

  • Sara Hooker, who started coding only 2 years ago and is now part of the elite Google Brain Residency
  • Tim Anglade, who used Tensorflow to create the Not Hot Dog app for HBO’s Silicon Valley, leading Google’s CEO to tweet “our work here is done”
  • Gleb Esman, who created a new fraud product for Splunk using the tools he learnt in the course, and was featured on Splunk’s blog
  • Jacques Mattheij, who built a robotic system to sort two tons of Lego
  • Karthik Kannan, founder of letsenvision.com, who told us “Today I’ve picked up steam enough to confidently work on my own CV startup and the seed for it was sowed by fast.ai with Pt1. and Pt.2”
  • Matthew Kleinsmith and Brendon Fortuner, who in 24 hours built a system to add filters to the background and foreground of videos, giving them victory in the 2017 Deep Learning Hackathon.

For those interested in applying for our diversity fellowships (to take our course), read this post for details.

Diversity Crisis in AI, 2017 edition

Deep learning has great potential, but currently the people using this technology are overwhelmingly white and male. We’re already seeing society’s racial and gender biases being encoded into software that uses AI when built by such a homogeneous group. Additionally, people can’t address problems that they’re not aware of, and with more diverse practitioners, a wider variety of important societal problems will be tackled.

Deep Learning has great potential

Deep learning is being used by fast.ai students and teachers to diagnose cancer, stop deforestation of endangered rainforests, provide better crop insurance to farmers in India (who otherwise have to take predatory loans from thugs, which have led to high suicide rates), help Urdu speakers in Pakistan, develop wearable devices for patients with Parkinson’s disease, and much more. Deep learning offers hope of a way for us to fill the global shortage of doctors, providing more accurate medical diagnoses and potentially saving millions of lives. It could improve energy efficiency, increase farm yields, reduce pesticide use, and more.

We want to get deep learning into the hands of as many people as possible, from as many diverse backgrounds as possible. People with different backgrounds have different problems they’re interested in solving. The traditional approach is to start with an AI expert and then give them a problem to work on; at fast.ai we want people who are knowledgeable and passionate about the problems they are working on, and we’ll teach them the deep learning needed to address them.

Deep Learning can be misused

Deep learning isn’t “more biased” than simpler models such as regression; however, the amazing effectiveness of deep learning suggests that it will be used in far more applications. As a society, we risk encoding our existing gender and racial biases into algorithms that determine medical care, employment decisions, criminal justice decisions, and more. This is already happening with simple models, but the widespread adoption of deep learning will rapidly accelerate this trend. The next 5 to 10 years are a particularly crucial time. We must get more women and people of Color building this technology in order to recognize, prevent, or address these biases.

Earlier this year, Taser (now rebranded Axon), the maker of electronic stun guns, acquired two AI companies. Taser/Axon owns 80% of the police body camera market in the US, keeps the footage from these cameras in private databases, and is now advertising that it is developing technology for “predictive policing”. As a private company, it is not subject to the same public records laws or oversight that police departments are. Given that racial bias in policing has been well-documented and shown to create negative feedback loops, this is terrifying. What kind of biases may be in their datasets or algorithms?

Google’s popular Word2Vec language library (covered in Lesson 5 of our course and in a workshop I gave this summer) has learned meaningful analogies, such as man is to king as woman is to queen. However, it also creates sexist analogies such as man is to computer programmer as woman is to homemaker. This is concerning, as Word2Vec has become a commonly used building block in a wide variety of applications. This is not the first (or even second) time Google’s use of deep learning has shown troubling biases. In 2015, Google Photos labeled Black people as “gorillas” while automatically labeling photos. Google Translate continues to provide sexist translations, such as translating “O bir doktor. O bir hemşire” to “He is a doctor. She is a nurse”, even though the original Turkish did not specify gender.
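
The analogy arithmetic is easy to reproduce yourself. Below is a sketch using the gensim library’s copy of Google’s pre-trained News vectors (a large download; any Word2Vec-format vectors work the same way). The specific tokens, such as computer_programmer, are assumptions based on the News vocabulary.

```python
# Reproduce Word2Vec analogy arithmetic with gensim's pre-trained
# Google News vectors (roughly a 1.6 GB download on first use).
import gensim.downloader as api

vectors = api.load("word2vec-google-news-300")

# "man is to king as woman is to ?"  -->  king - man + woman
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=1))
# expected: queen near the top

# The same arithmetic can surface learned stereotypes:
print(vectors.most_similar(positive=["computer_programmer", "woman"],
                           negative=["man"], topn=3))
```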

The state of diversity in AI

A year after prominent Google AI leader Jeff Dean said he is deeply worried about the lack of diversity in AI, guess what the diversity stats of the Google Brain team are? The team is ~94% male, with 44 men and just 3 women, and over 70% White. OpenAI’s openness does not extend to sharing diversity stats or who works there, and from photos, the OpenAI team looks extremely homogeneous. I’d guess that it’s even less diverse than Google Brain. Earlier this year, Vanity Fair ran an article about AI that featured 60 men, without quoting a single woman who works in AI.

Google Brain, OpenAI, and the media can’t solely blame the pipeline for this lack of diversity, given that there are over 1,000 women active in machine learning. Furthermore, Google has a training program to bring engineers in other areas up to speed on AI, which could be a great way to increase diversity. However, this program is only available to Google engineers, and just 3% of Google’s technical employees are Black or Latino (despite the fact that 90,000 Black and Latino students have graduated with computer science majors in the US in the last decade); thus, this training program is not going to have much impact on diversity.

fast.ai diversity scholarships

At fast.ai, we want to do our part to increase diversity in this powerful field. Therefore, we are providing diversity scholarships for our updated in-person Practical Deep Learning for Coders course, presented in conjunction with the University of San Francisco Data Institute and offered on Monday evenings in downtown San Francisco, beginning Oct 30. The only requirements are:

  • At least 1 year of coding experience
  • At least 8 hours a week to commit to the course (includes time for homework)
  • Curiosity and a willingness to work hard
  • Identify as a woman, person of Color, LGBTQ person, and/or veteran
  • Be available to attend in-person 6:30-9pm, Monday evenings, in downtown San Francisco (SOMA)

You can find more details about how to apply in this post.

Announcing fast.ai diversity scholarships

At fast.ai, we want to do our part to increase diversity in deep learning and to lower the unnecessary barriers to entry for everyone. Therefore, we are providing diversity scholarships for our updated in-person Practical Deep Learning for Coders course, presented in conjunction with the University of San Francisco Data Institute and offered on Monday evenings in downtown San Francisco, beginning Oct 30. Wondering if you’re qualified? The only requirements are:

  • At least 1 year of coding experience
  • At least 8 hours a week to commit to the course (includes time for homework)
  • Curiosity and a willingness to work hard
  • Identify as a woman, person of Color, LGBTQ person, person with a disability, and/or veteran
  • Be available to attend in-person 6:30-9pm, Monday evenings, in downtown San Francisco (SOMA)

Understanding the serious issues caused by homogeneity in AI is important for us all, so read about the current state of the AI diversity crisis here.

The fast.ai approach

Fast.ai students have been accepted to the elite Google Brain residency, launched companies, won hackathons, invented a new fraud detection algorithm, had work featured on the HBO TV show Silicon Valley, and more, all from taking a course that has only one year of coding experience as the pre-requisite.

Last year we attempted an experiment: to see if we could teach deep learning to coders, with no math pre-requisites beyond high school math, and get them to state-of-the-art results in just 7 weeks. This was very different from other deep learning materials, many of which assume a graduate-level math background, focus on theory, work only on toy problems, and don’t include practical tips. We didn’t even know if what we were attempting was possible, but the fast.ai course was a huge success!

The traditional approach to teaching math or deep learning requires that all the underlying components and theory be taught before learners can start creating and using models on their own. This approach to teaching is similar to not allowing children to play baseball until they have memorized all the formal rules and are able to commit to a full 9 innings with a full team, or to not allowing children to sing until they have extensive practice transcribing sheet music by hand in different keys. We want to get people “playing ball” (that is, applying deep learning to the problems they care about and getting great results) as quickly as possible, and we drill into the details later, as time goes on.

How to Apply

Women, people of Color, LGBTQ people, people with disabilities, and veterans in the Bay Area, if you have at least one year of coding experience and can commit 8 hours a week to working on the course, we encourage you to apply for a diversity scholarship. The number of scholarships we are able to offer depends on how much funding we receive. To apply, email datainstitute@usfca.edu:

  • title your email “Diversity Fellowship Application”
  • include your resume
  • include 1 paragraph describing one or more problems you’d like to apply deep learning to
  • confirm that you are available to attend the course on Monday evenings in SOMA (for 7 weeks, beginning Oct 30), and that you can commit 8 hours a week to working on the course
  • state which under-indexed group(s) you are a part of (gender, race, sexual identity, veteran status)

The deadline to apply is Sept 15, 2017.

To those outside the Bay Area

We will again have a remote/international fellows program, which is separate from our diversity scholarships. Details on how to apply will be announced in a separate blog post in the next few weeks, so stay tuned.

Thoughts on OpenAI, reinforcement learning, and killer robots

“So how is fast.ai different from OpenAI?” I’ve been asked this question numerous times, and on the surface, there are several similarities: both are non-profits, both value openness, and both have been characterized as democratizing AI. One significant difference is that fast.ai has not been funded with $1 billion from Elon Musk to create an elite team of researchers with PhDs from the most impressive schools who publish in the most elite journals. OpenAI, however, has.

It turns out we’re different in pretty much every other way as well: in our goals, values, motivations, and target audiences.

Artificial General Intelligence

There is a lot of confusion about the term AI. To some people, AI means building a super-intelligent computer that is similar to a human brain (this is often referred to as Artificial General Intelligence, or AGI). These people are often interested in philosophical questions such as What does it mean to be human? or What is the nature of intelligence? They may have fears such as Could super-intelligent machines destroy the human race? A New Yorker article describes OpenAI’s goal as making sure that AI doesn’t wipe out humanity, and its webpage says that its mission is to create artificial general intelligence. Elon Musk has expressed fear about DeepMind (acquired by Google for $500 million) and the development of evil AI, saying “If the A.I. that they develop goes awry, we risk having an immortal and superpowerful dictator forever. Murdering all competing A.I. researchers as its first move strikes me as a bit of a character flaw.”

Cracking AGI is a very long-term goal. The most relevant field of research is considered by many to be reinforcement learning: the study of teaching computers how to beat Atari games. Formally, reinforcement learning is the study of problems that require sequences of actions resulting in a reward or loss, where it is not known how much each action contributed to the outcome. Hundreds of the world’s brightest minds, with the most elite credentials, are working on this Atari problem. Their research has applications to robotics, although a lot of it is fairly theoretical. OpenAI is largely motivated by publishing in top academic journals in the field, and you would need a similarly elite background to understand their papers.
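
To make the formal definition concrete, here is a toy sketch of tabular Q-learning, one classic reinforcement learning algorithm (an illustration, not anything OpenAI specifically works on): the agent is rewarded only at the end of a corridor, so learning amounts to propagating credit for that delayed reward back to the earlier steps.

```python
# Toy Q-learning on a 5-state corridor: reward arrives only at the far
# right end, so the agent must work out how much each earlier action
# contributed to the delayed outcome (the credit assignment problem).
import random

N_STATES, ACTIONS = 5, ["left", "right"]
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1   # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the corridor; reward 1.0 only on reaching the right end."""
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

for episode in range(500):
    state, done = 0, False
    while not done:
        # epsilon-greedy: mostly exploit current estimates, occasionally explore
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value
        target = reward if done else reward + GAMMA * max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt

# After training, every state should prefer "right"
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)})
```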

Practical Artificial Intelligence

To other people, AI refers to algorithms that display some level of intelligence in doing application-focused tasks.

Many rapid advances are happening right now with these algorithms, which do their application-focused tasks very, very well. These algorithms would not be mistaken for humans (outside their excellent performance on the application they were designed for). This may sound less exciting than OMG super-intelligent human-like machines, but these algorithms are having a much bigger impact on the world today: they are more accurate than human radiologists at diagnosing cancer, more accurate at understanding Mandarin and English speech than humans, at work in self-driving cars, improving treatment for pediatric ICU patients, working to end illegal logging in endangered rainforests, granting the visually impaired more independence, reducing farmer suicides in India, and more (note: although they are in the same family, a specific algorithm was trained and tuned for each of these applications).

The capabilities listed above are transformative: improvements to medicine will help fill a global shortage of doctors, improve outcomes even in countries that have enough doctors, and save millions of lives. Self-driving cars are safer and will drastically reduce traffic fatalities, congestion, and pollution.

However, these application-focused algorithms are already having some scary negative impacts: an increase in unemployment as more jobs are automated, widening wealth inequality (as highly profitable companies are created using these algorithms while employing relatively few workers), the encoding of existing biases (for example, Google Photos labeling Black people as “gorillas” as part of its automatic photo labeling), and the potential for bots and convincing fake videos to further distort politics.

At fast.ai, we are not working on AGI; we are instead focused on making it possible for more people, from a wide variety of backgrounds, to create and use neural net algorithms. We have students working to stop illegal deforestation, create more resources for the Pakistani language Urdu, help farmers in India prove how much land they own so they qualify for crop insurance, develop wearable devices to monitor Parkinson’s disease, and more. These are issues that Jeremy and I knew very little (if anything) about, and they illustrate the importance of making this technology as accessible as possible. People with different backgrounds, in different locations, with different passions, are going to be aware of whole new sets of problems that they want to solve.

It is hard for me to empathize with Musk’s fixation on evil super-intelligent AGI killer robots in a very distant future. I support research funding at all levels, and have nothing against mostly theoretical research, but is it really the best use of resources to throw $1 billion at reinforcement learning without any similar investment in addressing mass unemployment and wealth inequality (both of which are well-documented to cause political instability), in the ways existing gender and racial biases are being encoded into our algorithms, and in how best to get this technology into the hands of people working on high-impact areas like medicine and agriculture around the world?