The Diversity Crisis in AI, and the fast.ai Diversity Fellowship

Categories: ai-in-society, courses

Author: Rachel Thomas

Published: October 9, 2016

Update: The deadline has been extended to 10/17. Read more here.

At fast.ai, we want to do our part to help make deep learning more inclusive, and we are beginning by offering one full-tuition fellowship for our deep learning certificate course at the Data Institute at USF, to be held on Monday evenings starting 10/24. Women, people of color, and LGBTQ people are invited to apply. To apply, please send your resume before 10/17 (extended from the original 10/12 deadline), along with a note that you are interested in the diversity fellowship and a brief paragraph on how you want to use deep learning. You can read more here about what we’ll cover and our approach to teaching.

Why are we doing this? Artificial intelligence is an incredibly exciting field to be working in right now, with new breakthroughs occurring almost daily. I personally feel so lucky to be able to work in this field, and I want everyone to have access to such fascinating and creative work. Furthermore, artificial intelligence is missing out because of its lack of diversity. A study of 366 companies found that ethnically diverse companies are 35% more likely to perform well financially, and teams with more women perform better on collective intelligence tests. Scientific papers written by diverse teams receive more citations and have higher impact factors.

As big as the diversity crisis in tech is, it’s even worse in the field of artificial intelligence, which includes deep learning. Immensely powerful algorithms are being created by a very narrow and homogeneous slice of the population. Only 3 of the 35 people on the Google Brain team are women; only 1 of the 15 AI researchers at Stanford is a woman; and in 2015, only 14% of the attendees at one of the largest AI conferences (NIPS) were women. An analysis of the language in job postings found that ads for machine intelligence roles were significantly more biased toward masculine language than postings for all other types of software engineering roles.

We’ve already seen sad (yet unintentional) reflections of bias in AI.

The opportunity for biased algorithms to have negative real-world consequences will only increase as the role of machine learning continues to grow in the coming years.

Olga Russakovsky, a research fellow at the CMU Robotics Institute (and soon-to-be CS professor at Princeton), wrote that the field of AI is in a rut: “We’ve tended to breed the same style of researchers over and over again–people who come from similar backgrounds, have similar interests, read the same books as kids, learn from the same thought leaders, and ultimately do the same kinds of research.”

Jeff Dean, the legendary head of Google Brain, has said that he is not worried about an AI apocalypse, but he is very concerned by the lack of diversity in the field of AI.

Mathematics is a field notorious for its sexism, and when we make advanced mathematics an unnecessary barrier to entry for deep learning (a follow-up post expanding on this is in the works), we greatly reduce the number of women who will be eligible, since many have already been weeded out by hostile and biased environments. Note that this is due to cultural factors in the US and doesn’t hold true in all countries.

Please email us with any questions about the diversity fellowship or comments about this article.