Thoughts on OpenAI, reinforcement learning, and killer robots

Author: Rachel Thomas
Published: July 28, 2017

“So how is fast.ai different from OpenAI?” I’ve been asked this question numerous times, and on the surface, there are several similarities: both are non-profits, both value openness, and both have been characterized as democratizing AI. One significant difference is that fast.ai has not been funded with $1 billion from Elon Musk to create an elite team of researchers with PhDs from the most impressive schools who publish in the most elite journals. OpenAI, however, has.

It turns out we’re different in pretty much every other way as well: in our goals, values, motivations, and target audiences.

Artificial General Intelligence

There is a lot of confusion about the term AI. To some people, AI means building a super-intelligent computer that is similar to a human brain (this is often referred to as Artificial General Intelligence, or AGI). These people are often interested in philosophical questions such as What does it mean to be human? or What is the nature of intelligence? They may have fears such as Could super-intelligent machines destroy the human race? A New Yorker article describes OpenAI’s goal as making sure that AI doesn’t wipe out humanity, and its webpage says that its mission is to create artificial general intelligence. Elon Musk has expressed fear about DeepMind (acquired by Google for $500 million) and the development of evil AI, saying “If the A.I. that they develop goes awry, we risk having an immortal and superpowerful dictator forever. Murdering all competing A.I. researchers as its first move strikes me as a bit of a character flaw.”

Cracking AGI is a very long-term goal. The field of research considered by many to be most relevant is reinforcement learning: the study of teaching computers how to beat Atari games. Formally, reinforcement learning studies problems in which an agent must take a sequence of actions that eventually produces a reward or loss, without knowing how much each individual action contributed to the outcome. Hundreds of the world’s brightest minds, with the most elite credentials, are working on this Atari problem. Their research has applications to robotics, although much of it is fairly theoretical. OpenAI is largely motivated by publishing in top academic journals in the field, and you would need a similarly elite background to understand their papers.
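To make that definition concrete, here is a minimal sketch of the reinforcement learning loop, written against the interface of OpenAI Gym (OpenAI’s toolkit for exactly these Atari-style problems). The CartPole-v0 environment and the random policy are placeholder choices for illustration, not anything a researcher would actually deploy:

```python
import gym

# One episode: the agent acts in the environment until it ends.
env = gym.make('CartPole-v0')
observation = env.reset()
total_reward = 0.0
done = False

while not done:
    # A real agent would pick actions from a learned policy;
    # sampling at random is purely for illustration.
    action = env.action_space.sample()

    # The environment returns a reward at each step, but nothing
    # says which earlier actions deserve credit for it -- that
    # credit-assignment problem is what reinforcement learning studies.
    observation, reward, done, info = env.step(action)
    total_reward += reward

print('episode reward:', total_reward)
```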

Practical Artificial Intelligence

To other people, AI refers to algorithms that display some level of intelligence at specific, application-focused tasks, such as diagnosing cancer or understanding speech.

Many rapid advances are happening right now in this area of algorithms that do their application-focused tasks very, very well. These algorithms would not be mistaken for humans (outside their excellent performance on the application they were designed for). This may sound less exciting than OMG super-intelligent human-like machines, but these algorithms are having a much bigger impact on the world today: they are more accurate than human radiologists at diagnosing cancer, better than humans at understanding Mandarin and English speech, at work in self-driving cars, improving treatment for pediatric ICU patients, helping end illegal logging in endangered rainforests, granting the visually impaired more independence, reducing farmer suicides in India, and more (note: although these algorithms are in the same family, a specific one was trained and tuned for each application).

The capabilities described above are transformative: improvements to medicine will help fill a global shortage of doctors, improve outcomes even in countries that have enough doctors, and save millions of lives. Self-driving cars are safer than human drivers and will drastically reduce traffic fatalities, congestion, and pollution.

However, these application-focused algorithms are already having some scary negative impacts as well: rising unemployment as more jobs are automated; widening wealth inequality, as highly profitable companies are created using these algorithms while employing relatively few workers; the encoding of existing biases (for example, Google Photos labeling Black people as “gorillas” in its automatic photo labeling); and the potential for bots and convincing fake videos to further destabilize politics.

At fast.ai, we are not working on AGI; instead, we are focused on making it possible for more people, from a wide variety of backgrounds, to create and use neural net algorithms. We have students working to stop illegal deforestation, to create more resources for Urdu (the national language of Pakistan), to help farmers in India prove how much land they own so that they qualify for crop insurance, to build wearable devices for monitoring Parkinson’s disease, and more. These are issues that Jeremy and I knew little (if anything) about, and they illustrate the importance of making this technology as accessible as possible. People with different backgrounds, in different locations, with different passions will be aware of whole new sets of problems that they want to solve.

It is hard for me to empathize with Musk’s fixation on evil super-intelligent AGI killer robots in a very distant future. I support research funding at all levels, and I have nothing against mostly theoretical research. But is it really the best use of resources to put $1 billion into reinforcement learning without any comparable investment in addressing mass unemployment and wealth inequality (both well documented to cause political instability), in understanding how existing gender and racial biases are being encoded into our algorithms, or in getting this technology into the hands of people working on high-impact areas like medicine and agriculture around the world?