Deep Learning: Not Just for Silicon Valley
27 Feb 2017 · Rachel Thomas
Recent American news events range from horrifying to dystopian, but reading the applications of our fast.ai international fellows brought me joy and optimism. I was blown away by how many bright, creative, resourceful folks from all over the world are applying deep learning to tackle a variety of meaningful and interesting problems. Their projects include ending illegal logging, diagnosing malaria in rural Uganda, translating Japanese manga, reducing farmer suicides in India through better loans, making Nigerian fashion recommendations, monitoring patients with Parkinson's disease, and more. Our mission at fast.ai is to make deep learning accessible to people from varied backgrounds outside of elite institutions, who are tackling problems in meaningful but low-resource areas, far from mainstream deep learning research.
Our group of selected fellows for Deep Learning Part 2 includes people from Nigeria, Ivory Coast, South Africa, Pakistan, Bangladesh, India, Singapore, Israel, Canada, Spain, Germany, France, Poland, Russia, and Turkey. We wanted to introduce just a few of our international fellows to you today.
Tahsin Mayeesha is a Bangladeshi student who created a network visualization project analyzing data from a prominent Bangladeshi newspaper to explore the media coverage of violence against women. She wrote up her methodology and findings here, and hopes that the project can increase knowledge and empathy. While working on her Udacity Machine Learning Nanodegree, she overcame the challenges of a broken generator and intermittent electricity during Ramadan to successfully complete her projects. Mayeesha is a fan of Naruto, a popular Japanese manga series, and would like to use deep learning to translate it into English. Naruto characters use different stylized hand signs for different fight moves, and she is interested in trying to recognize these with a CNN. On a broader scale, Mayeesha wants to explore how machine learning will impact places like Bangladesh, with its semi-broken infrastructure.
Karthik Mahadevan, an industrial designer in Amsterdam, previously created an interactive medical toy for children with cancer. More recently, he helped develop smart diagnosis solutions for rural health centres in Uganda. His team developed a smartphone-based device that captures magnified images of blood smears from malaria patients. The images are processed by AI-based software that highlights potential parasites for lab technicians to check. The long-term aim, however, is to create a fully automated diagnosis system to compensate for the shortage of lab technicians in rural Uganda (84% of Uganda's population lives in rural areas).
After being selected as our first international fellow, and completing part 1 of our course, language researcher Samar Haider of Pakistan assembled the largest dataset ever collected for his native language, Urdu. Inspired by Lesson 5 of Part 1, he acquired, cleaned, and segmented into sentences an Urdu corpus of over 150 million tokens. Haider then trained a model to learn vector representations of the words, which captured useful semantic relationships and lexical variations. Haider writes, “this marks the first time such word representations have been trained for Urdu, and, while they are themselves an incredibly valuable resource, it is exciting to think of ways in which they can be used to advance the state of natural language processing for Urdu in applications ranging from text classification to sentiment analysis to machine translation.” Haider will be joining us again for Part 2 and says, “In the long run, I hope to use deep learning techniques to bridge gaps in human communication (an especially important duty in these polarizing times) by helping computers better process and understand regional languages and helping unlock a world of information for people who don’t speak English or other popular languages.”
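The post doesn't say which method Haider used to learn his Urdu word vectors (word2vec and similar neural approaches are common choices). As a minimal illustration of the underlying idea — that words appearing in similar contexts get similar vectors — here is a sketch using a count-based approach: build a co-occurrence matrix from a toy corpus and factor it with SVD. The corpus, window size, and dimensionality below are all hypothetical, stand-ins for his 150-million-token Urdu corpus.

```python
import numpy as np

# Toy English stand-in for a tokenized, sentence-segmented corpus (hypothetical).
corpus = [
    "the cat sat on the mat".split(),
    "the dog sat on the rug".split(),
    "a cat and a dog played outside".split(),
]

# Vocabulary and word -> index mapping.
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Count co-occurrences within a +/-2 word window.
window = 2
C = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - window), min(len(sent), i + window + 1)):
            if i != j:
                C[idx[w], idx[sent[j]]] += 1

# Truncated SVD of the co-occurrence matrix gives dense word vectors.
U, S, Vt = np.linalg.svd(C, full_matrices=False)
dim = 4  # tiny embedding size for this toy example
vectors = U[:, :dim] * S[:dim]

def similarity(a, b):
    """Cosine similarity between the vectors of two words."""
    va, vb = vectors[idx[a]], vectors[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

# Words sharing contexts (e.g. "cat" and "dog") end up with nearby vectors.
print(round(similarity("cat", "dog"), 3))
```

A real system would swap the SVD step for a neural model such as word2vec trained on the full corpus, but the context-based intuition is the same.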
Xinxin Li previously developed carbon management technologies as an environmental research engineer, and built a Python app to diagnose plant diseases from photos of leaves. She is now working with a wearable technology company to develop a system for Parkinson’s patient therapy management, the core of which is a machine learning model trained on clinical trial data. This new system would enable a doctor to gauge patients’ symptoms, such as tremors and dyskinesia, via sensor data collected outside the clinic, rather than relying on written diaries or interviews with patient caregivers.
Sahil Singla works at a social impact startup in India, using deep learning on satellite imagery to help the Indian government identify which villages have problems of landlessness or crop failure. Singla plans to use deep learning to build better crop insurance and agriculture lending models, thus reducing farmer suicides (exorbitant interest rates on predatory loans contribute to the high suicide rate of Indian farmers).
Amy Xiao, an undergraduate at the University of Toronto, plans to create tools such as browser extensions to help people distinguish fact from fiction in online information. Her goal is to rate the legitimacy of online content via a deep learning model that integrates sentiment analysis of the comments, the legitimacy of the news source, and the content itself, trained on labeled articles with a “predetermined” score. She is also interested in exploring how to discern legitimate from fake reviews on online sites.
Prabu Ravindran is developing a deep learning system for automated wood identification at the Center for Wood Anatomy Research, Forest Products Laboratory, and the University of Wisconsin Botany Department. This system will be deployed to combat illegal logging and identify wood products. Orlando Adeyemi, a Nigerian currently working in Malaysia, has already begun scraping Nigerian fashion websites for data to which he plans to apply deep learning. Previously, he created an iOS app for Malaysian cinema times, and won an award for his Arduino wheelchair stabilizer.
Gurumoorthy C is excited about the “Clean India” initiative launched by Prime Minister Modi. Together with a group of friends, Gurumoorthy plans to create a small robot to pick up trash in the street and correctly identify waste. Karthik Kannan is currently working on an idea that combines deep learning and wearable cameras to help the visually impaired navigate enclosed spaces in India.
Alexis Fortin-Cote is a PhD student in robotics at Université Laval in French-speaking Quebec. He plans to create a model capable of inferring how much fun players are having in video games, using biosensor data and self-reported emotional state. Together with a team from the school of psychology, he has already collected over 400 total hours of data from 200 players.
We welcome the above fellows, along with the rest of our international fellows and our in-person participants, to Deep Learning Part 2. We are very excited about this community, and what we can build together!