USF Launches New Center for Applied Data Ethics

The University of San Francisco is launching a Center for Applied Data Ethics, and Rachel Thomas is the director of the new center.
Author

Rachel Thomas

Published

August 5, 2019

Update: The first year of the USF Center for Applied Data Ethics will be funded with a generous gift from Craig Newmark Philanthropies, the organization of craigslist founder Craig Newmark. Read the official press release for more details.

While the widespread adoption of data science and machine learning techniques has led to many positive discoveries, it also poses risks and is causing harm. Facial recognition technology sold by Amazon, IBM, and other companies has been found to have significantly higher error rates on Black women, yet these same companies are already selling facial recognition and predictive policing technology to police, with no oversight, regulations, or accountability. Millions of people’s photos have been compiled into databases, often without their knowledge, and shared with foreign governments, military operations, and police departments. Major tech platforms (such as Google’s YouTube, which auto-plays videos selected by an algorithm), have been shown to disproportionately promote conspiracy theories and disinformation, helping radicalize people into toxic views such as white supremacy.

USF Data Institute in downtown SF. Image credit: Eric in SF, own work, CC BY-SA 4.0

In response to these risks and harms, I am helping to launch a new Center for Applied Data Ethics (CADE), housed within the University of San Francisco’s Data Institute, to address issues surrounding the misuse of data through education, research, public policy, and civil advocacy. The first year will include a tech policy workshop, a data ethics seminar series, and data ethics courses, all of which will be open to the community at large.

Misuses of data and AI include the encoding & magnification of unjust bias, increasing surveillance & erosion of privacy, spread of disinformation & amplification of conspiracy theories, lack of transparency or oversight in how predictive policing is being deployed, and lack of accountability for tech companies. These problems are alarming, difficult, urgent, and systemic, and it will take the efforts of a broad and diverse range of people to address them. Many individuals, organizations, institutes, and entire fields are already hard at work tackling these problems. We will not reinvent the wheel, but instead will leverage existing tools and will amplify experts from a range of backgrounds. Diversity is a crucial component in addressing tech ethics issues, and we are committed to including a diverse range of speakers and supporting students and researchers from underrepresented groups.

I am the director of the new center. Since you’re reading the fast.ai blog, you may be familiar with my work, but if not, you can read about my background here. I earned my PhD at Duke University in 2010, was selected by Forbes as one of “20 Incredible Women in AI”, am co-founder of fast.ai, and have been a researcher at the USF Data Institute since it was founded in 2016. In the past few years, I have done a lot of writing and speaking on data ethics issues.

Speaking about misuses of AI at TEDx SF

What is the USF Data Institute?

The Center for Applied Data Ethics will be housed within the USF Data Institute, located in downtown San Francisco, and will be able to leverage our existing community, partnerships, and successes. In the 3 years since the founding of the Data Institute, more than 900 entrepreneurs and employees from local tech companies have taken evening and weekend courses here, and we have granted more than 177 diversity scholarships to people from underrepresented groups. The USF MS in Data Science program, now housed in the Data Institute, is entering its 8th year, and all students complete 8-month practicum projects at our 160 partner companies. Jeremy Howard and I have both been involved with the USF Data Institute since it first began 3 years ago; it is where we have taught the in-person versions of our deep learning, machine learning, computational linear algebra, and NLP courses, and we have both chaired tracks for the Data Institute conference. Additionally, Jeremy launched the Wicklow AI in Medicine Research Initiative as part of the Data Institute last year.

What will you do in the first year? How can I get involved?

Data Ethics Seminar Series: We will bring in experts on issues of data ethics for talks open to the community, and high-quality recordings of the talks will be shared online. We are excited to have Deborah Raji as our first speaker. Please join us on Monday, August 19 for a reception with food and Deborah’s talk on “Actionable Auditing and Algorithmic Justice.”

Tech Policy Workshop: Systemic problems require systemic solutions. Individual behavior change will not address the structural misalignment of incentives and lack of accountability. We need thoughtful and informed laws to safeguard human rights, and we do not want legislation written by corporate lobbyists. When it comes to setting policy in this area, too few legislators have the needed technical background and too few of those with knowledge of the tech industry have the needed policy background. We will hold a 3-day tech policy workshop, tentatively scheduled for November 15-17.

Data Ethics Certificate Course open to the community: The USF Data Institute has been offering part-time evening and weekend courses in downtown SF for the last 3 years, including the popular Practical Deep Learning for Coders course taught by Jeremy Howard. You do not need to be a USF student to attend these courses, and over 900 people, most of them working professionals, have attended past courses at the Data Institute. I will be teaching a Data Ethics course one evening per week in January-February 2020.

Required Data Ethics Course for MS in Data Science students: USF has added a required data ethics course that all students in the Master of Science in Data Science program will take.

Data Ethics Fellows: We plan to offer research fellowships for those working on problems of applied data ethics, with a particular focus on work that has a direct, practical impact. Fellows will have access to the resources, community, and courses at the USF Data Institute. We will begin accepting applications this fall for one-year fellowships with start dates of January 2020 or June 2020.

If you are interested in any of these upcoming initiatives, please sign up for our mailing list to be notified when applications open.

<center>
<label for="mce-EMAIL">Subscribe to the Center for Applied Data Ethics email list to find out about upcoming events</label>
<input type="email" value="" name="EMAIL" class="email" id="mce-EMAIL" placeholder="email address" required>
<!-- real people should not fill this in and expect good things - do not remove this or risk form bot signups-->
<div style="position: absolute; left: -5000px;" aria-hidden="true"><input type="text" name="b_a41fda1ae353f0586ce6feaa7_66518975b8" tabindex="-1" value=""></div>
<div class="clear"><input type="submit" value="Subscribe" name="subscribe" id="mc-embedded-subscribe" class="button"></div>
</center>

Other FAQ

Q: What does this mean for your involvement with fast.ai?

A: We plan to release a data ethics course through fast.ai, sometime in mid-2020. (We have previously covered ethics issues in our Deep Learning for Coders course, and our recent A Code-First Intro to NLP included lessons on unjust bias and disinformation). I will continue to blog here on the fast.ai site and am still committed to the fast.ai mission.

Q: Given misuses of AI, isn’t your work at fast.ai to make AI accessible to more people dangerous?

A: What is dangerous is having a homogeneous and exclusive group designing technology that impacts us all. Companies such as Amazon, Palantir, Facebook, and others are generally considered quite prestigious and only hire those with “elite” backgrounds, yet we can see the widespread harm these companies are causing. We need a broader and more diverse group of people involved with AI, both to take advantage of the positives, as well as to address misuses of the technology. Please see my TEDx San Francisco talk for more details on this.

Q: Will you be coming up with a set of AI ethics principles?

A: No, there are many sets of AI ethics principles out there. We will not attempt to duplicate the work of others, but instead hope to amplify excellent work that is already being done (in addition to doing our own research).

Q: What do you consider the biggest ethical issues in tech?

A: Some of the issues that alarm me most are the encoding & magnification of unjust bias, increasing surveillance & erosion of privacy, spread of disinformation & amplification of conspiracy theories, lack of transparency or oversight in how predictive policing is being deployed, and lack of accountability for tech companies. For more information on these, please see some of my talks and posts linked below.

Here are some of my talks that you may be interested in:

And some previous blog posts:

I hope you can join us for our first data ethics seminar on the evening of Monday, August 19 in downtown SF, and please sign up for our mailing list to stay in touch!
