A Conversation about Tech Ethics with the New York Times Chief Data Scientist

ai-in-society
Author

Rachel Thomas and Chris Wiggins

Published

March 4, 2019

Note from Rachel: Although I’m excited about the positive potential of tech, I’m also scared about the ways that tech is having a negative impact on society, and I’m interested in how we can push tech companies to do better. I was recently in a discussion during which New York Times chief data scientist Chris Wiggins shared a helpful framework for thinking about the different forces we can use to influence tech companies towards responsibility and ethics. I interviewed Chris on the topic and have summarized that interview here.

In addition to having been Chief Data Scientist at the New York Times since January 2014, Chris Wiggins is professor of applied mathematics at Columbia University, a founding member of Columbia’s Data Science Institute, and co-founder of HackNY. He co-teaches a course at Columbia on the history and ethics of data.

Ways to Influence Tech Companies to be More Responsible and More Ethical

Chris has developed a framework showing the different forces acting on and within tech companies:

  1. External Forces
    1. Government Power
      1. Regulation
      2. Litigation
      3. Fear of regulation and litigation
    2. People Power
      1. Consumer Boycott
      2. Data Boycott
      3. Talent Boycott
    3. Power of Other Companies
      1. Responsibility as a value-add
      2. Direct interactions, such as de-platforming
      3. The Press
  2. Internal Forces
    1. How we define ethics
      One key example: the Belmont Principles
      1. Respect for persons
      2. Beneficence
      3. Justice
    2. How we design for ethics
      1. Starts with leadership
      2. Includes the importance of monitoring user experience

The two big categories are internal forces and external forces. Chris shared that at the New York Times, he has seen the internal process at work through a data governance committee focused on responsible data stewardship. Preparing for GDPR helped focus those conversations, as did the desire to be proactive about other data regulations that Chris and his team expect in the future. The New York Times has standardized processes, including for data deletion and for protecting personally identifiable information (PII). For instance, when you store aggregate information, such as page-view counts, you do not need to keep the PII of the individuals who viewed the page.
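To make that last point concrete, here is a minimal, hypothetical sketch of turning raw, PII-bearing page-view events into an aggregate table that can be retained while the raw events are deleted on schedule. This is not the New York Times' actual pipeline; the schema (user_id, ip, url, timestamp) and the pandas-based approach are assumptions made purely for illustration.

```python
import pandas as pd

# Hypothetical raw event log: each page view carries PII (user_id, IP address).
# Column names are illustrative, not an actual schema.
raw_events = pd.DataFrame({
    "user_id":   ["u1", "u2", "u1", "u3"],
    "ip":        ["1.2.3.4", "5.6.7.8", "1.2.3.4", "9.9.9.9"],
    "url":       ["/a", "/a", "/b", "/a"],
    "timestamp": pd.to_datetime([
        "2019-03-01 10:00", "2019-03-01 10:05",
        "2019-03-01 11:00", "2019-03-02 09:00",
    ]),
})

# Aggregate to the granularity reporting actually needs (views and unique
# visitors per URL per day), dropping the PII columns in the process.
aggregated = (
    raw_events
    .assign(date=raw_events["timestamp"].dt.date)
    .groupby(["url", "date"])
    .agg(page_views=("user_id", "size"),
         unique_visitors=("user_id", "nunique"))
    .reset_index()
)

# The raw, PII-bearing events can now be deleted on schedule; only the
# aggregate table is kept.
print(aggregated)
```

The retained table still answers the reporting questions (views and unique visitors per page per day) without recording who viewed what.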

How to Impact Tech Companies: External Forces

Chris cites Doing Capitalism in the Innovation Economy by Bill Janeway as an influence on his thinking about the external forces that act on companies. Janeway writes of an unstable game among three players: government, people, and companies. Government power can take the form of regulation and litigation, or even just the fear of regulation and litigation.

A second external force is people power. The most well-known example of exercising people power is a consumer boycott: not giving companies money when we disagree with their practices. There is also a data boycott, not giving companies access to our data, and a talent boycott, refusing to work for them. In the past year, we have seen engineers from Google (protesting the Project Maven military contract and a censored Chinese search engine), Microsoft (protesting a HoloLens military contract), and Amazon (protesting facial recognition technology sold to police) start to exercise this power to ask for change. However, most engineers and data scientists still do not realize the collective power that they have. Engineers and data scientists are in high demand, and they should be leveraging this to push companies to be more ethical.

A third type of external power is the power of other companies. For instance, companies can make responsibility part of the value-add that differentiates them from competitors. The search engine DuckDuckGo (motto: “The search engine that doesn’t track you”) has always made privacy a core part of its appeal. Apple has become known for championing user privacy in recent years. Consumer protection was a popular idea in the 1970s and 80s, but has since fallen somewhat out of favor. More companies could make consumer protection and responsibility part of their products and part of what differentiates them from competitors.

Companies can also exert power in direct ways on one another, for instance, when Apple de-platformed Google and Facebook by revoking their developer certificates after they violated Apple’s privacy policies. And finally, the press counts as a company that influences other companies. There are many interconnections here: the press influences people as citizens, voters, and consumers, which then impacts the government and the companies directly.

Internal Forces: Defining Ethics vs Designing for Ethics

Chris says it is important to distinguish between how we define ethics and how we design for ethics. Conversations quickly get muddled when people jump between the two.

Defining ethics involves identifying your principles. There is a granularity to ethics: we need principles to be granular enough to be meaningful, but not so granular that they are context-dependent rules which change all the time. For example, “don’t be evil” is too broad to be meaningful.

Ethical principles are distinct from their implementation, and defining principles means being willing to commit to the work of defining more specific rules that follow from them, or of redefining those rules to stay consistent with your principles as technology or context changes. For instance, many ethical principles were laid out in the U.S. Bill of Rights, and we have spent the centuries since working out what they mean in practice.

Designing for Ethics

In terms of designing for ethics, Chris notes that this needs to start from the top. Leaders set company goals, which are then translated into objectives; those objectives are translated into KPIs; and those KPIs are used in operations. Feedback from operations and KPIs should be used to continually reflect on whether the ethical principles are being defended or challenged, or to revisit ways that the system is falling short. One aspect of operations that most major tech companies have neglected is monitoring user experience, particularly deleterious user experiences. When companies use contractors for content moderation (as opposed to full-time employees), it says a lot about the low priority they place on negative user experiences. While content moderation addresses one component of negative user experiences, there are also many others.

Chris wrote about this topic in a blog post, Ethical Principles, OKRs, and KPIs: what YouTube and Facebook could learn from Tukey, saying that “Part of this monitoring will not be quantitative. Particularly since we can not know in advance every phenomenon users will experience, we can not know in advance what metrics will quantify these phenomena. To that end, data scientists and machine learning engineers must partner with or learn the skills of user experience research, giving users a voice.”
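The quote above stresses that not all of this monitoring can be quantitative. For the part that can be, here is a purely hypothetical sketch of what tracing an ethical principle through an objective, a KPI, and an operational check might look like. This is not Chris’s, YouTube’s, or Facebook’s actual system; the principle, objective, metric name, and threshold below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class EthicsKPI:
    """One KPI traced back to an ethical principle, with an operational threshold.
    All names and numbers here are invented for illustration."""
    principle: str    # high-level principle set by leadership
    objective: str    # objective the principle is translated into
    metric: str       # KPI measured in operations
    threshold: float  # flag for review if the measured value exceeds this

    def needs_review(self, measured_value: float) -> bool:
        """True if operational monitoring suggests the principle is being challenged."""
        return measured_value > self.threshold

# Hypothetical KPI for monitoring deleterious user experiences, not just engagement.
harassment_kpi = EthicsKPI(
    principle="Respect for persons",
    objective="Reduce harassment experienced by users",
    metric="harassment reports per 1,000 sessions",
    threshold=0.5,
)

measured = 0.8  # value produced by operational monitoring
if harassment_kpi.needs_review(measured):
    # Feedback loop: surface this to leadership so the objective, the product,
    # or the metric itself can be revisited.
    print(f"Review needed: {harassment_kpi.metric} = {measured} "
          f"(threshold {harassment_kpi.threshold})")
```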

Learning from the Belmont Report and IRBs

The history of ethics is not discussed enough, and many people aren’t very familiar with it. We can learn a lot from other fields, such as human subjects research. In the wake of the horrifying and unethical Tuskegee Syphilis Study, the National Research Act of 1974 was passed, and researchers spent much time identifying and formalizing ethical principles. These are captured in the Belmont Principles for ethical research on human subjects. This is the ethical framework that informed the later creation of institutional review boards (IRBs), which are used to review and approve research involving human subjects. Studying how ethics have been operationalized via the Belmont Principles can be very informative, as they have been stress-tested via real-world implementations for almost 40 years, and there is copious literature about their utility and limitations.

The core tenets of the Belmont Principles can be summarized:

  1. Respect for Persons:
    1. informed consent;
    2. respect for individuals’ autonomy;
    3. respect for the individuals impacted;
    4. protection for individuals with diminished autonomy or decision-making capability.
  2. Beneficence:
    1. do not harm;
    2. assess risk.
  3. Justice:
    1. equal consideration;
    2. fair distribution of the benefits of research;
    3. fair selection of subjects;
    4. equitable allocation of burdens.

Note that the principle of beneficence can be used to make arguments in which the ends justify the means, and the principle of respect for persons can be used to make arguments in which the means justify the ends, so there is a lot captured here.

This topic came up in the national news after Kramer et al., the 2014 paper in which Facebook researchers manipulated users’ moods, which received a lot of criticism and concern. There was a follow-up paper by two other Facebook researchers, Evolving the IRB: Building Robust Review for Industry Research, which suggests that a form of IRB has now been implemented at Facebook. Chris says he has learned a lot from studying the work of ethicists on research with human subjects, particularly the Belmont Principles.

For those interested in learning more about this topic, Chris recommends Chapter 6 of Matthew Salganik’s book, Bit by Bit: Social Research in the Digital Age, which Chris uses in the course on the history and ethics of data that he teaches at Columbia. Salganik is a Princeton professor who does computational social science research and is also a professor in residence at the New York Times.

Chris also says he has learned a lot from legal theorists. Most engineers may not have thought much about legal theory, but legal theorists have a long history of addressing the balance between standards, principles, and rules.

High Impact Areas

Ethics is not an API call. It needs to happen at a high level, and the greatest impact will come when leaders take it seriously. The level in the org chart of the person who speaks about ethics says a lot about how much the company values ethics (because for most companies, that person is not the CEO, although it should be).

As stated above, engineers don’t understand their own power, and they need to start using that power more. Chris recalls listening to a group of data scientists saying that they wished their company had an ethics policy that another company had adopted. But they can make that happen! They just need to decide to use their collective power.

Reading Recommendations from and by Chris

Other fast.ai posts on tech ethics