Computing's Oppenheimer Moment? When AI Innovation, Research Ethics, and Human Rights Collide
Friday, April 19, 2019

Time: 2:00pm - 4:00pm
Location: MIT E53-354, Anthropology Department

Sponsored by MIT Anthropology

Abstract

Artificial intelligence (AI) research is an ethically daunting, inherently risky endeavor. It blends state-of-the-art computer science and engineering to model solutions to pressing social challenges—from predicting criminal behavior to identifying young people contemplating suicide. However, most ethical breaches in tech do not come from rogue actors. Current methods for AI innovation make it all too easy to lose sight of the people whose activities inform algorithms. Many of AI's ethical debacles are the fallout of tech researchers mixing good intentions with novel methodologies and nascent ethical frameworks. Much as nuclear physicists had to come to grips with the impact of splitting atoms, computer science and engineering communities must grapple with the reality that they are developing powerful technologies with far-reaching social consequences.

This talk explains how to apply a new paradigm that bridges the humanistic social sciences and computer science to produce socially attuned AI research. It lays out how AI work might apply respect, beneficence, and justice—the three tenets that orient human subjects research—along with a fourth value, mutuality. It is our collective job, as researchers, to earn and maintain the public's trust and dignity anytime we seek to understand technology's role in society.

Speakers

Mary L. Gray, Harvard University's Berkman Klein Center for Internet and Society, Microsoft Research, and Indiana University