The Science of Designing Ethical Algorithms: Michael Kearns and Aaron Roth on Ethical AI

Michael Kearns, National Center Professor of Management & Technology in Computer and Information Science (CIS), and Aaron Roth, Henry Salvatori Professor in Computer & Cognitive Science in CIS

In your 2019 book The Ethical Algorithm, you make clear that computer scientists have largely focused on designing the algorithms that power artificial intelligence to achieve certain technical goals. You argue that these algorithms should also be designed with ethical considerations like privacy and fairness in mind. What led you to conclude this should be a component of algorithm design, as opposed to just tech policy?

Michael Kearns 

We saw the rapid advance in the power and scope of machine learning starting around 2010, the dawn of the deep learning era. In the early 2010s we started seeing reports of algorithms being used for purposes that touch end users, and then what we would call audits of algorithms: people going to commercially available face recognition engines and testing whether they underperform on certain skin tones.
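The kind of audit Kearns describes can be as simple as comparing a model's accuracy across demographic groups. Below is a minimal, hypothetical sketch; the model, its predict() interface, and the group annotations are illustrative assumptions, not details from any actual audit.

```python
# Hypothetical fairness audit: compare a recognition model's accuracy
# across skin-tone groups. `model`, the images, labels, and group
# annotations are all stand-ins for illustration.
from collections import defaultdict

def audit_by_group(model, images, labels, groups):
    """Return per-group accuracy for a classifier-style model."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for image, label, group in zip(images, labels, groups):
        total[group] += 1
        if model.predict(image) == label:  # assumed predict() interface
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# A large accuracy gap between groups (say, 0.99 for one group and
# 0.65 for another) is the kind of disparity these audits surfaced.
```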

Aaron Roth 

It became apparent that privacy was going to be a problem well before it became apparent that fairness was going to be a problem. Fairness concerns arise when you start using machine learning algorithms to make important decisions about individual citizens.

Before the deep learning revolution, people were using simpler machine learning algorithms to make relatively unimportant decisions, like recommending what movies to rent on Netflix. As soon as you’re training some model on the data of all Netflix users, or you’re coming up with a model on Facebook to recommend people you might want to be friends with, you start to worry about privacy.
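The remedy Kearns and Roth discuss in the book for exactly this kind of leakage is differential privacy: add carefully calibrated noise so that no single user's data has much effect on what the algorithm releases. The sketch below illustrates the idea with the Laplace mechanism for a private average; the ratings, bounds, and privacy parameter epsilon are illustrative assumptions, not a production design.

```python
# Minimal sketch of the Laplace mechanism: release the average of
# user values (e.g., movie ratings) with differential privacy.
# The bounds and epsilon below are illustrative, not recommendations.
import random

def private_mean(values, lower, upper, epsilon):
    """Differentially private mean of values known to lie in [lower, upper]."""
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / n
    # Changing one user's value moves the mean by at most
    # (upper - lower) / n, so scale the noise to that sensitivity.
    scale = (upper - lower) / (n * epsilon)
    # A Laplace(0, scale) sample is the difference of two independent
    # exponential samples with mean `scale`.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_mean + noise

ratings = [4, 5, 3, 2, 5, 4, 4, 1, 3, 5]
print(private_mean(ratings, lower=1, upper=5, epsilon=1.0))
```

With many users, the noise shrinks relative to the true average, so the aggregate stays useful even though any individual's contribution is masked.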

Michael Kearns

In short, we think it’s a good idea to bake ethical considerations into algorithms when it’s sensible and possible to do so. That isn’t always the case, and many effects of algorithms are exogenous to the development of the algorithms themselves.

For instance, algorithms that predict criminal recidivism risk have downstream consequences in the legal system. There’s the algorithm, and then there’s what the legal system does with the algorithm; you might not be able to address, in the algorithm’s design itself, the way the system uses it.

But from the beginning our stance has been that if there are things about algorithms that we don’t like, we should change the way we design those algorithms to avoid these behaviors. The first thing you should do is try to fix the problem in the technology itself rather than wait for the harm to happen and then regulate it in the courts.
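As a concrete, if hypothetical, illustration of fixing the problem in the technology itself: one standard design-time intervention (a common post-processing technique, not specifically the authors' own method) is to give each group its own score threshold so that all groups receive positive predictions at the same rate. The scores and target rate below are made up.

```python
# Hypothetical design-time fix: post-process a risk model's scores with
# group-specific thresholds so each group is flagged at the same rate,
# rather than waiting for disparities to be litigated downstream.
def parity_thresholds(scores_by_group, target_rate):
    """Pick a per-group score threshold passing roughly target_rate of each group."""
    thresholds = {}
    for group, scores in scores_by_group.items():
        ranked = sorted(scores, reverse=True)
        k = max(1, round(target_rate * len(ranked)))
        thresholds[group] = ranked[k - 1]  # the top k scores pass
    return thresholds

scores = {
    "group_a": [0.91, 0.80, 0.42, 0.30, 0.12],
    "group_b": [0.75, 0.55, 0.50, 0.33, 0.20],
}
print(parity_thresholds(scores, target_rate=0.4))
# -> {'group_a': 0.8, 'group_b': 0.55}: each group passes the same fraction.
```

Choosing which quantity to equalize (positive rates, false positive rates, and so on) is itself a substantive decision, which is part of why the authors argue such choices belong in the design stage.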

To read the full interview, please visit the Penn Engineering AI site.
