The Ethical Algorithm
The new book by professors Michael Kearns and Aaron Roth details how these increasingly ubiquitous pieces of technology can be designed to be more “socially aware.”
Algorithms, the digital decision-making tools at the core of some of the most successful and powerful technologies of the modern era, don’t come to conclusions in a vacuum. Their predictive abilities come with tradeoffs, as well as conscious and unconscious biases that their creators have imbued them with.
With the stakes so high, and with factors so complex, computer scientists and technology companies must start taking a deeper look into how their algorithms balance accuracy with fairness and a host of other social concerns.
Michael Kearns, founding director of the Warren Center and National Center Professor of Management & Technology in Penn Engineering’s Department of Computer and Information Science (CIS), and fellow Warren Center member Aaron Roth, Class of 1940 Bicentennial Term Associate Professor in CIS, outline this goal — and ways to achieve it — in their new book, The Ethical Algorithm.
Even when algorithms take fairness into account, poorly designed ones can satisfy a fairness constraint on each protected group individually while unwittingly concentrating the most biased outcomes in a structured subgroup, such as people at a particular intersection of race and gender, a process known as "fairness gerrymandering." Preventing biased or unethical outcomes is thus not just a matter of technologists considering how social factors play out in policy or the law, but how those concepts can be best represented in lines of code.
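To make the phenomenon concrete, here is a minimal sketch in Python using a hypothetical synthetic population (the attribute names, group sizes, and prediction counts are all illustrative, not taken from the book): a classifier's positive-prediction rates are identical across each race group and each gender group taken alone, yet one intersectional subgroup is treated very differently.

```python
# Hypothetical synthetic population: each record is (race, gender, prediction),
# where prediction is 1 for a positive decision and 0 otherwise.
# All names and counts below are illustrative assumptions.
def make_population():
    counts = {  # (race, gender) -> (num_positive, num_negative)
        ("A", "M"): (6, 4),
        ("A", "F"): (4, 6),
        ("B", "M"): (4, 6),
        ("B", "F"): (6, 4),
    }
    pop = []
    for (race, gender), (pos, neg) in counts.items():
        pop += [(race, gender, 1)] * pos + [(race, gender, 0)] * neg
    return pop

def positive_rate(pop, predicate):
    # Fraction of positive predictions among records matching the predicate.
    sel = [p for race, gender, p in pop if predicate(race, gender)]
    return sum(sel) / len(sel)

pop = make_population()
# Each marginal group looks perfectly "fair": equal positive rates.
print(positive_rate(pop, lambda r, g: r == "A"))  # 0.5
print(positive_rate(pop, lambda r, g: r == "B"))  # 0.5
print(positive_rate(pop, lambda r, g: g == "M"))  # 0.5
print(positive_rate(pop, lambda r, g: g == "F"))  # 0.5
# But the disparity is concentrated at an intersection of attributes.
print(positive_rate(pop, lambda r, g: r == "B" and g == "F"))  # 0.6
print(positive_rate(pop, lambda r, g: r == "A" and g == "F"))  # 0.4
```

An auditor checking only the marginal groups would see no disparity at all, which is exactly why subgroup-level fairness constraints matter.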
These issues are not hypothetical; the journal Science published a study earlier this week showing that a common piece of healthcare management software routinely underserved black patients, despite their greater need. Kearns spoke to Marketplace Morning Report's David Brancaccio about how a singular focus on accuracy led to the biased outcome of that software, and how algorithm designers might build in protections against such results.
Kearns and Roth also recently spoke to Knowledge@Wharton about how their work on "differential privacy" might be a way to maintain the accuracy technology companies' services are predicated on while still protecting customer data. More ethical algorithms would go hand-in-hand with policy-based solutions, such as the California Consumer Privacy Act, which California governor Gavin Newsom recently signed into law.
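The canonical way to achieve differential privacy for a simple counting query is the Laplace mechanism: add random noise calibrated to the query's sensitivity. The sketch below, in Python, is a textbook illustration under assumed parameters, not the authors' implementation; the dataset and the predicate are hypothetical.

```python
import math
import random

def dp_count(records, predicate, epsilon, rng=random):
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    A counting query has sensitivity 1: adding or removing any one person's
    record changes the true count by at most 1, so Laplace noise with scale
    1/epsilon suffices. Smaller epsilon means stronger privacy, noisier answers.
    """
    true_count = sum(1 for r in records if predicate(r))
    # Sample Laplace(0, 1/epsilon) by inverse CDF from u uniform on (-0.5, 0.5).
    u = rng.random() - 0.5
    noise = -(1.0 / epsilon) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_count + noise

# Hypothetical usage: how many people in a (made-up) dataset are over 40?
ages = [23, 35, 41, 52, 67, 29, 44, 38, 71, 55]
print(dp_count(ages, lambda age: age > 40, epsilon=0.5))  # true count 5, plus noise
```

The answer is deliberately perturbed, but because the noise is unbiased and its scale is independent of the dataset size, large aggregate statistics remain accurate while no individual's presence can be confidently inferred.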
For a broader view on the issues of ethics in programming, listen to the most recent episode of the Office Hours podcast, featuring Kearns, Roth and Lisa Miracchi, an assistant professor in the Department of Philosophy in the School of Arts and Sciences.