Amazon Provides Gift to 10 Penn Engineering Ph.D. Students for Work on Trustworthy AI

PhD Students Funded by Amazon
Top row, left to right: Eshwar Ram Arunachaleswaran, Natalie Collina, Ziyang Li, Stephen Mell, and Georgy Noarov; second row, left to right: Artemis Panagopoulou, Jianing Qian, Alex Robey, Anton Xue, and Yahan Yang. These are the 10 Ph.D. students whose research on fair and trustworthy AI is being supported by a gift from AWS.

Today Amazon Web Services (AWS) announced it is providing a $700,000 gift to the University of Pennsylvania School of Engineering and Applied Science to support research on fair and trustworthy AI. The funds will be distributed to 10 Engineering Ph.D. students who are conducting research in that area.

The students are pursuing their research under the auspices of the ASSET (AI-Enabled Systems: Safe, Explainable and Trustworthy) Center, part of Penn Engineering’s Innovation in Data Engineering and Science (IDEAS) Initiative. ASSET’s mission is to advance “science and tools for developing AI-enabled data-driven engineering systems so that designers can guarantee that they are doing what they designed them to do and users can trust them to do what they expect them to do.”

“The ASSET Center is proud to receive Amazon’s support for these doctoral students working to ensure that systems relying on artificial intelligence are trustworthy,” said Rajeev Alur, Zisman Family Professor in Computer and Information Science (CIS) and the Director of ASSET. “Penn’s interdisciplinary research teams lead the way in answering the core questions that will define the future of AI and its acceptance by society. How do we make sure that AI-enabled systems are safe? How can we give assurances and guarantees against harm? How should decisions made by AI be explained in ways that are understandable to stakeholders? How must AI algorithms be engineered to address concerns about privacy, fairness, and bias?”

“It’s great to collaborate with Penn on such important topics as trust, safety and interpretability,” said Stefano Soatto, Vice President of Applied Science for Amazon Web Services (AWS) Artificial Intelligence (AI). “These are key to the long-term beneficial impact of AI, and Penn holds a leadership position in this area. I look forward to seeing the students’ work in action in the real world.”

The funded research projects are centered on several themes: machine learning algorithms with fairness, privacy, robustness, and safety guarantees; analysis of artificial intelligence-enabled systems for assurance; explainability and interpretability; neurosymbolic learning; and human-centric design.

“This gift from AWS comes at an important time for research in responsible AI,” said Michael Kearns, National Center Professor of Management & Technology in CIS and an Amazon Scholar. “Our students are hard at work creating the knowledge that industry requires for commercial technologies that will define so much of our lives, and it’s essential to invest in talented researchers focused on technically rigorous and socially engaged ways to use AI to our advantage.”

Below are the 10 students receiving funding and details on their research.

Eshwar Ram Arunachaleswaran is a fourth-year Ph.D. student, advised by Sampath Kannan, Henry Salvatori Professor in CIS, and Anindya De, Assistant Professor in CIS. Arunachaleswaran’s research is focused on fairness notions and fair algorithms when individuals are classified by a network of classifiers, possibly with feedback.

Natalie Collina is a second-year Ph.D. student, advised by Kearns and Aaron Roth, Henry Salvatori Professor of Computer and Cognitive Science who, like Kearns, is an Amazon Scholar. Collina is investigating models for data markets, in which a seller might choose to add noise to query answers for both privacy and revenue purposes. Her goal is to put the study of markets for data on firm algorithmic and microeconomic foundations.

Ziyang Li is a fourth-year Ph.D. student, advised by Mayur Naik, Professor in CIS. Li is building Scallop, an open-source programming language and framework for developing neurosymbolic AI applications. Li sees neurosymbolic AI as an emerging paradigm that seeks to integrate deep learning and classical algorithms in order to leverage the best of both worlds.

Stephen Mell is a fourth-year Ph.D. student, advised by Osbert Bastani, Assistant Professor in CIS, and Steve Zdancewic, Schlein Family President’s Distinguished Professor and Associate Chair of CIS. Mell is currently studying how to make machine learning algorithms more robust and data efficient by leveraging neurosymbolic techniques. His goal is to design algorithms that can learn from just a handful of examples in safety-critical settings.

Georgy Noarov is a third-year Ph.D. student, advised by Kearns and Roth. Noarov is studying means for uncertainty quantification of black box machine learning models, including strong variants of calibration and conformal prediction.

Artemis Panagopoulou is a second-year Ph.D. student, advised by Chris Callison-Burch, Associate Professor in CIS, and Mark Yatskar, Assistant Professor in CIS. Panagopoulou is designing explainable models for image classification that use large language models to generate the concepts used in classification. The goal of this research is to produce more trustworthy AI systems by creating human-readable features that are faithfully used by the model during classification.

Jianing Qian is a third-year Ph.D. student, advised by Dinesh Jayaraman, Assistant Professor in CIS. Qian’s research is focused on acquiring hierarchical object-centric visual representations that are interpretable to humans, and learning structured visuomotor control policies for robots that exploit these visual representations, through imitation and reinforcement learning.

Alex Robey is a fifth-year Ph.D. student, advised by George Pappas, UPS Foundation Professor and Chair of Electrical and Systems Engineering (ESE), and Hamed Hassani, Assistant Professor in ESE. Robey is working on deep learning that is robust to distribution shifts caused by natural variation, such as changes in lighting, background, and weather.

Anton Xue is a fourth-year Ph.D. student, advised by Alur. Xue's research is focused on the robustness and interpretability of deep learning. He is currently researching techniques to compare and analyze the effectiveness of methods for interpretable learning.

Yahan Yang is a third-year Ph.D. student advised by Insup Lee, Cecilia Fitler Moore Professor in CIS and Director of the PRECISE Center in the School of Engineering and Applied Science. Yang has been researching a two-stage classification technique, called memory classifiers, that can improve the robustness of standard classifiers to distribution shifts. Her approach combines expert knowledge about the "high-level" structure of the data with standard classifiers.

This announcement was produced in collaboration with Amazon Science.