
As members of the inaugural AI x Science Postdoctoral Fellowship, Brynn Sherman and Kieran Murphy are already reaping the rewards of cross-discipline collaboration: testing new ideas quickly and learning new research languages. The new fellowship program, offered through the School of Arts & Sciences (SAS) and the School of Engineering and Applied Science (SEAS), provides mentorship and peer engagement opportunities.
“AI is a part of doing research in the sciences now,” says Colin Twomey, the executive director of the School of Arts & Sciences’ Data Driven Discovery Initiative (DDDI), who oversees the new program. “There’s this growing recognition that there are experts who have scientific research goals here at Penn that want to use these AI tools, and there are folks with expertise in building them, so we thought, ‘Why not get them together, get them communicating, and eventually, collaborating?’”
Prying open AI’s black box
Murphy’s research employs information theory to demystify AI’s notorious “black box” problem, making deep learning systems more transparent, interpretable, and reliable.
“I try to make models more interpretable by tracing the flow of information—tracking where each piece of data originates within the dataset,” Murphy explains. His work examines how information flows through neural networks, identifying which parts of the input data most influence an AI system’s conclusions on a given task.
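To make the idea concrete, here is a minimal sketch of one simple way to probe which input features drive a model’s output. This is an illustration, not Murphy’s actual method: it shuffles one feature at a time and measures how much the predictions change, using a toy dataset and a stand-in model that are entirely assumed for the example.

```python
# Minimal sketch (illustrative, not Murphy's algorithm): probe how much
# each input feature influences a model's output by shuffling that
# feature and measuring the change in predictions -- a crude proxy for
# how much information flows from that feature to the conclusion.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 3 features, but only the first two carry signal.
X = rng.normal(size=(1000, 3))

# Stand-in "model": a fixed scorer; any trained predictor works here.
def model(X):
    return 1.0 / (1.0 + np.exp(-(X[:, 0] + 0.5 * X[:, 1])))

baseline = model(X)
for j in range(X.shape[1]):
    X_shuffled = X.copy()
    # Permuting a column destroys its relationship to the other inputs.
    X_shuffled[:, j] = rng.permutation(X_shuffled[:, j])
    delta = np.mean(np.abs(model(X_shuffled) - baseline))
    print(f"feature {j}: mean output change {delta:.3f}")
```

On this toy setup, shuffling the third feature barely moves the output, while shuffling the first changes it the most, ranking the features by their influence on the model’s conclusions.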
One of his favorite tools is something quite familiar: lossy compression, the technique that shrinks JPEG images and MP3 audio by discarding imperceptible detail. “I use lossy compression not for reducing file sizes, but for pointing out what the important information is in data,” Murphy says. In other words, by strategically “compressing” a dataset, be it sensor readings from hospital patients or behavioral data from experiments, his algorithms reveal which variables or features carry the real signal amid the noise.
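The sketch below shows the flavor of that idea under assumed, simplified conditions: each feature is lossily “compressed” by binning it into ever fewer levels, and we check how much label signal survives. A feature whose coarse version still predicts the label carries real signal; a feature that can be compressed away without losing anything was mostly noise. The dataset, the binning scheme, and the correlation score are all illustrative choices, not Murphy’s published method.

```python
# Hedged sketch of "compression as a spotlight": quantize each feature
# to fewer and fewer levels (a simple form of lossy compression) and
# measure how much predictive signal survives at each coarseness.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 3))
y = (X[:, 0] > 0).astype(int)  # in this toy data, only feature 0 matters

def quantize(col, n_levels):
    """Lossily compress a column to n_levels values via equal-width binning."""
    edges = np.linspace(col.min(), col.max(), n_levels + 1)
    idx = np.clip(np.digitize(col, edges) - 1, 0, n_levels - 1)
    centers = (edges[:-1] + edges[1:]) / 2
    return centers[idx]

def signal(col):
    """How much label signal survives: |correlation| between column and label."""
    return abs(np.corrcoef(col, y)[0, 1])

for j in range(X.shape[1]):
    scores = [round(signal(quantize(X[:, j], n)), 2) for n in (2, 4, 16)]
    print(f"feature {j}: |corr| with label at 2/4/16 levels -> {scores}")
```

Feature 0 keeps most of its predictive signal even when squeezed down to two levels, while the noise features score near zero at every compression level, which is the kind of contrast that points to where the important information lives.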