A team of researchers from the University of Pennsylvania’s School of Engineering and Applied Science and the Children’s Hospital of Philadelphia (CHOP) has been awarded a five-year, $6 million Multidisciplinary University Research Initiative (MURI) grant. The MURI program is the signature research funding mechanism of the Department of Defense.
The Penn team’s proposal, “Robust Concept Learning and Lifelong Adaptation Against Adversarial Attacks,” aims to leverage insights from human cognitive development to make artificial intelligence systems better at protecting themselves from malicious disruptions.
As these systems increasingly interact with the physical world, they become more vulnerable to being confused by ambiguous information. Rather than attempting to directly access the software that controls a self-driving car’s accelerator, for example, an ill-intentioned person could subtly alter a speed-limit sign so that the car’s AI no longer recognizes it.
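The altered-sign scenario is an example of an adversarial attack. A minimal sketch of the idea, using the well-known Fast Gradient Sign Method (FGSM) on a toy logistic model, shows how many imperceptibly small per-feature changes can add up to flip a classifier’s decision. All numbers, dimensions, and the model itself here are illustrative assumptions, not part of the Penn project:

```python
# Toy illustration of an FGSM-style adversarial example (assumed
# setup for illustration only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d = 100                                  # number of input features
w = rng.choice([-0.5, 0.5], size=d)      # fixed "trained" weights
x = np.sign(w) * 0.02                    # a clean input the model gets right
y = 1.0                                  # true label

p = sigmoid(w @ x)                       # confident, correct prediction

# Gradient of the cross-entropy loss with respect to the *input*:
grad_x = (p - y) * w

# FGSM: nudge every feature by a tiny amount in the worst direction.
eps = 0.05
x_adv = x + eps * np.sign(grad_x)
p_adv = sigmoid(w @ x_adv)

print(f"clean prediction:       {p:.2f}")    # above 0.5: correct
print(f"adversarial prediction: {p_adv:.2f}")  # below 0.5: fooled
```

Each feature moves by only 0.05, yet because the small shifts align with the model’s weights across all 100 features, the decision flips, much as a few stickers on a sign can flip an image classifier.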
By imbuing AI with the kind of robust, adaptive learning capabilities that biological intelligence exhibits, these cyber-physical systems will be able to work with broader categories of information and thus be less prone to potentially dangerous confusion.
The team is led by Insup Lee, Cecilia Fitler Moore Professor in Penn Engineering’s Departments of Computer and Information Science (CIS) and Electrical and Systems Engineering (ESE). Lee is also the director of the PRECISE Center, which is dedicated to researching technical strategies and the technology roadmap for advanced safety and security solutions for cyber-physical systems.
Other team members include Research Assistant Professor Osbert Bastani, Ruth Yalom Stone Professor Kostas Daniilidis, Senior Lecturer Eric Eaton, Eduardo D. Glandt Distinguished Professor Dan Roth, Research Assistant Professor James Weimer, all of CIS, and Julia Parish-Morris, Assistant Professor of Psychiatry at CHOP, who provides expertise in how children develop language and a theory of mind.
The need for insights from human cognitive development stems from the fact that current “deep learning” approaches require large amounts of labeled data to be effective. Artificial intelligence systems may have perfect memory and react faster than any human, but their knowledge is limited to the narrowly focused domains on which they have been explicitly trained. Their ability to make correct decisions falls apart in novel settings, whereas humans naturally apply lessons learned in one context to another without explicit training.
“Robust, concept-learning techniques will assure that trained models operate effectively in the presence of malicious attacks, offering a substantial improvement over the vulnerability of today’s systems that can be easily compromised by even small anomalies,” Lee says.
“Research on neuro-inspired machine learning models has long been driven by biological principles, and the incorporation of learning mechanisms employed by young children is a natural extension of that,” says Weimer.
Altogether, this team’s work will benefit researchers and designers of autonomous systems by raising awareness of the danger these systems can present when placed in the real world, and by creating new tools and technologies to reduce these risks.