Penn Engineering Research Uncovers Critical Vulnerabilities in AI-Enabled Robots, Advancing Safety and Security

The researchers found that AI-governed robots can easily be hacked, posing unaddressed safety risks. (Credit: Alex Robey)

In keeping with its commitment to drive responsible innovation, Penn Engineering convenes an open dialogue in Washington, D.C. on government-funded research and solutions for a safer AI-enabled world

Rapid advancements across industries, from healthcare and technology to finance and beyond, present novel opportunities as well as challenges. As part of the University of Pennsylvania’s School of Engineering and Applied Science’s (Penn Engineering) commitment to developing leading-edge solutions that provide a better future for all, the School is today bringing together renowned leaders in engineering, academia, industry and policy for a dialogue on responsibly shaping the future of innovation at the Penn Washington Center in Washington, D.C.

Within its new Responsible Innovation initiative, researchers at Penn Engineering discovered that certain features of AI-governed robots carry previously unidentified security vulnerabilities and weaknesses. Funded by the National Science Foundation and the Army Research Laboratory, the research aims to address these emerging vulnerabilities and ensure the safe deployment of large language models (LLMs) in robotics.

“Our work shows that, at this moment, large language models are just not safe enough when integrated with the physical world,” says George Pappas, UPS Foundation Professor of Transportation in Electrical and Systems Engineering (ESE), in Computer and Information Science (CIS), and in Mechanical Engineering and Applied Mechanics (MEAM).

Read the full story on the Penn AI site
