Society’s widespread adoption of self-driving cars is just around the corner. While these vehicles have the potential to deliver significant economic and societal benefits by addressing persistent traffic safety, congestion and accessibility issues, AI-powered transportation is also a double-edged sword. The AI systems that power these cars can lead to alarmingly dangerous consequences: unreliable decisions made in complex, uncertain situations, vulnerability to adversarial attacks on transportation elements, and unintentional discrimination against certain user groups. Releasing products powered by immature AI technology could not only put the people who use them at risk of physical harm, it could also create distrust in the technology overall and hinder public acceptance once the technology is ready.
To address both the safety and trustworthiness of the AI that powers autonomous vehicles and transportation systems, Rahul Mangharam, Professor in Electrical and Systems Engineering (ESE) and in Computer and Information Science (CIS) and a founding member of the PRECISE Center, joins a collaborative team investigating “Trustworthy AI for Transportation Cyber Physical Systems (CPS).” With a $1.2 million grant from the National Science Foundation (NSF) through its MSI Expansion Program, the multidisciplinary team of eight distinguished faculty members from the University of Texas Rio Grande Valley (UTRGV), the University of California, Riverside and the University of Pennsylvania will address critical issues such as autonomous driving safety, vulnerability to adversarial attacks and equitable AI decisions for all transportation system users.
“I am deeply grateful for this incredible opportunity provided by NSF,” says Mangharam. “This project represents a pivotal step in our efforts to enhance the multidisciplinary research capacity at the intersection of AI safety, security and fairness within transportation cyber-physical systems. As we push the boundaries of AI in these critical areas, our goal is not only to advance the technology but also to ensure that it operates in a manner that is safe, secure and fair for all users. I am excited to collaborate with such a talented group of researchers and look forward to the impactful work we will achieve together.”
While the research team works to develop cutting-edge AI tools that address both technical and social trust issues in transportation CPS, they are also invested in the education of future engineers. To support that investment, the project’s research efforts will be integrated with a robust education and outreach program designed to foster a diverse and skilled workforce. This initiative will train people from underrepresented groups in AI trustworthiness, educate both undergraduate and graduate students about trustworthy AI in transportation systems and inspire K-12 students to pursue careers in AI and engineering. Additionally, the project is committed to building a broader research-education community that will support these objectives and foster collaboration in the next generation of AI researchers.
“We are proud of Rahul and his team for leading this important research on trustworthy AI in transportation,” says Insup Lee, Cecilia Fitler Moore Professor in CIS and ESE and Director of the PRECISE (Penn Research In Embedded Computing and Integrated Systems Engineering) Center. “This NSF award aligns with the mission of PRECISE to develop safe, reliable and intelligent systems. By addressing AI safety and fairness, the project not only advances technology but also inspires the next generation of engineers to create solutions for a safer, more equitable future.”
This announcement was co-authored by Liz Wai-Ping Ng, Associate Director at the PRECISE Center.