Chris Callison-Burch Talks Empowering Tomorrow’s AI Engineers

Through programs like the Raj and Neera Singh Program in Artificial Intelligence, the first Ivy League undergraduate degree of its kind, Penn Engineering is addressing the critical demand for specialized engineers and empowering students to be leaders in AI’s responsible use. We sat down with Chris Callison-Burch, Associate Professor in Computer and Information Science, who shared his own vision of an AI-integrated future.

You have been working on language models for decades. How did your research perspective shift after the release of GPT-3, the predecessor of ChatGPT?

In the early 2000s, researchers, myself included, thought that getting language models to produce human-quality language would be extremely difficult. In fact, most people in my field had shifted their attention away from the Turing Test toward other, more immediately quantifiable goals. When GPT-3 came out, I realized that AI-generated language had arrived much sooner than I anticipated, and I was startled by the quality of the language it could produce.

Honestly, the release of GPT-3 spurred an existential crisis in my career. I quickly realized that these AI language models were both revelatory and slightly terrifying. I had to think about how to position myself to help advance this technology, and I questioned whether academia was the right place to do that.

Why were you questioning academia and what convinced you to stay?

With the amount of data and computing power needed to train large language models, or LLMs (think a Google-sized data center), I wasn’t sure whether academic environments would be able to contribute much. I ended up staying because I thought carefully about the other contributions academic researchers can make to advance AI.

Academics can take open-source models, such as Meta’s Llama, and adapt them to their own use cases, following promising applications of open-source models in healthcare provider systems and in government. However, a widely accessible and powerful LLM comes with serious repercussions, since harmful uses discovered after a model’s release are hard to limit. Without proper regulation and mandated updates, these tools could enable harmful uses, including spreading misinformation, hate speech and dangerous suggestions. As researchers, it is our responsibility to mitigate those outcomes, and that alone is a very important role.
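To make that concrete, below is a minimal sketch of what adapting an open-source model can look like, using parameter-efficient fine-tuning (LoRA) with the Hugging Face transformers and peft libraries; the checkpoint name and hyperparameters are illustrative assumptions, not details from the interview.

    # Minimal sketch: adapting an open-source Llama checkpoint with LoRA.
    # Assumes the Hugging Face `transformers` and `peft` libraries are
    # installed and that you have access to the (gated) Llama weights.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import LoraConfig, get_peft_model

    model_name = "meta-llama/Llama-2-7b-hf"  # illustrative checkpoint
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    # LoRA trains small low-rank adapter matrices instead of all of the
    # base model's weights, which is what makes adaptation feasible
    # outside a Google-sized data center.
    lora_config = LoraConfig(
        r=8,                                  # rank of the adapter matrices
        lora_alpha=16,                        # scaling applied to adapter updates
        target_modules=["q_proj", "v_proj"],  # attention projections to adapt
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of parameters

From here, the adapted model can be trained on a domain-specific dataset (for example, de-identified clinical notes or government documents) with a standard training loop or the transformers Trainer.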

This article was written by Melissa Pappas. To read the full interview, please visit Penn Engineering’s AI site.
