Luana Ruiz: Graph Neural Networks and PhD Life

Luana Ruiz received the Best Student Paper Award at the 27th European Signal Processing Conference in La Coruña, Spain.

Luana Ruiz, a Ph.D. student in the Department of Electrical and Systems Engineering in the School of Engineering and Applied Science, recently presented her work on graph recurrent neural networks at the 27th European Signal Processing Conference in La Coruña, Spain. There, she received the Best Student Paper Award for a paper co-authored with fellow student Fernando Gama and Professor Alejandro Ribeiro of the Electrical and Systems Engineering department. Luana’s work applies signal processing tools to machine learning, and she has recently published research that uses seismic wave readings from a seismograph network to predict the region an earthquake will hit seconds before it occurs.

Last September, I went to La Coruña, Spain, to present my research on graph recurrent neural networks and participate in a student competition at the European Signal Processing Conference (EUSIPCO).

It was a wonderful experience. The days at the conference were intense and productive, filled with lectures and poster sessions. In my free time, I also got to see Galicia, a Spanish region overlooking the Atlantic that the Celts and Romans once held as strategic for maritime trade.

EUSIPCO is an annual conference where researchers from all over the world go to present their latest developments in signal processing, a field of electrical engineering concerned with the analysis, synthesis and transformation of signals carrying information, such as audio and images. For Ph.D. students like me and my peers, conferences are a vital part of our work. We go to conferences to present our research to the scientific community and to learn from other researchers’ work, as well as to network, establish collaborations and find jobs.

While conferences last only a few days at most, attending them requires months of preparation. Papers may need to be submitted as early as a year before the conference, and it is not uncommon for many of us to pull all-nighters to meet those deadlines. After acceptance, we also have to prepare our presentations, which are usually in poster or lecture format.

But conferences are only a small part of what we do. In fact, it is hard to describe what a typical day at work looks like for a Ph.D. student. Some days, we are writing conference papers or journal articles, which are longer manuscripts that go into the details of a particular piece of research. On others, we may be taking classes, teaching, holding office hours, going to talks, attending meetings and discussing research ideas. Towards the end of the program, we also need to write research proposals and the final thesis, and prepare for the Ph.D. defense. All of this happens over the course of five years or longer, which may seem like a lot of time at the start of the program, but the years go by fast.

The piece of research I presented at EUSIPCO, co-authored with my colleague Fernando Gama and my advisor Professor Alejandro Ribeiro, lies at the intersection of signal processing, my main area of research, and machine learning. Signal processing deals with analyzing and modifying information; in particular, we are interested in information in the form of “graph processes,” which are time signals on graphs. Consider, for instance, a graph connecting U.S. weather stations that are geographically close: on this graph, an example of a graph process is the series of temperature measurements recorded by each station over the course of a day.
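To make the idea concrete, here is a toy sketch in Python (my own illustration, with a made-up four-station graph rather than real weather data): the adjacency matrix encodes which stations are close to each other, and the graph process is simply a matrix of hourly readings, one row per hour and one column per station.

```python
import numpy as np

# Toy "weather station" graph: four stations, with an edge between
# each pair of geographically close stations (a made-up layout).
N = 4
A = np.array([[0., 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]])  # symmetric adjacency matrix

# A graph process: one temperature reading per station per hour.
# X[t, i] is the temperature at station i during hour t.
T = 24
rng = np.random.default_rng(0)
daily_cycle = 15 + 10 * np.sin(2 * np.pi * np.arange(T) / 24)
X = daily_cycle[:, None] + rng.normal(0, 1, size=(T, N))

print(X.shape)  # (24, 4): a time signal defined on the graph
```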

Machine learning architectures use information to learn to perform specific tasks, such as automatic translation or face recognition. A key characteristic of these models is that they have to be adapted to the kind of information they take in. The architectures best adapted to learning from graph processes are graph recurrent neural networks. From a series of temperature measurements at weather stations across the U.S., a graph recurrent neural network can learn how morning temperatures affect the evolution of afternoon temperatures in different regions of the country. This is achieved by approximately modeling each station’s temperature measurement as a function of both the previous hourly readings at the same station (a “time dependence”) and those at neighboring stations (a “spatial dependence”).
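Continuing the toy example above, the sketch below shows roughly what one step of such a recurrence can look like. It is a deliberately bare-bones caricature, not the exact architecture from our paper: a polynomial graph filter diffuses signals over the station network, and the hidden state combines the current readings (the spatial dependence) with the previous state (the time dependence). The coefficient lists `a` and `b` stand in for parameters that would be learned.

```python
import numpy as np

def graph_filter(S, x, coeffs):
    """Polynomial graph filter: sum_k coeffs[k] * S^k @ x.
    S is a graph shift operator (here, the adjacency matrix),
    x a signal on the nodes, coeffs the filter taps."""
    y = np.zeros_like(x)
    Skx = x.copy()
    for c in coeffs:
        y += c * Skx
        Skx = S @ Skx  # diffuse the signal one hop further
    return y

def grnn_step(S, x_t, z_prev, a, b):
    """One recurrence of a bare-bones graph RNN: the new hidden
    state mixes the current readings (diffused over the graph by
    one filter) with the previous state (diffused by another)."""
    return np.tanh(graph_filter(S, x_t, a) + graph_filter(S, z_prev, b))

# Run the recurrence over the day of readings from the sketch above.
z = np.zeros(N)
for t in range(T):
    z = grnn_step(A, X[t], z, a=[0.5, 0.3], b=[0.4, 0.2])
print(z)  # final hidden state: one summary value per station
```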

The mathematical model that best represents these time and spatial dependencies has parameters that are learned from past sequences of temperature readings, which we refer to as the data used to “train” the model. In other words, we feed the model old temperature readings and ask it to predict temperatures that are already known; then, by minimizing the difference between these predictions and the real temperatures using optimization algorithms, we adjust the values of the model parameters until the prediction error is small enough. What is interesting about this approach is that it relies entirely on data, with no knowledge of meteorology being necessary at any point.
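Here is a minimal sketch of that training loop, written with PyTorch for convenience (an assumption on my part; any optimization framework would do). It fits just two coefficients, the weight a station puts on its own last reading and the weight it puts on its neighbors’ average, by repeatedly predicting the next hour’s readings and nudging the parameters to shrink the error against readings that are already known.

```python
import torch

torch.manual_seed(0)
N, T = 4, 24
A = torch.tensor([[0., 1, 1, 0], [1, 0, 1, 0],
                  [1, 1, 0, 1], [0, 0, 1, 0]])
A_hat = A / A.sum(1, keepdim=True)  # row j averages over station j's neighbors
X = 15 + 10 * torch.sin(2 * torch.pi * torch.arange(T) / 24)[:, None] \
    + torch.randn(T, N)  # synthetic hourly temperatures, as in the sketches above

# Two learnable parameters: a station's own weight and its neighbors' weight.
own = torch.zeros(1, requires_grad=True)
nbr = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([own, nbr], lr=0.05)

for epoch in range(500):
    pred = own * X[:-1] + nbr * (X[:-1] @ A_hat.T)  # predict hour t+1 from hour t
    loss = ((pred - X[1:]) ** 2).mean()  # error against already-known readings
    opt.zero_grad()
    loss.backward()  # which way should each parameter move to reduce the error?
    opt.step()       # nudge the parameters in that direction

print(f"learned weights: own={own.item():.2f}, neighbors={nbr.item():.2f}")
```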

The novelty of graph recurrent neural networks is that they take the underlying structure of a graph process into account as they learn from it. In the weather example, the graph is easy to identify: it is the map of the weather station network. However, previous approaches to temperature prediction neglect this structure, making independent temperature predictions at each station using data from that station alone. As such, structural information that we know about the problem goes underused.

In our method, we leverage the graph structure by taking a multidisciplinary approach: applying tools from graph signal processing, an electrical engineering discipline dedicated to the study of signals defined on graphs, to existing machine learning architectures traditionally studied in computer science. The result was a learning architecture that substantially outperforms less structured neural networks in a number of numerical experiments involving graph processes, summarized in the paper we presented at EUSIPCO.

I am very grateful for the award we received for this paper, but, more importantly, I believe it is a recognition of the value of multidisciplinary science. However much there is to be discovered within a particular field, there are even more possibilities at the interfaces between research areas. It is not always easy to work at these interfaces, because it takes time for research communities to allow “outsiders” in. But in my experience, the positives largely outweigh the negatives, and making connections between signal processing and machine learning has been a rewarding journey that I want to keep following throughout my career.
