May I have your attention please

Since the beginning of history, humans have been the only ones capable of producing and understanding language. In the last few years, deep learning has begun to change that. Modern language models can chat with humans, answer questions, and more. These models are big, complex beasts (GPT-3, for example, has 175 billion parameters). Key to their success is their ability to consider words in context and automatically focus on the important parts, thanks to the attention mechanism.
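To give a flavor of what "focusing on the important parts" means, here is a minimal, illustrative sketch of scaled dot-product attention in plain Python. This is a toy example for intuition only, not code from this project: each query is compared with every key, and the resulting weights decide how much each position contributes to the output.

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention: each query is scored against every key,
    and the scores (after softmax) weight a mixture of the values. High-scoring
    positions are the ones the model 'focuses' on."""
    d_k = len(keys[0])
    outputs = []
    for q in queries:
        # Similarity of this query to each key, scaled by sqrt of key dimension
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d_k) for k in keys]
        weights = softmax(scores)  # weights over positions, summing to 1
        # Weighted average of the value vectors
        out = [sum(w * v[j] for w, v in zip(weights, values))
               for j in range(len(values[0]))]
        outputs.append(out)
    return outputs

# Toy self-attention over three 2-D "word" vectors
x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
result = attention(x, x, x)
print(len(result), len(result[0]))  # 3 2
```

In self-attention, as above, queries, keys, and values all come from the same sequence, so each position's output is a context-weighted blend of the whole input.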

In this pilot project, we train deep learning language models to “understand” not human language, but neural signals. Our goal is to use context to improve decoding of neural activity during learning.

Methods & tools:

This is a work in progress; please check back for updates!

Photo by Ryan Stone on Unsplash

Meir Meshulam
Postdoctoral researcher

Meir Meshulam is a postdoctoral researcher in neuroscience at Princeton University, using machine learning and deep learning to study human cognition.