What We Remember

Cognitive psychologist and neuroscientist Wilma Bainbridge examines the interactions between what we see and what we remember, using large-scale online experiments, brain imaging, and artificial intelligence.

All of us have grappled with the striking mismatch between what we experience and our memories of those experiences. For example, when teaching a class for the first time, even if we see every student’s face, some will stick in our memories while others may still be hard to recognize after several classes. We may be able to vividly reconstruct the arrangement of the classroom desks from memory, but have trouble remembering the pattern on the carpet. In other words, not everything we see is created equal in memory. This gap between seeing and remembering is the driving force behind the research in my lab—we are working to understand what we remember and why. This content-based perspective is still surprisingly rare within the field of memory. However, understanding how different types of items are encoded into memory holds incredible promise to transform our theories of memory and bring research results into the real world—we can design models of memory that make specific predictions about people’s memories based on the images they are seeing.

Our memories are predictable.

Despite our unique individual experiences, people surprisingly tend to remember and forget the same items as one another. In other words, certain items are inherently memorable while others are forgettable—and this is why some students’ faces may stick in memory better than others. It also means that you can measure the memorability score of any item and make successful predictions about people’s memory from the item alone.
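In this line of research, an item’s memorability score is typically quantified as the proportion of viewers who recognize the item when it repeats in a large online memory test, often corrected for false alarms. As a purely illustrative sketch (the counts below are hypothetical, not drawn from any particular study):

```python
# Illustrative sketch: a memorability score as a corrected hit rate pooled over viewers.
# All counts below are hypothetical.

def memorability_score(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    """Hit rate minus false-alarm rate for a single image, across many viewers."""
    hit_rate = hits / (hits + misses)
    false_alarm_rate = false_alarms / (false_alarms + correct_rejections)
    return hit_rate - false_alarm_rate

# Example: 83 of 100 viewers recognized the image when it repeated,
# while 12 of 100 falsely "recognized" it on its first appearance.
print(memorability_score(hits=83, misses=17, false_alarms=12, correct_rejections=88))  # 0.71
```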

In fact, memorability effects are so strong and pervasive that our lab has developed a publicly available deep learning neural network called ResMem that can predict your chances of remembering an image with remarkable success. As one of the strongest tests of ResMem’s predictive abilities, we ran an experiment in which participants made a freeform visit to the Art Institute of Chicago (they could go with friends, explore the pieces in any order, and had no experimenter with them) and afterwards completed a memory test of the exhibit on their mobile phones. Even with such a noisy, naturalistic task and such complex, subjective images (i.e., artwork), ResMem significantly predicted which pieces people remembered from their visit. Further, even though ResMem was not trained on art and has no knowledge of culture, art history, or artists, it rated famous pieces in the Art Institute’s collection as more memorable than non-famous pieces. This implies that part of what makes a piece famous is that the image itself lasts in memory, a finding that may have big implications for how we think about the goals of art and graphic design.

We recently replicated these findings through a nationwide art contest, in which we challenged artists to intentionally create memorable or forgettable artwork. Again, ResMem significantly predicted what people remembered from their gallery visit—and it did better than the artists and viewers themselves. We find that this predictability of memory translates to other domains as well. Image posts on Reddit that ResMem judged more memorable received more comments, and those comments used more abstract language that went beyond what was depicted in the image. ResMem can also predict memory in children as young as four years of age, showing that even very young children show adult-like patterns in what they remember and forget.
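For readers who want to try this on their own photos, a minimal sketch using the lab’s publicly released resmem Python package might look like the following; the exact interface and the image file are assumptions on our part, so check the package’s documentation for the current API.

```python
# Minimal sketch of scoring an image's memorability with the resmem package
# (pip install resmem). The exact API is an assumption based on the public
# release; the image path is hypothetical.
from PIL import Image
from resmem import ResMem, transformer  # pretrained model plus its input transform

model = ResMem(pretrained=True)
model.eval()

img = Image.open("campus.jpg").convert("RGB")   # hypothetical input image
x = transformer(img)                            # resize and crop to the model's input size
score = model(x.view(-1, 3, 227, 227)).item()   # predicted memorability, roughly 0-1
print(f"Your image has a memorability score of {score:.2f}")
```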

If you can predict people’s memories for an image, it means you can also change people’s memories by changing the image. Our lab is also pursuing studies that use generative AI to change the memorability of an image. For example, in one study, we generated memorable and forgettable novel symbols for abstract words (e.g., justice), and found that people better remembered the symbols we engineered to be memorable, and they also better remembered the words associated with them.

Aerial photo of the UChicago campus with the text "Your image has a memorability score of 0.58".

What do memories look like if you can’t imagine?

We are also conducting studies looking at what happens when your memories are very different from your visual experiences. Specifically, we have been testing individuals with aphantasia—a congenital condition in which individuals report intact recognition memory and semantic memory but no ability to reconstruct an image from memory in their mind’s eye. In other words, they can recognize their bedroom and describe it, but cannot “see” an image of it when they think of it. Aphantasia is still largely understudied, and the mechanisms behind the condition are unknown. We were thus curious—what does the content of their visual memories look like, and how does it compare to what they see? We ran the first U.S. study of aphantasia, in which we asked aphantasic participants and controls to view scene photographs and then draw them both from memory and from perception (i.e., copying from the image). We found that while aphantasics drew high-quality images during perception (i.e., they had good drawing ability on average), they showed dramatic losses of detail in their visual memories. They remembered far fewer objects, drew them in less detail and with less color, and relied more on semantic scaffolding (word labels). However, they had completely intact spatial accuracy, and fewer false memories. This has opened up a new hypothesis in the field: aphantasia may involve a specific impairment of object-based memory, with spared spatial memory.

Our lab is now conducting multiple studies interrogating the nature of visual memory representations in aphantasic individuals. In one exciting recent study, our lab identified a pair of identical twins in which one has aphantasia and one does not, and scanned their brains while they performed visual imagery tasks. Overall, we found evidence that visual information still exists in the short-term memories of the twin with aphantasia, but with less visual detail in longer-term memories. We are now testing these results at a larger scale, to better understand how patterns in the brain reflect the content of what we are trying to call to memory.

Project Lead

Smiling woman with long brown hair wearing a blue shirt.

Wilma Bainbridge’s research focuses on the cognitive neuroscience of perception and memory, looking at how certain items are intrinsically more memorable than others, and how the brain is sensitive to this information. She finds that there are certain images—photographs and even faces—that are remembered by most people, and some that are globally forgotten. She uses behavioral experiments, computer vision, machine learning, online studies, and functional MRI to understand what makes an item intrinsically memorable, and how the brain processes these items differently. She also explores the visual content of memories, using drawings and functional MRI to decode memory content.

Collaborations

Bainbridge also has many collaborations across the university that use artificial intelligence to predict memory and other behaviors.

Smiling woman with curly brown hair wearing a light shirt and dark sweater.

A collaboration with Monica Rosenberg is developing computational models that combine people’s level of sustained attention with the memorability of images to make better predictions of memory.

A smiling man with dark hair wearing a blue shirt and dark sweater.

A collaboration led by Yuan Chang Leong is developing an AI model that makes intuitive physical judgments and comparing its judgments to human behavior.

A smiling man with dark wavy hair wearing a light blue shirt and tan sweater.

A collaboration led by Marc Berman has developed a deep neural network to predict the naturalness of a scene image, and found that natural images tend to be more compressible and less memorable.