The environments that we live in contain far more visual detail than our brains can process. Nevertheless, we are able to navigate through these environments with relative ease. How is it that we form meaningful representations of our world from the limited information that our brains can extract?
The way we process visual information is shaped by our behavioural goals, such as finding Waldo. Finding Waldo can be tough, but it would be far more difficult if you didn't know what he was wearing. A number of cognitive mechanisms allow us to prioritize just those aspects of our environment that are useful in guiding our behaviour. Knowing that Waldo wears red, we can speed our search for him by prioritizing red things in the environment for visual processing while ignoring, for example, blue things.
Cognitive mechanisms, such as those that remember the colour of Waldo's shirt or those that prioritize goal-relevant features in the environment, have classically been studied in isolation. In reality, these mechanisms often serve a single behavioural goal (e.g., finding Waldo as quickly as possible) and rarely operate independently of one another. As such, my research program has centred on how mechanisms of memory and visual attention promote efficient behaviour through flexible, reciprocal interactions.