Abstract

Objects in real-world scenes are often poorly or partially visible, for example because they are occluded or appear in the periphery. An additional challenge of real-world vision is that it is dynamic, causing the appearance of objects (e.g., their size and orientation) to change as we move. Notably, however, these changes are predictable from the three-dimensional structure of the surrounding scene. In two functional magnetic resonance imaging studies, we find that the visual cortex dynamically updates object representations using this predictive contextual information. First, visual cortical representations of objects were enhanced when they rotated congruently (versus incongruently) with the surrounding scene. Second, the inferred orientation of an object could be decoded from visual cortex activity even when the object was fully occluded. These findings indicate that predictive processes in the visual cortex follow the geometric structure of the environment, providing a mechanism to support object perception in dynamic natural vision.