Abstract

Animals excel at seamlessly integrating information from different senses, a capability critical for navigating complex environments. Despite recent progress in multisensory research, the absence of stimulus-computable perceptual models fundamentally limits our understanding of how the brain extracts and combines task-relevant cues from the continuous flow of natural multisensory stimuli. Here, we introduce an image- and sound-computable population model for audiovisual perception, based on biologically plausible units that detect spatiotemporal correlations across auditory and visual streams. In a large-scale simulation spanning 69 psychophysical, eye-tracking, and pharmacological experiments, our model replicates human, monkey, and rat behaviour in response to diverse aud…
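The abstract's core mechanism, units that respond to spatiotemporal correlations between auditory and visual streams, can be illustrated with a minimal sketch. The function below is not the authors' model; it is a toy, assuming grayscale video frames and a raw audio waveform, that reduces each stream to a one-dimensional temporal signal (frame-to-frame luminance change for vision, amplitude envelope for audition) and reports the peak normalized cross-correlation over a small lag window. All names and parameters here are illustrative assumptions.

```python
import numpy as np

def av_correlation_unit(video_frames, audio_waveform, audio_rate, frame_rate,
                        max_lag_frames=5):
    """Toy audiovisual correlation detector (illustrative sketch, not the paper's model).

    video_frames: array of shape (T, H, W), grayscale frames.
    audio_waveform: 1-D array sampled at audio_rate (Hz).
    Returns the peak normalized cross-correlation between the visual
    temporal-contrast signal and the audio envelope over a small lag window.
    """
    # Visual drive: mean absolute frame-to-frame luminance change
    visual_signal = np.abs(np.diff(video_frames, axis=0)).mean(axis=(1, 2))

    # Auditory drive: amplitude envelope averaged down to the video frame rate
    samples_per_frame = int(audio_rate / frame_rate)
    n_frames = min(len(visual_signal), len(audio_waveform) // samples_per_frame)
    visual_signal = visual_signal[:n_frames]
    envelope = np.abs(audio_waveform[: n_frames * samples_per_frame])
    audio_signal = envelope.reshape(n_frames, samples_per_frame).mean(axis=1)

    # z-score both streams so the correlation is scale-invariant
    v = (visual_signal - visual_signal.mean()) / (visual_signal.std() + 1e-9)
    a = (audio_signal - audio_signal.mean()) / (audio_signal.std() + 1e-9)

    # Unit response: best correlation over a small range of temporal lags
    def corr_at(lag):
        if lag >= 0:
            x, y = v[lag:], a[: len(a) - lag]
        else:
            x, y = v[: len(v) + lag], a[-lag:]
        if len(x) == 0:
            return float("-inf")
        return float(np.mean(x * y))

    return max(corr_at(lag) for lag in range(-max_lag_frames, max_lag_frames + 1))
```

A population of such units, each tuned to a different lag or spatial location, would give graded responses that could be read out for tasks like audiovisual synchrony or localization judgments, which is the level at which the paper's simulations compare model output to behaviour.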
