Manipulating word awareness dissociates feed-forward from feedback models of language-perception interactions

Open Access
Publication date 03-07-2015
Journal Neuroscience of Consciousness
Article number niv003
Volume 2015
Number of pages 9
Organisations
  • Faculty of Social and Behavioural Sciences (FMG) - Psychology Research Institute (PsyRes)
Abstract
Previous studies suggest that linguistic material can modulate visual perception, but it is unclear at which level of processing these interactions occur. Here we aim to dissociate between two competing models of language-perception interactions: a feed-forward and a feedback model. We capitalized on the fact that the models make different predictions about the role of feedback. We presented unmasked (aware) or masked (unaware) words implying motion (e.g. "rise," "fall") directly preceding an upward or downward visual motion stimulus. Crucially, masking leaves feed-forward information processing from low- to high-level regions intact, whereas it abolishes subsequent feedback. Even when the words were masked, participants remained faster and more accurate when the direction implied by the motion word was congruent with the direction of the visual motion stimulus. This suggests that language-perception interactions are driven by the feed-forward convergence of linguistic and perceptual information at higher-level conceptual and decision stages.
Document type Article
Language English
Published at https://doi.org/10.1093/nc/niv003