  • Publication
    Modulation of the Earliest Component of the Human VEP by Spatial Attention: An Investigation of Task Demands
    (Oxford University Press, 2020-08-05)
    Spatial attention modulations of initial afferent activity in area V1, indexed by the first component “C1” of the human visual evoked potential, are rarely found. It has thus been suggested that early modulation is induced only by special task conditions, but what these conditions are remains unknown. Recent failed replications—findings of no C1 modulation using a certain task that had previously produced robust modulations—present a strong basis for examining this question. We ran three experiments: the first to more exactly replicate the stimulus and behavioral conditions of the original task, and the second and third to manipulate two key factors that differed in the failed replication studies: the provision of informative performance feedback, and the degree to which the probed stimulus features matched those facilitating target perception. Although there was an overall significant C1 modulation of 11%, only Experiments 1 and 2 individually showed reliable effects, underlining that the modulations do occur, but not consistently. Better feedback induced greater P1, but not C1, modulations. Target–probe feature matching had an inconsistent influence on modulation patterns, with behavioral performance differences and signal-overlap analyses suggesting interference from extrastriate modulations as a potential cause.
  • Publication
    Vision from a brief glimpse: the cognitive role of the lowest level of visual cortical activity
    (University College Dublin. School of Electrical and Electronic Engineering, 2021)
In order to generate the rich experience that is visual perception, the brain must accomplish the impressive feat of transforming a continuous stream of light that arrives at the two-dimensional retinal surface into a coherent three-dimensional representation of the objects, colours and scenes that surround us. Although this appears to us to happen quite automatically and effortlessly, a highly evolved and complex system of neural processes is involved in generating it. We are also not simply passive receivers of visual information but rather actively combine visual input with our past experience to create our perception. This interplay between visual input and past experience can provide a great deal of flexibility to calibrate our perceptual processing in line with our surroundings. In some cases, this can have important implications for our survival, as when we mistake a tree root for a dangerous snake on a jungle path, whereas we wouldn’t give a second thought to the garden hose in our back yard (assuming we are fortunate enough to live in a place where garden snakes are uncommon!). Other times, this flexibility can be used for purely leisurely purposes, as when we stare at the sky and pick out images of dogs and elephants in the clouds. Yet while there is clearly a great deal of flexibility in our perceptual apparatus, unfettered flexibility would likely not be very adaptive; while we were caught up in our mind’s eye with whatever fantasy we might wish to behold, we might not notice that we were about to become somebody’s dinner. Therefore, the balance between flexibility and rigidity in our visual system is likely to lie at a point that allows for the generality to recognize objects from many different vantage points and in many different environments without permitting us to get carried away with our imaginations.
One proposition has been that a certain extent of visual processing proceeds rigidly, without being amenable to top-down influences, and that this processing lays the foundations and sets the limits for our perception. Anatomically, the visual system is divided into a large number of distinct processing areas, and so one candidate area that could provide such veridical processing of information is area V1, the entry point of visual information to the cortical processing suite. However, investigations of the impact of one avenue of top-down influence (spatially directed attention) have yielded mixed results. Results from animal neurophysiology have demonstrated that V1 responses can be modulated by spatial attention under some circumstances, but non-invasive electroencephalography (EEG) in humans has most often failed to detect modulation of V1 responses by spatial attention. By contrast, this thesis will argue that V1 responses in humans are amenable to modulation by spatial attention, but that these modulations are nuanced and that, in order to detect them, careful consideration needs to be given to the choice of task paradigm so as to account for both V1 response properties and the flexibility of visual attention. It will further argue that V1 responses can directly drive visual perceptions that make use of the visual features that V1 extracts. Finally, EEG measurements are coarse, and while there is widespread belief that a particular signal, the C1, reflects activity originating in V1, this claim has also been challenged. Thus, this thesis will also provide new evidence in support of the claim that V1 activity is reflected in the C1. Taken together, these findings suggest that V1 does not simply extract basic visual features to be passed on to cognitive processes further downstream. Rather, V1 is integrally involved in processes of visual cognition, facilitating goal-driven attentional processes and even directly driving perceptual decisions.
This challenges the notion that attentional mechanisms are barred from altering the earliest stages of cortical visual processing.