Preserved sensory processing but hampered conflict detection when stimulus input is task-irrelevant

Abstract

Conflict detection in sensory input is central to adaptive human behavior. Perhaps unsurprisingly, past research has shown that conflict may even be detected in the absence of conflict awareness, suggesting that conflict detection is an automatic process that does not require attention. To test the possibility of conflict processing in the absence of attention, we manipulated task relevance and response overlap of potentially conflicting stimulus features across six behavioral tasks. Multivariate analyses on human electroencephalographic data revealed neural signatures of conflict only when at least one feature of a conflicting stimulus was attended, regardless of whether that feature was part of the conflict or overlapped with the response. In contrast, neural signatures of basic sensory processes were present even when a stimulus was completely unattended. These data reveal an attentional bottleneck at the level of objects, suggesting that object-based attention is a prerequisite for cognitive control operations involved in conflict detection.

eLife digest

Focusing your attention on one thing can leave you surprisingly unaware of what goes on around you. A classic experiment known as ‘the invisible gorilla’ highlights this phenomenon. Volunteers were asked to watch a clip featuring basketball players, and count how often those wearing white shirts passed the ball: around half of participants failed to spot that someone wearing a gorilla costume wandered into the game and spent nine seconds on screen.

Yet, things that you are not focusing on can sometimes grab your attention anyway. Take, for example, the ‘cocktail party effect’: the ability to hear your name among the murmur of a crowded room. So why can we react to our own names, but fail to spot the gorilla? To help answer this question, Nuiten et al. examined how paying attention affects the way the brain processes input.

Healthy volunteers were asked to perform various tasks while the words ‘left’ or ‘right’ played through speakers. The content of the word was sometimes consistent with its location [‘left’ being played on the left speaker], and sometimes opposite [‘left’ being played on the right speaker]. Processing either the content or the location of the word is relatively simple for the brain; however, detecting a discrepancy between these two properties is challenging, requiring the information to be processed in a brain region that monitors conflict in sensory input.

To manipulate whether the volunteers needed to pay attention to the words, Nuiten et al. made their content or location either relevant or irrelevant for a task. By analyzing brain activity and task performance, they were able to study the effects of attention on how the word properties were processed.

The results showed that the volunteers’ brains were capable of dealing with basic information, such as location or content, even when their attention was directed elsewhere. But discrepancies between content and location could only be detected when the volunteers were focusing on the words, or when their content or location was directly relevant to the task.

The findings by Nuiten et al. suggest that while performing a difficult task, our brains continue to react to basic input but often fail to process more complex information. This, in turn, has implications for a range of human activities such as driving. New technology could potentially help to counteract this phenomenon, aiming to direct attention towards complex information that might otherwise be missed.

Introduction

Every day we are bombarded with sensory information from the environment, and we often face the challenge of selecting the relevant information and ignoring irrelevant – potentially conflicting – information to maximize performance. These selection processes require much effort and our full attention, sometimes rendering us deceptively oblivious to irrelevant sensory input [e.g., chest-banging apes], as illustrated by the famous inattentional blindness phenomenon [Simons and Chabris, 1999]. However, unattended events that are not relevant for the current task might still capture our attention or interfere with ongoing task performance, for example, when they are inherently relevant to us [e.g., our own name]. This is illustrated by another famous psychological phenomenon: the cocktail party effect [Cherry, 1953; Moray, 1959]. Thus, under specific circumstances, task-irrelevant information may capture attentional resources and be subsequently processed with different degrees of depth.

It is currently a matter of debate which processes require top-down attention [Dehaene et al., 2006; Koch and Tsuchiya, 2007; Koelewijn et al., 2010; Lamme, 2003; Lamme and Roelfsema, 2000; Rousselet et al., 2004; VanRullen, 2007]. It was long thought that only basic physical stimulus features or very salient stimuli are processed in the absence of attention [Treisman and Gelade, 1980] due to an ‘attentional bottleneck’ at higher levels of analysis [Broadbent, 1958; Deutsch and Deutsch, 1963; Lachter et al., 2004; Wolfe and Horowitz, 2004]. However, there is now solid evidence that several tasks may in fact still unfold in the [near] absence of attention, including perceptual integration [Fahrenfort et al., 2017], the processing of emotional valence [Sand and Wiens, 2011; Stefanics et al., 2012], semantic processing of written words [Schnuerch et al., 2016], and visual scene categorization [Li et al., 2002; Peelen et al., 2009]. Although one should be cautious in claiming a complete absence of attention [Lachter et al., 2004], these and other studies have pushed the boundaries of how deeply task-irrelevant [unattended] input can be processed, and may even call into question the existence of an attentional bottleneck altogether, at least for relatively low-level information. Conceivably, the attentional bottleneck is only present at higher, more complex levels of cognitive processing, such as cognitive control functions.

Over the years, various theories have been proposed with regard to this attentional bottleneck, among which are the load theory of selective attention and cognitive control [Lavie et al., 2004], the multiple resources theory [Wickens, 2002], and the hierarchical central executive bottleneck theory and formalizations thereof in a cortical network model for serial and parallel processing [Sigman and Dehaene, 2006; Zylberberg et al., 2010; Zylberberg et al., 2011]. These theories all hinge on the idea that resources for the processing of information are limited and that the brain therefore has to allocate resources to processes that are currently most relevant via selective attention [Broadbent, 1958; Treisman, 1969]. Resource [re-]allocation, and thus flexible behavior, is thought to be governed by an executive network, most prominently involving the prefrontal cortex [Goldman-Rakic, 1995; Goldman-Rakic, 1996]. Information that is deemed task-irrelevant has fewer resources at its disposal and is therefore processed to a lesser extent. When more resources are necessary for processing the task-relevant information, for example, under high perceptual load, processing of task-irrelevant information diminishes [Lavie et al., 2003; Lavie et al., 2004]. Yet even under high perceptual load, task-irrelevant features can be processed when they are part of an attended object [when object-based attention is present] [Chen, 2012; Chen and Cave, 2006; Cosman and Vecera, 2012; Kahneman et al., 1992; O'Craven et al., 1999; Schoenfeld et al., 2014; Wegener et al., 2014]. There is currently no consensus on which types of information can be processed in parallel by the brain and which attentional mechanisms determine what information passes the attentional bottleneck.
One unresolved issue is that most empirical work has investigated the bottleneck with regard to sensory features; it is unknown whether the bottleneck, and the distribution of processing resources it implies, also applies to more complex cognitive processes. Here, we test whether such a high-level attentional bottleneck indeed exists in the human brain.

Specifically, we aim to test whether cognitive control operations, necessary to identify and resolve conflicting sensory input, are operational when that input is irrelevant for the task at hand [and hence unattended], and what role object-based attention may play in conflict detection. Previous work has shown that the brain has dedicated networks for the detection and resolution of conflict, in which the medial frontal cortex [MFC] plays a pivotal role [Ridderinkhof et al., 2004]. Conflict detection and subsequent behavioral adaptation are central to human cognitive control, and, hence, it may not be surprising that past research has shown that conflict detection can even occur unconsciously [Atas et al., 2016; D'Ostilio and Garraux, 2012a; Huber-Huber and Ansorge, 2018; van Gaal et al., 2008], suggesting that the brain may detect conflict fully automatically, perhaps even without attention [e.g., Rahnev et al., 2012]. Moreover, it has been shown that this automaticity can be enhanced by training, resulting in more efficient processing of conflict [Chen et al., 2013; MacLeod and Dunbar, 1988; van Gaal et al., 2008].

Conclusive evidence regarding the claim that conflict detection is fully automatic has, to our knowledge, not been provided, and therefore, the necessity of attention for cognitive control operations remains open for debate. Previous studies have shown that cognitive control processes are operational when to-be-ignored features from either a task-relevant or a task-irrelevant stimulus overlap with the behavioral response to be made to the primary task, causing interference in performance [Mao and Wang, 2008; Padrão et al., 2015; Zimmer et al., 2010]. In these circumstances, the interfering stimulus feature carries information related to the primary task and is therefore de facto not task-irrelevant. Consequently, it is currently unknown whether cognitive control operations are active for conflicting sensory input that is not related to the task at hand. Given the immense stream of sensory input we encounter in our daily lives, conflict between two [unattended] sources of perceptual information is inevitable.

Here, we investigated whether conflict between two features of an auditory stimulus [its content and its spatial location] would be detected by the brain under varying levels of task relevance of these features. The main aspect of the task was as follows. We presented auditory spoken words [‘left’ and ‘right’ in Dutch] through speakers located on the left and right side of the body. By presenting these stimuli through either the left or the right speaker, content-location conflict arises on specific trials [e.g., the word ‘left’ from the right speaker] but not on others [e.g., the word ‘right’ from the right speaker] [Buzzell et al., 2013; Canales-Johnson et al., 2020]. A wealth of previous studies has revealed that conflict arises between task-relevant and task-irrelevant features of the stimulus in these types of tasks [similar to the Simon task and Stroop task; Egner and Hirsch, 2005; Hommel, 2011]. Here, these potentially conflicting auditory stimuli were presented during six different behavioral tasks, divided over two separate experiments, multiple experimental sessions, and different participant groups [both experiments N = 24]. In all tasks, we focus on the processing of content-location conflict of the auditory stimulus. There were several critical differences between the behavioral tasks: [1] task relevance of a conflicting feature of the stimulus, [2] task relevance of a non-conflicting feature that was part of a conflicting stimulus, and [3] whether the response to be given mapped onto a conflicting feature of the stimulus. Note that in all tasks only one feature could be task-relevant and that all other feature[s] had to be ignored. The systematic manipulation of task relevance and response-mapping allowed us to explore the full landscape of possibilities of how varying levels of attention affect sensory and conflict processing.
Electroencephalography [EEG] was recorded and multivariate analyses on the EEG data were used to extract any neural signatures of conflict detection [i.e., theta-band neural oscillations; Cavanagh and Frank, 2014; Cohen and Cavanagh, 2011] and sensory processing for any of the features of the auditory stimulus. Furthermore, in both experiments we measured behavioral and neural effects of task-irrelevant conflict before and after training on conflict-inducing tasks, aiming to investigate the role of automaticity in the detection of [task-irrelevant] conflict.
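The content-location congruency manipulation described above can be sketched in a few lines of code. This is an illustrative toy example under our own naming assumptions [`label_congruency` is a hypothetical helper, not the authors' code]: a trial is congruent when the spoken word matches the side of the speaker it was played from, and incongruent otherwise.

```python
# Illustrative sketch (not the authors' experiment code): labeling trials as
# congruent or incongruent from the spoken word and the speaker side.
def label_congruency(word: str, speaker_side: str) -> str:
    """Return 'congruent' if the word's content matches its spatial location."""
    if word not in ("left", "right") or speaker_side not in ("left", "right"):
        raise ValueError("word and speaker_side must be 'left' or 'right'")
    return "congruent" if word == speaker_side else "incongruent"

# The four possible trial types (word, speaker side):
trials = [("left", "left"), ("left", "right"),
          ("right", "right"), ("right", "left")]
labels = [label_congruency(word, side) for word, side in trials]
```

With equal numbers of the four trial types, this yields the 50% congruent / 50% incongruent design used in the tasks below.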

Results

Experiment 1: can the brain detect fully task-irrelevant conflict?

In the first experiment, 24 human participants performed two behavioral tasks [Figure 1A]. In the auditory conflict task [from hereon: content discrimination task I], the feature ‘sound content’ was task-relevant. Participants were instructed to respond according to the content of the auditory stimulus [‘left’ vs. ‘right’], ignoring its spatial location that could conflict with the content response [presented from the left or right side of the participant]. For the other behavioral task, participants performed a demanding visual random dot-motion [RDM] task in which they had to discriminate the direction of vertical motion [from hereon: vertical RDM task], while being presented with the same auditory stimuli – all features of which were thus fully irrelevant for task performance. Behavioral responses on this visual task were orthogonal to the response tendencies potentially triggered by the auditory features, excluding any task- or response-related interference [Figure 1B]. Under this manipulation, all auditory features are task-irrelevant and are orthogonal to the response-mapping. To maximize the possibility of observing conflict detection when conflicting features are task-irrelevant and explore the effect of task automatization on conflict processing, participants performed the tasks both before and after extensive training, which may increase the efficiency of cognitive control [Figure 1C; van Gaal et al., 2008].

Experimental design of experiment 1.

[A, B] Schematic representation of the experimental design for auditory content discrimination task I [A] and vertical random dot-motion [RDM] task [B]. In both tasks, the spoken words ‘left’ and ‘right’ were presented through either a speaker located on the left or right side of the participant. Note that auditory stimuli are only task-relevant in auditory content discrimination task I and not in the vertical RDM task. In this figure, sounds are only depicted as originating from the right, whereas in the experiment the sounds could also originate from the left speaker. [A] In content discrimination task I, participants were instructed to report the content [‘left’ or ‘right’] of an auditory stimulus via a button press with their left or right hand, respectively, and to ignore the spatial location at which the auditory stimulus was presented. [B] During the vertical RDM task, participants were instructed to report the overall movement direction of the dots [up or down] via a button press with their right hand, whilst still being presented with the auditory stimuli, which were therefore task-irrelevant. In both tasks, the content of the auditory stimuli could be congruent or incongruent with its location of presentation [50% congruent/incongruent trials]. [C] Overview of the sequence of the four experimental sessions of this study. Participants performed two electroencephalography sessions during which they first performed the vertical RDM task followed by auditory content discrimination task I. Each session consisted of 1200 trials, divided over 12 blocks, allowing participants to rest in between blocks. In between experimental sessions, participants were trained on auditory content discrimination task I in two training sessions of 1 hr each.

Experiment 1: conflicting information induces slower responses and decreased accuracy only for task-relevant sensory input

For content discrimination task I, mean error rates [ERs] were 2.6% [SD = 2.7%] and mean reaction times [RTs] 477.2 ms [SD = 76.1 ms], averaged over all four sessions. For the vertical RDM task, mean ERs were 19.2% [SD = 6.6%] and mean RTs were 711.4 ms [SD = 151.3 ms]. The mean ER of the vertical RDM task indicates that our staircasing procedure was effective [see Materials and methods for details on staircasing performance on the RDM]. To investigate whether our experimental design could induce conflict effects for task-relevant sensory input, and to test whether conflict effects were still present when sensory input was task-irrelevant, we performed repeated measures [rm-]ANOVAs [2 × 2 × 2 factorial] on mean RTs and ERs gathered during the EEG recording sessions [session 1, ‘before training’; session 4, ‘after training’]. This allowed us to include the factors [1] task relevance [yes/no], [2] training [before/after], and [3] congruency of auditory content with location of auditory source [congruent/incongruent]. Note that congruency is always defined based on the relationship between the two features of the auditorily presented stimuli, also when participants performed the visual task [and therefore the auditory features were task-irrelevant].
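The dependent measure at the heart of this analysis, the behavioral conflict effect, can be illustrated with a minimal sketch [our own toy example, not the authors' analysis pipeline; `conflict_effect` and the RT values are hypothetical]: it is the difference in mean RT between incongruent and congruent trials, computed per participant and condition before being entered into the rm-ANOVA.

```python
# Illustrative sketch: the behavioral conflict effect as the
# incongruent-minus-congruent difference in mean reaction time (ms).
from statistics import mean

def conflict_effect(trials):
    """trials: iterable of (congruency, rt_ms) tuples.
    Returns mean RT on incongruent trials minus mean RT on congruent trials."""
    congruent = [rt for label, rt in trials if label == "congruent"]
    incongruent = [rt for label, rt in trials if label == "incongruent"]
    return mean(incongruent) - mean(congruent)

# Hypothetical RTs for one participant in one condition:
example = [("congruent", 460), ("congruent", 470),
           ("incongruent", 490), ("incongruent", 500)]
effect = conflict_effect(example)  # positive value -> slowing on incongruent trials
```

An analogous difference score on error rates captures conflict-induced accuracy costs.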

Detection of conflict is typically associated with behavioral slowing and increased ERs. Indeed, we observed that, across both tasks, participants were slower and made more errors on incongruent trials as compared to congruent trials [the conflict effect, RT: F[1,23] = 52.83, p
