Introduction
The typical locked-in syndrome (LIS) is caused by a lesion in the pons, and patients can communicate only through movements of their eyes or eyelids [1]. The injury can result from stroke or from other etiologies, such as trauma or tumor [2]. This condition of total paralysis can also be encountered in amyotrophic lateral sclerosis (ALS), a neurodegenerative disease that primarily affects motor neurons. At the end of the disease course, when patients have chosen mechanical ventilation, they are at risk of losing all muscular control, including that of the eyes. In this state, named “completely locked-in state” (CLIS), the person cannot communicate at all, making the diagnosis of the state of consciousness clinically impossible or severely delayed [3]. In general, the assessment of consciousness in non-responsive patients (after cerebral anoxia, trauma or major stroke) remains challenging: up to 40% of patients in a minimally conscious state may be misdiagnosed as being in a vegetative state by non-expert teams [4]. Even after a careful behavioral assessment, the possibility remains that a patient cannot show any response to command because of complete motor impairment. The development of paraclinical assessments of patients with disorders of consciousness revealed that some of them, although diagnosed as being in a vegetative state or even in a coma [5, 6], were able to prove their consciousness by willfully modulating their brain activity (command following) when asked to, and should thus be considered to be in a complete locked-in state. The first striking demonstration of such a cognitive motor dissociation (a dissociation between awareness and motor capacity) was reported in 2006, using fMRI [7].
EEG-based brain–computer interfaces (BCI) are promising tools to detect a cognitive motor dissociation [8]. Indeed, they measure brain activity directly, in real time, and enable repeated assessments at the patient’s bedside. Furthermore, they may also be used as communication devices. However, restoring communication with these patients once the diagnosis of command following has been made remains a major issue. The authors of two studies published in 2017 claimed that communication was restored with people in CLIS [9, 10], but methodological flaws were identified and their results remain controversial [11], which led to the retraction of one of them [12]. In another study [13], the authors evaluated a steady-state visually evoked potential BCI longitudinally over 27 months in a patient with ALS. This patient could train with the BCI for three months before entering CLIS. The reliability of the BCI proved fluctuating, with accuracies below chance level in 13 out of 40 sessions [13]. A recent publication with an intra-cortical electrode implanted in the dominant left motor cortex demonstrated both the feasibility and the striking limitations of communication with a CLIS patient at an advanced stage of ALS [14]. This patient was implanted once he was already in CLIS, with no residual eye movements, as attested by EOG. During the first stages of training, when the patient was instructed to attempt or imagine hand, tongue or foot movements, no cortical response could be detected. Reliable yes–no responses were finally obtained three months after implantation thanks to a neurofeedback protocol, in which tones of two different frequencies were delivered according to the neural activity. During the 356 days following this training paradigm, he reached an accuracy of 86.6% over 5700 trials. During training sessions where his accuracy was above 80%, he could use an auditory speller to produce one letter per minute, and freely spelled intelligible sentences on 44 out of 107 days, which allowed him to express some of his needs. However, despite these encouraging results, invasive devices cannot be proposed to all patients, because the risks associated with implantation (infection, hemorrhage) must be outweighed by the expected benefits. Yet these potential benefits remain strikingly difficult to estimate: as discussed above, when facing patients with complete paralysis, there is a huge uncertainty about their consciousness and their cognitive abilities. In this context, non-invasive BCI could help detect patients with residual voluntary mental activity and provide them with a first-line communication tool.
When targeting patients who, by definition, have no motor control, including oculomotor control, gaze-independent BCI have to be considered. Along these lines, some visual BCI rely on the principle of steady-state visual evoked potentials, requiring the user to fixate a grid containing different colors and to focus on only one of them [15]. However, some patients with locked-in syndrome struggle to control such interfaces (for example, 4 out of 6 patients with LIS performed at chance level in Lesenfants et al. [15]). Indeed, visual impairments are very common in the locked-in syndrome [16]. Hence, targeting other sensory modalities can help overcome these pitfalls. Some translational studies suggest that the auditory modality could provide a way to reach these patients [17–20]. Among these four studies, one used pure tones as stimuli, which required the patients to learn a “code” (two different frequency streams, one standing for “yes” and the other for “no”) [18]. Such a code is quite difficult for patients with possible memory impairment, which is why other authors suggested the use of spoken words. Sellers et al. [19] and Lulé et al. [17] proposed an oddball protocol in which the four words “yes”, “no”, “stop” and “go” were delivered in a random order. The patients with LIS or with disorders of consciousness were asked to count the target words, in order to elicit a P300 event-related potential. In Lulé et al. [17], one out of two persons with LIS could control the BCI with an online accuracy of 60%, and an offline analysis showed that one patient with a disorder of consciousness gave 8 correct answers out of 14. The only signal considered for classification was the P300 response to deviant stimuli, thus neglecting the potential information carried by responses to standard sounds. A study by Hill et al. [21] showed that it is possible to additionally exploit the attentional modulations of the N200 wave elicited by standard words (“yes” and “no”), on top of those associated with deviant stimuli. They obtained fairly good binary classification with healthy subjects (77% ± 11 s.e. with 100 trials, chance level ~62% with an alpha risk of 1%). They then tested this paradigm in two ALS patients at an advanced stage of the disease and obtained accuracies comparable to those of healthy subjects.
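The ~62% chance level quoted above follows from a one-sided binomial argument: under random guessing (p = 0.5 over 100 trials), it is the smallest accuracy whose tail probability falls below the 1% alpha risk. A minimal sketch (the function name is ours):

```python
from math import comb

def chance_threshold(n_trials: int, alpha: float, p: float = 0.5) -> float:
    """Smallest accuracy k/n such that P(X >= k) <= alpha,
    with X ~ Binomial(n_trials, p): the significance threshold
    for 'better than chance' classification."""
    tail = 1.0  # P(X >= 0)
    for k in range(n_trials + 1):
        if tail <= alpha:
            return k / n_trials
        # move from P(X >= k) to P(X >= k + 1)
        tail -= comb(n_trials, k) * p**k * (1 - p) ** (n_trials - k)
    return 1.0
```

With `n_trials=100` and `alpha=0.01`, this yields a threshold of about 62–63%, consistent with the chance level reported by Hill et al. [21].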
Considering these encouraging results, we implemented and tested a "yes/no" auditory BCI exploiting the attentional modulations of responses to both standard and deviant sounds. We first assessed the auditory BCI with healthy controls. Then, we tested it with a group of 7 patients with severe motor disability but with residual means of communication. This enabled us to (1) make sure that the instructions were perceived and understood, and (2) obtain feedback and adapt the paradigm to each patient whenever needed, to maximize our chances of success. This is important at this stage, as gaze-independent BCI have rarely been tested in patients so far.
Discussion
Seventeen of the 18 healthy subjects proved able to control the proposed auditory BCI. In contrast, only 2 out of 7 severely motor-impaired patients could control the interface online, and 3 out of 7 after careful offline signal processing.
The analysis of deviant evoked responses revealed that the presence of a classical P300 and of its attentional modulation was associated with good control of the interface, in both healthy subjects and patients. This could explain the poor BCI results observed in most patients, for whom no P300 was detected. Other studies have reported this lower prevalence of the P300 in patients with LIS [32]. However, this lack of P300 response is quite surprising, since all patients could hear at least some of the deviant sounds when presented with the different stimuli. Hence, this auditory oddball protocol lacks robustness in patients with severe motor impairment, and relying on deviant sounds alone would not allow sufficiently accurate BCI communication.
For standard stimuli, the effect of attention on evoked potentials is reminiscent of an “attentional phase shift”, similar to the one observed in [33, 34]. This attentional phase shift was robust and present in 15 of the 18 healthy subjects. However, it was not visible at the group level because of a phase difference in the shift from one subject to another, a variability also described in [34]. This attentional shift, or marker of sustained attention orienting, was also present in the patients who did control the BCI (Fig. 7). In these patients, there is probably a differential effect of attention on event-related responses, leading to a more negative event-related response when the subject pays attention to a stream, and/or a more positive response when the subject tries to ignore the “distractor” on the opposite side. Further studies will be needed to explore how this attentional modulation operates, whether by inhibiting distractor processing and/or by enhancing target processing. This could be done, for instance, by contrasting active attention orienting with passive listening. We observed no obvious N100 evoked potential at the group level. This could be explained by the variability of the evoked potentials when using words as stimuli instead of sharp tones, as noticed by Hill et al. [21]: peak latencies indeed vary considerably between, as well as within, subjects.
An important finding is that patients with severe motor disability, although clearly conscious and with residual means of communication, show poor BCI control performance. Only 3 out of 7 patients were able to control the BCI with an accuracy above chance level. Together with our offline analysis of their electrophysiological responses, this suggests that BCIs validated in healthy subjects are unfortunately not readily usable by the targeted end users. Bearing in mind that, in the long term, such interfaces are mostly meant to help people who have no means of communication, our findings raise crucial challenges for our community. The reasons behind the poor BCI performance of the majority of the patients have to be thoroughly explored in order to come up with efficient non-invasive solutions.
We can put forward several non-mutually exclusive hypotheses. First, the quality of the signal is, on average, lower in patients (due to several factors: mechanical ventilation, erratic muscle activity, electrical interference from hospital beds, etc.). Second, there is growing evidence that motor impairments come with cognitive impairments, whatever the etiology [35–37]. Cognitive impairments are very prevalent in both the locked-in syndrome [38] and ALS [36], even early in the ALS course [37]. In this context, it might be that our paradigm is cognitively too demanding for the patients: binaural listening requires not only focusing on the “ATTENDED” stream, but also inhibiting the “IGNORED” one. In addition, patients have to be able to understand fairly complex instructions and to sustain their attention for half an hour or so. However, all the patients included in this study could handle the complexity of communicating with a yes–no code using a letter board, which presupposes the preservation of some cognitive abilities, especially in terms of working memory and executive functions. Despite this ability, less than half of them proved able to control the BCI.
It seems difficult to further simplify the protocol given the intrinsic and technical limitations of EEG [35]. However, one potentially useful change to be tested would be to no longer present the "yes" and "no" streams concurrently, but alternately, in the form of short separate blocks. This non-lateralized paradigm could also be useful for patients with unilateral deafness. This approach could make the attentional task easier without unduly extending the duration of a block. Patients would concentrate on the sounds during blocks in which the relevant answer is presented, while during irrelevant blocks they would divert their attention away from the sounds (e.g. by imagining navigating in a familiar environment [39]). This may reduce the mental workload and could help patients with cognitive impairments, especially frontal ones, which are quite frequent at an advanced stage of ALS and can occur in LIS too. Moreover, some studies suggest that persons with motor impairment can improve their BCI performance with training over several sessions [40].
Beyond improving the protocols, there is a need to better understand the specificities of patients with severe motor impairment, which remain poorly explored at both the neurophysiological and cognitive levels [36]. Here, we chose an auditory protocol to overcome the oculomotor limitations of patients with severe motor impairment. Indeed, oculomotor impairments are known to be a predictor of weak control of visual BCIs [41], even when all stimuli are presented at the same location in an SSVEP paradigm [42]. A recent study with audio-visual stimulation also reported chance-level accuracy with a patient in CLIS (no voluntary control of eye movements), despite the offline detection of some differences between target and non-target responses, suggesting that the patient did try to do the task [43]. Other markers using mental imagery (e.g. sport imagery, navigation imagery) or motor attempts with people in LIS also failed to improve BCI control [44]. Studies exploring user-centered design methods revealed that temporal demand is considered to contribute the most to workload [44], and some studies objectively measured the impact of mental workload on ERPs, showing for example a decrease of the N200 and of the late reorienting negativity [45].
In the same vein, it is striking to note that, in our study, none of the patients with “classical” LIS, who present with oculomotor impairments more often than patients with ALS [16], managed to control the BCI. Nor could any of the patients lacking an ICA component reflecting saccadic activity (LIS2, LIS3 and ALS2) control the BCI. Concomitantly, there is a large body of evidence in the literature that eye-movement planning and spatial attention are tightly related [46–48], although not completely equivalent [49, 50]. Most of these studies relate to visual spatial attention, but attention is a cross-modal process: for example, orienting attention toward a tactile target also triggers an automatic displacement of spatial attention in the visual modality [51]. Hence, it would be useful to test the impact of eye-movement impairments on spatial auditory attention. Future studies should provide finer clinical information regarding these patients, notably about their oculomotor limitations and their ability to turn their head. Adapted cognitive scales with a yes–no code that were developed for persons with LIS could be very useful in the BCI domain to better assess the cognitive profiles of the patients [38]. This would help identify those who could actually benefit from a BCI, as well as the factors that prevent its use.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.