3rd BCI-UC Summary

The 3rd BCI-UC was held on January 27th, 2022. The keynotes and contributed talks can be re-watched below their abstracts.

Schedule

All times CET (pm), January 27th, 2022
03:00 - 03:10
Welcome & Opening: Moritz Grosse-Wentrup
03:10 - 04:10

Keynote: Neural Interfaces for Controlling Finger Movements
Cynthia Chestek
PhD. Associate Professor of Biomedical Engineering, Electrical Engineering, Neuroscience and Robotics. University of Michigan, Ann Arbor
chestekresearch.engin.umich.edu

04:10 - 04:30

Intention is all you need (to train a high-accuracy grasping iBCI)
Andres Agudelo-Toro
German Primate Center

04:30 - 04:50

Nine decades of electrocorticography: a comparison between epidural and subdural recordings
Simon Geukes
UMC Utrecht

04:50 - 05:00
Break
05:00 - 06:00

Panel discussion: Standardization in BCI reporting
Guillermo Sahonero Alvarez: Universidad Católica Boliviana San Pablo
Alberto Antonietti: Politecnico di Milano, Milano
Raffaele Ferrante: Tor Vergata University Rome
Luigi Bianchi: Tor Vergata University Rome
Pradeep Balachandran: Technical Consultant (Digital Health), Bangalore
sagroups.ieee.org/2731/

06:00 - 06:20

Setting the Tone for Industry Neurotechnology Development
Adam Molnar
Neurable Inc

06:20 - 06:40

A wireless, wearable Brain-Computer Interface for in-home neurorehabilitation
Colin Simon
Trinity College Dublin

06:40 - 07:00

Features describing what data-driven BCI methods have learnt can predict future motor-imagery BCI performance
Camille Benaroch
Inria Bordeaux Sud-Ouest

07:00 - 07:15
Break
07:15 - 07:35

Continuous movement decoding from non-invasive EEG
Gernot Müller-Putz
Graz University of Technology

07:35 - 07:55

Improving motor imagery detection with a Brain-Computer Interface based on median nerve stimulation
Sébastien Rimbert
Inria Bordeaux Sud-Ouest

08:00 - 09:00

Keynote: The Age of Neuroadaptivity
Thorsten Zander
Lichtenberg Professor for Neuroadaptive Human-Computer Interaction at TU Brandenburg
www.b-tu.de/fg-neuroadaptive-hci/team/leitung

09:00 - 09:15
Wrap up, Closing & Goodbye: Moritz Grosse-Wentrup

Welcome & Opening


Keynote Speakers

Cynthia A. CHESTEK

PhD.
Associate Professor of Biomedical Engineering, Electrical Engineering, Neuroscience and Robotics.
University of Michigan, Ann Arbor
chestekresearch.engin.umich.edu

Title: Neural Interfaces for Controlling Finger Movements

Abstract: Brain-machine interfaces, or neural prosthetics, have the potential to restore movement to people with paralysis or amputation, bridging gaps in the nervous system with an artificial device. Microelectrode arrays can record from up to hundreds of individual neurons in motor cortex, and machine learning can be used to generate useful control signals from this neural activity. Performance can already surpass the current state of the art in assistive technology in terms of controlling the endpoint of computer cursors or prosthetic hands. The natural next step in this progression is to control more complex movements at the level of individual fingers. Our lab has approached this problem in three different ways. For people with upper limb amputation, we acquire signals from individual peripheral nerve branches using small muscle grafts to amplify the signal. Human study participants have been able to control individual fingers on a prosthesis using indwelling EMG electrodes within these grafts. For spinal cord injury, where no peripheral signals are available, we implant Utah arrays into finger areas of motor cortex, and have demonstrated the ability to control flexion and extension in multiple fingers simultaneously. Decoding "spiking band" activity at much lower sampling rates, we also recently showed that the power consumption of an implantable device could be reduced by an order of magnitude compared to existing broadband approaches, and fit within the specifications of existing systems for upper limb functional electrical stimulation. Finally, finger control is ultimately limited by the number of independent electrodes that can be placed within cortex or the nerves, and this is in turn limited by the extent of glial scarring surrounding an electrode. We therefore developed an electrode array based on 8 µm carbon fibers, no bigger than the neurons themselves, to enable chronic recording of single units with minimal scarring.
The long-term goal of this work is to make neural interfaces for the restoration of hand movement a clinical reality for everyone who has lost the use of their hands.
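The "spiking band" feature mentioned above can be sketched in a few lines: band-pass the raw recording in a high-frequency spiking band, rectify, and bin-average down to a low output rate. The sketch below is illustrative only; the filter order, band edges, and output rate are our assumptions, not the lab's exact pipeline.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def spiking_band_power(raw, fs, band=(300.0, 1000.0), out_fs=2000):
    """Sketch of a spiking-band power (SBP) feature: band-pass the raw
    signal in the spiking band, rectify, and bin-average to out_fs.
    All parameter choices here are illustrative assumptions."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    x = np.abs(sosfiltfilt(sos, raw))      # rectified band-limited signal
    q = int(fs // out_fs)                  # raw samples per output bin
    x = x[: len(x) - len(x) % q]           # trim to a whole number of bins
    return x.reshape(-1, q).mean(axis=1)   # low-rate power envelope
```

Such a low-rate envelope preserves spiking-related information while cutting the data rate, and hence implant power, by roughly an order of magnitude relative to 30 kHz broadband recording.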

Biography: Cynthia A. Chestek received the B.S. and M.S. degrees in electrical engineering from Case Western Reserve University in 2005 and the Ph.D. degree in electrical engineering from Stanford University in 2010. She is now an associate professor of Biomedical Engineering at the University of Michigan, Ann Arbor, MI, where she joined the faculty in 2012. She runs the Cortical Neural Prosthetics Lab, which focuses on brain and nerve control of finger movements as well as high-density carbon fiber electrode arrays. She is the author of 58 full-length scientific articles. Her research interests include high-density interfaces to the nervous system for the control of multiple degree of freedom hand and finger movements.


 recorded keynote Cynthia A. Chestek

Thorsten O. ZANDER

Lichtenberg Professor for Neuroadaptive Human-Computer Interaction at TU Brandenburg
www.b-tu.de/fg-neuroadaptive-hci/team/leitung

Title: The Age of Neuroadaptivity

Abstract:  The future of humanity depends to a large degree on what technologies we embrace, and what for. A major factor determining our use of technology is how well humans can communicate with it. Brain-Computer Interfaces (BCIs) provided a new form of such communication that initially was used to send direct commands without the need of any muscular activity. In 2011, the definition of Passive BCIs paved the way for rethinking the use of BCIs by assessing information about the operators’ state, removing the need for intentional communication. Now, a decade later, we see fundamentally new concepts of Human-Computer Interaction arising that are based on Passive BCIs. So-called Neuroadaptive Systems gain an understanding about their operator and automatically adapt to their needs – implying a convergence of human and artificial intelligence. Major corporations, militaries, and governmental bodies of different countries as well as a series of start-up companies are currently accelerating research and development in this area to such a degree that we are seeing exponential growth. 

I will discuss the potential resulting from different concepts of Neuroadaptive Technology and provide examples of early applications. This includes the use of these concepts in Human-Computer Interaction, Artificial Intelligence and Virtual Realities. The impact of this development on our societies from a legal and ethical perspective is hard to foresee; however, I will bring up some considerations and am looking forward to discuss them with the audience. Finally, I will present a perspective on how the rising age of Neuroadaptivity can benefit, but also harm the future of humanity.

Biography: Thorsten O. Zander is Lichtenberg Professor for Neuroadaptive Human-Computer Interaction [https://www.b-tu.de/fg-neuroadaptive-hci/] at Brandenburg University of Technology Cottbus-Senftenberg, Visiting Professor at the Higher School for Economics in Moscow, and the founder of Zander Laboratories [https://zanderlabs.com/] in Amsterdam, the Netherlands and Zander Labs Research in Cottbus, Germany. His research interests include using neurophysiological signals to decode the cognitive or affective state of humans interacting with technology. By developing novel Brain-Computer Interfaces (BCIs), his groups aim to automatically assess information about the users’ state to augment and automatically adapt human-computer interaction and to improve the learning rate of artificial intelligence. He is considered a pioneer in the field of passive BCIs, which he defined in 2008, and he is the co-founder and co-leader of the Society for Neuroadaptive Technology [https://neuroadaptive.org/]. Furthermore, he is affiliated with the Swartz Center for Computational Neuroscience, University of California San Diego, advises the OECD, and is a member of Microsoft’s Technical Leadership Advisory Board (TLAB) on BCI and Artificial Intelligence.

 recorded keynote Thorsten O. Zander


Contributed Talks


Talk 1

Title: Intention is all you need (to train a high-accuracy grasping iBCI)

Author & Affiliation: Andres Agudelo-Toro, German Primate Center

Abstract: Intracortical brain-computer interfaces (iBCI) convert spiking neural activity into movement signals of a prosthetic device through neural decoders. Similar to training a neural network, the decoder is progressively adapted using brain activity and the output trajectories of the cursor or robot. In the last 10 years, an important discovery has been that instead of using raw output trajectories, it suffices to use the attempted targets as input for the decoder training. This simple modification better reflects the intention of the subject and dramatically boosts decoding performance. The first of this class, the Recalibrated Feedback Intention-Trained (ReFIT) decoder (Gilja et al., 2012), and its subsequent variations have powered a steady series of performance improvements in iBCIs (Orsborn et al., 2012; Shanechi et al., 2017; Pandarinath et al., 2017; Nason et al., 2021). Despite its success, the ReFIT approach does not cover the full spectrum of human motor intentions. ReFIT was conceived with the assumption that the subject’s aim is to determine the course of the effector. This intention assumption suits cursor-like control, but does not account for motor actions such as grasping. For example, when grabbing her keys, a subject is unlikely to think about the intermediate locations in joint space required to grab the key, but naturally goes through a sequence of discrete states to grab the object: hand preparation, hand opening, and execution of an appropriate grip type. The entire movement, including all intermediate steps, is performed on a manifold in a high-dimensional kinematic space (e.g., 25 degrees of freedom of the hand) that is not immediately interpretable by the subject. Interestingly, this difference in intention seems to be hardcoded in the motor circuit.
In experiments with monkeys performing a grasping task, we found that neural activity correlates better with the trajectory needed to achieve the grip than with the directional information and speed needed to reach the target grip. Taking these findings into consideration, we developed a new training paradigm for the control of grasping in robotic prostheses. Instead of targets, our approach considers the full trajectory of transitions, which are crucial for achieving the right posture configurations. Our approach, Recalibrated Map to Attempted Path (ReMAP), redirects neural activity to attempted transitions in kinematic space. To test our approach, we trained two monkeys to perform a grasping task in a virtual environment built with the MuJoCo physics engine (DeepMind, Ltd.). When trained with either ReFIT or ReMAP, animals could perform the task equally well with no collisions. However, when environmental collisions were enabled, a task that is significantly more challenging for monkeys, the subjects could better grasp the targets using ReMAP. In this presentation, I will describe the advances that led to intention-based calibration of iBCIs and how our approach fits into this progress; how we further supercharged grip accuracy with a decoder powered by attractor dynamics inspired by variational autoencoders (VAEs); and the potential link between this hardcoded property of the neural response and the dynamical-systems perspective of motor cortex. Ultimately, we believe future iterations of ReFIT and ReMAP will be complementary parts of the prosthetic control toolbox. (Funding: DFG-FOR1847-B3, DFG-CRC889-C09, EU-Horizon-2020 B-CRATOS GA-965044) Co-authors: Jonathan Michaels (Brain and Mind Institute, Western University), Wei-An Sheng (Institut des Sciences Cognitives Marc Jeannerod), and Hans Scherberger (DPZ)
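The intention-relabeling idea behind ReFIT can be sketched as follows: keep the speed of each decoded velocity but rotate its direction toward the known target before refitting the decoder. This is a minimal illustration of the relabeling step only (function and variable names are ours); the full method additionally zeroes velocities during target holds and refits a Kalman filter, which we omit.

```python
import numpy as np

def refit_relabel(positions, velocities, target):
    """ReFIT-style intention relabeling: preserve each decoded velocity's
    magnitude, but point it from the cursor position toward the target.
    positions, velocities: (n, d) arrays; target: (d,) array."""
    to_target = target - positions
    unit = to_target / np.linalg.norm(to_target, axis=1, keepdims=True)
    speed = np.linalg.norm(velocities, axis=1, keepdims=True)
    return speed * unit  # "intended" velocities used to retrain the decoder
```

The decoder is then refit against these relabeled velocities instead of the raw decoded ones, the modification that the abstract credits with the jump in decoding performance.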

 recorded talk 1

Talk 2

Title: Nine decades of electrocorticography: a comparison between epidural and subdural recordings

Author & Affiliation: Simon Geukes, UMC Utrecht

Abstract: Over the last 15 years, BCIs and neuromodulation devices have received increasing attention, and their capabilities have been extended considerably. In the development of clinically viable systems, ECoG has arisen as a particularly interesting recording method. ECoG electrodes are typically placed below the dura mater (subdural), but can be placed on top of the dura (epidural) as well. However, the precise consequences of epidural or subdural placement are unknown. It can be surmised that epidural ECoG has a lower risk of serious complications within the central nervous system, compared to subdural ECoG, as the dura is left intact. At the same time, the dura may act as an additional signal barrier for epidural recordings, potentially reducing signal quality. To pave the way for optimal development of ECoG-based BCIs and neuromodulation devices, epidural and subdural placement of ECoG electrodes must be systematically compared. Here, we provide an overview of the available epidural ECoG studies. We discuss the differences between epidural and subdural recordings in terms of signal quality of short- and long-term recordings in humans and non-human primates. Also, we provide an overview of the serious complications that have occurred in long-term clinical trials with epidural and subdural placement. Taken together, the literature suggests that the dura does not have a large effect on the spatial resolution of the signal. However, epidural recordings do show lower signal power and amplitude than subdural recordings, particularly in high-density grids with small electrode diameters. The ability to decode or classify the recorded signals seems relatively unaffected by subdural or epidural electrode placement: similar to subdural ECoG, epidural ECoG can be used for real-time accurate control of complex prosthetics and exoskeletons. 
Additionally, we did not find evidence for a substantially larger number of serious complications associated with subdural ECoG compared to epidural ECoG. These results indicate that both epidural and subdural ECoG are well suited for high-fidelity neural signal recording in humans over longer periods of time. Thus, the best choice for ECoG placement does not need to be driven by complication rate or decodability of the signal, at least for current generations of clinical and high-density ECoG grids.

 recorded talk 2

Talk 3

Title: Setting the Tone for Industry Neurotechnology Development

Author & Affiliation: Adam Molnar, Neurable Inc.

Abstract: The field of brain-computer interfaces (BCI) is rapidly progressing both in academic and commercial settings. Historical precedent shows that technology can be used positively or negatively depending on intention and chance. Due to the disruptive nature of brain-computer interface technology, extra emphasis on its use is warranted. A framework to better plan for end-users of brain-computer interfaces may help ensure ethical development of the technology as well as prevent unintentional negative consequences. This paper reviews an ethical approach to BCI development, specifically with regard to its commercialization, user privacy, access, and proliferation.

 recorded talk 3

Talk 4

Title: A wireless, wearable Brain-Computer Interface for in-home neurorehabilitation

Author & Affiliation: Colin Simon, Trinity College Dublin

Abstract: Introduction: Despite Brain-Computer Interfaces (BCI) showing potential for upper limb rehabilitation following stroke, clinical adoption has been minimal. BCIs featuring in-home, unsupervised training may facilitate clinical use, as they reduce the burden on all healthcare participants. In this study we tested the feasibility of a wearable, 16-electrode wireless electroencephalography (EEG) system for in-home BCI training with healthy participants. Methods: Twelve participants undertook 6 days of EEG BCI training within their homes, receiving remote online instructions. We used a two-state motor imagery training protocol requiring different mental strategies to move an on-screen bar upwards or downwards. The BCI used common spatial patterns (CSP) and linear discriminant analysis (LDA) to classify mental states and provide feedback. The primary outcome was online classification accuracy of the mental state during BCI performance; secondary outcomes were the stability and distinctiveness of the EEG signal during mental states, as well as changes in alpha (8-13 Hz) and beta (14-30 Hz) power due to training. Results: Participants achieved control over the BCI in 6 out of 12 cases. The distinctiveness of the EEG signal produced by participants between the states improved significantly over time and was associated with success. Successful participants were characterised by lateralised CSP weights and activity changes in the alpha and beta power bands. Stability increased over time, but not in all mental states. Conclusion: This proof-of-concept study demonstrates that participants can achieve voluntary control over a BCI within 6 days using a wireless, wearable EEG system in their own home, with minimal expert instruction or supervision.
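The CSP-plus-LDA pipeline used here is standard and compact enough to sketch. Below is a minimal NumPy/scikit-learn version (our own illustrative implementation, not the study's code): the CSP filters come from a generalized eigendecomposition of the two class covariance matrices, and log-variance features of the filtered epochs feed an LDA classifier.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def csp_filters(X_a, X_b, n_pairs=1):
    """CSP spatial filters for two classes of epochs.
    X_*: arrays of shape (n_trials, n_channels, n_samples)."""
    cov = lambda X: np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    Ca, Cb = cov(X_a), cov(X_b)
    vals, vecs = eigh(Ca, Ca + Cb)            # generalized eigenproblem
    idx = np.r_[np.arange(n_pairs), np.arange(-n_pairs, 0)]
    return vecs[:, idx].T                     # filters from both spectrum ends

def log_var_features(W, X):
    """Log-variance of each spatially filtered epoch."""
    return np.array([np.log(np.var(W @ x, axis=1)) for x in X])
```

With synthetic two-class data whose classes differ in per-channel variance, fitting `LinearDiscriminantAnalysis` on these features separates the classes almost perfectly, which is exactly the property the online feedback in the study relies on.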

 recorded talk 4

Talk 5

Title: Features describing what data-driven BCI methods have learnt can predict future motor-imagery BCI performance

Author & Affiliation: Camille Benaroch, Inria Bordeaux Sud-Ouest

Abstract: Motor-Imagery (MI) based Brain-Computer Interfaces (BCIs) identify users’ intent by analyzing their brain activity when performing MI tasks, and thereby enable users to interact with the external world without any movement. Motor imagery generally leads to changes in brain rhythms originating from the sensorimotor cortices, i.e., sensorimotor rhythms (SMRs) [1]. An SMR desynchronization, i.e., a decrease in signal power, is typically observed in the contralateral sensorimotor cortex during the execution, or imagination, of a hand movement [2]. During the calibration phase of an MI-BCI system, discriminative data-driven learning methods are commonly used to perform EEG feature extraction. One popular signal processing technique for EEG-based MI-BCIs is the Common Spatial Pattern (CSP) algorithm, which learns spatial filters that best discriminate between two MI classes [3]. Once the discriminative features are identified, linear classifiers, for instance linear discriminant analysis (LDA), are most often used to distinguish the classes [4]. Although these feature extraction and classification algorithms are commonly used and have proven to be effective, they are almost exclusively data-driven. They include very little neurophysiological prior knowledge, and rather trust the (potentially noisy) EEG data recorded during the calibration phase. Are all properties of the features or classifiers learned from BCI data equally likely to be associated with good performance in practice? If not, which properties are more often associated with superior decoding performance? We studied and extracted predictors from two frequently used data-driven algorithms applied during system calibration, i.e., CSP and LDA. We first extracted characteristics from the trained CSP and LDA weights and thereby proposed new predictors of BCI performance. For instance, some characteristics were related to the laterality of the CSP filters and patterns, or to the variance of the LDA patterns' weights.
Then, we studied the relationships between MI-BCI performance and subject-specific characteristics extracted from the CSP algorithm and the LDA. Finally, we studied whether statistical models can predict and explain MI-BCI user performance across experiments, based on subject-specific characteristics extracted from the machine (i.e., from the results of the calibration algorithms). Our results suggest that a large majority of the proposed characteristics were strongly correlated with MI-BCI performance. For instance, the mean laterality of the CSP filters was positively correlated with MI-BCI performance (r = 0.67, adjusted p = 10^-7). In addition, these predictors enabled future BCI accuracy to be predicted much better than chance, with an error of 7 to 8%. [1] Hari, R., & Salmelin, R. (1997). Human cortical oscillations: a neuromagnetic view through the skull. Trends in Neurosciences, 20(1), 44-49. [2] Pfurtscheller, G., & Aranibar, A. (1979). Evaluation of event-related desynchronization (ERD) preceding and following voluntary self-paced movement. Electroencephalography and Clinical Neurophysiology, 46(2), 138-146. [3] Ramoser, H., Muller-Gerking, J., & Pfurtscheller, G. (2000). Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Transactions on Rehabilitation Engineering, 8(4), 441-446. [4] Lotte, F., Congedo, M., Lécuyer, A., Lamarche, F., & Arnaldi, B. (2007). A review of classification algorithms for EEG-based brain–computer interfaces. Journal of Neural Engineering, 4(2), R1.
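As an illustration of the kind of predictor described, here is one plausible way to quantify the laterality of a trained CSP filter from its channel weights. This is an assumption-laden sketch of ours; the paper's exact metric may differ.

```python
import numpy as np

def filter_laterality(weights, hemisphere):
    """Normalized left/right imbalance of a spatial filter's absolute weights.
    weights: (n_channels,); hemisphere: per-channel 'L'/'R' labels.
    Returns 1.0 for a fully lateralized filter, 0.0 for an evenly spread one.
    (Illustrative definition, not necessarily the one used in the study.)"""
    w = np.abs(np.asarray(weights, dtype=float))
    hemi = np.asarray(hemisphere)
    left, right = w[hemi == "L"].sum(), w[hemi == "R"].sum()
    return abs(left - right) / (left + right)
```

Averaging such a score over a subject's trained filters would give a "mean laterality" of the kind the abstract reports as strongly correlated with MI-BCI performance.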

 recorded talk 5

Talk 6

Title: Continuous movement decoding from non-invasive EEG

Author & Affiliation: Gernot Müller-Putz, Graz University of Technology

Abstract: 'Making the paralyzed move' is a dream for many researchers, but even more for people suffering from a spinal cord injury (SCI) or other diseases leading to non-functional limbs and therefore a dramatic decrease in quality of life. A lesion at the cervical level leads to dysfunction of breathing and of all motor and sensory functions. The restoration of hand and arm function has been a research topic since the late 1990s. Relatively soon, the ambition of 'reading' the intention of movement from brain activity and transferring it into real movement with the help of a brain-computer interface (BCI) emerged. This talk will introduce the Graz BCI group's approach to decoding arm/hand movements from non-invasive EEG, covering movement onset detection, trajectory decoding, and error processing. The detection of hand grasps and the differentiation of grasp types will also be discussed. What is the potential for people with SCI? Answering this and related questions will conclude the talk.

 recorded talk 6

Talk 7

Title: Improving motor imagery detection with a Brain-Computer Interface based on median nerve stimulation

Author & Affiliation: Sébastien Rimbert, Inria Bordeaux Sud-Ouest

Abstract: One of the most prominent types of BCI interaction is the Motor Imagery (MI)-based BCI. Users control a system by performing MI tasks, e.g., imagining hand or foot movements detected from EEG signals. Indeed, movement and imagination of movement activate similar neural networks (Pfurtscheller et al., 2009), enabling MI-based BCIs to exploit the modulation of sensorimotor rhythms (SMR) over the motor cortex, known respectively as Event-Related Desynchronization (ERD) and Event-Related Synchronization (ERS), occurring in the mu (7-13 Hz) and beta (15-30 Hz) frequency bands (Pfurtscheller et al., 1999). MI-based BCIs open up promising applications, particularly the control of assistive technologies (McFarland et al., 2010), entertainment (Marshall et al., 2013), sport training (Mizuguchi et al., 2012), and even post-stroke motor rehabilitation (Cervera et al., 2018), but two important challenges remain before BCIs can be used on a large scale. The first challenge is (i) to detect the user's MI without any temporal markers (often given by sound or visual cues). Indeed, it would be more appropriate and natural for the user to design an asynchronous BCI, i.e., one without explicit time indications given to the participant to imagine a movement. Although some BCIs are asynchronous because they do not use temporal markers or triggers, the literature clearly shows that their classification rate is lower than that of a synchronous BCI with triggers (Peirreira et al., 2018). The second challenge is therefore (ii) to achieve sufficient accuracy to ensure the reliability of a BCI device that could be used by the participants.

To address these challenges and successfully design an MI-based BCI, we argue that median nerve stimulation (MNS) is a very promising approach. Indeed, previous studies have shown that painless stimulation of the median nerve induces an ERD during stimulation, while an ERS appears after stimulation (Salenius et al., 1997; Neuper et al., 2001; Rimbert et al., 2019). Most interestingly, a motor task performed during MNS would suppress the ERD and ERS patterns generated by this stimulation. The gating hypothesis suggests that the ERDs and ERSs interact within a time window and cancel each other out. In this presentation, we will show that MNS could be the keystone of a BCI specialized in the detection of movement intention. Indeed, we can envisage a routine system in which the user or patient is stimulated at the median nerve, while a BCI device analyzes the ERD and ERS modulations in the motor cortex to check whether the patient intends to move or not. Specifically, we will show that, using an MNS-based BCI, the MI state is detected 12% more accurately than with a standard BCI. For several subjects, the use of MNS improves performance by more than 20% compared to a standard method (Rimbert et al., 2019; Avilov et al., 2021). We will also show that MNS could be used for MI-BCI because it allows the detection of an MI without any cues (e.g., sound or visual) at a high detection rate. These results are very promising and confirm that MNS can be used to improve the performance of BCIs based on sensorimotor rhythms in several applications (e.g., control, stroke rehabilitation, sports training).
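The ERD/ERS quantification underlying this approach follows Pfurtscheller's classic definition: band power in an analysis window expressed as a percentage change from a reference window. Below is a minimal single-channel sketch of ours; the FFT-mask band-pass is a simplification of a proper band-pass filter.

```python
import numpy as np

def erd_percent(epoch, fs, band, ref_win, test_win):
    """Percent band-power change of test_win relative to ref_win (in seconds).
    Negative values indicate ERD (desynchronization), positive values ERS."""
    # Crude band-pass: zero out FFT bins outside the band of interest
    freqs = np.fft.rfftfreq(epoch.size, 1 / fs)
    spec = np.fft.rfft(epoch)
    spec[(freqs < band[0]) | (freqs > band[1])] = 0
    x = np.fft.irfft(spec, n=epoch.size)
    power = x ** 2
    win = lambda w: power[int(w[0] * fs): int(w[1] * fs)].mean()
    R, A = win(ref_win), win(test_win)
    return 100.0 * (A - R) / R
```

Applied around a median nerve stimulation, ERD appears as a negative deflection during the stimulation and ERS as a positive rebound after it; a motor imagery performed at the same time suppresses both, which is the signature such a BCI would detect.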

recorded talk 7

Panel discussion

Title: Panel discussion: Standardization in BCI reporting

Panel participants:

1) Guillermo Sahonero Alvarez

2) Alberto Antonietti

3) Raffaele Ferrante & Luigi Bianchi

4) All previous speakers and Pradeep Balachandran

Abstract: The description of Brain-Computer Interfaces (BCI) can lead to confusion because of the high heterogeneity of devices, protocols, and applications. Moreover, different professional categories are involved: end-users, clinicians, therapists, and engineers, each with a different conception of BCI-related terms. This can cause misunderstandings and errors, and it makes it impossible to compare different systems and their performance. The IEEE P2731 working group has been working on a standardized glossary for BCI research, together with a functional model for BCI and standards for BCI data sharing. We would like to present our efforts to the BCI community in order to integrate its precious feedback. We propose a mini-symposium comprising three talks, one on each of the areas covered by the working group: 1) The BCI Functional Model: the importance of having a comprehensive functional model that can apply to virtually any BCI is that the same terminology, the same description, and the same language can be used even when different paradigms and applications are being discussed. It will then be easier to propose standard procedures such as benchmarking systems. 2) BCI Glossary: we have been working on a standardized glossary for BCI research. The first version of the BCI glossary, generated by the collective effort of the working group, includes 153 terms. These have been identified as critical for describing BCI systems and their related aspects in a standardized way (e.g., the neurophysiological characteristics of the recorded neural signals). 3) Data format - BCI Database: we want to stimulate discussion on the definition and selection of the information that should be stored in a file to effectively allow the sharing of BCI data and tools amongst researchers.
Establishing these requirements and procedures will accelerate BCI development and lay the foundation for accessible and scalable BCI technology, as well as for the definition of a standard file format. 4) A panel discussion with the speakers, ideally with lively interaction from the audience.

 recorded panel talks

 Closing