1st BCI-UC Summary
The 1st BCI-UC was held on July 23rd 2020 and attracted over 300 participants. The two keynotes and nine contributed talks (the latter selected from 31 submissions by 190 registered voters, who cast a total of 427 votes) can be re-watched below.
Recordings
Session 1: Brain-Computer Interfacing in ALS
Keynote: Towards the Daily Life Implementation of Electrocorticography-BCIs
Four years of Utrecht NeuroProsthesis: The value of a BCI implant in late stage ALS
Preferences of individuals with locked-in syndrome
Session 2: The State-of-the-Art of BCI: Challenges, Methodologies and Tools
Continual Reconfiguration of Neural Activity and its Implications for Stable Decoding
Getting BCI systems out of the lab: Current gaps and challenges in standardization
Facing the small data reality
Merging humans and machines with collaborative brain-computer interfaces
Building Brain-Computer Interfaces with Python
Session 3: Language-based BCI-Communication
High-performance brain-to-text communication via imagined handwriting
Decoding speech production using intracortical electrode arrays in dorsal precentral gyrus
Keynote: Decoding Speech from Invasive EEG
Keynote Speakers
Mariska J Vansteensel
UMC Utrecht Brain Center, Department of Neurology & Neurosurgery, University Medical Center Utrecht, Utrecht, the Netherlands
Title: Towards the Daily Life Implementation of Electrocorticography-BCIs
Abstract: Brain-computer interfaces (BCIs) employ neural signals to generate a control signal for a computer. As such, they may provide a muscle-independent communication channel for those who are unable to reliably use existing augmentative and alternative communication technology. At the UMC Utrecht, we investigate BCIs that are based on electrocorticography (ECoG): electrodes implanted on the cortical surface. ECoG-based BCIs have important advantages that are relevant for daily life implementation, namely high signal quality, permanent availability, and invisibility of the sensors. In my presentation, I will discuss some of the proof-of-concept research on ECoG-BCIs we conducted at UMC Utrecht. In addition, I will present findings obtained with the first people with locked-in syndrome who received a fully implantable ECoG-BCI for home use, highlighting the successes, but also the issues that still need to be resolved before ECoG-BCIs can be more widely implemented as a treatment for the communication problems of people with severe motor impairment. Finally, I will report on our work aiming to increase the functionality of ECoG-BCIs by decoding multiple hand, mouth and face movements, in order to accomplish multidimensional BCI control and faster, more intuitive communication.
Biography: Mariska Vansteensel received her PhD in the field of neurophysiology in 2006 at Leiden University, the Netherlands. She joined the BCI research field in 2007 and is currently an Assistant Professor at the UMC Utrecht Brain Center, Utrecht, the Netherlands. The main focus of her research is the development and validation of an implantable Brain-Computer Interface (BCI) for communication in severely paralyzed people, based on electrocorticographic (ECoG) electrodes. Since 2013, she has investigated the usability of a prototype ECoG-BCI in the daily life of people with severe motor impairment. The first outcomes of this research were widely covered in international media. Currently, she focuses on utilizing the detailed organization of the sensorimotor areas for higher-dimensional ECoG-BCI control. Her other research interests include the user-centered design of BCIs and the representation of brain functions in healthy and diseased adults and children. Since 2018, she has been a board member of the BCI Society.
Christian Herff
School for Mental Health and Neuroscience, Maastricht University, Maastricht, the Netherlands
Title: Decoding Speech from Invasive EEG
Abstract: Millions of people worldwide lose, or never gain, the ability to speak due to laryngeal cancer, brainstem stroke or cerebral palsy, to name just a few causes. A neuroprosthetic device that directly translates brain activity into a speech output could enable these patients to communicate with friends and family. In this presentation, Christian Herff will show current work on the decoding of speech processes from invasively measured brain activity. Both speech recognition, in which a textual representation of speech is decoded, and speech synthesis, the direct output of audio, will be discussed. In addition to recent successes, shortcomings and open challenges of current approaches will be highlighted.
Biography: Dr. Christian Herff is an assistant professor in the School for Mental Health and Neuroscience at Maastricht University, where he leads the invasive BCI research line. His research interest lies in the application of machine learning technology to neurophysiological data for Brain-Computer Interfaces and neuroscience research. With a particular focus on the decoding of speech processes from intracranial data, he tries to improve the lives of severely paralyzed patients while simultaneously improving our understanding of complex higher-order cognition. He emphasizes the ability to achieve interpretable results based on computational models; in particular, the visualization of complex dynamic models, such as deep neural networks, is of interest to him. After studying computer science at Karlsruhe Institute of Technology, IIT Delhi and NTU Singapore, he obtained a PhD in Computer Science from the University of Bremen. In his position at Maastricht University, he tries to combine state-of-the-art methodology with clinical knowledge to advance invasive Brain-Computer Interfacing. Dr. Herff's research has been featured in many international online, print, radio and television outlets, and he has been awarded funding from national and international funding bodies.
Contributed Talks
Talk 1
Title: Four years of Utrecht NeuroProsthesis: The value of a BCI implant in late stage ALS
Author & Affiliation: Anouck Schippers, UMC Utrecht Brain Center, Department of Neurology and Neurosurgery, University Medical Center, Utrecht, the Netherlands
Abstract: In the Utrecht NeuroProsthesis (UNP) project, we test the feasibility of home use of a fully implanted BCI system as a new method of communication for people with locked-in syndrome. More than four years have passed since the first implantation of the UNP system in an individual with late-stage ALS (in October 2015, described in detail in [1]), and during these years the participant has been able to use the system reliably and independently to communicate with the outside world. Here we present an evaluation of the added value of the UNP system to this patient during these years.
Material, Methods and Results: Technical specifications of the UNP system have been described earlier [1,2]. Even though there have been small changes in impedance and high-frequency band power over time [2], the system can be used accurately and has provided stable BCI control for a period of more than four years, allowing the participant to communicate and to call her caregiver whenever she needs attention. Indeed, the signal processing settings required for independent home use did not change between September 2016 and April 2019, and since then there have only been slight changes in the high-frequency signal settings. Initially, independent home use of the system took place mainly outside of the house and on average only for a couple of hours per month. However, as of spring 2018, home use increased, from an average of 37.7 hours per month between April and September 2018 to 148 hours in April 2019 [2]. This increase coincided with the loss of eye movement control and thus the inability to use an eye tracker (the preferred assistive technology prior to the UNP), and demonstrates the value of the UNP in situations where other assistive technologies prove to be inadequate. The user even stated (43 months after the implantation) “without the UNP I would be without words” [2], highlighting the importance of the system for her. Satisfaction with the system has increased over time: high satisfaction was already reported after several weeks of use, and it increased even further over a period of 3.5 years [1,2]. November 2019 marked the beginning of the fifth year of study participation of this participant. We aim to continue to improve the system to match her needs and wishes, with the goal of assisting her and future users in their communication with the outside world.
Significance: This overview illustrates the added benefit of our fully implanted system for people with severe motor impairment and shows that when disease progression eliminates the use of more typical assistive technologies, a fully implanted ECoG-based system may become increasingly valuable.
References
[1] Vansteensel, M. J., Pels, E. G. M., Bleichner, M. G., Branco, M. P., Denison, T., Freudenburg, Z. V., Gosselaar, P., Leinders, S., Ottens, T. H., Van Den Boom, M. A., Van Rijen, P. C., Aarnoutse, E. J., & Ramsey, N. F. (2016). Fully implanted brain-computer interface in a locked-in patient with ALS. New England Journal of Medicine, 375(21), 2060-2066. doi:10.1056/NEJMoa1608085
[2] Pels, E. G., Aarnoutse, E. J., Leinders, S., Freudenburg, Z. V., Branco, M. P., van der Vijgh, B. H., … & Ramsey, N. F. (2019). Stability of a chronic implanted brain-computer interface in late-stage amyotrophic lateral sclerosis. Clinical Neurophysiology, 130(10), 1798-1803.
Talk 2
Title: Preferences of individuals with locked-in syndrome
Author & Affiliation: Mariana Pedroso Branco, UMC Utrecht Brain Center, Department of Neurology and Neurosurgery
Abstract: Communication BCIs (cBCIs) have been proposed as an alternative access technology (AT) for individuals with locked-in syndrome (LIS) (e.g., [1-4]). To make sure that cBCIs are fully accepted by future users, their opinion should be considered during all steps of the cBCI research and development process. The opinion of users regarding BCI applications (beyond communication) has been investigated in the past [5-6]. However, none of these earlier questionnaires asked for the users’ opinions on different mental strategies for cBCI control, nor on when during their clinical trajectory they would like to be informed about AT and cBCIs.
Material, Methods and Results: We investigated the opinion of 28 (potential) Dutch cBCI users regarding three topics: 1) which applications they would like to control with a cBCI, 2) which mental strategies they would prefer to use to control the cBCI, and 3) the time point during their clinical trajectory at which they would like to be informed about ATs, including cBCIs. We grouped and compared the opinions of participants with respect to the etiology of the LIS, that is, due to neuromuscular disease (e.g., amyotrophic lateral sclerosis) or due to a sudden-onset event (e.g., brainstem stroke). Participants were interviewed during a 3-hour home visit, during which they were informed about BCIs and the possible mental strategies for control with the help of animation videos. Participants answered the questions using their current communication channel and AT. Results revealed that individuals with LIS, independently of their etiology, consider (in)direct communication, general computer use and environmental control important applications for a BCI to offer. Moreover, they generally preferred attempted speech and movement as control strategies over reactive strategies (such as P300 and SSVEPs). Lastly, both groups had a strong preference to be informed about AT aids and BCIs when they reach the locked-in state and need the AT the most.
Discussion: The results show that users have particular opinions about which cBCI applications and paradigms they prefer. These preferences should be taken into account when developing cBCIs for home use. A possible limitation of this study is that a large number of participants were not naïve to the concept of BCI before this study. Although this may influence the interpretation of the results, we explicitly told the participants that this questionnaire was about an ideal home-use BCI and not the ones they had experienced in the past.
Significance: This survey encourages the involvement of users and provides information to stakeholders in BCI and AT development that is valuable for the research and development process, ultimately reducing the risk of technology abandonment.
References
[1] Kübler, A., Nijboer, F., Mellinger, J., Vaughan, T. M., Pawelzik, H., Schalk, G., McFarland, D. J., Birbaumer, N., & Wolpaw, J. R. (2005). Patients with ALS can use sensorimotor rhythms to operate a brain-computer interface. Neurology, 64(10), 1775-1777.
[2] Vansteensel, M. J., Pels, E. G. M., Bleichner, M. G., Branco, M. P., Denison, T., Freudenburg, Z. V., Gosselaar, P., et al. (2016). Fully implanted brain-computer interface in a locked-in patient with ALS. New England Journal of Medicine, 375(21), 2060-2066.
[3] Wolpaw, J. R., Bedlack, R. S., Reda, D. J., Ringer, R. J., Banks, P. G., Vaughan, T. M., Heckman, S. M., et al. (2018). Independent home use of a brain-computer interface by people with amyotrophic lateral sclerosis. Neurology, 91(3), e258-e267.
[4] Milekovic, T., Sarma, A. A., Bacher, D., Simeral, J. D., Saab, J., Pandarinath, C., Sorice, B. L., et al. (2018). Stable long-term BCI-enabled communication in ALS and locked-in syndrome using LFP signals. Journal of Neurophysiology.
[5] Huggins, J. E., Moinuddin, A. A., Chiodo, A. E., & Wren, P. A. (2015). What would brain-computer interface users want: Opinions and priorities of potential users with spinal cord injury. Archives of Physical Medicine and Rehabilitation, 96(3, Supplement), S38-S45.e5.
[6] Huggins, J. E., Wren, P. A., & Gruis, K. L. (2011). What would brain-computer interface users want? Opinions and priorities of potential users with amyotrophic lateral sclerosis. Amyotrophic Lateral Sclerosis, 12(5), 318-324.
Talk 3
Title: Continual Reconfiguration of Neural Activity and its Implications for Stable Decoding
Author & Affiliation: Michael E. Rule, University of Cambridge
Abstract: The relationship between brain-computer interfaces and neural plasticity remains enigmatic. While some plasticity is beneficial, allowing users to adaptively improve BCI control, plasticity continues even in the absence of overt learning. For example, neural codes in some brain areas targeted for motor neuroprosthetics have been found to reconfigure continuously, even for fixed behaviors. We analyze long-term recordings from parietal cortex in mice performing a spatial navigation task. While single cells are surprisingly unstable, population-level representations are more conserved, and most reconfiguration lies in the null-space of likely downstream readouts. This is consistent with the emerging view that the brain contains a mixture of stable, consolidated representations and more volatile ones that support flexible learning. We conjecture that the requirement to maintain consistent internal models across multiple brain areas provides an error signal that can stabilize task representations at the population level. This would also enable gradual reconfiguration without disrupting neural computation, and it motivates a dual approach to BCI design: decoders should detect and facilitate long-term stable representations, while also flexibly tracking more volatile ones.
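The null-space idea above can be made concrete with a few lines of linear algebra: any reconfiguration that lies in the null space of a linear readout leaves the decoded output unchanged. Below is a minimal sketch (not the authors' analysis code); the readout matrix, dimensions and random drift vector are illustrative assumptions.

```python
# Minimal sketch: how much of a population-activity change is invisible to a
# hypothetical linear readout? All names and sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_outputs = 120, 2                    # assumed population and readout size
W = rng.standard_normal((n_outputs, n_neurons))  # hypothetical downstream readout

# pinv(W) @ W projects onto the row space of W; subtracting it from the
# identity projects onto the null space, i.e. directions the readout cannot see.
P_null = np.eye(n_neurons) - np.linalg.pinv(W) @ W

delta = rng.standard_normal(n_neurons)           # drift between recording sessions
null_frac = np.linalg.norm(P_null @ delta) ** 2 / np.linalg.norm(delta) ** 2
print(f"fraction of drift in the readout's null space: {null_frac:.2f}")
```

For a random drift vector this fraction is close to (n_neurons - n_outputs) / n_neurons by construction; the empirical question is whether measured reconfiguration concentrates in the null space beyond that chance level.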
Talk 4
Title: Getting BCI systems out of the lab: Current gaps and challenges in standardization
Author & Affiliation: Ricardo Chavarriaga, CLAIRE-Confederation of Laboratories for AI Research in Europe, IEEE Standards Association, Industry Connections group on Neurotechnologies for BMI, Zürich University of Applied Sciences (ZHAW), Geneva Center for Security Policy
Abstract: Interest in Brain-Machine Interfacing (BMI)/Brain-Computer Interfacing (BCI) is growing steadily, and state-of-the-art research is currently being tested on its intended end-users. Translation from laboratory proof-of-concepts to viable clinical and assistive solutions, as well as consumer applications, entails a large set of challenges. Deploying and commercializing BMI/BCI-based solutions requires researchers, manufacturers, and regulatory agencies to ensure these devices comply with well-defined criteria on their safety and effectiveness. Standardization of BMI/BCI systems is particularly difficult since they typically require the integration of multiple sub-components for measuring and analyzing neural activity and for providing feedback to the user through different means (including displays, virtual reality systems, haptic interfaces and exoskeletons, among others). The fact that BMI/BCI applications may vary from clinical or assistive devices for people with severe disabilities to consumer-oriented systems for the general public makes identifying proper standardization and regulatory frameworks even more difficult. The lack of specific standards hinders the interoperability and regulatory compliance of new devices, and consequently becomes a barrier preventing industrial applications from reaching a wide market. The IEEE SA Industry Connections activity on Neurotechnologies for Brain-Machine Interfacing brings together diverse stakeholders, including academia, industry and government agencies, to evaluate the current situation regarding neurotechnology standardization. In this presentation I will present the conclusions of this group on the state-of-the-art and priority areas for standardization in BMI-related neurotechnologies, recently published by the IEEE Standards Association as a roadmap document.
Talk 5
Title: Facing the small data reality
Author & Affiliation: Michael Tangermann, Head of the Brain State Decoding Lab and Substitute Professor at the Autonomous Intelligent Systems Lab, Computer Science Dept., BrainLinks-BrainTools Excellence Cluster, University of Freiburg
Abstract: While machine learners feast on huge image data sets, researchers and practitioners in BCI have to get along with small brain-signal recordings, which barely allow learning the correlations they contain without drowning in noise. However, some tricks of the trade exist for dealing with these small data sets. While some of these tricks have made it into publications, they nevertheless are not widely used, probably because they have not (yet) entered the standard BCI toolboxes. I will introduce a few of them, ranging from pimping up covariance matrices to improve classification with LDA, over pretext learning for neural networks, to an ingenious way of exploiting your data when algorithms are to be benchmarked on data sets as large as possible. And of course I am curious to learn about your tricks of the trade, too!
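One widely used instance of the covariance trick mentioned above is shrinkage regularization of the LDA covariance estimate. Here is a minimal scikit-learn sketch; the synthetic data and all dimensions are illustrative assumptions, not material from the talk.

```python
# Minimal sketch: LDA with and without Ledoit-Wolf covariance shrinkage in the
# small-data regime (fewer trials than features), on synthetic data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_trials, n_features = 60, 128              # assumed small-data BCI regime
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, n_trials)            # binary labels (e.g., target vs. non-target)
X[y == 1] += 0.3                            # inject a weak class difference

plain = LinearDiscriminantAnalysis(solver="lsqr")                     # empirical covariance
shrunk = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")  # Ledoit-Wolf shrinkage

print("plain LDA    :", cross_val_score(plain, X, y, cv=5).mean())
print("shrinkage LDA:", cross_val_score(shrunk, X, y, cv=5).mean())
```

With fewer trials than features, the empirical covariance is singular, and the regularized estimate typically yields a clear cross-validated accuracy gain in exactly this regime.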
Talk 6
Title: Merging humans and machines with collaborative brain-computer interfaces
Author & Affiliation: Davide Valeriani, Massachusetts Eye and Ear / Harvard Medical School
Abstract: Brain-computer interfaces (BCIs) are devices that directly translate brain activity into commands, enabling users to interact with the world with their mind. Their main application scope is as assistive devices that allow people with severe disabilities to communicate or operate actuators of different kinds. However, BCIs have also been used to enhance human performance in a range of cognitive tasks, including perceptual decision-making. More recently, BCIs have also been applied to groups of individuals (collaborative BCIs) to enhance critical decision-making, such as target identification from static pictures. These BCIs monitor the electroencephalographic (EEG) signals of each decision-maker and use machine learning to assess how confident each person is in their decision. These confidence estimates are then used to weight individual responses and obtain a group decision. An alternative approach to augmenting decision-making would be to let machines make decisions. Advances in machine learning and pattern recognition have boosted accuracy in several domains; however, in less-controlled, realistic environments, machine vision algorithms can also fail.
This study explores the possibility of combining a residual neural network (ResNet), BCIs and human participants to improve face recognition in crowded environments. Ten human participants and a ResNet undertook the same face-recognition experiment, in which a picture of a crowded scene was presented for 300 ms. Human and artificial agents were then asked to decide whether a particular face appeared in the scene. The experiment consisted of 6 blocks of 48 trials each. After data collection, BCIs were used to decode the decision confidence of the humans from their EEG signals and reaction times. The ResNet also estimated its own confidence by computing the distance between the encoding of the face extracted from the image and that of the target face. Group decisions were obtained by weighting individual decisions by their confidence estimates. Different types of groups were created, comprising either only humans (with or without the BCI) or humans and the ResNet.
Taken individually, the average participant had an accuracy of 72.3%, while the ResNet had an average accuracy of 84.4%. The ResNet’s performance was characterized by high specificity (98.6%) and low sensitivity (41.7%), while the average participant had lower specificity (77.4%) and higher sensitivity (56.9%). Groups of BCI-assisted humans and the ResNet were significantly more accurate (by up to 35%) than the ResNet alone, the average participant, and equally-sized groups of humans not assisted by the BCI. These results suggest that melding humans, BCIs, and machine-vision technology may provide the best performance in critical decision-making in realistic scenarios.
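The confidence-weighted fusion described above reduces to a weighted vote. The short sketch below illustrates the group rule; the decision and confidence values are made-up examples, not the study's pipeline.

```python
# Minimal sketch of confidence-weighted group decision-making: each agent
# contributes a +/-1 decision weighted by its estimated confidence, and the
# group decision is the sign of the weighted sum. Values are illustrative.
import numpy as np

decisions   = np.array([+1, -1, +1, +1])      # individual decisions (+1 = "face present")
confidences = np.array([0.9, 0.4, 0.6, 0.7])  # per-agent confidence in [0, 1]
                                              # (humans: decoded from EEG + reaction times;
                                              #  ResNet: distance between face encodings)

group_decision = np.sign(np.sum(confidences * decisions))
print("group says:", "face present" if group_decision > 0 else "face absent")
```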
Talk 7
Title: Building Brain-Computer Interfaces with Python
Author & Affiliation: Pierre Clisson, timeflux.io
Abstract: The field of Brain-Computer Interfaces is currently experiencing strong momentum, attracting both researchers and hackers. At the same time, a growing number of people rely on the thriving Python data science and machine learning ecosystem. Yet, until recently, there was no fully open-source Python framework for building BCIs. Timeflux (https://timeflux.io) aims to fill this gap. Attendees will learn the core concepts driving Timeflux, how to describe processing pipelines using a simple syntax, how to create interfaces available from a web browser, and how to easily implement their own algorithms for both offline and online use.
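As a flavor of the "simple syntax" mentioned above, Timeflux applications are declared as YAML graphs of nodes connected by edges. The sketch below closely follows the hello-world example in the Timeflux documentation; treat the exact parameters as assumptions and consult https://timeflux.io for the authoritative reference.

```yaml
# hello.yaml: a Timeflux graph connecting a dummy data source to a console sink.
graphs:
  - id: hello
    rate: 1                            # scheduler rate in Hz
    nodes:
      - id: random
        module: timeflux.nodes.random  # built-in random data generator
        class: Random
      - id: display
        module: timeflux.nodes.debug   # prints incoming data to the console
        class: Display
    edges:
      - source: random                 # wire the source's default output
        target: display                # to the sink's default input
```

With Timeflux installed (`pip install timeflux`), this would be launched with `timeflux hello.yaml`.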
Talk 8
Title: High-performance brain-to-text communication via imagined handwriting
Author & Affiliation: Frank Willett, Stanford University
Abstract: To date, a major focus of BCI research has been on restoring gross motor skills, such as reaching and grasping or point-and-click typing with a 2D computer cursor. However, rapid sequences of highly dexterous behaviors, such as handwriting or touch typing, might enable faster communication rates. Here, we demonstrate an intracortical BCI that can decode imagined handwriting movements from neural activity in motor cortex and translate them into text in real time, using a novel recurrent neural network decoding approach. With this BCI, our study participant (whose hand was paralyzed) achieved typing speeds that exceed those of any other BCI yet reported: 90 characters per minute at >99% accuracy with a general-purpose autocorrect. These speeds are comparable to able-bodied smartphone typing speeds in our participant’s age group (115 characters per minute) and significantly close the gap between BCI-enabled typing and able-bodied typing rates. Finally, new theoretical considerations explain why temporally complex movements, such as handwriting, may be fundamentally easier to decode than point-to-point movements. Our results open a new approach for BCIs and demonstrate the feasibility of accurately decoding rapid, dexterous movements years after paralysis.
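As an illustration of the general shape of such a decoder, not the study's actual architecture, the PyTorch sketch below maps binned multichannel neural features to per-timestep character logits, which a downstream autocorrect or language model could turn into text. All layer sizes, the channel count, and the 31-character alphabet are assumptions.

```python
# Illustrative sketch of a streaming handwriting decoder: a recurrent network
# emits character logits at every time bin. Sizes are assumptions.
import torch
import torch.nn as nn

class HandwritingDecoder(nn.Module):
    def __init__(self, n_channels=192, hidden=256, n_chars=31):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, num_layers=2, batch_first=True)
        self.readout = nn.Linear(hidden, n_chars)

    def forward(self, x):                 # x: (batch, time, channels)
        h, _ = self.rnn(x)                # hidden state at every time bin
        return self.readout(h)            # per-timestep character logits

decoder = HandwritingDecoder()
neural = torch.randn(1, 500, 192)         # 500 time bins of 192-channel features
logits = decoder(neural)
print(logits.shape)                       # torch.Size([1, 500, 31])
```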
Talk 9
Title: Decoding speech production using intracortical electrode arrays in dorsal precentral gyrus
Author & Affiliation: Sergey Stavisky, Stanford University
Abstract: Efforts to build brain-computer interfaces (BCIs) to restore lost speech have rapidly accelerated in recent years, with a number of impressive demonstrations using electrocorticography (ECoG). In parallel, BCIs that decode attempted arm and hand movements for controlling a robotic arm and point-and-click typing communication have had the highest performance to date using intracortical electrode arrays. We believe that an intracortical approach is equally promising for speech BCIs and can complement ECoG recordings. However, progress has been slower in this domain due to limited animal models for speech, leading to the need for preliminary work to be done in humans. We have recently made progress prototyping an intracortical speech BCI thanks to our discovery that individual neurons in the dorsal “hand knob” area of the precentral gyrus are active during speech (Stavisky et al., 2019, 2020). Here, we recorded from a BrainGate2 pilot clinical trial participant, who has tetraplegia but can speak, as he spoke visually prompted words that broadly sample a comprehensive set of 39 English phonemes. Phoneme identities could be classified from the neural activity recorded by two 96-electrode arrays with 33.9% accuracy using a recurrent neural network decoder. Performance did not saturate with increasing training data quantity or electrode count. By demonstrating offline performance comparable to previous ECoG studies – despite these arrays being in an area that is very likely suboptimal for speech decoding – these results help lay the groundwork for a concerted speech BCI effort using intracortical measurements from more ventral cortical areas which we expect to have stronger speech-related modulation.
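For a trial-wise phoneme classification setup like the one described, a recurrent classifier can be sketched as follows. This is an illustrative PyTorch example, not the authors' decoder; the input dimension matches two 96-electrode arrays, and the other sizes are assumptions.

```python
# Illustrative sketch: one 39-way phoneme decision per trial, using the final
# hidden state of a recurrent network. Sizes are assumptions.
import torch
import torch.nn as nn

class PhonemeClassifier(nn.Module):
    def __init__(self, n_electrodes=192, hidden=128, n_phonemes=39):
        super().__init__()
        self.rnn = nn.GRU(n_electrodes, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_phonemes)

    def forward(self, x):                 # x: (batch, time, electrodes)
        _, h = self.rnn(x)                # final hidden state summarizes the trial
        return self.fc(h[-1])             # one phoneme logit vector per trial

model = PhonemeClassifier()
trials = torch.randn(8, 150, 192)         # 8 trials of 150 time bins
print(model(trials).shape)                # torch.Size([8, 39])
```

For context, chance level on 39 balanced classes is about 2.6% (1/39), which puts the reported 33.9% offline accuracy in perspective.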