2nd BCI-UC Summary

The 2nd BCI-UC was held on February 10th & 11th, 2021. Recordings of the keynotes, contributed talks, and the mini-symposium will be available to re-watch below soon.

 Schedule

CET
February 10th, 2021
15:00 - 15:10
Welcome & Opening: Moritz Grosse-Wentrup
Session 1: BCI Usability & Performance
15:10 - 16:10

Keynote: User-centered approach to improve BCI efficiency and usability: Stakes, Progress and Obstacles
Camille Jeunet, CNRS Research Scientist – Aquitaine Institute of Cognitive and Integrative Neuroscience (INCIA), Univ. Bordeaux / CNRS, France
recorded keynote

16:15 - 16:35

Polygon Area Metric for BCI Performance Assessment
Önder Aydemir, Karadeniz Technical University

16:40 - 17:00

Prediction of Motor Imagery Performance based on Pre-Trial Spatio-Spectral Alertness Features
Aysa Jafari Farmand, Istanbul Technical University, Sabanci University

17:05 - 17:45
Break
Session 2: From Lab to Practice

17:45 - 18:05

MindAffectBCI: a high performance framework for BCI experimentation and education
Jason Farquhar, MindAffect B.V.

18:10 - 18:30

How to get your hands on a real working BCI
Brendan Allison, UCSD

18:35 - 18:55

Decoding hand gestures using a single bipolar pair
Maxime Verwoert, Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, University Medical Center Utrecht

19:00 - 20:00

Mini-Symposium: Commercial perspectives for BCI technologies
Host: Anja Meunier, Research Group Neuroinformatics, University of Vienna


CET
February 11th, 2021
Session 3: BCI Decoding Methods
15:00 - 16:00

Keynote: Recent Advances on Riemannian Transfer Learning
Marco Congedo, Directeur de Recherche, GIPSA-lab, CNRS, University Grenoble Alpes
recorded keynote

16:05 - 16:25

Riemannian geometry for outlier detection and discriminative dimensionality reduction
Maria Sayu Yamamoto, Tokyo University of Agriculture and Technology

16:30 - 16:50

Generalized neural decoders for transfer learning across participants and recording modalities
Steven Peterson, Bing Brunton Lab, University of Washington

16:55 - 17:15

A Deep Transfer Learning Architecture overcoming negative transfer in Inter-subject EEG Decoding
Xiaoxi Wei, Brain & Behaviour Lab, Imperial College London

17:20 - 18:00
Break
Session 4: BCI Applications

18:00 - 18:20

Training a thought-decoding brain-computer interface through passive listening
Jae Moon, University of Toronto, PRISM Lab, Holland Bloorview Kids Rehabilitation Hospital

18:25 - 18:45

Decoding visual scenes from visual cortex spikes with deep learning
Alexander McClanahan, UCF College of Medicine - Bioelectronics Lab with Dr. Brian Kim

18:50 - 19:10

Using Neurophysiological Signals to Measure Social Exclusion Induced by a Language Barrier
Nattapong Thammasan, University of Twente

19:15 - 19:35

Decoding hand grasped movement when used as BMI from EEG electrodes - revealing use of gamma oscillations
Sandeep Bodda, Amrita Mind Brain Center, Kerala

19:40 - 19:50
Closing & Goodbye: Moritz Grosse-Wentrup

 Welcome & Opening


Keynote Speakers

Camille JEUNET

CNRS Research Scientist – Aquitaine Institute of Cognitive and Integrative Neuroscience (INCIA), Univ. Bordeaux / CNRS, France

Title: User-centered approach to improve BCI efficiency and usability: Stakes, Progress and Obstacles

Abstract: EEG-based Mental-Task BCIs (MT-BCIs) are extremely promising, notably to restore or improve motor and cognitive performance, e.g., in stroke patients and athletes. Nonetheless, several scientific challenges must still be addressed before these technologies are usable, and actually used, outside laboratories. We will focus on the challenges related to human learning. It is estimated that 10 to 30% of users are unable to control an MT-BCI. Understanding how we learn to self-regulate specific brain patterns to achieve such control, and which factors influence this learning, is thus essential in order to design acceptable and efficient MT-BCI training procedures. I will present recent progress made in the field, but also discuss with you the obstacles encountered, including, inter alia, our poor understanding of the high within- and between-subject variability in BCI performance, the usually small sample sizes, and the limited reporting of the instructions provided during MT-BCI training procedures. I will argue that collaborative approaches and open science could help us overcome these obstacles, and present an initiative in this direction.

Biography: Camille Jeunet received her PhD in Cognitive Sciences in 2016 from the University of Bordeaux, France. After post-doctoral fellowships at Inria (Rennes, France) and EPFL (Geneva, Switzerland), she joined the University of Toulouse (France) as a tenured CNRS Research Scientist. In 2021, she rejoined the University of Bordeaux, where she leads interdisciplinary research bringing together computer science, psychology and neuroscience to better understand human learning mechanisms in BCIs and to improve BCI user training. She aims to identify and model the cognitive, psychological and neurophysiological factors that influence BCI performance and learning, in order to then design innovative training procedures and feedback adapted to each user. She is particularly interested in using EEG-based BCIs to improve cognitive and motor skills in athletes and stroke patients. Camille Jeunet has received three PhD awards as well as several national grants for her research. Since 2017, she has been a board member of the French BCI association, CORTICO.

 

 recorded keynote Camille Jeunet

Marco CONGEDO

Directeur de Recherche, GIPSA-lab, CNRS, University Grenoble Alpes

Title: Recent Advances on Riemannian Transfer Learning

Abstract: Recent advances on Riemannian transfer learning achieved in Grenoble will be presented. We will consider both the unsupervised and the semi-supervised cross-subject and cross-session frameworks. The former is obtained by recentering both the source and target trial covariance matrices around their geometric mean in the manifold of positive definite matrices (PDMs); the latter by further stretching and rotating the target PDMs so as to match the distribution of the source PDMs as closely as possible (Riemannian Procrustes Analysis). Then, we will present the dimensionality transcending method, which allows transfer learning to operate when the number and/or locations of the electrodes used in the target and source data do not match. Leveraging these advances, we will demonstrate the construction of a meta-database, i.e., the merging of many heterogeneous databases obtained from different experiments with different numbers and/or locations of electrodes within the same brain-computer interface modality.
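
As a rough illustration of the recentering step described above (a sketch under assumptions, not the speaker's own code; pyRiemann is used here for the geometric mean), each session's trial covariance matrices can be whitened by the inverse square root of their own Riemannian mean, so that source and target distributions are both centered at the identity:

    import numpy as np
    from pyriemann.utils.mean import mean_riemann

    def recenter(cov_trials):
        """Re-center SPD trial covariance matrices around the identity by
        whitening with the inverse square root of their geometric mean."""
        cov_trials = np.asarray(cov_trials)    # shape (n_trials, n, n)
        G = mean_riemann(cov_trials)           # Riemannian geometric mean
        w, V = np.linalg.eigh(G)               # eigendecomposition of SPD matrix
        G_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
        return np.array([G_inv_sqrt @ C @ G_inv_sqrt for C in cov_trials])

Applying the same operation independently to the source and target sessions aligns their centers of mass on the manifold; the stretching and rotation steps of Riemannian Procrustes Analysis then refine the match.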

Biography: Marco Congedo obtained his Ph.D. degree in Biological Psychology with a minor in Statistics from the University of Tennessee, Knoxville, in 2003. From 2003 to 2006 he was a post-doctoral fellow at the French National Institute for Research in Informatics and Control (INRIA) and at France Telecom R&D. From 2007 to 2020 he was a Research Scientist at the “Centre National de la Recherche Scientifique” (CNRS) in the GIPSA Laboratory, Grenoble, France. Since 2020 he has been a Research Director at the same institution. In 2013 he obtained the ‘Habilitation à Diriger des Recherches’ from Grenoble Alpes University.

Dr. Congedo is interested in human electroencephalography (EEG), particularly in real-time EEG neuroimaging (neurofeedback and brain-computer interface) and mathematical tools useful for the analysis and classification of EEG data such as inverse solutions, blind source separation and Riemannian geometry. He has authored and co-authored over 150 scientific publications on these subjects.

 recorded keynote Marco Congedo


Contributed Talks & Mini-Symposium

Talk 1

Title: Polygon Area Metric for BCI Performance Assessment

Author & Affiliation: Önder Aydemir,  Karadeniz Technical University

Abstract: Classifier performance assessment (CPA) is a challenging task in pattern recognition. In recent years, various CPA metrics have been developed to help assess the performance of classifiers. Although classification accuracy (CA), the most popular metric in the pattern recognition area, works well if the classes have equal numbers of samples, it fails to evaluate the recognition performance of each class when the classes have different numbers of samples. To overcome this problem, researchers have developed various metrics in addition to CA, including sensitivity, specificity, area under the curve, Jaccard index, Kappa and F-measure. Reporting many evaluation metrics, however, leads to large tables, and when comparing classifiers, a classifier that is more successful on one metric may perform poorly on the other metric(s). Such situations make it difficult to track results and compare classifiers. This study proposes a stable and informative criterion that allows the performance of a classifier to be evaluated with only a single metric, called the polygon area metric (PAM). Thus, classifier performance can be easily evaluated without the need for several metrics. The stability and validity of the proposed metric were tested with the k-nearest neighbor, support vector machine and linear discriminant analysis classifiers on a total of 7 different datasets, five of which were artificial. The results indicate that the proposed PAM method is simple but effective for evaluating classifier performance.
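
As a rough illustration (an assumption about the construction, not the author's code), a metric of this kind can be computed as the area of the polygon whose vertices are the individual metric values placed on equally spaced radar axes, normalized by the maximum attainable area:

    import numpy as np

    def polygon_area_metric(metrics):
        """Area of the radar-chart polygon spanned by the metric values
        (e.g. CA, sensitivity, specificity, AUC, Jaccard index, F-measure),
        normalized by the area obtained when all metrics equal 1, so the
        result lies in [0, 1]."""
        v = np.asarray(metrics, dtype=float)
        n = len(v)
        theta = 2 * np.pi / n                    # angle between adjacent axes
        # Sum of triangle areas between consecutive spokes (shoelace form).
        area = 0.5 * np.sin(theta) * np.sum(v * np.roll(v, -1))
        return area / (0.5 * np.sin(theta) * n)

    # Example: a classifier scoring well on all six metrics.
    print(polygon_area_metric([0.90, 0.88, 0.92, 0.91, 0.85, 0.89]))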

 recorded talk 1

Talk 2

Title: Prediction of Motor Imagery Performance based on Pre-Trial Spatio-Spectral Alertness Features

Author & Affiliation: Aysa Jafari Farmand, Istanbul Technical University, Sabanci University

Abstract: Electroencephalogram (EEG) based brain-computer interfaces (BCIs) enable communication by interpreting user intent from measured brain electrical activity. Such interpretation is usually performed by supervised classifiers constructed in training sessions. However, changes in the cognitive state of the user, such as alertness and vigilance, during test sessions lead to variations in EEG patterns, causing classification performance to decline in BCI systems. This research focuses on the effects of alertness on the performance of motor imagery (MI) BCIs, a common mental-control paradigm. It proposes a new protocol to predict MI performance decline from alertness-related pre-trial spatio-spectral EEG features. The proposed protocol can be used for adapting the classifier or restoring alertness based on the cognitive state of the user during BCI applications.
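
For illustration only (the band definitions below are assumptions; the study's actual spatio-spectral features are selected from the data), pre-trial features of this kind are typically log band powers computed on a short window preceding each cue:

    import numpy as np
    from scipy.signal import welch

    # Illustrative band edges; alertness-related bands/channels would be
    # identified from the data in the actual study.
    BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

    def pretrial_features(eeg, fs):
        """Log band power per channel from a short pre-trial EEG window.
        eeg: (n_channels, n_samples) array taken just before the cue."""
        freqs, psd = welch(eeg, fs=fs, nperseg=min(256, eeg.shape[-1]))
        feats = [np.log(psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=1))
                 for lo, hi in BANDS.values()]
        return np.concatenate(feats)   # spatio-spectral feature vector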

 recorded talk 2

Talk 3

Title: MindAffectBCI: a high performance framework for BCI experimentation and education

Author & Affiliation: Jason Farquhar, MindAffect B.V.

Abstract: MindAffect recently released our open-source BCI framework, mindaffectBCI (download from github.com/mindaffect). This framework builds on our work developing a high-performance communication BCI for patients that uses cheap commercial hardware. In this talk I will present an overview of our BCI and give some examples of how it can be used to: 1) develop communication assistants for patients, 2) develop video games with brain-controlled inputs, and 3) engage and educate students in neuroscience and artificial intelligence.

 recorded talk 3

Talk 4

Title: How to get your hands on a real working BCI

Author & Affiliation: Brendan Allison, UCSD

Abstract: Background and need: Nobody hates "bullshit BCIs" more than I do. I have many papers and talks that rail against them. Lately, I've seen more and more videos and written guides that are (1) biased toward one or more companies; (2) ignore critical information; (3) have factual errors and/or (4) were developed by people with amazingly little knowledge of our field. As BCIs become more prevalent among mainstream users and patients, misinformation will become much more damaging.

Talk overview: This talk will help develop a guide for people who are new to BCIs and would like to use one. The talk and guide will include options such as purchasing a BCI, leasing or borrowing one, and using one that is owned by a university or student organization. I will also include a few slides with options to find out if you want a BCI; for example, people can download software and sample data before getting a BCI. If allowed by the organizers, I may record the talk and post it so that my talk recording can also serve as a guide.

My goal is to provide a guide, posted publicly, that people can see for free. It might be helpful to students, professors, makers, and people that use BCI2000, OpenBCI, OpenViBE, EEGLAB, and similar options. I doubt I will submit it for peer review, though I am not certain. I want to avoid preferences for any specific company or open-source option. People should be able to choose a BCI that meets their needs and preferences. Since this is for beginners, I won't go into much detail about invasive or medical options.

 recorded talk 4

Talk 5

Title: Decoding hand gestures using a single bipolar pair

Author & Affiliation: Maxime Verwoert, Department of Neurology and Neurosurgery, UMC Utrecht Brain Center, University Medical Center Utrecht, Utrecht, Netherlands

Abstract: Electrocorticography (ECoG) opens the possibility of distinguishing between complex and fine movements based on sensorimotor cortical activity, which could provide a new multi-channel communication device for locked-in patients. Decoding multiple complex and fine movements is typically accomplished using high spatial-density ECoG grids [1,2]. However, when considering minimally invasive BCI implants, the size of the subdural ECoG implant, and therefore the number of implanted electrodes, must be minimized. Hence, it becomes vital to investigate to what extent one can reduce the number of implanted electrodes without compromising the number of degrees-of-freedom.
Here, we used data from seven epilepsy patients who had ECoG coverage of the sensorimotor cortex with standard clinical grids to test whether four sign-language hand gestures can be decoded using a strip of 4 electrodes spaced 1 cm apart. For that, we identified multiple strips of 4 consecutive electrodes covering the hand region of the sensorimotor cortex. We investigated which reference montage, unipolar or bipolar, yields the best classification results within a strip. We compared the decoding accuracy between 1) a combination of 4 unipolar single electrodes (unipolar strip), 2) a combination of 6 bipolar pairs (bipolar strip) and 3) one single bipolar pair (bipolar pair). For decoding, we employed a template-match decoding algorithm to classify the gestures using the temporal information of each unipolar and bipolar channel, limited to the power in the 60-130 Hz range. Each trial was compared with the average temporal profile (template) per gesture in a leave-one-out cross-validation scheme, with the Euclidean distance as the comparison metric.
The results indicate that the four hand gestures can be decoded equally well using unipolar strips (mean 67.4 ± 11.8%), bipolar strips (mean 66.6 ± 12.1%) and bipolar pairs (mean 67.6 ± 9.4%), tested with a one-way repeated-measures ANOVA (F(2, 12) = 0.189, ns). The mean classification accuracies were calculated across subjects using the best classification score per method within subjects. Classification was significantly above chance level (25%, determined by permutation tests) for all reference montages. We conclude that four hand gestures can be accurately decoded using a single bipolar pair, which is therefore a good candidate for a minimally invasive motor-based BCI. Furthermore, this work encourages the use of ECoG grids as a robust and reliable BCI platform for fine movement decoding.
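
A minimal sketch of such a template-match classifier with leave-one-out cross-validation (shapes and variable names are illustrative, not the authors' code):

    import numpy as np

    def template_match_loo(trials, labels):
        """Leave-one-out template matching: the held-out trial is assigned
        to the gesture whose mean temporal profile (template) is nearest in
        Euclidean distance. trials: (n_trials, n_samples) band-power traces
        of one channel; labels: (n_trials,) gesture labels."""
        trials, labels = np.asarray(trials, float), np.asarray(labels)
        correct = 0
        for i in range(len(trials)):
            keep = np.arange(len(trials)) != i
            classes = np.unique(labels[keep])
            templates = [trials[keep & (labels == c)].mean(axis=0) for c in classes]
            dists = [np.linalg.norm(trials[i] - t) for t in templates]
            correct += classes[int(np.argmin(dists))] == labels[i]
        return correct / len(trials)   # leave-one-out accuracy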

[1] – Bleichner, M. G., Freudenburg, Z. V., Jansma, J. M., Aarnoutse, E. J., Vansteensel, M. J., & Ramsey, N. F. (2016). Give me a sign: decoding four complex hand gestures based on high-density ECoG. Brain Structure and Function, 221(1), 203–216.
[2] – Branco, M. P., Freudenburg, Z. V., Aarnoutse, E. J., Bleichner, M. G., Vansteensel, M. J., & Ramsey, N. F. (2017). Decoding hand gestures from primary somatosensory cortex using high-density ECoG. NeuroImage, 147(May 2016), 130–142.

 recorded talk 5

Talk 6

Title: Riemannian geometry for outlier detection and discriminative dimensionality reduction

Author & Affiliation: Maria Sayu Yamamoto, Tokyo University of Agriculture and Technology, Japan, Inria Bordeaux Sud-Ouest / LaBRI, Potioc team, France and LISV, Univ. Paris-Saclay, France

Abstract: The Riemannian approach, which describes EEG using covariance matrices and analyzes them by exploiting the properties of the Riemannian manifold to which covariance matrices belong, has contributed to enhancing the reliability of BCIs. Nonetheless, BCIs still suffer from limitations, including sensitivity to artifacts. Artifacts are often caused by eye blinks or muscle activity during EEG recording, and they can produce EEG outliers. To prevent outliers from leading to erroneous BCI commands, they should ideally be removed from the dataset. In addition to this limitation of BCI itself, it has also been reported that the Riemannian approach has the drawback of being less effective for EEG datasets with a larger number of channels.
In this presentation, I will propose two promising solutions to these problems. In the first part, I will propose a robust Riemannian outlier detection method, RiSC (Riemannian Spectral Clustering). RiSC detects outliers by clustering EEG covariance matrices into non-outliers and outliers while taking into account the Riemannian geometry of the space, instead of requiring the traditional manually set threshold. In the second part, a novel similarity-based classification method on the Riemannian manifold for handling high-dimensional EEG will be proposed. The proposed method, MUSUME (MUltiple SUbspace Mdm Estimation), projects high-dimensional covariance matrices to multiple low-dimensional spaces and ultimately finds the most discriminant space for similarity learning among them. Experimental evaluations revealed that both proposed methods outperform existing methods.
This presentation will detail RiSC and MUSUME, together with the results of comparative experiments against existing methods, and finally I will show prospects for applying these methods to actual BCI applications.
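
To convey the flavour of the clustering idea (a sketch under assumptions, not the author's implementation), one can build an affinity matrix from pairwise Riemannian distances between trial covariance matrices and run spectral clustering, flagging the smaller cluster as candidate outliers:

    import numpy as np
    from pyriemann.utils.distance import distance_riemann
    from sklearn.cluster import SpectralClustering

    def flag_outliers(cov_trials, gamma=1.0):
        """Cluster trial covariance matrices into two groups via spectral
        clustering on a Gaussian affinity of pairwise Riemannian distances;
        the smaller cluster is returned as a candidate outlier mask."""
        n = len(cov_trials)
        D = np.zeros((n, n))
        for i in range(n):
            for j in range(i + 1, n):
                D[i, j] = D[j, i] = distance_riemann(cov_trials[i], cov_trials[j])
        A = np.exp(-gamma * D ** 2)      # affinity: nearby trials -> near 1
        labels = SpectralClustering(n_clusters=2,
                                    affinity="precomputed").fit_predict(A)
        smaller = np.argmin(np.bincount(labels))
        return labels == smaller         # boolean outlier flags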

 recorded talk 6

Talk 7

Title: Generalized neural decoders for transfer learning across participants and recording modalities

Author & Affiliation: Steven Peterson, Postdoctoral Scholar in the Department of Biology at the University of Washington (in the lab of Bing Brunton)

Abstract: Advances in neural decoding have enabled brain-computer interfaces to perform intricate, clinically-relevant tasks. However, such decoders are frequently trained on specific participants, days, and recording sites, limiting their practical long-term usage. Therefore, a fundamental challenge is to develop neural decoders that can robustly train on pooled, multi-participant data and generalize to unseen participants. Here, we introduce a new decoder, HTNet, that fuses deep learning with neural signal processing insights. Specifically, HTNet augments a pre-existing convolutional neural network decoder with two innovations: (1) a Hilbert transform that computes spectral power at data-driven frequencies and (2) a layer that projects electrode-level data onto predefined brain regions. This projection step is critical for intracranial electrocorticography (ECoG), where electrode locations are not standardized and vary widely across participants. We trained HTNet to decode arm movements using pooled multi-participant ECoG data and tested performance on an unseen ECoG or scalp electroencephalography (EEG) participant; these pretrained models were also subsequently fine-tuned to each test participant. We show that HTNet significantly outperformed state-of-the-art decoder accuracies by 8.5% when tested on unseen ECoG participants and by 14.5% when tested on unseen participants that used a different recording modality (EEG). We then used transfer learning and fine-tuned these trained models to unseen participants, even when limited training data was available. Importantly, we fine-tuned trained HTNet decoders with as few as 50 ECoG or 20 EEG events and still achieved decoding performance approaching that of randomly-initialized decoders trained on hundreds of events. Furthermore, we demonstrate that HTNet extracts interpretable, physiologically-relevant features. By generalizing to new participants and recording modalities, robustly handling variable electrode placements, and fine-tuning with minimal data, HTNet is more applicable across a broader range of neural decoding applications than current state-of-the-art decoders.
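
Two of HTNet's ingredients lend themselves to a compact sketch (illustrative code, assuming a pre-filtered multichannel signal and a given electrode-to-region projection matrix; the real model implements these as network layers):

    import numpy as np
    from scipy.signal import hilbert

    def hilbert_power(x):
        """Instantaneous spectral power via the Hilbert transform, analogous
        to the envelope computation HTNet applies after its temporal
        convolution. x: (n_channels, n_samples) filtered time series."""
        return np.abs(hilbert(x, axis=-1)) ** 2

    def project_to_regions(power, proj):
        """Project electrode-level power onto predefined brain regions.
        proj: (n_regions, n_electrodes) weight matrix, e.g. derived from
        electrode-to-region distances (an assumption; HTNet builds it from
        anatomical coordinates)."""
        return proj @ power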

recorded talk 7

Talk 8

Title: A Deep Transfer Learning Architecture overcoming negative transfer in Inter-subject EEG Decoding

Author & Affiliation: Xiaoxi Wei, Brain&Behaviour Lab  Imperial College London

Abstract: Convolutional neural networks (CNNs) have become a powerful technique for learning an EEG decoder without inductive biases and have been widely used in many EEG decoding fields [1,2,3,4,5]. However, it is still challenging to train CNNs to learn from dissimilar distributions, such as the EEG of multiple subjects, because the dissimilarity causes CNNs to misrepresent each distribution instead of learning a richer representation. This is known as the negative transfer problem [6]. As a result, CNNs cannot directly utilise multiple subjects' EEG to enhance model performance.
This study demonstrates the negative transfer of a benchmark CNN for motor imagery decoding from the literature [5] on the BCI Competition IV2a dataset [7] and on our own online-recorded dataset. On the BCI Competition IV2a dataset, the benchmark CNN's average classification accuracy dropped significantly from 82% to 73.4% when trained with data from multiple subjects instead of one subject. Similarly, on our dataset, the average accuracy decreased from 54.7% to 48.8%.
To address this problem, we propose a multi-branch deep transfer network, the Separate-Common-Separate Network (SCSN), based on splitting the network's feature extractors across individual subjects. We also explore applying the maximum-mean discrepancy [8,9] to the SCSN to better align the feature distributions produced by each feature extractor. Results show that the proposed SCSN (81.8%, 53.2%) and SCSN with maximum-mean discrepancy (81.8%, 54.8%) overcome the negative transfer problem of the benchmark CNN (73.4%, 48.8%) on both datasets when using multiple subjects. Our proposed network shows potential to avoid the negative transfer problem in EEG decoding and to exploit larger datasets to train powerful EEG decoders in future studies.
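
For reference, a minimal NumPy sketch of the (biased) squared maximum-mean discrepancy under an RBF kernel, the distribution-alignment penalty mentioned above (illustrative; in the study it would be applied inside a deep network, typically to mini-batch features):

    import numpy as np

    def mmd_rbf(X, Y, sigma=1.0):
        """Squared maximum-mean discrepancy between sample sets X and Y
        (rows are samples) under a Gaussian RBF kernel; zero when the two
        feature distributions coincide."""
        def k(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2 * sigma ** 2))
        return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()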

References
[1] Walker, Ian, Marc Deisenroth, and Aldo Faisal. "Deep convolutional neural networks for brain computer interface using motor imagery." Imperial College of Science, Technology and Medicine, Department of Computing (2015): 68.
[2] Acharya, U. Rajendra, et al. "Deep convolutional neural network for the automated detection and diagnosis of seizure using EEG signals." Computers in Biology and Medicine 100 (2018): 270-278.
[3] Oh, Shu Lih, et al. "A deep learning approach for Parkinson’s disease diagnosis from EEG signals." Neural Computing and Applications (2018): 1-7.
[4] Dai, Mengxi, et al. "EEG classification of motor imagery using a novel deep learning framework." Sensors 19.3 (2019): 551.
[5] Schirrmeister, Robin Tibor, et al. "Deep learning with convolutional neural networks for EEG decoding and visualization." Human Brain Mapping 38.11 (2017): 5391-5420.
[6] Pan, Sinno Jialin, and Qiang Yang. "A survey on transfer learning." IEEE Transactions on Knowledge and Data Engineering 22.10 (2009): 1345-1359.
[7] Tangermann, Michael, et al. "Review of the BCI competition IV." Frontiers in Neuroscience 6 (2012): 55.
[8] Long, Mingsheng, et al. "Learning transferable features with deep adaptation networks." arXiv preprint arXiv:1502.02791 (2015).
[9] Hang, Wenlong, et al. "Cross-subject EEG signal recognition using deep domain adaptation network." IEEE Access 7 (2019): 128273-128282.

recorded talk 8

Talk 9

Title: Training a thought-decoding brain-computer interface through passive listening: a primer

Author & Affiliation: Jae Moon, University of Toronto, PRISM Lab, Holland Bloorview Kids Rehabilitation Hospital / Bloorview Research Institute

Abstract: Covert speech, the silent production of words in the mind, has been studied increasingly in the BCI research community due to its potential to restore communication through the power of thought. However, a major barrier to clinical translation is the exhausting nature of the task, with individuals having to mentally rehearse words many times for the system to learn a reliable control signal. Hence, there is a growing need for these BCIs to become more accessible and user-friendly. Fortunately, many studies have shown that covert speech and speech perception are related in terms of source localization and topographical activation patterns. Therefore, the purpose of this project is to train a covert-speech BCI on the basis of passively heard words. However, in order to achieve this modelling, it must first be understood how neural oscillations, a mechanism of information processing in the brain, are similar across the two tasks. Oscillatory engagement was determined through a series of studentized continuous wavelet transforms. It was found that covert speech mainly engages higher frequencies such as beta and gamma, whereas perception multiplexes across all frequencies, with beta and gamma engagement comparable to covert speech. Phase-amplitude coupling analysis revealed endogenous coupling for perception but not for covert speech, indicating the coordinated processing of syllabic and phonemic units of speech. Crucially, cross-task coupling analysis revealed that covert speech’s gamma activity couples to perception’s theta band, suggesting a common encoding process in the gamma bands of the two tasks. Thus, we conclude that the gamma band can be used to model covert speech through perception. Producing such a model would mean that BCIs could be trained to decode thoughts while individuals passively hear words. This novel BCI can unlock a more user-friendly and engaging method of training. In the future, individuals may have a more versatile means of communication by training these BCIs with audiobooks.
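
A compact sketch of one standard phase-amplitude coupling measure, the mean-vector length (illustrative code with assumed band edges; the study's exact analysis pipeline may differ):

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def pac_mvl(x, fs, phase_band=(4, 8), amp_band=(30, 80)):
        """Phase-amplitude coupling as the mean-vector length: how strongly
        the amplitude of a fast band (e.g. gamma) is modulated by the phase
        of a slow band (e.g. theta). Band edges here are illustrative."""
        def bandpass(sig, lo, hi):
            b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
            return filtfilt(b, a, sig)
        phase = np.angle(hilbert(bandpass(x, *phase_band)))
        amp = np.abs(hilbert(bandpass(x, *amp_band)))
        return np.abs(np.mean(amp * np.exp(1j * phase)))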

 recorded talk 9

Talk 10

Title: Decoding visual scenes from visual cortex spikes with deep learning

Author & Affiliation: Alexander McClanahan, University of Central Florida

Abstract: Neural decoding has co-evolved in recent years with the arrival of CMOS-fabricated electrophysiology probes and miniaturized neural amplifier chips, both of which have enabled large-scale neural recording. Machine learning-based neural decoding has achieved incredible feats in recent years, notably the decoding of a rodent's spatial coordinates from place-cell spikes and the decoding of motor activity. We explore the utility of deep learning in decoding images from neural spikes in V1 & LGN.
Electrophysiology recordings and stimulus presentations were obtained from the Allen Institute for Brain Science Visual Coding: Neuropixels dataset using the AllenSDK. Three deep learning models (shallow, deep, and recurrent neural networks) were trained on spike counts across thousands of mouse V1 & LGN neurons and over 100,000 natural scene stimulus presentations across 26 sessions. Models were tested on held-out test spikes, evaluated for image decoding accuracy, and the accuracies were averaged across the 26 recording sessions. Decoding accuracies were subsequently compared across various spike time bin durations and anatomical regions of the mouse visual system.
All three deep learning architectures performed better than chance (1/119, or 0.84%) in decoding natural scene labels purely from spiking data recorded in V1 & LGN. The average decoding accuracies across all recording sessions of each neural network architecture were as follows: shallow (64.4 ± 16.4%), deep (67.7 ± 15.3%), recurrent (62.1 ± 16.5%). The deep neural network slightly outperformed the shallow and recurrent neural networks on average across recording sessions.
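
For orientation, a minimal end-to-end sketch of such a decoder (with random stand-in data, so it will score near chance; real spike counts and scene labels loaded via the AllenSDK would replace them; all shapes and hyperparameters here are assumptions):

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    # Stand-in data: n_presentations x n_neurons spike counts, plus one of
    # 119 natural-scene labels per presentation.
    rng = np.random.default_rng(0)
    X = rng.poisson(2.0, size=(5000, 800)).astype(float)
    y = rng.integers(0, 119, size=5000)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(256,), max_iter=50)  # "shallow" net
    clf.fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te), "chance ≈", 1 / 119)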

 recorded talk 10

Talk 11

Title: Using Neurophysiological Signals to Measure Social Exclusion Induced by a Language Barrier

Author & Affiliation: Nattapong Thammasan, University of Twente

Abstract: Universities and other workplaces are becoming more and more international, which calls for good communication and teamwork in international teams. However, people often fall back on their mother tongue when they cannot express themselves well enough in another language, thus speaking a language that most people around them cannot understand. This linguistic ostracism can cause feelings of social exclusion, anger, and sadness, and even trust issues, instead of the social inclusion that teamwork should aim for. This paper endeavours to determine how, and to what extent, social exclusion induced by a language barrier is reflected in neurophysiological signals (electroencephalogram (EEG), heart rate (HR), and galvanic skin response (GSR)).

To this end, an experiment was conducted in which three participants worked together as a team to solve seven small riddles. During this experiment, two participants communicated with each other in a language the third participant did not understand, thereby ignoring the third participant and inducing feelings of social exclusion. Based on the existing literature, it was expected that the two participants who could understand each other and work together as a team would show a higher level of synchronisation in the measured neurophysiological signals (EEG, HR, GSR) than either of them would show with the socially excluded participant. Synchrony for the EEG modality was computed using the Phase-Locking Value (PLV) method, and synchrony for the HR and GSR modalities with the Pearson Correlation Coefficient (PCC).

The results showed that, of the three measured modalities, EEG was best suited for measuring this synchrony and social exclusion. The data were compared both per brain region (frontal, central, parietal, temporal, occipital) and per channel (32 channels). For the regional comparison, the theta and alpha frequency bands gave the strongest results, while the gamma band gave the strongest results for the channel comparison. The statistical analysis indicates that the central brain region (and channel C4 in particular) looks the most promising, with statistically significant results before False Discovery Rate (FDR) correction. Both the HR and GSR analyses indicate a difference between the socially excluded participant and the other two participants, although with large variability between results, and these differences are significant only for a few individual experiments. Therefore, even though the EEG, HR, and GSR results all indicate a difference in synchrony between the socially excluded participant and the other two participants, and social exclusion can thus, to some extent, be measured using these three modalities, further investigation is needed to draw definitive conclusions.
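
The EEG synchrony measure used above, the phase-locking value, admits a very short implementation (a sketch; it assumes the two signals have already been band-pass filtered to the band of interest):

    import numpy as np
    from scipy.signal import hilbert

    def plv(x, y):
        """Phase-locking value between two narrow-band signals: the length
        of the mean unit phasor of their instantaneous phase difference,
        ranging from 0 (no locking) to 1 (perfect locking)."""
        dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
        return np.abs(np.mean(np.exp(1j * dphi)))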

 recorded talk 11

Talk 12

Title: Decoding hand grasped movement when used as BMI from EEG electrodes - revealing use of gamma oscillations

Author & Affiliation: Sandeep Bodda, Amrita Mind Brain Center, Amrita Vishwa Vidyapeetham, Kerala, India

Abstract: Recent advancements in neuro-prosthetics offer promising ways to improve the quality of life of people with motor impairments. Futuristic devices controlled by thought through a BCI will be a great help for Activities of Daily Living (ADL). The decoding of such motor tasks and its significance for non-invasive brain-computer interfaces has been well studied in the recent past, both for rehabilitation and for understanding dysfunctions. In this study, we introduce a new type of protocol to understand the reconstruction mechanisms of a simple movement by exploring event-related potentials (ERPs) derived from electroencephalography (EEG) signals recorded during the (attempted) execution of hand movements in healthy subjects. Ten volunteering subjects were recruited and asked to perform a visually cued grasped-movement task whose phases each lasted five seconds. We explored patterns of inter-regional couplings before, during, and after the grasped-movement task, as well as the motor-related cortical potentials during reach, grasp, and grasped-movement actions with both the left and the right hand and in both directions. We obtained a focal potential over central electrodes; this characteristic potential was a readiness potential with a high-amplitude peak that correlated with other electrode regions. Oscillatory changes relative to the pre-movement (initiation) condition and the relaxed (rest) condition indicated that, apart from the mu and beta rhythms, gamma oscillations also contribute to grasped hand movement. Gamma oscillations at the central electrodes C3 and C4 showed a 34% decrease during the grasped movement compared to movement initiation, whereas after the movement the gamma oscillations recovered by 10% relative to the grasped movement. These neural correlates, the readiness potential and the oscillatory signatures, were used as features for machine learning algorithms distinguishing grasped movement and intention from no movement, attaining an 80% prediction accuracy.
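
The percent band-power changes quoted above correspond to the standard relative-power comparison between two time windows; a minimal sketch (illustrative, assuming per-trial gamma-band power values as inputs):

    import numpy as np

    def percent_power_change(power_task, power_ref):
        """Percent change of mean band power in a task window relative to a
        reference window (e.g. movement initiation); negative values denote
        a decrease, as in the ~34% gamma reduction reported above."""
        return 100.0 * (np.mean(power_task) - np.mean(power_ref)) / np.mean(power_ref)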

 recorded talk 12


Mini-Symposium

Title: Commercial perspectives for BCI technologies

Host: Anja Meunier, Research Group Neuroinformatics, University of Vienna

Panel participants:

Vinay Jayaram, Facebook Reality Labs

Nataliya Kosmyna, Ph.D., braini.io

Dr. Brian Murphy, BrainWaveBank Ltd    

Abstract: To the general public, brain-computer interfaces have long been known mainly as a subject of science fiction. In recent years, however, neurotechnology has become a trending topic, a development reflected in the growing number of companies active in BCI research. As the field grows, it is important to recognise both the potential and the possible pitfalls of commercialisation. During this mini-symposium, representatives of leading neurotechnology companies will discuss the commercial perspectives of BCIs for both medical and consumer applications, the ethical implications of neurotechnology, and the obstacles in the way of widespread BCI use.

 Mini symposium introduction

 Vinay Jayaram

 Nataliya Kosmyna

 Brian Murphy


 Closing