Meet Jordy

As a member of MindAffect, it is time to introduce myself and my work. I am Jordy and I was raised in Gemert, a very beautiful village in the south of the Netherlands. I currently live in Nijmegen, quite close to Radboud University and MindAffect. I first arrived in Nijmegen when I started my bachelor’s in artificial intelligence (AI). Nijmegen has a unique approach to AI, with a strong focus on natural intelligence. Think about it: in AI we want to create intelligent machines. We could do so by building ever more complex systems from scratch, or by “just” looking around us and gaining insight from the intelligent systems we already know. The idea is to study natural intelligence – like the brain – and use this knowledge as inspiration to improve AI. This connects directly to my interest in computer science as well as psychology and cognitive neuroscience (CNS). Not totally unexpectedly, then, I obtained my master’s degree in both AI and CNS. At the moment, I am a PhD candidate at the Donders Institute, working on fundamental CNS questions with a focus on visual perception. Still, my AI background is clearly visible throughout my PhD research, and I am involved in MindAffect part-time.

My involvement in MindAffect dates back to my bachelor’s, when I met Peter and Jason and learned about their research. I was very eager to work on brain-computer interfaces (BCIs), both because the technology is cool in itself and because it literally combines brains and computers, the two things that drive my curiosity most. Together with Peter and Jason, I started the ‘noise-tagging’ project, an early version of the BCI that is developed at MindAffect today.

As part of this noise-tagging project we published an article in 2015 demonstrating the potential of our core BCI algorithms [1]. In short, the BCI presents the user with several buttons, each of which flashes with its own unique sequence. The brain responds very distinctly to a brief flash, so each unique sequence also evokes a specific pattern of brain activity. We record this brain activity with electroencephalography (EEG) and use machine learning techniques to train a model of the user-specific response to a flash. This model can predict what the response to any sequence of flashes would look like. We can then find the flash sequence whose predicted response most resembles the recorded brain activity, and thus identify the button the user intended to select. In the article we show that we can do this reliably with only a small amount of data.
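To make this concrete, here is a minimal sketch of the template-matching idea in Python. The toy data, the single-channel EEG, and names like flash_response are my own illustrative assumptions, not our actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

n_samples = 120              # two seconds of single-channel EEG at 60 Hz
n_buttons = 4

# Each button flashes with its own pseudo-random binary sequence.
sequences = rng.integers(0, 2, size=(n_buttons, n_samples))

# The learned user-specific response to a single flash (illustrative).
flash_response = np.array([0.0, 0.5, 1.0, 0.4, -0.3, -0.1])

def predict_eeg(sequence, response):
    """Predict the EEG evoked by a flash sequence: a superposition of
    single-flash responses, i.e. the sequence convolved with the response."""
    return np.convolve(sequence, response)[: len(sequence)]

# Simulate EEG as the response to button 2 plus noise.
eeg = predict_eeg(sequences[2], flash_response) + rng.normal(0, 0.5, n_samples)

# Decoding: correlate the recorded EEG with each button's predicted
# response and select the best-matching button.
scores = [np.corrcoef(eeg, predict_eeg(s, flash_response))[0, 1]
          for s in sequences]
print("decoded button:", int(np.argmax(scores)))  # should print 2
```

The nice property here is that a single short flash response, once learned, generalizes to every sequence: adding more buttons only means predicting and comparing more templates, not training anything new.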

In 2017 we were nominated for the BCI Award, which recognizes outstanding research in the BCI field. For this we wrote a book chapter describing several improved analysis pipelines for our BCI [2]. Amongst these improvements was a great finding: a way to use our BCI without the need for calibration. Specifically, as noted above, the model needs to learn the user-specific brain response to a flash. Normally, this is done in a training session in which data is acquired to calibrate the model. We have now developed a method that does not need such calibration time: a true plug-and-play method. We are currently validating and evaluating this method scientifically in human experiments.
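To illustrate the flavor of calibration-free decoding, here is a sketch under my own simplifying assumptions (single-channel EEG, plain least squares): for each button we hypothesize that it was the attended one, fit the flash response from the data under that hypothesis, and keep the hypothesis that explains the EEG best. The actual method in [2] is more sophisticated; everything below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

n_samples, n_buttons, response_len = 240, 4, 6
sequences = rng.integers(0, 2, size=(n_buttons, n_samples))

def design_matrix(sequence, length):
    """Build X such that X @ r equals the flash sequence convolved with a
    length-`length` flash response r (truncated to len(sequence))."""
    X = np.zeros((len(sequence), length))
    for lag in range(length):
        X[lag:, lag] = sequence[: len(sequence) - lag]
    return X

# Simulate EEG: the user attends button 1, with an unknown flash response.
true_response = np.array([0.0, 0.6, 1.0, 0.4, -0.3, -0.1])
eeg = design_matrix(sequences[1], response_len) @ true_response
eeg += rng.normal(0, 0.5, n_samples)

# Calibration-free decoding: fit the response per hypothesis, score the fit.
scores = []
for seq in sequences:
    X = design_matrix(seq, response_len)
    r, *_ = np.linalg.lstsq(X, eeg, rcond=None)   # fitted flash response
    scores.append(np.corrcoef(X @ r, eeg)[0, 1])  # goodness of fit
print("decoded button:", int(np.argmax(scores)))  # should print 1
```

Because the winning hypothesis both identifies the button and yields an estimate of the user’s flash response, such a scheme needs no separate training session.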

In the meantime, we are also working on making the BCI more practical, for instance by reducing the number of electrodes through optimal placement over the visual cortex at the back of the head; see the recently published article [3]. Additionally, we are working on changing the sensory modality used to drive the BCI. Currently this is mostly visual, as the BCI relies on visual flashes, but we can also use sequences of auditory tones or small tactile stimulators to open up even more new ways of interaction. All these endeavors are really exciting, so stay tuned!

Feel free to contact me at jordy@mindaffect.nl or follow me on Twitter @ThielenJordy.

[1] Thielen, J., van den Broek, P., Farquhar, J., & Desain, P. (2015). Broad-band visually evoked potentials: Re(con)volution in brain-computer interfacing. PLoS ONE, 10(7), e0133797.

[2] Thielen, J., Marsman, P., Farquhar, J., & Desain, P. (2017). Re(con)volution: Accurate response prediction for broad-band evoked potentials-based brain-computer interfaces. In Brain-Computer Interface Research (pp. 35-42). Springer, Cham.

[3] Ahmadi, S., Borhanazad, M., Tump, D., Farquhar, J., & Desain, P. (2019). Low channel count montages using sensor tying for VEP-based BCI. Journal of Neural Engineering.