How to get this cutting-edge technology out of the lab
Our first commercial offering addresses the challenges of training Motor BCI participants and their decoders during longitudinal and complex behavioral studies.
As eloquently articulated by Dobkin (2007), instabilities of neurobiological origin threaten the clinical readiness of Motor BCI:
“Successful deployment of BCI technology depends on the incorporation of cues and feedback during training and practice, as well as a mathematical algorithm to transform neural activity, especially from intracortical bursts, into a control signal. […] Adaptations in the control of electrical potentials used for BCI may arise from changes in neuronal tuning to parameters of movement, in the variability of neuronal firing as practice and reward proceed, in Hebbian strengthening of neuronal ensembles with remapping of representations for movements, in recruitment of remote or correlated activity from ensembles within a network, and in other self-regulation and learning-associated processes.”
Similarly, instabilities induced by the recording device itself hamper performance, as assessed by Perge (2013) for intracortical microelectrode arrays:
“Nevertheless, commonly observed signal instabilities could have arisen from array movement, tissue reaction, array material degradation inside the body, or connector issues externally. Consistent with the contribution of these physical factors, electrode impedance and the number of recorded action potentials have been observed to decrease over months (Parker et al., 2011, Prasad and Sanchez, 2012), and spike amplitudes and root-mean-squared noise show day-to-day and within day changes and an overall signal amplitude decrease on average by ∼2–4%/month (Chestek et al., 2011, Linderman et al., 2006, Santhanam et al., 2007). Whatever the cause, and whether amplitudes increase or decrease, signal changes can be substantial, as 60% of the waveforms recorded with silicon platform arrays in monkey have been reported to change across a 15 day interval (Dickey et al., 2009).”
In Transferring Decoded Biomechanics onto an Avatar and Movement Is Complex, I further discussed how the relationship between cortical activity and movement variables may change due to the visual information about the environment and objects in it, the context in which a task is performed, and more.
In State-of-the-Art Motor BCIs we observe that, while lower-dimensional tasks may be calibrated in a matter of seconds or remain calibrated for months, achieving stable decoding of fine movements with many simultaneous DoF across the body requires large amounts of labeled training data.
Provided task design can keep up with the labeling, exposing Deep Learning models to multiple sessions and participants over extended periods can yield decoder generalization (across subjects and movements) and long-term stability through manifold alignment and Transfer Learning.
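To make the manifold-alignment idea concrete, here is a minimal, purely illustrative sketch (not uCat's or any published decoder's actual method): orthogonal Procrustes alignment maps a later session's latent neural factors back onto an earlier session's coordinates, so a decoder fitted on day 0 can be reused despite recording drift. All array names, shapes, and the simulated drift are assumptions.

```python
import numpy as np

def procrustes_align(day0_latents, dayN_latents):
    """Find the orthogonal map R that carries day-N latent factors into
    day-0 coordinates (minimizing ||dayN @ R - day0|| in Frobenius norm)."""
    # Cross-covariance between the two sessions' latent trajectories
    m = day0_latents.T @ dayN_latents
    u, _, vt = np.linalg.svd(m)
    r = vt.T @ u.T  # optimal rotation: day-N -> day-0 coordinates
    return dayN_latents @ r

rng = np.random.default_rng(0)
latents = rng.standard_normal((200, 10))  # 200 samples, 10 latent factors
rotation, _ = np.linalg.qr(rng.standard_normal((10, 10)))
drifted = latents @ rotation              # simulated cross-session drift
aligned = procrustes_align(latents, drifted)
print(np.allclose(aligned, latents, atol=1e-8))  # drift undone
```

Real pipelines add latent-space extraction (e.g. factor analysis per session) and handle changing neuron identities, but the alignment step itself is this small.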
The market entry of novel chronic recording modalities with high channel counts (Neural Data for Clinical Motor BCIs) and the influx of computational researchers into the field further stimulate the demand for well-labeled, high-dimensional Motor BCI experiments.
For these Motor BCI researchers and their participants, a quality immersive Virtual Reality system will help:
Model complex movement tasks the participant can most intuitively attempt to mimic,
Precisely control participant audiovisual stimuli,
Automate the instruction and supervision during tasks,
Prolong use with an engaging and distraction-free immersive environment,
Quickly develop, deploy, and standardize custom tasks,
Interpret the effect of visual behavior using native eye-tracking,
Maximize the sense of embodiment during closed-loop tasks,
Validate the performance of activities of daily living in real-life environments,
Test drive the control of external devices,
Reduce costs, risks, and friction associated with frequent robotic operation,
And ultimately solve the high-dimensional data collection challenge.
The main obstacle in developing any commercial (therefore scalable) offering is continuously validating its value. If anything, this article attempts to accomplish just that from various perspectives. However, perhaps the most essential perspective has been lacking — that of an experienced Motor BCI User.
Ian Burkhart’s life-affirming journey, including his participation in Motor BCI experiments for nearly half a decade, is one such perspective. Here is what he had to say about Virtual Reality:
Video 22; uCat, 2023: Interview with Ian Burkhart, 2023 — Ian’s insight profoundly impacted the design of uCat’s user experience.
uCat System
Whether the upstream neural signals will be acquired using an ECoG, iBCI, sEEG, or any other modality is ultimately at the discretion of our research partner.
A Virtual Reality system like the uCat System does not need to consume neural data, and therefore does not impose any upstream signal acquisition or data modeling requirements other than a schema for the expected decoded motor intents ("DMIs").
Instead, we expect the uCat System to receive DMIs — as a formatted stream of the individual movements decoded from the User’s attempted speech and movements of their head, face, arms, hands, legs, and so on — and to map that behavior onto the User’s Avatar.
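As a rough illustration of what such a DMI schema might look like, here is a hypothetical sketch. The field names, units, and defaults are our assumptions for this excerpt, not a published uCat specification.

```python
from dataclasses import dataclass, field

@dataclass
class DecodedMotorIntent:
    """One message in the DMI stream a Motor BCI would emit (illustrative)."""
    timestamp_us: int                  # decoder-side capture time, microseconds
    effector: str                      # e.g. "head", "right_hand", "speech"
    kinematics: list[float] = field(default_factory=list)  # joint angles/velocities
    speech_text: str = ""              # non-empty only for speech DMIs
    confidence: float = 1.0            # decoder certainty in [0, 1]

# The stream is then an ordered sequence of such messages:
frame = DecodedMotorIntent(timestamp_us=1_700_000_000,
                           effector="right_hand",
                           kinematics=[0.12, -0.4, 0.9],
                           confidence=0.87)
```

Any upstream modality (ECoG, iBCI, sEEG) that can populate messages of this shape could, in principle, drive the system.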
Much of our design follows the thinking in Expressive Virtual Body ("Avatar"), although we will only reveal selected parts today.
The uCat System can be divided into three core components:
uCat Client App, responsible for: a) Presenting the audiovisual experience to the User, b) Capturing VR events and logs, c) Requesting movement variables from a VR runtime to animate an Avatar,
uCat Expression Plugin, responsible for: a) Receiving a low-latency stream of DMIs from a Motor BCI, b) Computing the physics of the Avatar’s musculoskeletal dynamics, c) Transforming the movement DMIs into poses for the Avatar skeleton, d) Routing the speech DMIs to a virtual audio input device, e) Feeding the resulting movement variables to a VR Runtime (overriding other control modalities),
uCat Server, responsible for: a) Hosting a simple User database, b) Networking during multi-user sessions.
The prototype unveiled today wholly focuses on the VR user experience inside the uCat Client App (component 1a).
Developing these components enables other commercial use cases that may appeal more to some Users or their carers. For example, the uCat Expression Plugin, which represents DMIs in formats consumable by different applications, may feed the audio it synthesizes into a local speaker so that the User can engage with others in the room. Consequently, transparent solutions like the uCat System accelerate the emergence of a vibrant application ecosystem for Motor BCI hardware, further catering to the specific needs of individual Users.
Diagram 1; uCat, 2024: uCat System High-Level Component Diagram; 1) uCat Client App, receiving and applying movement variables to the User’s virtual Avatar, 2) uCat Expression Plugin, obtaining DMIs as kinematic and textual/audio stream from the Motor BCI, applying them to a realistic Avatar simulation, and overriding the runtime’s control input, 3) uCat Server (currently not under development) managing networking and social aspects of multiplayer VR and storing simple User data.
Part 18 of a series of unedited excerpts from uCat: Transcend the Limits of Body, Time, and Space by Sam Hosovsky*, Oliver Shetler, Luke Turner, and Cai Kinnaird. First published on Feb 29th, 2024, and licensed under CC BY-NC-SA 4.0.
uCat is a community of entrepreneurs, transhumanists, techno-optimists, and many others who recognize the alignment of the technological frontiers described in this work. Join us!
*Sam was the primary author of this excerpt.