I recently attended a talk on brain-machine interfaces (BMI) given by José M. Carmena from Berkeley. Carmena described his current research into thought control of computers via brain-implanted electrode arrays. The subjects were macaque monkeys, and the task was controlling a cursor: directing it toward a target and “grasping” it. Successful execution of the task triggered a reward for the monkey: juice.
Initially, the monkey performed the task by manipulating a control stick. This phase allowed the monkey to learn the task and allowed the researchers to identify motor neurons that could serve as stable outputs for the thought-control phase. Additionally, a simple linear regression model was trained on those neurons to decode spike output into action commands for the cursor. In the later stage, the control stick was removed, and the monkeys controlled the cursor directly through thought. After several days of learning, the monkeys achieved a high success rate, even at the start of each session. This suggests that single-day learning was being consolidated into longer-term motor memory, as occurs for normal everyday motor tasks. The typical example for humans is riding a bicycle, a skill that persists once learned.
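To make the decoding step concrete, here is a minimal sketch of how such a linear decoder might be trained. Everything specific here (the neuron count, the bin size, the least-squares fit, the synthetic data) is my own assumption for illustration, not a detail from Carmena's talk: the idea is simply to fit a linear map from recorded firing rates to cursor velocity using data from the joystick phase, then apply it to new neural activity in the brain-control phase.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 32 recorded units, 500 time bins of training data
# collected while the monkey uses the joystick.
n_neurons, n_bins = 32, 500
true_W = rng.normal(size=(n_neurons, 2))  # unknown rates -> (vx, vy) mapping

# Synthetic firing rates (spikes per bin) and the cursor velocities
# observed during the joystick phase, with some noise.
rates = rng.poisson(lam=10.0, size=(n_bins, n_neurons)).astype(float)
velocity = rates @ true_W + rng.normal(scale=0.5, size=(n_bins, 2))

# Linear regression: find decoder weights W_hat minimizing
# ||rates @ W_hat - velocity||^2.
W_hat, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Brain-control phase: each new bin of firing rates is decoded into a
# velocity command for the cursor, with no joystick involved.
new_rates = rng.poisson(lam=10.0, size=n_neurons).astype(float)
vx, vy = new_rates @ W_hat
```

The point of the sketch is how little machinery sits between the neurons and the cursor: once trained, the decoder is a single matrix multiply per time step, and it is the brain that adapts its output to make that crude map work.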
The brain’s natural plasticity is what allows it to learn to drive a BMI in just days. Neural rewiring occurs, progressively adapting the input-output feedback loop: the monkey observes the cursor on the screen (the input) and manipulates it through thought (the output).
Of course, the task at hand was relatively simple, and performance was lower than with natural motor control of the stick. Still, it is remarkable that with a relatively crude interface (sampling a handful of neurons) and a simple regression model, the task is learned to high accuracy. It’s the brain that is doing most of the heavy lifting, which suggests great potential for brain-computer communication.
So what lies ahead? Firstly, better physical interfacing with the brain. Current approaches are limited to crude, short-lived electrode arrays that sample perhaps hundreds of neurons in a very invasive way. Secondly, better feedback loops. One of the reasons the monkey’s performance is higher in the control-stick phase of the task is that it obtains very valuable feedback, not just from the cursor on the screen, but from its own body: the position and orientation of its arm and hand. All of this feedback is missing in the thought-control phase. So one way forward is not just to read output from neurons, but also to inject input as motor feedback.
But things don’t stop there. Injecting input into the brain need not only be a matter of feedback for control; it can be a way to input sensory information in general. In one experiment, a rat was subjected to artificial stimuli corresponding to a virtual object, via direct stimulation of the neurons responsible for processing touch information from its whiskers (yes, its whiskers, you read that right). So one could say that the rat “felt” the touch of a nonexistent object. Taking the idea even further, one could conceivably create entirely new senses, not linked to a specific part of the body, in order to sense our surroundings in ways that do not correspond to anything our bodies currently perceive.
This last line of thought is what I found most interesting, although I should say it was not stated explicitly by Carmena in his talk; it is my own speculation, extrapolating the idea of virtual objects beyond our current senses. But this is just one possibility. In general, brain-computer communication opens up many possibilities, many of which currently belong to the realm of science fiction. But as the speaker said, things that are being experimented with today, like thought control, were the stuff of science fiction twenty years ago.