The Missing Layer in Brain-Computer Integration
- Mijail Serruya
- 7 days ago
The brain-computer interface field has captured serious investor imagination. Companies are developing interfaces using implanted electrodes, ultrasound, and engineered molecules, and hundreds of millions of dollars have flowed in over the past three years. The investors funding this race are not primarily motivated by helping people with ALS or spinal cord injury — they are betting on a mass consumer market: thought-to-text, memory upload, AI symbiosis for healthy adults. As a physician-scientist who has spent 25 years developing brain-computer interfaces, I ask whether and how that path can genuinely help people with disabilities. Walking that path will require adding a layer the current investment thesis has not yet addressed. That layer begins with a question the field has not fully answered: what are we connecting the brain to?
Every major BCI company frames its competitive advantage in terms of bandwidth: more channels, more neurons recorded, faster data rates, greater spatial coverage. The implicit assumption is that the fundamental barrier to meaningful brain-computer integration is information throughput — that if we could read from enough neurons and write to enough sites, AI would solve the rest. This framing treats the brain like a CPU waiting for better peripheral ports. Getting the ports right is necessary. It is not sufficient.
What differentiates human cognition from that of our closest primate relatives? It is not that we have better eyes, ears, or fingers. Our sensory and motor bandwidth is not meaningfully superior to that of chimpanzees. What changed across primate evolution was internal architecture: the multiplication of thalamocortical loops, the expansion of prefrontal and parietal association cortices, and the scaling of cortico-subcortical circuit ratios. Cognitive power scales not with throughput but with combinatorial assembly capacity — the ability to form increasingly complex conceptual combinations from a small set of canonical circuit motifs refined over hundreds of millions of years.
Adding recording channels to the brain does not change its computational architecture. It provides more windows through which to observe the same underlying processes. A decoder reading from ten thousand neurons in motor cortex still interfaces with a system whose output is fundamentally constrained by the architecture of the corticospinal tract and of the brain as a whole — not by the number of recording channels. A computer with a thousand USB ports and an outdated CPU is less useful than a computer with two ports and a powerful CPU.
This is not a theoretical concern — it is already visible in the clinic. Fifty years of attempting to repurpose deep brain stimulation systems built for Parkinson's disease to other indications — a rational approach given an established safety record — has shown that shifting the anatomical target does not straightforwardly yield efficacy. The companies now building sensorimotor BCIs will face the same pressure to scale toward aphasia, executive dysfunction, and memory impairment. The architectural foundation for that expansion needs to be laid now, in parallel with the hardware race, not as an afterthought once the devices are approved.
So as the I/O engineering matures, the question of what the brain connects to becomes the central one. I propose that the useful unit of the next investment wave is not the input/output device but what I call the auxilium: an extracranial auxiliary brain system linked bidirectionally to key hubs in the patient's brain. The auxilium has two functions. The first is virtual white matter — reconstructing disrupted pathways so that areas disconnected by stroke, injury, or disease can resume coordinated computation. The second is virtual gray matter — providing extracranial computational modules that expand the brain's processing capacity beyond what the skull contains. Those modules can be instantiated in neuromorphic hardware, digital simulation, or engineered biological neural tissue such as cerebral organoids sustained on multi-electrode arrays. Think of this as an AWS for neural computation: modular, scalable, remotely accessible processing that the brain can learn to use as its own.
We already have closed-loop neural devices that detect a signal at one brain location and trigger a response at another. Approved systems for epilepsy and Parkinson's disease use this principle, as does research showing that detecting an “unsuccessful encoding” signature in one brain region and delivering a salience-boosting stimulus elsewhere can improve verbal memory. These are genuine advances. But in every current case, the external system acts as either a reversible neuromodulator — suppressing pathological activity — or a nonspecific plasticity enhancer. None uses the intermediary as a functional substitute for lost gray and white matter: actively reconnecting areas that have been disconnected by disease or injury, or replacing damaged or lost areas entirely, in real time.
That is the distinction the auxilium is designed to exploit, and it is testable today. People with epilepsy undergoing stereoelectroencephalographic monitoring already have electrodes in multiple brain regions simultaneously. With appropriate consent, we can ask whether routing a decoded signal from one region through an external computational module — one that respects the oscillatory and circuit architecture of the original pathway — and returning it as stimulation to a downstream target produces functional recovery that pure neuromodulation does not.
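To make the proposed loop concrete, here is a minimal sketch of the routing logic: decode activity at an upstream site, pass it through an external module standing in for the disrupted pathway, and emit a stimulation command for a downstream target. Everything here — the function names, the scalar feature, the gain and threshold values — is illustrative, not a description of any existing device.

```python
import numpy as np

def decode_upstream(samples: np.ndarray) -> float:
    """Reduce a window of upstream recordings to a single feature.
    Here a crude stand-in for band power: mean squared amplitude."""
    return float(np.mean(samples ** 2))

def virtual_white_matter(feature: float, gain: float = 0.8) -> float:
    """Placeholder for the external module. A real module would respect
    the oscillatory and circuit structure of the original pathway;
    this one merely scales the feature and clips it to a safe range."""
    return min(gain * feature, 1.0)

def stimulation_command(drive: float, threshold: float = 0.2) -> dict:
    """Map the routed drive to a downstream stimulation command."""
    return {"stimulate": drive >= threshold, "amplitude": round(drive, 3)}

# One pass of the loop on a simulated 10-sample recording window.
rng = np.random.default_rng(0)
window = rng.normal(0.0, 0.5, size=10)
cmd = stimulation_command(virtual_white_matter(decode_upstream(window)))
print(cmd)
```

The point of the sketch is the shape of the loop, not its contents: the decoder, the routing module, and the stimulation policy are separable components, which is what lets the middle stage be swapped for neuromorphic hardware, digital simulation, or engineered tissue without changing the interface on either side.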
The companies that learn the most about brain architecture — not just the input-output device — will have a decisive advantage when the hardware matures. The auxilium is also the architecture that generalizes across the clinical spectrum. The physician of 2035 treating a patient with traumatic brain injury, progressive multiple sclerosis, or complex aphasia will need a modular, customizable tool informed by how the brain actually computes. The I/O wave is building that tool's interface. The auxilium is its operating system.
The steps needed to benefit people with disabilities are necessary not just to de-risk safety concerns for a consumer market, but to map out what the technology can actually deliver. The question is whether the devices now being built rest on a foundation capable of addressing the full range of neurological disease and injury, or only its most telegenic corner. The consumer market and the clinical mission point toward the same architectural destination. Physicians should not stand on the sidelines of this conversation. The time to shape the foundation is now, while it is still being poured.



