Tag: brain-computer interface

  • Neuralink in 2026: What the Human Patients Can Actually Do

    In January 2024, a 29-year-old quadriplegic named Noland Arbaugh underwent a two-hour surgery at the Barrow Neurological Institute in Phoenix during which a robotic system threaded 64 ultra-thin polymer filaments — each thinner than a human hair — carrying 1,024 electrodes into the motor cortex of his brain. The device they were connected to, Neuralink’s N1 implant, is a wireless, rechargeable chip roughly the size of a quarter that sits flush against the skull, invisible from the outside. On his first day using the device, Arbaugh broke the world record for brain-computer interface cursor control speed, hitting 4.6 bits per second. By May 2024, he’d pushed that to 8.0 bits per second. By the end of the year, Neuralink claimed he’d exceeded 9 bits per second — approaching the median able-bodied mouse user’s roughly 10 bits per second. He was playing chess, browsing the internet, drawing digital images, playing Civilization VI and Mario Kart, sending messages, and livestreaming on X, all by thinking about moving his fingers. He hadn’t moved his fingers since a diving accident dislocated two vertebrae in 2016. “Y’all are giving me too much,” Arbaugh said in an early update. “It’s like a luxury overload. I haven’t been able to do these things in 8 years, and now I don’t know where to even start allocating my attention.”
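    The bits-per-second numbers are throughput figures from cursor benchmarks (Neuralink’s is a grid-target task called Webgrid), computed in the Fitts’s-law style: each acquired target conveys log2(D/W + 1) bits, where D is the distance to the target and W is its width, divided by the time taken to hit it. A minimal sketch, with distance, width, and timing values invented for illustration rather than taken from Neuralink’s data:

```python
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation of Fitts's index of difficulty, in bits."""
    return math.log2(distance / width + 1)

def throughput_bps(distance: float, width: float, movement_time_s: float) -> float:
    """Bits per second for a single target acquisition."""
    return index_of_difficulty(distance, width) / movement_time_s

# Invented illustration: a target 200 px away and 20 px wide,
# acquired in 0.75 s, works out to about 4.6 bits per second.
bps = throughput_bps(200, 20, 0.75)
```

    On this scale, moving from 4.6 to 9 bits per second means hitting harder targets, or the same targets roughly twice as fast, which is why the metric is a reasonable proxy for everyday pointing ability.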

    That was Patient 1. As of early 2026, Neuralink has implanted 21 people.

    What the first patients experienced

    The story of the first year of human Neuralink implants is a story about a device that works, a device that broke, and a device that was fixed — in that order. About a month after Arbaugh’s surgery, the thread retraction problem hit. Several of the ultra-thin electrode threads pulled back from Arbaugh’s brain tissue, reducing the number of active electrodes to roughly 15% of the original 1,024. Performance degraded sharply. Arbaugh described the prospect of losing the device’s benefits as emotionally devastating — he’d had eight years of quadriplegia, six weeks of restored digital independence, and was now watching that independence degrade in real time. The FDA had flagged thread retraction as a potential risk during the approval process. Reuters reported that Neuralink had observed similar retraction in animal testing. The fact that the most predictable failure mode was the one that actually materialized was not reassuring.

    Neuralink’s response was a software workaround. Engineers modified the decoding algorithms to extract more signal from fewer electrodes, compensating for the hardware loss through computational gain. By July 2024, the threads had stabilized — no further retraction — and Arbaugh’s performance had recovered to competitive levels. For subsequent patients, Neuralink modified its surgical technique. The second patient, identified publicly only as “Alex,” received his implant in July or August 2024 and did not experience thread retraction. Alex — who has a spinal cord injury — has used the device for CAD design work, gaming, and daily computer tasks. A third patient was disclosed by Elon Musk in January 2025 during an online interview. By mid-2025, nine patients had been implanted. Two of them received their implants on the same day in late July 2024 — a scheduling milestone that signaled Neuralink’s surgical capacity was scaling faster than the typical early-stage medical device trial, where patients are separated by weeks or months for safety monitoring.

    The most consequential patient story after Arbaugh belongs to Brad Smith, an ALS patient who is completely non-verbal and cannot move anything except his eyes. Smith relies on a ventilator to stay alive. Before Neuralink, his communication options were limited to eye-tracking systems with slow, frustrating interfaces. After receiving the N1 implant, Smith used the device to control a computer cursor, navigate a MacBook Pro, and — in an April 2025 video posted on X — narrate his own story using an AI-generated replica of his pre-ALS voice, cloned from past recordings and controlled in real time through the brain-computer interface. The practical consequence of combining a Neuralink BCI with voice-cloning AI is that a person who has lost the ability to speak can produce speech that sounds like them, in real time, by thinking about what they want to say. Whether that qualifies as “communicating using telepathy,” as Neuralink has described it, depends on your tolerance for marketing language. What it definitely qualifies as is a functional communication channel that didn’t exist for Smith before the implant.

    What the device actually is — and isn’t

    Arbaugh’s 10-hour-per-day usage by August 2025 — 18 months post-surgery — is the most important data point in the entire PRIME study, because it’s a usage metric, not a performance metric. Usage measures whether a real person with a real disability finds the device useful enough to use it all day. The answer, for Arbaugh, is yes: he uses the device to study, read, game, schedule interviews, manage everyday tasks, and communicate. He has re-enrolled in college and started a business. The device needs to be charged roughly every five hours, which means he charges it during breaks the way someone charges a phone — an annoyance, not a dealbreaker. Calibration is a more significant friction. Arbaugh has described spending as long as 45 minutes recalibrating the mapping between his imagined movements and the cursor — a mapping that degrades over hours and days as neural patterns shift. Neuralink’s engineering team has been iterating on the calibration software throughout the trial, and the recalibration time has reportedly decreased, but calibration remains the single biggest UX friction in the system.

    The N1 implant is wireless — a meaningful distinction from competitors like Blackrock Neurotech, whose Utah Array system requires a wired connection through the skull to an external receiver. Wireless operation means patients can use the device without being tethered to equipment, which is the difference between a research tool and something that functions in daily life. The tradeoff is battery life, power management, and data throughput — the wireless link constrains how much neural data can be transmitted in real time, which in turn constrains the decoding algorithms’ resolution.

    What the device is not — at least not yet — is a general-purpose neural interface. The N1 implant records from the motor cortex, which handles planned movements. The current decoding pipeline translates imagined finger and hand movements into cursor position. It does not read thoughts. It does not access memory. It does not interface with emotions or subjective experience. It maps one specific category of neural activity — motor intention — to one specific category of output — cursor control. Within that narrow channel, it works remarkably well. The breadth of what Arbaugh does with cursor control — gaming, browsing, studying, communicating — demonstrates that cursor control on a standard computer is a surprisingly powerful restoration of independence for someone who previously needed a mouth stick placed by a caregiver to interact with a screen.

    The competitive landscape

    The framing that Neuralink is “first” requires qualification. A 2025 systematic review estimated that approximately 80 people worldwide had received implantable brain-computer interfaces before Arbaugh’s surgery. BrainGate, the academic BCI consortium led by Brown University, has been implanting patients since 2004 using Blackrock Neurotech’s Utah Array — a rigid silicon electrode array that preceded Neuralink’s flexible threads by two decades. Arbaugh was the first recipient of a Neuralink implant. He was not the first person to control a cursor with a brain implant. What Neuralink brought to the field was engineering scale: wireless operation, 1,024 electrodes (versus BrainGate’s roughly 100), robotic surgical insertion, and — critically — the funding and marketing infrastructure to run a multi-country clinical trial at a pace academic labs cannot match.

    The competitors are not standing still. Synchron, an Australian-American company, takes a less invasive approach — its Stentrode device is inserted through the jugular vein and lodged in a blood vessel adjacent to the motor cortex, avoiding open brain surgery entirely. Synchron has its own human patients and its own clinical trial. Precision Neuroscience uses a thin, flexible electrode array called Layer 7 that sits on the brain’s surface rather than penetrating it, and can be removed without permanent tissue damage. Blackrock Neurotech has two decades of implant data and is developing its own wireless system. Paradromics is building a high-bandwidth BCI called Connexus designed for thousands of simultaneous channel recordings.

    Each approach trades off invasiveness against signal quality. Neuralink’s penetrating electrodes produce the highest-resolution recordings but carry the highest surgical risk and face challenges like thread retraction. Synchron’s endovascular approach is safer but records from fewer neurons at lower resolution. Precision’s surface electrodes are reversible but may not capture the single-neuron resolution that enables the fastest cursor control. The field is converging on the same functional goals — motor control restoration, communication for non-verbal patients, and eventually sensory restoration — through fundamentally different engineering strategies.

    What’s coming next

    Neuralink’s pipeline beyond the N1 motor cortex implant includes two FDA Breakthrough Device designations that define where the company is heading. In September 2024, the Blindsight implant — designed to stimulate the visual cortex to restore limited vision in people who have lost both eyes or their optic nerve — received Breakthrough Device status. Musk has claimed Blindsight will enable blind people to see, though IEEE Spectrum and other expert outlets have noted that the resolution achievable with current electrode density is likely to produce something closer to phosphene patterns than natural vision. Human trials for Blindsight were projected for late 2025 or early 2026. In May 2025, Neuralink received a second Breakthrough Device designation for a speech restoration system targeting people with ALS, stroke, cerebral palsy, and spinal cord injuries — a system that would decode attempted speech movements from motor and language areas, potentially enabling more natural communication than cursor-based text output.

    The operational scale is also shifting. Neuralink’s PRIME trial expanded from the United States to Canada (with Toronto’s University Health Network performing Canada’s first Neuralink surgeries in August and September 2025), the United Kingdom, and the United Arab Emirates. The trial enrolled 21 participants by early 2026. A $650 million Series E round in June 2025 valued the company at $9 billion. Neuralink has announced plans for high-volume production and automated surgical procedures targeting 2026 — a transition from artisanal neurosurgery to something closer to industrial medical device deployment. Whether the surgical robot, the implant reliability, and the regulatory pathway can support that transition on Musk’s stated timeline is the open question. If any theme emerges from Neuralink’s first two years of human data, it’s that the device works better than skeptics expected and slower than Musk promised — which, for a medical device that is literally inside someone’s brain, is probably the right place to be.

    The honest assessment

    Neural engineering expert Kip Ludwig, quoted by Reuters after Arbaugh’s initial demonstration, said the results were promising but not a breakthrough — that the technology remained at an early stage. Neuroscientist Miguel Nicolelis noted that similar multi-electrode recordings had been achieved in his laboratory in the early 2000s. Both points are technically accurate and contextually incomplete. What Neuralink has done that prior BCI research did not is produce a wireless, fully implanted, cosmetically invisible device that a quadriplegic person uses for 10 hours a day to manage his daily life, attend college, and run a business — and then demonstrated it could be replicated across 21 patients in four countries within two years. The individual technical components are not novel. The integration into a device that functions as a consumer product for people with severe disabilities — rather than as a laboratory research tool — is novel. Whether the thread retraction problem, the calibration friction, the five-hour battery life, and the motor-cortex-only decoding pipeline are solvable engineering problems or fundamental constraints will determine whether Neuralink becomes a medical device company or remains an expensive research project. The first two years of human data suggest the former, but the history of medical devices that looked promising at 21 patients and failed at 2,100 is long enough that no honest assessment would call the outcome settled.

    This is the kind of technology our Neuroprosthetics course was built to explain — where a chip the size of a quarter and 1,024 electrodes thinner than a human hair gave a man who hadn’t moved his fingers in eight years the ability to beat the world record for BCI cursor control on his first day, and then spent the next 18 months teaching us what “working” actually means when the device is inside a living brain.

  • Brain-to-Brain Communication: Where the Science of Direct Neural Links Actually Stands

    In 2019, researchers at the University of Washington published a paper in Scientific Reports describing BrainNet—a system that allowed three people, seated in separate rooms with no ability to see, hear, or talk to each other, to collaboratively play a Tetris-like game using only their brain signals. Two “senders” could see the game board and decide whether a falling block needed to be rotated. They communicated their decisions to a “receiver” who couldn’t see the board but controlled the game. No words. No gestures. No screens shared between them. The senders’ decisions were extracted via EEG, transmitted over the internet, and delivered to the receiver’s visual cortex via transcranial magnetic stimulation, where they appeared as flashes of light—phosphenes—that the receiver interpreted as instructions. Five groups of three people tested the system and achieved 81 percent accuracy.

    That’s the headline. Here’s the fine print: the information transmitted was binary. Yes or no. Rotate or don’t rotate. One bit of data per transmission cycle. The senders communicated their decisions by staring at lights flashing at different frequencies—15 hertz for one answer, 17 hertz for the other—which entrained their brains’ electrical output at the corresponding frequency, readable by EEG. The receiver experienced either a flash of light (rotate) or no flash (don’t rotate). The “brain-to-brain communication” was, functionally, a very elaborate way to send the equivalent of one binary digit from one head to another. IEEE Spectrum described an earlier version of this approach as “telepathic Morse code.”
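    The sender-side trick, SSVEP “frequency tagging,” is simple enough to sketch. Staring at a 15 Hz flicker raises 15 Hz power in EEG over visual cortex, so the decoder only has to ask which tagged frequency dominates the spectrum. A toy version on synthetic data (the sampling rate, window length, and noise level are invented; the paper’s actual signal processing is more careful):

```python
import numpy as np

FS = 256          # sampling rate in Hz (illustrative)
DURATION = 2.0    # seconds of EEG per decision
FREQS = {15.0: "rotate", 17.0: "don't rotate"}  # pre-agreed binary code

def decode_ssvep(eeg: np.ndarray) -> str:
    """Return the decision whose tagged frequency carries the most power."""
    power = np.abs(np.fft.rfft(eeg)) ** 2
    bins = np.fft.rfftfreq(len(eeg), d=1.0 / FS)
    tag_power = {f: power[np.argmin(np.abs(bins - f))] for f in FREQS}
    return FREQS[max(tag_power, key=tag_power.get)]

# Synthetic "EEG": a 15 Hz entrained oscillation buried in noise.
t = np.arange(0, DURATION, 1.0 / FS)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 15.0 * t) + rng.normal(0.0, 1.0, t.size)
decision = decode_ssvep(eeg)  # the 15 Hz tag dominates: "rotate"
```

    One decision per two-second window is at best half a bit per second before errors, which is the bandwidth reality behind the “telepathic Morse code” description.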

    This is what brain-to-brain communication actually looks like in 2026: technically real, scientifically genuine, and approximately as far from telepathy as a tin-can telephone is from a 5G network.

    What exists

    The field has produced a series of legitimate demonstrations, each constrained by the same fundamental bottleneck: you can get information out of a brain with reasonable resolution using EEG or implanted electrodes, but you can deliver information into a brain noninvasively only through crude channels—magnetic pulses that trigger phosphenes (perceived flashes of light) or vague sensations. The input side is the constraint. Reading a brain is hard. Writing to a brain is harder by orders of magnitude.

    The 2014 Starlab experiment was the first reported human brain-to-brain transmission. A sender in India imagined moving his hands or feet to encode binary data through EEG. The signal was emailed to France, where a TMS device delivered pulses to a blindfolded receiver’s visual cortex, producing phosphenes. The receiver reported the flashes verbally, and the team decoded the message. The transmitted words: “hola” and “ciao.” The transmission rate was approximately two bits per minute. The entire process took over an hour.

    BrainNet in 2019 scaled the architecture to three people and demonstrated something genuinely interesting beyond the binary channel: when the researchers injected noise into one sender’s signal, the receiver learned to preferentially weight the more reliable sender—a trust calibration process that happened entirely through brain-to-brain signals without any conscious strategy. The receiver’s brain was doing signal integration across two noisy sources, the same computation that underlies sensory integration in normal perception.

    Invasive brain-computer interfaces—Neuralink, Synchron, Blackrock Neurotech—are advancing rapidly on the reading side. Neuralink implanted its first human chip in January 2024 under its PRIME study, enabling a paralyzed patient to type and control a cursor through thought alone. Synchron’s Stentrode sits inside a blood vessel near the brain, avoiding open surgery. The PRIME study has a primary completion date of 2026 and full study completion projected for 2031. These systems are brain-to-computer interfaces, not brain-to-brain—they translate neural signals into digital commands for external devices. But they represent the reading infrastructure that any brain-to-brain system would eventually need.

    On the AI-assisted decoding side, researchers at the University of Texas in 2023 used fMRI scans and large language models to decode continuous thought into coherent text—not single words or binary choices but streams of semantic content, capturing the gist of what a person was thinking about during a story or imagined narrative. Meta has developed noninvasive brain-scanning systems paired with AI models that can decode silently spoken words from brain activity. These aren’t brain-to-brain systems, but they’re solving the bandwidth problem on the reading end: extracting richer, more nuanced information from neural signals than EEG-based approaches can achieve.

    What doesn’t exist

    Telepathy—the transmission of complex thoughts, images, emotions, or experiences from one mind to another—is not close. The demonstrations that exist transmit binary decisions through artificial sensory channels. The receiver doesn’t “hear” the sender’s thought. The receiver sees a flash of light and interprets it according to a pre-agreed code. The brain-to-brain interface is a translation chain: thought → EEG signal → digital encoding → internet transmission → TMS pulse → phosphene → interpretation. At every link in that chain, information is lost. What arrives in the receiver’s brain is not a thought. It’s a stimulus—a magnetically induced visual artifact that carries one bit of information about the sender’s decision.

    The gap between this and actual telepathy is not a gap that incremental engineering will close, because the limiting factor isn’t the technology between the brains. It’s the fundamental problem of neural encoding: we don’t know, for any given thought, which specific neural firing patterns represent it, how those patterns vary between individuals, or how to induce a specific firing pattern in a target brain that would be experienced as the same thought. Brains aren’t standardized hardware. The neural code for “rotate the block” in one person’s motor cortex is not the same pattern in another person’s motor cortex. Translating one person’s neural representation into a stimulus that would produce the same internal experience in another person requires a mapping between two unique neural architectures—a problem neuroscience hasn’t solved and isn’t close to solving.

    What BCI companies are building toward is not telepathy but increasingly high-bandwidth brain-to-computer interfaces that could, in principle, be linked: Brain A → computer → Brain B. Neuralink’s implant reads neural signals at thousands of channels. Future implants will read more. AI decoding systems are getting better at extracting semantic content from neural data. But the write side—delivering complex, precise, meaningful information directly into neural tissue in a way that the receiving brain interprets as a coherent experience—remains the unsolved problem. TMS can trigger phosphenes and crude sensory impressions. It cannot implant a sentence, an image, an emotion, or a memory.

    The timeline problem

    Coverage of brain-to-brain communication tends to imply a trajectory: binary transmission today, sentences tomorrow, telepathy eventually. The trajectory is real in the same way that the Wright Brothers’ 12-second flight in 1903 implied commercial aviation—the physics supports the possibility, but the engineering required to get from demonstration to deployment is measured in decades, not years, and the technical obstacles on the write side are qualitatively different from the obstacles on the read side.

    Reading a brain is an information extraction problem: the neural signals are there, and the challenge is building sensors sensitive enough and algorithms smart enough to decode them. This problem is yielding to better hardware and better AI. Writing to a brain is an information implantation problem: you need to induce specific patterns of activity in specific neural populations at specific times, through skull and tissue, without disrupting the brain’s existing activity. Noninvasive methods (TMS, focused ultrasound, transcranial electrical stimulation) affect large regions of cortex with limited spatial precision. Invasive methods (optogenetics, direct electrical stimulation) can target individual neurons but require surgery, gene therapy, or implanted hardware.

    The honest assessment in 2026: brain-to-computer interfaces are advancing on a trajectory that will produce clinically meaningful products for paralysis, communication disorders, and sensory prosthetics within the current decade. Brain-to-brain communication, in the sense of transmitting complex mental content between two people, requires solving the neural write problem at a resolution and precision that current technology can’t achieve and that current neuroscience can’t specify. The demonstrations are real. The extrapolation to telepathy is premature by a margin that is difficult to estimate because the bottleneck isn’t engineering velocity. It’s a scientific knowledge gap about how brains encode experience—a gap that better instruments may close but that no existing roadmap guarantees.

    Neuralink named its first consumer product “Telepathy.” The name is aspirational in the way that calling the first automobile a “teleporter” would have been. The product lets a paralyzed person control a cursor with their thoughts. That’s extraordinary and useful. It’s not telepathy. The distance between the two is the distance between reading a book and writing one—and in neuroscience, we’re still learning to read.

    We cover brain-to-brain communication alongside spinal cord stimulation, retinal implants, and the full landscape of neural interface technology across our Neuroprosthetics course—including why the hardest problem in connecting two brains isn’t getting the signal out. It’s getting the signal in.

  • Can Brain-Computer Interfaces Restore Movement After Paralysis? The Current Evidence

    There is a man with ALS who has been using a brain-computer interface at home, independently, for over two years. Four microelectrode arrays sit in his left motor cortex, recording from 256 electrodes. He uses the system to control his personal computer—typing, browsing, communicating—through a multimodal BCI that decodes both his attempted speech into text and his attempted hand movements into cursor movements and clicks. In structured tests, the system is 99 percent accurate at outputting his intended words. Over 4,800 hours of use, he has communicated more than 237,000 sentences at roughly 56 words per minute. He works full-time.

    That’s not a laboratory demonstration. That’s not a press release. That’s a BrainGate2 clinical trial participant living his life with a brain-computer interface, reported at Neuroscience 2025 and representing the most sustained, independent, real-world use of a speech and movement BCI ever documented. And it’s one data point in a field that, after two decades of incremental academic progress, is now moving fast enough that the clinical evidence is outpacing most people’s mental model of what’s possible.

    So: can BCIs restore movement after paralysis? The honest answer requires separating three very different things that get conflated in headlines—restored communication (controlling a cursor or generating speech), restored functional movement (moving a paralyzed limb), and restored independent mobility (walking). The evidence is strongest for the first, genuinely promising for the second, and early but real for the third.

    Communication: the problem that’s closest to solved

    The clearest clinical wins in BCI right now are in restoring communication for people who’ve lost the ability to speak or type. This is where BrainGate has the deepest data.

    A March 2026 study published in Nature Neuroscience demonstrated that two BrainGate participants—one with ALS, one with a cervical spinal cord injury—could type on a standard QWERTY keyboard layout by attempting finger movements. Not imagined cursor movements. Not an abstract mental task. Actual attempted typing—the participants thought about pressing specific keys with specific fingers, the implanted microelectrode arrays recorded the neural patterns associated with each attempted movement, and a decoder translated those patterns into keystrokes in real time. The system achieved speeds approaching 90 characters per minute, which is in the range of normal phone typing for a non-disabled person.
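    The decoding step in studies like this is, at its core, pattern classification: each attempted keypress produces a characteristic vector of neural features (firing rates across electrodes, say), and the decoder maps a new vector to the closest known pattern. The real systems use trained neural-network decoders; the nearest-centroid toy below, running on fully synthetic “neural” data, only illustrates the shape of the problem:

```python
import numpy as np

rng = np.random.default_rng(42)
KEYS = list("abcdefgh")   # toy key set; the study used a full QWERTY layout
N_CHANNELS = 96           # feature dimensions, e.g. per-electrode firing rates

# Synthetic ground truth: each key has its own mean neural pattern.
true_means = {k: rng.normal(0.0, 1.0, N_CHANNELS) for k in KEYS}

def calibrate(n_trials: int = 50, noise: float = 0.5) -> dict:
    """Estimate a mean pattern per key from labeled calibration attempts."""
    return {k: (mu + rng.normal(0.0, noise, (n_trials, N_CHANNELS))).mean(axis=0)
            for k, mu in true_means.items()}

def decode(features: np.ndarray, centroids: dict) -> str:
    """Nearest-centroid decode: emit the key whose pattern is closest."""
    return min(centroids, key=lambda k: np.linalg.norm(features - centroids[k]))

centroids = calibrate()
attempt = true_means["c"] + rng.normal(0.0, 0.5, N_CHANNELS)  # attempted 'c'
key = decode(attempt, centroids)  # "c"
```

    The gap between this toy and 90 characters per minute is everything the toy leaves out: nonstationary signals, overlapping movements, language-model priors, and real-time constraints.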

    A separate BrainGate participant at UC Davis achieved 97 percent accuracy on a speech BCI that translates attempted speech into text—the most accurate speech neuroprosthesis ever reported, published in the New England Journal of Medicine. The system reconstructed the patient’s voice from pre-disease recordings, so the synthesized output sounds like him, not like a generic computer voice. That distinction matters more than the engineering might suggest—hearing your own voice come back to you after disease has taken it is not a technical specification; it’s a human experience.

    Neuralink’s participants have demonstrated cursor control, web browsing, and social media use through the N1 implant, with 21 patients now enrolled globally. Synchron’s endovascular BCI—threaded through the jugular vein, no craniotomy required—has enabled an ALS patient to control an iPad, an Apple Vision Pro, and Amazon Alexa using thought alone, all through native accessibility protocols on consumer devices. These are real outcomes in real patients. The communication problem for severe paralysis is not solved, but the clinical evidence now clearly demonstrates that implanted BCIs can restore functional digital communication at speeds that make them practical for daily life.

    Functional movement: the harder problem

    Restoring communication means decoding neural signals and routing them to a computer. Restoring movement means decoding neural signals and routing them back into the body—either to a robotic limb, a functional electrical stimulation system, or a spinal cord stimulator that reactivates the patient’s own muscles below the injury. The decoding part is the same. The output part is enormously more complex.

    BrainGate participants have controlled robotic arms using neural signals since the early 2010s—the 2012 demonstration where a woman with tetraplegia used a BCI-controlled robotic arm to drink coffee from a bottle was a watershed moment in the field. Nathan Copeland, implanted in 2015, used a BCI-controlled robotic arm to fist-bump President Obama in 2016 and later demonstrated bidirectional BCI capability—not just controlling the arm with his brain, but receiving tactile sensation feedback through intracortical microstimulation of his somatosensory cortex. He could feel when the robotic hand touched an object. That sensory feedback loop—reaching, grasping, and feeling what you’ve grasped—is where BCIs start to approximate actual limb function rather than just cursor control applied to a mechanical arm.

    A landmark Neuroscience 2025 report provided the most extensive human safety data ever published on intracortical microstimulation for artificial touch. Five participants received millions of electrical stimulation pulses to their somatosensory cortex over a combined 24 participant-years. The stimulation evoked stable, high-quality tactile sensations in the hand without serious adverse effects. More than half the electrodes continued functioning reliably even after a decade of implantation in one participant. That’s the kind of long-duration safety data the field has needed—demonstrating that you can stimulate the brain to create artificial sensation chronically, over years, without breaking things.

    The most dramatic functional movement results, however, are coming not from BCIs alone but from the combination of BCIs with spinal cord stimulation—and this is where the story gets genuinely exciting.

    Spinal cord stimulation: the other half of the equation

    ONWARD Medical’s ARC-EX system received FDA clearance in December 2024—the first non-invasive spinal cord stimulation device cleared for spinal cord injury. The system places electrodes on the skin at the back of the neck and delivers programmed electrical stimulation to the cervical spinal cord. In the pivotal Up-LIFT trial, published in Nature Medicine, 90 percent of participants with chronic incomplete tetraplegia showed improved upper-limb strength or function. Eighty-seven percent reported improved quality of life. Four participants demonstrated changes in their neurological level of injury, and three improved their AIS (American Spinal Injury Association Impairment Scale) classification—including one participant who moved from complete to incomplete spinal cord injury. That last detail is worth pausing on: a person classified as having a complete injury—no motor or sensory function below the level of the lesion—regained measurable function.

    ONWARD’s implantable system, ARC-IM, goes further. Epidural leads are placed directly on the spinal cord and deliver targeted stimulation that can restore stepping movements in people with complete paraplegia. The research, led by Grégoire Courtine and Jocelyne Bloch at EPFL and Lausanne University Hospital, has produced videos that are almost surreal to watch—people who have been told they will never walk again, standing up and taking steps with epidural stimulation active. A 2025 paper in Science Translational Medicine demonstrated that high-frequency epidural stimulation reduced spasticity and facilitated walking recovery in patients with spinal cord injury, establishing another mechanism by which electrical stimulation of the spinal cord can restore function that was thought to be permanently lost.

    The next logical step—and ONWARD is actively developing this—is pairing spinal cord stimulation with a brain-computer interface. The ARC-BCI system would use a cortical implant to decode the patient’s intended movements, then route those decoded intentions to the spinal cord stimulator, which would activate the appropriate muscles in the correct sequence to produce natural-feeling movement. Brain thinks “step forward.” Decoder translates the intention. Stimulator activates the leg muscles. The patient walks. Not with a robotic exoskeleton strapped to the outside of their body, but with their own legs, driven by their own neural intentions, bridged across the injury by electronics.

    This hasn’t been demonstrated in a full clinical trial yet. It’s in feasibility studies. But every component has been individually validated in humans: the cortical decoder works, the spinal cord stimulator works, and the closed-loop integration is an engineering challenge, not a science challenge. The gap between “each piece works separately” and “the integrated system works reliably in daily life” is real—and it’s the kind of gap that takes years to close—but it’s a gap measured in engineering iterations, not fundamental breakthroughs.

    What “restored movement” actually looks like in practice

    Here’s the part that gets lost in the headlines. When a BCI study reports “restored movement,” the movement being restored is typically not what a healthy person would recognize as normal motor function. A BCI-controlled robotic arm reaches more slowly, grasps less precisely, and fatigues faster than a biological arm (not muscle fatigue, but decoder accuracy degrading as the neural signal drifts over a session). Spinal-cord-stimulation-assisted walking involves extensive preparation, careful calibration, and a level of concentration from the patient that makes it exhausting rather than automatic. These are real functional gains—the difference between being able to grasp a cup and not being able to grasp a cup is enormous when you’re the person holding the cup—but they’re not the seamless restoration of pre-injury function that the promotional materials sometimes imply.

    The trajectory matters more than the current state. The BrainGate participant typing at 90 characters per minute in 2026 is operating a system that typed at roughly 15 characters per minute a decade ago. The speech BCI achieving 97 percent accuracy in 2025 is operating a system that achieved roughly 70 percent accuracy five years earlier. The decoders are getting better because the AI is getting better, the electrode technology is improving, and the cumulative participant-hours of data are feeding algorithms that learn to interpret neural patterns with increasing precision. The slope of this curve matters as much as the current position on it.
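    To put a rough number on that slope, here is a back-of-envelope calculation from the typing-rate figures above. The implied growth rate and the five-year projection are my extrapolation for illustration, not published figures.

```python
# Compound annual growth implied by the typing-rate figures in the text:
# roughly 15 characters/min a decade ago, roughly 90 characters/min now.
start, end, years = 15, 90, 10
annual_growth = (end / start) ** (1 / years) - 1
print(f"{annual_growth:.1%} per year")  # ~19.6% per year

# Purely illustrative: if that rate merely held for another five years.
projected = end * (1 + annual_growth) ** 5
print(f"~{projected:.0f} characters/min in five years")  # ~220
```

    Whether the curve actually holds depends on electrode durability and decoder advances, but this is what “the slope matters as much as the position” means concretely: a sustained ~20 percent annual improvement compounds quickly.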

    The honest timeline

    BCIs that restore functional communication for people with severe paralysis will be commercially available medical devices within three to five years—probably led by Synchron’s endovascular approach or a Neuralink-derived product, with BrainGate’s academic work continuing to push the frontier of what’s decodable. BCIs that restore basic upper-limb movement—grasp, reach, manipulation—through robotic arms or functional electrical stimulation are probably five to ten years from routine clinical use. Integrated BCI-plus-spinal-cord-stimulation systems that restore walking for people with paraplegia are further out—likely a decade or more from anything resembling standard clinical practice—but the foundational work is human-validated and advancing.

    None of this is speculation. It’s extrapolation from clinical data that exists, published in Nature, Nature Neuroscience, the New England Journal of Medicine, and Science Translational Medicine. The field has moved past proof of concept and into the phase where the questions are about reliability, scalability, durability, and insurance coverage—which are the boring questions that mean the technology is real.

    We cover the full landscape of brain-computer interfaces and neuroprosthetics—from the earliest experiments to every company and approach described above—across 48 lectures in our Neuroprosthetics & Brain-Computer Interfaces course. If the BrainGate typing data or the spinal cord stimulation results changed what you thought was possible, the course goes considerably deeper.