Tag: neuralink

  • Neuralink in 2026: What the Human Patients Can Actually Do

    In January 2024, a 29-year-old quadriplegic named Noland Arbaugh underwent a two-hour surgery at the Barrow Neurological Institute in Phoenix during which a robotic system threaded 64 ultra-thin polymer filaments — each thinner than a human hair — carrying 1,024 electrodes into the motor cortex of his brain. The device they were connected to, Neuralink’s N1 implant, is a wireless, rechargeable chip roughly the size of a quarter that sits flush against the skull, invisible from the outside. On his first day using the device, Arbaugh broke the world record for brain-computer interface cursor control speed, hitting 4.6 bits per second. By May 2024, he’d pushed that to 8.0 bits per second. By the end of the year, Neuralink claimed he’d exceeded 9 bits per second — approaching the median able-bodied mouse user’s roughly 10 bits per second. He was playing chess, browsing the internet, drawing digital images, playing Civilization VI and Mario Kart, sending messages, and livestreaming on X, all by thinking about moving his fingers. He hadn’t moved his fingers since a diving accident dislocated two vertebrae in 2016. “Y’all are giving me too much,” Arbaugh said in an early update. “It’s like a luxury overload. I haven’t been able to do these things in 8 years, and now I don’t know where to even start allocating my attention.”
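Those bits-per-second figures come from a Fitts'-law-style throughput calculation: each target acquisition is worth log2(distance/width + 1) bits of difficulty, divided by the time it took. As a rough sketch of the arithmetic (Neuralink's exact benchmark protocol isn't public; this is the standard Shannon formulation used across the BCI literature):

```python
import math

def fitts_bits(distance: float, width: float) -> float:
    """Index of difficulty for one cursor acquisition, in bits
    (Shannon formulation of Fitts' law)."""
    return math.log2(distance / width + 1)

def bits_per_second(distance: float, width: float, movement_time: float) -> float:
    """Throughput: bits of difficulty completed per second of movement."""
    return fitts_bits(distance, width) / movement_time

# A target eight times its own width away is a ~3.17-bit acquisition;
# hitting it in ~0.4 s corresponds to roughly 8 bits per second.
print(round(fitts_bits(800, 100), 2))            # 3.17
print(round(bits_per_second(800, 100, 0.4), 2))  # 7.92
```

On this scale, the jump from 4.6 to 8.0 bits per second means acquiring the same targets nearly twice as fast.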

    That was Patient 1. As of early 2026, Neuralink has implanted 21 people.

    What the first patients experienced

    The story of the first year of human Neuralink implants is a story about a device that works, a device that broke, and a device that was fixed — in that order. About a month after Arbaugh’s surgery, the thread retraction problem hit. Several of the ultra-thin electrode threads pulled back from Arbaugh’s brain tissue, reducing the number of active electrodes to roughly 15% of the original 1,024. Performance degraded sharply. Arbaugh described the prospect of losing the device’s benefits as emotionally devastating — he’d had eight years of quadriplegia, six weeks of restored digital independence, and was now watching that independence degrade in real time. The FDA had flagged thread retraction as a potential risk during the approval process. Reuters reported that Neuralink had observed similar retraction in animal testing. The fact that the most predictable failure mode was the one that actually materialized was not reassuring.

    Neuralink’s response was a software workaround. Engineers modified the decoding algorithms to extract more signal from fewer electrodes, compensating for the hardware loss through computational gain. By July 2024, the threads had stabilized — no further retraction — and Arbaugh’s performance had recovered to competitive levels. For subsequent patients, Neuralink modified its surgical technique. The second patient, identified publicly only as “Alex,” received his implant in July or August 2024 and did not experience thread retraction. Alex — who has a spinal cord injury — has used the device for CAD design work, gaming, and daily computer tasks. A third patient was disclosed by Elon Musk in January 2025 during an online interview. By mid-2025, nine patients had been implanted, two of whom received their implants on the same day in late July 2025 — a scheduling milestone that signaled Neuralink’s surgical capacity was scaling faster than the typical early-stage medical device trial, where patients are separated by weeks or months for safety monitoring.

    The most consequential patient story after Arbaugh belongs to Brad Smith, an ALS patient who is completely non-verbal and cannot move anything except his eyes. Smith relies on a ventilator to stay alive. Before Neuralink, his communication options were limited to eye-tracking systems with slow, frustrating interfaces. After receiving the N1 implant, Smith used the device to control a computer cursor, navigate a MacBook Pro, and — in an April 2025 video posted on X — narrate his own story using an AI-generated replica of his pre-ALS voice, cloned from past recordings and controlled in real time through the brain-computer interface. The practical consequence of combining a Neuralink BCI with voice-cloning AI is that a person who has lost the ability to speak can produce speech that sounds like them, in real time, by thinking about what they want to say. Whether that qualifies as “communicating using telepathy,” as Neuralink has described it, depends on your tolerance for marketing language. What it definitely qualifies as is a functional communication channel that didn’t exist for Smith before the implant.

    What the device actually is — and isn’t

    Arbaugh’s 10-hour-per-day usage by August 2025 — roughly 18 months post-surgery — is the most important data point in the entire PRIME study, because it’s a usage metric, not a performance metric. Usage measures whether a real person with a real disability finds the device useful enough to use it all day. The answer, for Arbaugh, is yes: he uses the Neuralink to study, read, game, schedule interviews, manage everyday tasks, and communicate. He has re-enrolled in college and started a business. The device needs to be charged roughly every five hours, which means he charges it during breaks the way someone charges a phone — an annoyance, not a dealbreaker. Calibration is a more significant friction. Arbaugh has described spending as long as 45 minutes recalibrating the mapping between his imagined movements and the cursor — a process that degrades over hours and days as neural patterns shift. Neuralink’s engineering team has been iterating on the calibration software throughout the trial, and the recalibration time has reportedly decreased, but it remains the single biggest UX friction in the system.

    The N1 implant is wireless — a meaningful distinction from competitors like Blackrock Neurotech, whose Utah Array system requires a wired connection through the skull to an external receiver. Wireless operation means patients can use the device without being tethered to equipment, which is the difference between a research tool and something that functions in daily life. The tradeoff is battery life, power management, and data throughput — the wireless link constrains how much neural data can be transmitted in real time, which in turn constrains the decoding algorithms’ resolution.

    What the device is not — at least not yet — is a general-purpose neural interface. The N1 implant records from the motor cortex, which handles planned movements. The current decoding pipeline translates imagined finger and hand movements into cursor position. It does not read thoughts. It does not access memory. It does not interface with emotions or subjective experience. It maps one specific category of neural activity — motor intention — to one specific category of output — cursor control. Within that narrow channel, it works remarkably well. The breadth of what Arbaugh does with cursor control — gaming, browsing, studying, communicating — demonstrates that cursor control on a standard computer is a surprisingly powerful restoration of independence for someone who previously needed a mouth stick placed by a caregiver to interact with a screen.

    The competitive landscape

    The framing that Neuralink is “first” requires qualification. A 2025 systematic review estimated that approximately 80 people worldwide had received implantable brain-computer interfaces before Arbaugh’s surgery. BrainGate, the academic BCI consortium led by Brown University, has been implanting patients since 2004 using Blackrock Neurotech’s Utah Array — a rigid silicon electrode array that preceded Neuralink’s flexible threads by two decades. Arbaugh was the first recipient of a Neuralink implant. He was not the first person to control a cursor with a brain implant. What Neuralink brought to the field was engineering scale: wireless operation, 1,024 electrodes (versus BrainGate’s roughly 100), robotic surgical insertion, and — critically — the funding and marketing infrastructure to run a multi-country clinical trial at a pace academic labs cannot match.

    The competitors are not standing still. Synchron, an Australian-American company, takes a less invasive approach — its Stentrode device is inserted through the jugular vein and lodged in a blood vessel adjacent to the motor cortex, avoiding open brain surgery entirely. Synchron has its own human patients and its own clinical trial. Precision Neuroscience uses a thin, flexible electrode array called Layer 7 that sits on the brain’s surface rather than penetrating it, and can be removed without permanent tissue damage. Blackrock Neurotech has two decades of implant data and is developing its own wireless system. Paradromics is building a high-bandwidth BCI called Connexus designed for thousands of simultaneous channel recordings.

    Each approach trades off invasiveness against signal quality. Neuralink’s penetrating electrodes produce the highest-resolution recordings but carry the highest surgical risk and face challenges like thread retraction. Synchron’s endovascular approach is safer but records from fewer neurons at lower resolution. Precision’s surface electrodes are reversible but may not capture the single-neuron resolution that enables the fastest cursor control. The field is converging on the same functional goals — motor control restoration, communication for non-verbal patients, and eventually sensory restoration — through fundamentally different engineering strategies.

    What’s coming next

    Neuralink’s pipeline beyond the N1 motor cortex implant includes two FDA Breakthrough Device designations that define where the company is heading. In September 2024, the Blindsight implant — designed to stimulate the visual cortex to restore limited vision in people who have lost both eyes or their optic nerve — received Breakthrough Device status. Musk has claimed Blindsight will enable blind people to see, though IEEE Spectrum and other expert outlets have noted that the resolution achievable with current electrode density is likely to produce something closer to phosphene patterns than natural vision. Human trials for Blindsight were projected for late 2025 or early 2026. In May 2025, Neuralink received a second Breakthrough Device designation for a speech restoration system targeting people with ALS, stroke, cerebral palsy, and spinal cord injuries — a system that would decode attempted speech movements from motor and language areas, potentially enabling more natural communication than cursor-based text output.

    The operational scale is also shifting. Neuralink’s PRIME trial expanded from the United States to Canada (with Toronto’s University Health Network performing Canada’s first Neuralink surgeries in August and September 2025), the United Kingdom, and the United Arab Emirates. The trial enrolled 21 participants by early 2026. A $650 million Series E round in June 2025 valued the company at $9 billion. Neuralink has announced plans for high-volume production and automated surgical procedures targeting 2026 — a transition from artisanal neurosurgery to something closer to industrial medical device deployment. Whether the surgical robot, the implant reliability, and the regulatory pathway support that transition at Musk’s stated timeline is the open question. If any theme emerges from Neuralink’s first two years of human data, it’s that the device works better than skeptics expected and slower than Musk promised — which, for a medical device that is literally inside someone’s brain, is probably the right place to be.

    The honest assessment

    Neural engineering expert Kip Ludwig, quoted by Reuters after Arbaugh’s initial demonstration, said the results were promising but not a breakthrough — that the technology remained at an early stage. Neuroscientist Miguel Nicolelis noted that similar multi-electrode recordings had been achieved in his laboratory in the early 2000s. Both points are technically accurate and contextually incomplete. What Neuralink has done that prior BCI research did not is produce a wireless, fully implanted, cosmetically invisible device that a quadriplegic person uses for 10 hours a day to manage his daily life, attend college, and run a business — and then demonstrated it could be replicated across 21 patients in four countries within two years. The individual technical components are not novel. The integration into a device that functions as a consumer product for people with severe disabilities — rather than as a laboratory research tool — is novel. Whether the thread retraction problem, the calibration friction, the five-hour battery life, and the motor-cortex-only decoding pipeline are solvable engineering problems or fundamental constraints will determine whether Neuralink becomes a medical device company or remains an expensive research project. The first two years of human data suggest the former, but the history of medical devices that looked promising at 21 patients and failed at 2,100 is long enough that no honest assessment would call the outcome settled.

    This is the kind of technology our Neuroprosthetics course was built to explain — where a chip the size of a quarter and 1,024 electrodes thinner than a human hair gave a man who hadn’t moved his fingers in eight years the ability to beat the world record for BCI cursor control on his first day, and then spent the next 18 months teaching us what “working” actually means when the device is inside a living brain.

  • Brain-to-Brain Communication: Where the Science of Direct Neural Links Actually Stands

    In 2019, researchers at the University of Washington published a paper in Scientific Reports describing BrainNet—a system that allowed three people, seated in separate rooms with no ability to see, hear, or talk to each other, to collaboratively play a Tetris-like game using only their brain signals. Two “senders” could see the game board and decided whether a falling block needed to be rotated. They communicated their decisions to a “receiver” who couldn’t see the board but controlled the game. No words. No gestures. No screens shared between them. The senders’ decisions were extracted via EEG, transmitted over the internet, and delivered to the receiver’s visual cortex via transcranial magnetic stimulation, where they appeared as flashes of light—phosphenes—that the receiver interpreted as instructions. Five groups of three people tested the system and achieved 81 percent accuracy.

    That’s the headline. Here’s the fine print: the information transmitted was binary. Yes or no. Rotate or don’t rotate. One bit of data per transmission cycle. The senders communicated their decisions by staring at lights flashing at different frequencies—15 hertz for one answer, 17 hertz for the other—which entrained their brain’s electrical output at the corresponding frequency, readable by EEG. The receiver experienced either a flash of light (rotate) or no flash (don’t rotate). The “brain-to-brain communication” was, functionally, a very elaborate way to send the equivalent of one binary digit from one head to another. IEEE Spectrum described an earlier version of this approach as “telepathic Morse code.”
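The frequency-tagging trick described above — known as SSVEP (steady-state visually evoked potential) decoding — reduces to asking which flicker frequency dominates the EEG spectrum. A minimal sketch on synthetic data (the actual BrainNet pipeline involved per-subject electrode selection and calibration not shown here; the sample rate and window length are illustrative assumptions):

```python
import numpy as np

FS = 250              # sample rate in Hz (typical for research EEG)
DURATION = 2.0        # seconds per decision window
FREQS = (15.0, 17.0)  # "don't rotate" vs "rotate" flicker frequencies

def classify_ssvep(eeg: np.ndarray, fs: int = FS, freqs=FREQS) -> float:
    """Return whichever tagged frequency carries more spectral power."""
    spectrum = np.abs(np.fft.rfft(eeg))
    bins = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    powers = [spectrum[np.argmin(np.abs(bins - f))] for f in freqs]
    return freqs[int(np.argmax(powers))]

# Synthetic "sender" staring at the 17 Hz light, buried in noise.
rng = np.random.default_rng(0)
t = np.arange(0, DURATION, 1.0 / FS)
eeg = np.sin(2 * np.pi * 17.0 * t) + 0.8 * rng.standard_normal(t.size)
print(classify_ssvep(eeg))  # 17.0
```

One two-second window yields one binary decision — which is exactly why the channel tops out at fractions of a bit per second.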

    This is what brain-to-brain communication actually looks like in 2026: technically real, scientifically genuine, and approximately as far from telepathy as a tin-can telephone is from a 5G network.

    What exists

    The field has produced a series of legitimate demonstrations, each constrained by the same fundamental bottleneck: you can get information out of a brain with reasonable resolution using EEG or implanted electrodes, but you can deliver information into a brain noninvasively only through crude channels—magnetic pulses that trigger phosphenes (perceived flashes of light) or vague sensations. The input side is the constraint. Reading a brain is hard. Writing to a brain is harder by orders of magnitude.

    The 2014 Starlab experiment was the first reported human brain-to-brain transmission. A sender in India imagined moving his hands or feet to encode binary data through EEG. The signal was emailed to France, where a TMS device delivered pulses to a blindfolded receiver’s visual cortex, producing phosphenes. The receiver reported the flashes verbally, and the team decoded the message. The transmitted words: “hola” and “ciao.” The transmission rate was approximately two bits per minute. The entire process took over an hour.

    BrainNet in 2019 scaled the architecture to three people and demonstrated something genuinely interesting beyond the binary channel: when the researchers injected noise into one sender’s signal, the receiver learned to preferentially weight the more reliable sender—a trust calibration process that happened entirely through brain-to-brain signals without any conscious strategy. The receiver’s brain was doing signal integration across two noisy sources, the same computation that underlies sensory integration in normal perception.

    Invasive brain-computer interfaces—Neuralink, Synchron, Blackrock Neurotech—are advancing rapidly on the reading side. Neuralink implanted its first human chip in January 2024 under its PRIME study, enabling a paralyzed patient to type and control a cursor through thought alone. Synchron’s Stentrode sits inside a blood vessel near the brain, avoiding open surgery. The PRIME study has a primary completion date of 2026 and full study completion projected for 2031. These systems are brain-to-computer interfaces, not brain-to-brain—they translate neural signals into digital commands for external devices. But they represent the reading infrastructure that any brain-to-brain system would eventually need.

    On the AI-assisted decoding side, researchers at the University of Texas in 2023 used fMRI scans and large language models to decode continuous thought into coherent text—not single words or binary choices but streams of semantic content, capturing the gist of what a person was thinking about during a story or imagined narrative. Meta has developed noninvasive brain-scanning systems paired with AI models that can decode silently spoken words from brain activity. These aren’t brain-to-brain systems, but they’re solving the bandwidth problem on the reading end: extracting richer, more nuanced information from neural signals than EEG-based approaches can achieve.

    What doesn’t exist

    Telepathy—the transmission of complex thoughts, images, emotions, or experiences from one mind to another—is not close. The demonstrations that exist transmit binary decisions through artificial sensory channels. The receiver doesn’t “hear” the sender’s thought. The receiver sees a flash of light and interprets it according to a pre-agreed code. The brain-to-brain interface is a translation chain: thought → EEG signal → digital encoding → internet transmission → TMS pulse → phosphene → interpretation. At every link in that chain, information is lost. What arrives in the receiver’s brain is not a thought. It’s a stimulus—a magnetically induced visual artifact that carries one bit of information about the sender’s decision.

    The gap between this and actual telepathy is not a gap that incremental engineering will close, because the limiting factor isn’t the technology between the brains. It’s the fundamental problem of neural encoding: we don’t know, for any given thought, which specific neural firing patterns represent it, how those patterns vary between individuals, or how to induce a specific firing pattern in a target brain that would be experienced as the same thought. Brains aren’t standardized hardware. The neural code for “rotate the block” in one person’s motor cortex is not the same pattern in another person’s motor cortex. Translating one person’s neural representation into a stimulus that would produce the same internal experience in another person requires a mapping between two unique neural architectures—a problem neuroscience hasn’t solved and isn’t close to solving.

    What BCI companies are building toward is not telepathy but increasingly high-bandwidth brain-to-computer interfaces that could, in principle, be linked: Brain A → computer → Brain B. Neuralink’s implant reads neural signals at thousands of channels. Future implants will read more. AI decoding systems are getting better at extracting semantic content from neural data. But the write side—delivering complex, precise, meaningful information directly into neural tissue in a way that the receiving brain interprets as a coherent experience—remains the unsolved problem. TMS can trigger phosphenes and crude sensory impressions. It cannot implant a sentence, an image, an emotion, or a memory.

    The timeline problem

    Coverage of brain-to-brain communication tends to imply a trajectory: binary transmission today, sentences tomorrow, telepathy eventually. The trajectory is real in the same way that the Wright Brothers’ 12-second flight in 1903 implied commercial aviation—the physics supports the possibility, but the engineering required to get from demonstration to deployment is measured in decades, not years, and the technical obstacles on the write side are qualitatively different from the obstacles on the read side.

    Reading a brain is an information extraction problem: the neural signals are there, and the challenge is building sensors sensitive enough and algorithms smart enough to decode them. This problem is yielding to better hardware and better AI. Writing to a brain is an information implantation problem: you need to induce specific patterns of activity in specific neural populations at specific times, through skull and tissue, without disrupting the brain’s existing activity. Noninvasive methods (TMS, focused ultrasound, transcranial electrical stimulation) affect large regions of cortex with limited spatial precision. Invasive methods (optogenetics, direct electrical stimulation) can target individual neurons but require surgery, gene therapy, or implanted hardware.

    The honest assessment in 2026: brain-to-computer interfaces are advancing on a trajectory that will produce clinically meaningful products for paralysis, communication disorders, and sensory prosthetics within the current decade. Brain-to-brain communication, in the sense of transmitting complex mental content between two people, requires solving the neural write problem at a resolution and precision that current technology can’t achieve and that current neuroscience can’t specify. The demonstrations are real. The extrapolation to telepathy is premature by a margin that is difficult to estimate because the bottleneck isn’t engineering velocity. It’s a scientific knowledge gap about how brains encode experience—a gap that better instruments may close but that no existing roadmap guarantees.

    Neuralink named its first consumer product “Telepathy.” The name is aspirational in the way that calling the first automobile a “teleporter” would have been aspirational. The product lets a paralyzed person control a cursor with their thoughts. That’s extraordinary and useful. It’s not telepathy. The distance between the two is the distance between reading a book and writing one—and in neuroscience, we’re still learning to read.

    We cover brain-to-brain communication alongside spinal cord stimulation, retinal implants, and the full landscape of neural interface technology across our Neuroprosthetics course—including why the hardest problem in connecting two brains isn’t getting the signal out. It’s getting the signal in.

  • Brain-Computer Interfaces in 2026: Where the Technology Actually Stands

    When I read headlines about brain-computer interfaces—and there’s a new one roughly every 72 hours, each seemingly announcing that the future has arrived—I’m reading them with the same part of my brain that reads MRI reports and discharge summaries. I’m looking for the mechanism, the sample size, the follow-up period, and the part of the press release that got quietly omitted. And what I keep finding is a field where the actual science is genuinely remarkable, the engineering is legitimately impressive, and the gap between what’s been demonstrated and what’s being promised is roughly the width of the Grand Canyon.

    So here’s where brain-computer interfaces actually stand in March 2026. Not the press release version. Not the “we’re five years from The Matrix” version. The clinical reality, company by company, with the caveats attached.

    What a BCI actually does (the 30-second version)

    Your motor cortex generates electrical signals when you intend to move. In a healthy nervous system, those signals travel down your spinal cord, through peripheral nerves, and to your muscles. In someone with a spinal cord injury or ALS, the signals still fire at the top—the brain is still doing its job—but the wiring downstream is broken. A brain-computer interface picks up those electrical signals directly from the cortex and routes them to an external device instead of to muscles. You think about moving your hand, the electrodes record the neural activity, a decoder translates it into a digital command, and a cursor moves on a screen or a robotic arm reaches for a cup. That’s it. That’s the core mechanism. Everything else is engineering.

    The engineering, of course, is where it gets complicated—and where the companies diverge in ways that matter enormously for which patients actually benefit, when, and at what risk.
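The "decoder translates it into a digital command" step is, in its simplest published form, a linear map from channel firing rates to 2D cursor velocity, fit during a calibration session. Here is a toy ridge-regression sketch on synthetic, cosine-tuned channels (illustrative only — production decoders in the BrainGate literature layer Kalman filters and neural networks on top of this basic idea, and the channel count and noise level here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
N_CHANNELS, N_SAMPLES = 96, 2000  # Utah-Array-like channel count

# Synthetic ground truth: each channel's firing rate is a noisy
# linear function of intended 2D cursor velocity.
true_weights = rng.standard_normal((N_CHANNELS, 2))
velocity = rng.standard_normal((N_SAMPLES, 2))  # intended (vx, vy)
rates = velocity @ true_weights.T + 0.5 * rng.standard_normal((N_SAMPLES, N_CHANNELS))

# Calibration: ridge regression from firing rates back to velocity.
lam = 1.0
W = np.linalg.solve(rates.T @ rates + lam * np.eye(N_CHANNELS), rates.T @ velocity)

# Decode: firing rates -> (vx, vy) cursor command.
decoded = rates @ W
corr = np.corrcoef(decoded[:, 0], velocity[:, 0])[0, 1]
print(corr > 0.95)  # True: the linear map recovers intended velocity
```

With 96 noisy channels, the pooled estimate is far cleaner than any single channel — which is the entire argument for higher electrode counts.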

    Neuralink: The one you’ve heard of

    Neuralink gets roughly 95% of the media coverage in this space despite being neither the first nor the furthest along clinically. What they have is Elon Musk, which in the attention economy is worth more than a decade of peer-reviewed publications. The N1 implant is a coin-sized device with 1,024 electrodes distributed across 64 ultra-thin polymer threads, inserted into the motor cortex by a custom surgical robot. The threads are thinner than a human hair—about 5 microns—and that’s genuinely impressive from a materials science standpoint. The pitch is high electrode count plus wireless transmission plus a cosmetically invisible implant that sits flush with the skull.

    As of early 2026, Neuralink has 21 participants enrolled in its global clinical trials, up from 12 in September 2025. The first patient, Noland Arbaugh—quadriplegic from a diving accident—demonstrated the ability to control a cursor, play video games, browse the internet, and post on social media using the implant. The third patient, Brad Smith, who has ALS and is ventilator-dependent, can type using the device. These are real outcomes. They matter. A person who cannot move anything below the neck using thought alone to navigate a computer is not a small thing.

    But—and this is where my press release detector starts going off—Musk announced on December 31, 2025, that Neuralink would begin “high-volume production” of BCI devices and move to “almost entirely automated surgical procedures” in 2026. He also announced the Blindsight implant for restoring vision in the blind would begin its first patient trial this year. If you’ve followed Musk’s timeline promises across any of his companies—Tesla Full Self-Driving, the Cybertruck, the Boring Company’s Vegas Loop—you know that announced timelines and delivered timelines have a relationship best described as aspirational. “High-volume production” in 2026 from a company with 21 trial participants is a claim that requires a lot of intermediate steps that haven’t been publicly demonstrated: manufacturing consistency, surgical standardization, long-term safety data, and FDA clearance for anything beyond an investigational device. None of those things have happened yet.

    The first patient also had a documented issue: some electrode threads retracted from the cortex after implantation, reducing the number of functional electrodes. Neuralink adjusted its approach for subsequent patients, but this is exactly the kind of biocompatibility challenge that doesn’t show up in the demo reel. Brain tissue is not a circuit board. It’s alive, it moves, it forms scar tissue around foreign objects, and it does not appreciate being punctured by 64 threads, no matter how thin they are.

    Synchron: The one that doesn’t require brain surgery

    Synchron is doing something fundamentally different, and I think the approach deserves more attention than it gets. Their Stentrode device is an endovascular BCI—it’s threaded up through the jugular vein and deployed inside a blood vessel on the surface of the motor cortex, using the same catheter techniques that interventional neurologists (my people) use every day to treat strokes and aneurysms. No craniotomy. No opening the skull. No penetrating brain tissue. The median procedure time in their COMMAND trial was 20 minutes.

    The tradeoff is resolution. The Stentrode has 16 electrodes compared to Neuralink’s 1,024. It’s sitting inside a blood vessel, not directly in cortical tissue, so the signals it picks up are less granular—think of it as the difference between sitting in the front row at a concert versus listening from the parking lot. You can still hear the music, but you’re not picking up the individual instruments. For the current application—point-and-click cursor control, typing, navigating apps—that’s sufficient. An ALS patient in their trial became the first person to control an iPad with a BCI, using Apple’s native Switch Control accessibility feature. He later connected to an Apple Vision Pro and Amazon Alexa using only his thoughts. These are consumer devices, unmodified, working with a brain implant through standard accessibility protocols. That’s a very different value proposition than a research demo in a lab.

    Synchron’s COMMAND trial—six patients, 12-month follow-up—met its primary safety endpoint with no device-related serious adverse events to the brain or vasculature. They raised $200 million in Series D funding in late 2025, backed by Bezos, Gates, and the Qatar Investment Authority, and are preparing pivotal trials for 2026 ahead of commercial approval. They’ve also announced a next-generation whole-brain interface with significantly higher channel counts, though details won’t arrive until later this year.

    The strategic bet Synchron is making is that a lower-resolution device implanted through an existing, well-understood medical procedure will reach more patients faster than a higher-resolution device that requires brain surgery. From a regulatory and clinical adoption standpoint, that bet has a lot of logic behind it. Interventional neuroradiologists already know how to navigate catheters through cerebral vasculature. The training curve is short. The risk profile maps onto procedures we’ve been doing for decades.

    BrainGate and Blackrock Neurotech: The ones that were here first

    The BrainGate consortium—a collaboration across Mass General Brigham, Brown University, Stanford, and several other institutions—has been running clinical trials of implantable BCIs since 2004, which is two decades before Neuralink implanted its first patient. Their technology is built on Blackrock Neurotech’s Utah Array, a grid of 96 silicon microelectrodes that penetrates the cortex. It’s the workhorse of the field. Nathan Copeland, implanted in 2015, holds the record for longest continuous use of a brain-computer interface. In 2016, he used a robotic arm to fist-bump Barack Obama, which remains the single best piece of BCI marketing ever produced despite not being marketing at all.

    In 2025, a BrainGate team at UC Davis demonstrated that a man with ALS could speak through a BCI-driven voice synthesizer reconstructed from recordings of his pre-disease voice. More recently, a BrainGate study published in Nature Neuroscience showed two paralyzed patients typing on a standard QWERTY keyboard layout using attempted finger movements—not imagined cursor control, actual attempted typing—at speeds approaching 90 characters per minute. That’s not far from the average typing speed of a non-disabled person on a phone. The decoder is getting better because the algorithms are getting better, and the algorithms are getting better because we have more data from more patients over longer periods. This is the boring, incremental, deeply important work that doesn’t generate Musk-level headlines but is actually what moves the field forward.

    Precision Neuroscience: The one designed to come back out

Founded by Neuralink co-founder Benjamin Rapoport, Precision takes yet another approach. Their Layer 7 Cortical Interface is an ultra-thin film—one-fifth the thickness of an eyelash—studded with 1,024 platinum microelectrodes. It sits on the brain surface without penetrating tissue, slipped through a small slit in the skull. The critical difference: it’s designed to be safely removable. Most other invasive BCIs are designed for permanent placement, and removing them risks damaging the tissue around them. Precision’s device can come out without damaging the cortex, which is a significant advantage for a field where the long-term effects of brain implants are still being studied.

    Precision received FDA 510(k) clearance—the first full regulatory clearance for any of the new commercial BCI technologies—for implants lasting up to 30 days, and they’ve performed 38 human implant procedures. Their current application is short-term brain mapping and neural data collection, not yet chronic implantation for daily use. But the data they’re collecting is building the training sets for neural decoding algorithms that could eventually power long-term devices.

    What’s actually hard (the part nobody puts on the slide deck)

    Here’s what I think about when I read BCI press releases, and what I wish the coverage spent more time on:

    Signal degradation over time. The brain forms glial scar tissue around penetrating electrodes. The signal quality starts high and degrades over months to years as the immune response walls off the foreign object. This is the single biggest unsolved problem in chronic invasive BCIs, and no company has publicly demonstrated a solution that works at scale over five-plus years.

    The decoder problem. Current BCIs require calibration—sometimes daily—because the neural signals drift. The relationship between a specific pattern of neural activity and an intended movement isn’t fixed. It shifts as neurons adapt to the implant, as the user’s neural strategies change, and as electrodes move or degrade. Making decoders that stay accurate without constant recalibration is an active area of research, and AI is helping, but it’s not solved.
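To make the drift problem concrete, here is a minimal simulation (my own sketch, not any company's algorithm): a linear decoder maps neural features to 2-D cursor velocity, the true mapping slowly wanders, and a decoder that is never recalibrated drifts away while one that applies a small online correction each step keeps tracking. All parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_steps = 32, 2000

# Hypothetical "true" linear mapping from neural features to cursor velocity.
W_true = rng.normal(size=(2, n_channels))

W_static = W_true.copy()   # fit once at calibration, never updated
W_adapt = W_true.copy()    # updated online with a small correction
lr = 0.1                   # learning rate for the online (NLMS-style) update

err_static, err_adapt = [], []
for t in range(n_steps):
    # Slow random-walk drift: a crude stand-in for electrode movement
    # and changing neural strategies.
    W_true += 0.002 * rng.normal(size=W_true.shape)

    x = rng.normal(size=n_channels)   # neural features at this timestep
    v_true = W_true @ x               # the velocity the user intends

    err_static.append(np.linalg.norm(W_static @ x - v_true))

    v_pred = W_adapt @ x
    e = v_true - v_pred
    err_adapt.append(np.linalg.norm(e))
    # In practice the error signal must be inferred from task structure or
    # user feedback; here we cheat and use the simulated ground truth.
    W_adapt += lr * np.outer(e, x) / (x @ x)

print(f"mean error, last 500 steps, static decoder:   {np.mean(err_static[-500:]):.3f}")
print(f"mean error, last 500 steps, adaptive decoder: {np.mean(err_adapt[-500:]):.3f}")
```

The catch is the cheat flagged in the comment: a deployed decoder doesn't get to see the intended velocity. Inferring that error signal from context, without interrupting the user for a calibration session, is exactly what makes drift-robust decoding an open research problem.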

    Surgical risk. Any procedure involving the brain carries risk. Infection, hemorrhage, device failure requiring revision surgery—these are not theoretical concerns. They’re the everyday calculus of neurosurgery. Neuralink’s thread retraction in their first patient is a concrete example. Synchron’s endovascular approach has a lower risk profile precisely because it doesn’t penetrate the brain, but even catheter-based procedures have complication rates.

    The regulatory path. These devices are in early feasibility trials. The road from there to FDA-approved commercial products typically takes years—pivotal trials with larger patient populations, long-term follow-up data, manufacturing quality controls, post-market surveillance plans. Musk saying “high-volume production in 2026” doesn’t change the regulatory timeline. The FDA moves at the speed of evidence, not the speed of X posts.

    Where this goes

    The honest assessment: BCIs for people with severe paralysis will almost certainly become commercially available within the next three to five years. Not as consumer products—as medical devices, implanted by neurosurgeons, prescribed for specific clinical indications, and covered (eventually) by insurance. The first approved products will probably do what the current trials are demonstrating: cursor control, typing, basic device navigation. Not telepathy. Not memory enhancement. Not uploading your consciousness to the cloud.

    The longer-term applications—speech decoding for people who’ve lost the ability to talk, sensory feedback for prosthetic limbs, treatment of neuropsychiatric conditions—are genuinely possible but further out. They require higher-resolution interfaces, better algorithms, and a much deeper understanding of neural coding than we currently have.

    What I want people to take away from this is that the technology is real, the progress is meaningful, and the patients benefiting from these devices are experiencing something that would have been science fiction 20 years ago. But the field is in its early clinical phase, and the distance between “a paralyzed person can move a cursor” and “BCIs are a consumer product” is enormous—measured not in engineering breakthroughs but in safety data, regulatory milestones, and the slow, grinding, essential work of proving that these devices help more than they harm over the long term.

    We cover the full history, science, and engineering of brain-computer interfaces—from the earliest EEG experiments to every company and approach described above—across 48 lectures in our Neuroprosthetics & Brain-Computer Interfaces course. If this piece made you want the granular version, that’s where it lives.