First, the conclusion in one line
Decode means "to infer something from an observed signal"; emulate means "the internal state evolves over time and responds consistently to changes in conditions or interventions." Even if the visual outputs are similar, it does not follow that the two systems are operating by the same causal mechanism.
This page does not deal with philosophy or legal systems. From the standpoint of technology and natural science alone, we clarify the conditions under which decode can be read as emulate.
The weakness of the previous version was that, while the principled difference between decode and emulate was stated correctly, it had not yet been turned into site rules that stop the overreadings promoted by the 2025 primary literature: open-vocabulary non-invasive decode, streaming / voice-synthesis neuroprostheses, and connectome-constrained prediction. This update promotes language prior, tail latency / silence / recalibration burden, fixed-decoder interval, same-neuron tracking audit, and parameter degeneracy to mandatory audit items at the decode/emulate boundary.
The previous page separated non-invasive language decode, speech neuroprosthesis, and connectome-constrained prediction, but it still left one invasive route too implicit: connectomics-informed therapeutic decoding. Merk et al. (2025) showed across-patient movement decoding without patient-individual training in 56 implanted patients and 1,480 ECoG channels, and extended the same platform to emotion and seizure-related use cases with connectomics-informed channel selection. That is an important step toward symptom-linked therapeutic control, but it is not the same as subject-free universal decoding or state-complete reconstruction. Zhu et al. (2026) then showed, in 18 Parkinson's disease and 18 dystonia patients, that eyes-closed physiology can shift basal-ganglia theta/alpha feedback signals enough that a fixed threshold risks treating a benign state change as pathology. This site therefore now separates communication throughput, transfer-assisted initialization, connectomics-informed across-patient therapeutic decoding, physiological-state guard, and adaptive rescue, rather than treating them as one invasive decode ladder.
The shortest difference
| Viewpoint | Decode | Emulate | Minimum required verification |
|---|---|---|---|
| What to reproduce | Infer states, stimuli, meanings, motor intentions, etc. from observations. | Internal states evolve over time and produce future outputs and intervention responses. | Evaluate not only supervised prediction accuracy but also time evolution and condition changes. |
| Strengths | It is easy to achieve high performance under the observed conditions, and it is easy to connect directly to practical BCI. | A stronger case can be made for intervention, counterfactuals, and closed-loop control. | OOD generalization, perturbation matching, and closed-loop stability are evaluated separately. |
| Misreadings that will increase in 2025 | It is easy to read open-vocabulary word decode and streaming voice output as "free-thought mind reading" or "internal reproduction". | It is easy to read connectome-constrained prediction and local closed loops as evidence of whole-brain emulation. | LM-only / no-brain / shuffle baselines, subsystem scope, and state completeness are disclosed at the same time. |
| Misreading when evidence is insufficient | It is easy to promote a correlative translation to "internal reproduction". | It is easy to misread matched output alone as "faithful reproduction". | Audit the completeness and identifiability of state variables separately. |
| Typical failure modes | Even if accuracy is high within a subject or within a task, it collapses under unseen conditions or on a different day. | Even if the behavior seems to match, different internal parameters may produce the same output. | Disclose the data-split unit, number of recalibrations, post-intervention error, and abstention conditions. |
| Minimum log to keep | Candidate set, presence or absence of an LM, subject cooperation, correction and abstention rate, and cross-day degradation. | Perturbation log, P50/P95/P99 latency, silence/abstention, recalibration burden, residuals of the latent state. | Keep speed, accuracy, stability, and hidden state as separate columns. |
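The two log columns above can be made concrete as a pair of record types. This is a minimal sketch, not a schema from any cited paper; every field name here is illustrative.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DecodeRunLog:
    """One evaluation run on the decode side; all fields are illustrative."""
    candidate_set_size: Optional[int]   # None => open-vocabulary / generative
    uses_language_model: bool           # LM or other language prior in the loop
    subject_cooperative: bool           # did the task require cooperation?
    abstention_rate: float              # fraction of trials with no output
    cross_day_accuracy: dict = field(default_factory=dict)  # day offset -> accuracy

@dataclass
class EmulateRunLog:
    """Closed-loop / emulation-side columns, kept separate on purpose."""
    latency_ms_p50: float
    latency_ms_p95: float
    latency_ms_p99: float
    silence_rate: float                 # silence / abstention under streaming
    recalibrations: int                 # supervised recalibration events
    latent_state_residual: float        # residual of the fitted latent state

log = DecodeRunLog(candidate_set_size=1000, uses_language_model=True,
                   subject_cooperative=True, abstention_rate=0.12,
                   cross_day_accuracy={0: 0.81, 7: 0.62})
```

Keeping the two record types separate mirrors the table's last column: speed, accuracy, stability, and hidden state stay in distinct fields instead of being collapsed into one score.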
2026-03-17 Addendum: Arrange representative papers in the same coordinate system
One weakness of the previous version of this site was that representative papers showing natural text/voice output were too easily compared on the same axis. Looking at the primary literature, however, semantic reconstruction of perceived/imagined content, known-onset word decoding, prompt-conditioned language continuation, attempted-speech communication, and streaming voice synthesis all differ in training depth, subject route, prior scaffold, and time axis. This page therefore normalizes the main papers into a single comparison table, fixing what is directly shown and what is not yet shown.
| Representative paper | Signals and task | Limitations directly shown by the primary literature | Ceiling on this site |
|---|---|---|---|
| Tang et al. (2023) semantic reconstruction | Reconstruct the semantic representation of perceived speech / imagined speech / silent video from fMRI. | Within-subject, recovered time-points reached 65-82%, but cross-subject remained at 1-5%, and the learning gain plateaued at about 7.5 hours. Countermeasures such as counting by sevens and naming animals reduced recovery to 0-50%. | Subject-cooperative, task-limited semantic reconstruction. It will not be promoted to subject-free thought reader or unrestricted mental-state readout. |
| Défossez et al. (2023) speech-segment retrieval from M/EEG | Identify the correct 3 s speech segment from non-invasive M/EEG recorded while participants passively listened to natural speech. | The route decoded perceived speech segments from more than 1,000 possibilities, with much stronger performance in MEG than EEG, and the model predictions primarily tracked lexical and contextual semantic representations. It still depended on a fixed candidate bank of speech segments at test time. | Candidate-bank segment retrieval for perceived speech. It will not be promoted to free-form generation, production decode, or unrestricted thought reading. |
| d'Ascoli et al. (2025) open-vocabulary word decoding | Estimate word identity under known word onset from M/EEG of 723 people and 5 million words. | Even with the sentence-level-context design, performance depended strongly on additional training data, test averaging, MEG > EEG, and reading > listening. Even with "open vocabulary", the task structure and modality advantage remain. | Open-vocabulary word decode under known-onset, perception-heavy conditions. It will not be promoted to free thought reading or unrestricted language generation. |
| Ye et al. (2025) generative language reconstruction + LLM | Feed fMRI-derived representations and a text prompt into an autoregressive LLM to generate a continuation rather than rerank a fixed candidate list. | BrainLLM beat a permuted-brain control across three fMRI datasets, but performance still depended on prompt length, LLM size, and data volume; no-prompt generation remained harder by language-similarity metrics even when brain input helped relative to the control. | Prompt-conditioned generative language reconstruction. It does not advance to brain-only text generation or hidden-state recovery. |
| Willett et al. (2023) high-performance speech neuroprosthesis | Decode attempted speech from an intracortical array and return large-vocabulary text output. | With a vocabulary of 125,000 words, the participant-specific invasive route achieved 62 words/min at 23.8% WER for attempted-speech decoding. | A high-bandwidth communication subsystem. It is not promoted to semantic autonomy or whole-brain state reconstruction. |
| Littlejohn et al. (2025) / Wairagkar et al. (2025) streaming brain-to-voice / instantaneous voice synthesis | Synthesize a voice similar to the participant's own voice from invasive signals with streaming / low latency. | Littlejohn showed streaming brain-to-voice every 80 ms; Wairagkar showed sub-10 ms inference and silence fallback. On the other hand, in Wairagkar the decoder fixed on post-implant day 165 deteriorated significantly after about 15 days. | An invasive communication route from same-session to short horizon. Chronic deployability or long-term stability of a fixed decoder is not claimed without a Temporal Validity Card. |
Even if the "natural language output" looks the same, semantic reconstruction, candidate-bank segment retrieval, known-onset word decode, prompt-conditioned continuation, attempted-speech communication, and streaming voice synthesis are different routes. A side-by-side comparison that does not separate these routes makes it easy to misinterpret deep single-subject fMRI, broad multi-subject M/EEG, participant-specific invasive BCI, and LLM-scaffolded generation systems as evidence of the same strength. Therefore, when this site looks at natural sentence output, we first check the task regime, training depth / subject route, prior scaffold, and fixed-decoder horizon, and read the result by the type of evidence rather than the flashiness of the medium.
2026-03-17 Addendum: chronic ceiling of invasive communication route
| Wall | What the primary literature now supports | How to read this page | Claims not raised yet |
|---|---|---|---|
| same-session streaming ceiling | Littlejohn et al. (2025) shows streaming brain-to-voice; Wairagkar et al. (2025) advances instantaneous voice synthesis and silence fallback. | Read as evidence of a strong L2-L3 communication subsystem. | We do not claim long-term retention or chronic deployability of fixed decoders. |
| recalibration ceiling | Wilson et al. (2025) tested one month of unsupervised recalibration, and Pun et al. (2024) showed that chronic human intracortical recording instability is strongly associated with decreased BCI performance. | As long as time since last supervised calibration, recovery time, and recalibration burden are recorded in separate logs, it can be read as a preliminary step toward long-term operation. | We do not rewrite "it worked on that day" as "it held over a long period without recalibration." |
| same-neuron tracking ceiling | Steinmetz et al. (2021) showed stable recording with motion correction, Pachitariu et al. (2024) advanced sorting centered on drift / split / merge, and van Beest et al. (2025) advanced probabilistic cross-day neural tracking. | In microelectrode systems, a same-neuron claim is read as an estimate with sorting version + drift correction + unit-match probability. | Successful chronic decoding cannot be written as direct readout of a stable single-neuron mechanism. |
When promoting invasive speech BCI to a higher level, we consider not only same-day streaming performance but also how many days the fixed decoder lasts, how much it relies on manual recalibration, how same-neuron tracking was estimated for microelectrode systems, and how the implant age / material / geometry / tissue-response proxy was audited. For the longer background, see Wiki: state, trait, drift.
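The "how many days does the fixed decoder last" question above reduces to a small audit helper: given per-day accuracy of a decoder frozen at day 0, report the first day accuracy falls below a relative tolerance of the frozen-day baseline. A minimal sketch; the 0.8 tolerance and the numbers in the example are illustrative, not values from any cited paper.

```python
def fixed_decoder_horizon(daily_accuracy, rel_tolerance=0.8):
    """Days until a frozen decoder falls below rel_tolerance * day-0 accuracy.

    daily_accuracy: list of (days_since_freeze, accuracy) pairs, day 0 first.
    Returns the first day offset that breaches the tolerance, or None.
    """
    baseline = daily_accuracy[0][1]
    for day, acc in daily_accuracy[1:]:
        if acc < rel_tolerance * baseline:
            return day
    return None

# Illustrative numbers only: a decoder frozen at day 0 and re-tested later.
curve = [(0, 0.90), (5, 0.88), (10, 0.80), (15, 0.65), (20, 0.50)]
horizon = fixed_decoder_horizon(curve)  # first day below 0.8 * 0.90 = 0.72
```

Reporting this horizon alongside the recalibration log is what lets a reader distinguish "worked on that day" from "held over time."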
2026-04-01 Addendum: therapeutic invasive decoding has a different ceiling
| Wall | What the primary literature now supports | How to read this page | Claims not raised yet |
|---|---|---|---|
| across-patient network-prior ceiling | Merk et al. (2025) used 56 patients and 1,480 ECoG channels across four cohorts to decode movement without patient-individual training, and extended the platform to emotion and seizure-related use cases with connectomics-based channel selection. | Read as connectomics-informed, symptom-linked invasive decoding for therapeutic control or decoder initialization. | Do not raise this to subject-free universal decoding, unrestricted internal-state readout, or WBE-level state completeness. |
| personalized-controller ceiling | Oehrn et al. (2024) identified stimulation-entrained gamma markers of high versus low dopaminergic states in four patients with Parkinson's disease and used those data-driven markers to drive adaptive DBS. | Read as personalized biomarker / controller selection within one disorder and one implant/control regime. | Do not treat one successful adaptive controller as a generic therapeutic decoder law across symptom families, implant targets, or daily contexts. |
| physiological-state guard ceiling | Zhu et al. (2026) showed in 18 Parkinson's disease and 18 dystonia patients that eyes-closed physiology can shift basal-ganglia theta/alpha biomarkers enough that fixed thresholds risk mistaking benign state changes for pathology. | Read adaptive control only after the paper discloses a state-recognition / contextual guard, not from one oscillatory threshold alone. | Do not write one band-limited biomarker as a stable disease-state meter across vigilance or everyday physiological context. |
The important progress here is real: therapeutic decoding may need less patient-specific training, and symptom-linked adaptive control is becoming more data-driven. But the object is still symptom/state decoding under a declared implant coverage, network prior, and controller policy. That is much narrower than universal invasive decode, and narrower again than emulate.
2026-03-17 Addendum: four scaffolds that invite overextension of speech decode
| Scaffold | What the primary literature now supports | How to read this page | Claims not raised yet |
|---|---|---|---|
| task / vocabulary scaffold | d'Ascoli et al. (2025) advanced open-vocabulary word decoding, but also showed that performance changes significantly with word onset, task structure, modality, amount of training data, and test averaging. | Read as an advance in conditional language/communication decoding. | It will not be promoted to free thought reading or state-complete reconstruction. |
| causal / non-causal scaffold | Chen et al. (2024) showed, in 48-participant EEG speech decoding, that an offline non-causal model can boost apparent performance by exploiting post-onset auditory feedback, while a real-time causal model faces stricter constraints. | Read offline retrospective decode and the causal real-time route separately. | Offline gain with look-ahead cannot be directly written as a deployable streaming loop. |
| transfer / adaptation scaffold | Singh et al. (2025) advanced transfer learning of phonemic speech decoding with a group-derived decoder on distributed minimally invasive recordings, but shared task structure and speech-network coverage are still prerequisites. | Read as decode engineering that boosts clinical scalability. | It will not be promoted to a zero-shot general thought decoder or subject-free universal reader. |
| LLM / prompt scaffold | Ye et al. (2025) generated text continuations by feeding fMRI-derived representations and a prompt into a large language model. | Read as prompt-conditioned generative language reconstruction. | Do not equate output fluency with brain-only reconstruction or hidden-state recovery. |
A common misinterpretation in current news and demonstrations is to read the four advances of natural output, large vocabulary, effective transfer learning, and LLM-based generation as "we can now read the internal state of the brain fairly directly." However, what the primary literature directly supports is the advance of decode engineering under clearly specified task scaffolds, causal deployment conditions, adaptation routes, and language priors, not the state completeness required for WBE.
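The "does brain input exceed the language prior" check can be run as a paired permutation test against a permuted-brain control, in the spirit of the BrainLLM comparison. A minimal sketch under illustrative scores; `permutation_gain` and its inputs are assumptions for this page, not from any cited paper.

```python
import random

def permutation_gain(paired_scores, n_perm=10000, seed=0):
    """Paired sign-flip permutation test: does brain-conditioned scoring
    beat a permuted-brain control on the same trials?

    paired_scores: list of (brain_score, control_score) per trial.
    Returns (mean_gain, p_value) for the hypothesis mean_gain > 0.
    """
    rng = random.Random(seed)
    diffs = [b - c for b, c in paired_scores]
    observed = sum(diffs) / len(diffs)
    hits = 0
    for _ in range(n_perm):
        # Randomly flip the sign of each per-trial difference.
        flipped = sum(d if rng.random() < 0.5 else -d for d in diffs)
        if flipped / len(diffs) >= observed:
            hits += 1
    return observed, hits / n_perm

# Illustrative per-trial similarity scores (brain-conditioned, permuted-brain).
gain, p_value = permutation_gain([(0.8, 0.5)] * 20)
```

If the gain survives this control (and an LM-only baseline), the claim "brain input contributed" is supported; output fluency alone never is.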
Boundary cases seen in primary literature
| Example | What we have achieved now | Why not just emulate |
|---|---|---|
| Tang et al. (2023) non-invasive semantic decoding | Showed semantic recovery of continuous language from fMRI and demonstrated decoding across perceived speech, imagined speech, and silent video. | Subject cooperation is required for both training and application, and the route is a translation of observed semantic representations. It does not demonstrate internal causal structure or replication of intervention responses. |
| Défossez et al. (2023) non-invasive speech-segment retrieval | Identified 3-second speech segments from non-invasive MEG/EEG while participants passively listened to natural speech, with much stronger performance in MEG than EEG. | This is evidence for candidate-bank retrieval of perceived speech segments, not for unrestricted word-level or sentence-level generation without a fixed comparison bank. |
| d'Ascoli et al. (2025) open-vocabulary non-invasive word decoding | Advanced individual-word decoding from non-invasive recordings of 723 people and showed that training-data volume, test averaging, modality, and task dependence greatly affect performance. | Although this is an advance in open vocabulary, dependence on word onset, task structure, and participant conditions remains. The progress is in decoding a communication route, not in state-complete reconstruction. |
| Chen et al. (2024) EEG speech decoding with causal / non-causal comparison | Compared speech decoding on EEG from 48 participants and clarified the apparent gain of the non-causal model and the constraints of the causal path required for real-time use. | This is primary evidence for separating offline retrospective scores from a deployable real-time route, but it does not indicate general thought reading. |
| Singh et al. (2025) transfer learning with distributed brain recordings | Using distributed minimally invasive recordings, showed that a group-derived decoder improves the reliability of phonemic speech decoding. | It is an advance in transfer learning that assumes a shared task structure and calibration route, not a subject-free universal decoder. It does not directly lead to WBE internal-state identification or unrestricted decoding. |
| Willett et al. (2023) invasive speech BCI | Demonstrated large-vocabulary speech decoding at 62 words/min with a vocabulary of 125,000 words from an intracortical array. | Even at high bandwidth, the focus is on decoding attempted speech. It does not show autonomous internal generation or causal agreement under changing conditions. |
| Littlejohn et al. (2025) / Wairagkar et al. (2025) streaming brain-to-voice / voice synthesis | Littlejohn et al. showed streaming brain-to-voice every 80 ms; Wairagkar et al. showed sub-10 ms inference, silence fallback, and a short horizon for a fixed neural-to-voice decoder. | This is strong L2-L3 evidence of a communication subsystem, but not whole-brain emulation. Beyond speed, long-term deployability cannot be judged unless tail latency, dropout, silence / false speech, recalibration burden, and fixed-decoder horizon are also provided. |
| Ye et al. (2025) generative language reconstruction + LLM | Combined fMRI-derived brain representations and text prompts inside an autoregressive LLM to generate language continuations. | This is an advance in generative language interfaces, but the fluency of the output still depends strongly on the prompt and LLM scaffold, and no-prompt generation remains harder. It does not immediately lead to brain-only reconstruction or emulation. |
| Flesher et al. (2021) bidirectional closed-loop BCI | By returning tactile feedback to motor decode, reduced the time for a robot grasp task from 20.9 seconds to 10.2 seconds. | This is a local closed-loop demonstration, stronger evidence than pure decode, but it targets the sensorimotor subsystem. Rather than whole-brain emulation, it is best read as evidence close to L3 in local circuits. |
| Merk et al. (2025) / Zhu et al. (2026) connectomics-informed therapeutic decoding | Across-patient invasive decoding without patient-individual training was shown for movement, while physiological-state decoding was shown for adaptive-DBS feedback guards under named disease and implant regimes. | This is strong progress in symptom- and state-conditioned therapeutic decoding, but it remains implant-target-, label-, and controller-conditioned. It does not show a universal invasive decoder, unrestricted internal-state readout, or emulation of causal brain dynamics. |
| MICrONS (2025) / Billeh et al. (2020) / Beiran & Litwin-Kumar (2025) stimulus-conditioned digital twin / connectome-constrained model | Sequential same-brain connectomics datasets, multiscale models, and connectome-constrained recurrent networks have advanced local conditional prediction under named tasks and recordings. | This is an important foundation in a direction similar to emulate, but the scope remains local and regime-bounded. Furthermore, as MICrONS (2025), Beiran & Litwin-Kumar (2025), and Prinz et al. (2004) show, output matching alone cannot establish faithful reproduction or a unique solution, because degeneracy remains across unmeasured parameters, unrecorded neurons, and omitted state families. |
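The degeneracy point is easy to see even in a toy model: when a circuit's output depends only on a product of internal parameters, infinitely many parameter settings match the output exactly. A deliberately trivial sketch of the Prinz-style argument, not a biophysical model:

```python
def two_layer_linear(x, w1, w2):
    """A two-parameter toy 'circuit' whose output depends only on w1 * w2."""
    return [w1 * w2 * xi for xi in x]

inputs = [0.0, 0.5, 1.0, 2.0]
out_a = two_layer_linear(inputs, w1=4.0, w2=0.5)   # internal parameters (4.0, 0.5)
out_b = two_layer_linear(inputs, w1=1.0, w2=2.0)   # internal parameters (1.0, 2.0)
# out_a == out_b even though the internal parameters differ,
# so output matching alone cannot identify (w1, w2).
```

Real circuits have the same problem at scale: matching recorded activity does not pin down unmeasured conductances, unrecorded neurons, or omitted state families, which is why this site audits degeneracy separately from fit quality.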
2026-03 Literature audit: four overreadings prohibited here
| Dangerous overreading | Why it is dangerous | Boundary currently supported by primary literature |
|---|---|---|
| open-vocabulary non-invasive decode → unrestricted thought reading | The word decoding of 2025 is certainly a step forward, but it is strongly influenced by task structure, candidate sets, participant cooperation, and modality differences. | What can be said relatively strongly from Tang (2023) and d'Ascoli (2025) is that conditional language / communication decoding has progressed. It does not follow that internal states can be uniquely restored in general or that WBE-required states can be retrieved. |
| streaming speech neuroprosthesis → emulate / WBE | Streaming and voice synthesis are great achievements of the communication subsystem, but being able to speak quickly and having internal causality are two different things. | Littlejohn (2025) and Wairagkar (2025) pushed the L2-L3 of the invasive communication route, and as Wilson (2025) shows, long-term recalibration burden is another barrier. |
| non-causal offline gain → deployable real-time loop | Decoders that can use future context or post-onset auditory feedback can have an advantage over causal decoders that can be used in closed loops. | What we can say relatively strongly from Chen (2024) is that real-time claims cannot be made unless the causal path is reported separately. |
| connectome-constrained prediction → unique internal mechanism | Even if a connectome or same-brain function is included, internal dynamics can degenerate if unmeasured biophysical parameters and hidden states remain. | What can be said relatively strongly from MICrONS (2025), Billeh (2020), and Beiran & Litwin-Kumar (2025) is that structural constraints help prediction. From there, you cannot proceed directly to state-complete reconstruction or a unique internal model. |
Six gates before replacing decode with emulate
| Gate | Why is it necessary | Minimum evidence you want |
|---|---|---|
| G1: Does brain-derived information exceed the prior? | If the language prior or candidate set is strong, the neural contribution can be overestimated from output fluency alone. | LM-only, no-brain, time-shuffle, and trial-shuffle baselines; disclosure of candidate-set size and participant cooperation. |
| G2: Does it follow the causal deployment path? | With future frames, teacher forcing, post-onset auditory feedback, or acausal windows, offline scores overestimate the real-time ceiling. | Disclosure of causal/non-causal, look-ahead window, feedback-contamination guard, and online inference path. |
| G3: Does it hold under unseen conditions and on different days? | Even if accuracy is high for the same subject, same day, and same task, the mechanism does not necessarily match. | OOD conditions, cross-day evaluation, separate stimulus set, out-of-subject evaluation, fixed-decoder deterioration curve, abstention rate. |
| G4: Does it respond to intervention? | To claim emulate, the model must predict not only observations but also the branch after perturbation. | Predictive matching for stimulus changes, ICMS/TMS, pharmacology, and task-rule changes. |
| G5: Is it stable under closed loop and long-term operation? | If the output changes the next input, offline accuracy no longer applies. Moreover, within-session speed and long-term deployability are separate issues. | End-to-end latency at P50/P95/P99, tail latency, silence / abstention, dropout, fixed-decoder interval, time since last supervised calibration, recalibration burden, recovery time, and, for therapeutic decoding, the controller family plus any physiological-state guard. |
| G6: Are the state variables sufficient and has degeneracy been audited? | The same output can arise from different sets of internal parameters. Hiding the state deficit and the degeneracy of the model family makes the claim an overstatement. | In addition to connectome-only baseline and augmentation comparison, family comparison, uncertainty, and cell type / synaptic state / delay / neuromodulation / glia, also record sorting version, drift correction, and unit-match probability for chronic microelectrode systems. |
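The G5 latency columns can be computed directly from per-output timestamps. A minimal sketch using the nearest-rank percentile convention (one of several common definitions); the example latencies are illustrative.

```python
def percentile(samples, p):
    """Nearest-rank percentile: smallest value with at least p% of samples <= it."""
    s = sorted(samples)
    k = max(0, -(-len(s) * p // 100) - 1)  # ceil(n * p / 100) - 1, clamped at 0
    return s[int(k)]

def latency_report(latencies_ms):
    """P50/P95/P99 end-to-end latency columns for a closed-loop log."""
    return {p: percentile(latencies_ms, p) for p in (50, 95, 99)}

# Illustrative streaming latencies in milliseconds: one slow outlier
# dominates the tail even though the median looks healthy.
report = latency_report([78, 80, 81, 79, 83, 95, 80, 82, 250, 81])
```

This is why the gate asks for tail percentiles and not just an average: a single stalled output can make a closed loop unusable while leaving the mean nearly unchanged.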
Operation rules for this site
Rule
- Conditions for writing decode: when the demonstration focuses on predicting meanings, stimuli, actions, or sentences from observed signals, and even if a neural contribution exceeding the LM-only or shuffle baseline is shown, no intervention matching or causal real-time path is shown.
- Conditions for writing L2-L3 of a communication subsystem: even when a local loop is established, such as speech BCI or tactile BCI, clearly state that the target is a limited subsystem; include the causal decoder path, latency / silence / fixed-decoder interval / recalibration burden, and, for microelectrode systems, a unit-identity audit.
- Conditions for writing therapeutic invasive decoding: if the paper aims at symptom-linked adaptive stimulation or therapeutic state recognition, name the symptom/state label, implant target / coverage, whether the model works without patient-individual training or only after warm-up, any connectomics / network prior, the controller family, and any physiological-state guard before raising the claim ceiling.
- Conditions for writing local emulation: when a local circuit shows both a closed loop and a causal intervention, and what is replaced is specified in a limited manner.
- Conditions for writing close to WBE: only when six points are met: exceedance of the prior, causal deployment guard, OOD/cross-day generalization, perturbation matching, closed-loop long-term stability, and an integrity audit of state variables.
- When only matching output: Use expressions such as avatar, behavioral clone, decoder, and language interface, not emulate.
- Treatment of connectome-constrained success: predictive gain, stimulus-conditioned digital-twin models, and connectome-constrained success are positioned as advances in structural/functional scaffolds, and are not described as the only solution or state-complete reconstruction.
References
- Tang, J., LeBel, A., Jain, S., et al. (2023). Semantic reconstruction from non-invasive brain recordings. Nature Neuroscience, 26, 858–866. doi:10.1038/s41593-023-01304-9
- Défossez, A., Caucheteux, C., Rapin, J., et al. (2023). Decoding speech perception from non-invasive brain recordings. Nature Machine Intelligence, 5, 1097–1107. doi:10.1038/s42256-023-00714-5
- d'Ascoli, S., Bel, C., Rapin, J., et al. (2025). Towards decoding individual words from non-invasive brain recordings. Nature Communications, 16, 10521. doi:10.1038/s41467-025-65499-0
- Chen, Z., Yao, D., Wang, M., et al. (2024). A neural speech decoding framework leveraging deep learning and speech synthesis. Nature Machine Intelligence, 6, 1816–1827. doi:10.1038/s42256-024-00837-5
- Singh, V., Papangelou, A., Sharma, M., et al. (2025). Transfer learning via distributed brain recordings enables reliable speech decoding. Nature Communications, 16, 5364. doi:10.1038/s41467-025-63825-0
- Willett, F. R., Kunz, E. M., Fan, C., et al. (2023). A high-performance speech neuroprosthesis. Nature, 620, 1031–1036. doi:10.1038/s41586-023-06377-x
- Littlejohn, K. T., Dabagia, M., Ladwig, A., et al. (2025). A streaming brain-to-voice neuroprosthesis to restore naturalistic communication. Nature Neuroscience, 28, 902–912. doi:10.1038/s41593-025-01905-6
- Wairagkar, M., Card, N. S., Singer-Clark, T., et al. (2025). An instantaneous voice-synthesis neuroprosthesis. Nature, 644, 145–152. doi:10.1038/s41586-025-09127-3
- Ye, Z., Ai, Q., Liu, Y., de Rijke, M., Zhang, M., Lioma, C., & Ruotsalo, T. (2025). Generative language reconstruction from brain recordings. Communications Biology, 8, 346. doi:10.1038/s42003-025-07731-7
- Merk, T., Li, N.-F., Butenko, K., et al. (2025). Invasive neurophysiology and whole brain connectomics for neural decoding in patients with brain implants. Nature Biomedical Engineering. doi:10.1038/s41551-025-01467-9
- Oehrn, C. R., Roediger, J., Diehl, A., et al. (2024). Chronic adaptive deep brain stimulation versus conventional stimulation in Parkinson's disease: a blinded randomized feasibility trial. Nature Medicine, 30, 2613–2622. doi:10.1038/s41591-024-03196-z
- Zhu, G.-Y., Merk, T., Butenko, K., et al. (2026). Decoding the impact of visual states on adaptive deep brain stimulation feedback signals in movement disorders. npj Parkinson's Disease, 12, 61. doi:10.1038/s41531-026-01273-3
- Wilson, G. H., Stein, E. A., Kamdar, F., et al. (2025). Long-term unsupervised recalibration of cursor-based intracortical brain-computer interfaces using a hidden Markov model. Nature Biomedical Engineering. doi:10.1038/s41551-025-01536-z
- Pun, T. K., Khoshnevis, M., Hosman, T., et al. (2024). Measuring instability in chronic human intracortical neural recordings towards stable, long-term brain-computer interfaces. Communications Biology, 7, 1363. doi:10.1038/s42003-024-06784-4
- Steinmetz, N. A., Aydin, C., Lebedeva, A., et al. (2021). Neuropixels 2.0: A miniaturized high-density probe for stable, long-term brain recordings. Science, 372(6539), eabf4588. doi:10.1126/science.abf4588
- Pachitariu, M., Sridhar, S., Pennington, J., & Stringer, C. (2024). Spike sorting with Kilosort4. Nature Methods, 21, 914–921. doi:10.1038/s41592-024-02595-5
- van Beest, E. H., Bimbard, C., Fabre, J. M. J., et al. (2025). Tracking neurons across days with high-density probes. Nature Methods, 22, 778–787. doi:10.1038/s41592-024-02440-1
- Flesher, S. N., Downey, J. E., Weiss, J. M., et al. (2021). A brain-computer interface that evokes tactile sensations improves robotic arm control. Science, 372(6544), 831–836. doi:10.1126/science.abd0380
- MICrONS Consortium, et al. (2025). Functional connectomics spanning multiple areas of mouse visual cortex. Nature, 640, 435–447. doi:10.1038/s41586-025-08790-w
- Beiran, M., & Litwin-Kumar, A. (2025). Prediction of neural activity in connectome-constrained recurrent networks. Nature Neuroscience, 28, 1323–1334. doi:10.1038/s41593-025-02080-4
- Billeh, Y. N., Cai, B., Gratiy, S. L., et al. (2020). Systematic Integration of Structural and Functional Data into Multi-scale Models of Mouse Primary Visual Cortex. Neuron, 106(3), 388–403.e18. doi:10.1016/j.neuron.2020.01.040
- Prinz, A. A., Bucher, D., & Marder, E. (2004). Similar network activity from disparate circuit parameters. Nature Neuroscience, 7, 1345–1352. doi:10.1038/nn1352
Why is this distinction important
Without this distinction, individual advances such as "we produced sentences from brain signals," "closed loop made it a little better," and "the stimulus-conditioned digital twin worked" would be misread as an overall achievement of WBE. At Mind-Upload, to avoid this leap, we put the claim ladder and the verification foundation first.
Next
Click here to see claim levels and the strength of required evidence.
How to read claims and evidence →