Wiki

Wiki: Closed Loop, Delay, Jitter, Safe Stop

Closed-loop time requirements are not a single number; they vary by loop type

Mind Uploading Research Project

Public Page Updated: 2026-03-31 Learning guide / evidence refresh

How to use this page

Read this first to avoid getting lost

This page is a wiki that organizes delay, jitter, drift, safety stop, body/environment boundary, and long-horizon deployability in Mind-Upload's L3 'closed loop' using primary literature. The purpose is to clarify two things: even when offline accuracy is high, the required timing budget depends on the loop band and actuator; and low latency alone does not tell you which sensory, motor, interoceptive, reafferent, or slow internal-milieu routes were actually preserved, whether a fixed decoder survived across time, how much rescue-mode programming was needed, or, for burst-driven neuromodulation, which biomarker and controller were actually operating.

  • Closed-loop time requirements vary by loop type, not by a single ms value.
  • Low latency is not the same as reproducing the relevant body/environment boundary, because the fast loop and the slow internal milieu are different audits.
  • A same-session fast loop, a fixed decoder that survives across days, an adaptively rescued loop, and a chronically deployable loop are different achievements.
  • Online improvement is not one object: user-side learning, decoder updates, and application-side shaping must be separated before gains are read as stability.
  • Even if the event marker is accurate to less than 1 ms, that is a different matter from guaranteeing end-to-end timing for the entire system.
  • Phase error is more important than ms for phase-targeting, and for adaptive DBS the timing story now has to be separated from fixed-decoder durability, programming burden, and eligibility.
  • For phase-targeted stimulation, low mean latency is still not enough: oscillation presence / power / SNR gate, causal estimator benchmark, circular targeting error, and no-stim / missed-trigger rates must be separated.
  • For burst-driven neuromodulation, the main question is no longer burst timing alone: biomarker family, controller mode, movement / medication state, sensing compatibility, and biomarker-linked comparator have to be separated.
  • Streaming speech BCI needs to record not only average delay but also tail latency, output-path audit, silence/hold-last-output, and fixed-decoder horizon in separate logs.
  • Adaptive-DBS papers need to log rescue-mode optimization, clinic/home transfer, eligibility, continuation, and biomarker/controller choice separately from symptom benefit.
Best for
People who want to read about L3 closed-loop evaluation and real-time operation based on literature rather than general information
Reading time
14-22 minutes
Accuracy note
Here, we do not set a "fixed threshold common to all loops." We also do not treat a fast loop as boundary-complete, temporally durable, or chronically deployable by default. Judgments are written on the premise that end-to-end timing indicators, retained/substituted body/environment routes, slow internal-milieu routes, fixed-decoder interval, co-adaptation regime, rescue-mode adaptation burden, deployment slices, and, for phase-targeting loops, oscillation estimability plus causal-versus-post-hoc targeting benchmarks, and, for burst-driven loops, biomarker family plus controller policy are disclosed explicitly.

Relatively clear at this stage

What we know now

  • Offline accuracy and closed-loop stability are separate claims and cannot be audited with the same score.
  • Even a fast loop can remain boundary-incomplete if self-motion, predicted reafference, tactile feedback, respiration, arousal, circadian phase, glucocorticoid state, insulin / metabolic regime, or other subject-defining routes stay omitted or undisclosed.
  • Latency and jitter tolerances vary for state feedback, ERP/command BCI, streaming communication, phase-locked stimulation, and burst-driven neuromodulation.
  • Unless you measure input, processing, output, and return end-to-end, you do not know the timing at which the system actually operates.
  • Closed-loop gains can come from co-adaptation of the user, decoder, and application rather than from a stable fixed decoder alone.
  • Fixed-decoder durability and rescue-mode recalibration are separate evidence objects; one can fail while the other still rescues behavior.
  • Reliable phase locking is not the same as a reliable physiological or behavioral effect, and neither one fixes a stable optimal phase across sessions.
  • Burst-driven neuromodulation is not one controller family: beta power, beta burst duration, entrained gamma, dyskinesia-linked narrowband gamma, and movement-responsive decoder policies do not constrain the same symptom axis or operate on the same timescale.
  • Subthalamic beta is modulated by movement, dopaminergic medication, and stimulation itself, so a beta feedback signal tuned in one regime is not automatically valid in another.
  • Within-session speed-up alone is not enough; it leaves fixed-decoder horizon, recalibration burden, clinic/home transition, and programming burden unresolved.
  • Chronic adaptive-DBS symptom benefit, eligibility, and long-run continuation are different axes and should not be collapsed into one deployment verdict.

Still unresolved beyond this point

What we still do not know

  • It is unclear which loop types the closed-loop bandwidth required for WBE would have to cover, and how far it extends across them.
  • It is not yet possible to generalize the precision required for phase-specific control to all tasks in non-invasive human experiments.
  • It is not yet fixed which biomarker/controller pairing best generalizes across bradykinesia, gait impairment, dyskinesia control, and chronic home use in adaptive DBS.
  • What counts as an acceptable fixed-decoder horizon before rescue-mode adaptation becomes a different operating regime still depends on task and modality.
  • How fast or slow co-adaptation should be to help the user without hiding instability still depends on loop type, modality, and task.
  • How a phase-targeting protocol should adapt when the optimal phase drifts within-session or across sessions still depends on band, task, and subject.
  • How burst-driven loops should adapt when biomarker controllability changes with movement, medication cycle, contact choice, or artifact remains unsettled.
  • What is considered 'unstable' or 'impractical' in terms of drift, recalibration frequency, eligibility, continuation, and programming burden during long-term operation depends on the task.

Learn the basics

Check the basics in the wiki

What the wiki is for

The wiki is a learning aid. For the project's official current synthesis, success criteria, and operating rules, always return to the public pages.

The shortest conclusion

A closed loop is a system in which the output changes the next input. However, there is more than one required timing. The dominant time scales and failure modes differ for alpha neurofeedback, P300/ERP BCI, streaming speech neuroprosthesis, phase-locked stimulation, and adaptive DBS. Therefore, it is dangerous to adopt a common 1 ms or 10 ms threshold as the correct answer for the whole site.

What was fixed first in this organization

On this page, instead of talking about "how fast is enough" in an abstract way, we first fix which loop type we are dealing with, which delay breaks that loop, and what was actually measured with hardware. Event marker acceleration, LSL synchronization, phase tracking, and stopping rules are separate layers.

Timing audit is not the whole loop audit

This page now keeps timing logs separate from body/environment boundary logs. A loop can be fast and still remain boundary-incomplete if the paper does not say which sensory, action, interoceptive, self-generated-feedback, and slow internal-milieu routes were preserved, substituted, matched, perturbed, or omitted. On this site, low latency without that disclosure does not rise above a task-specific local controller or surrogate-body result.

Three public cards are stacked here, not one timing score

On this site, once a closed-loop claim leaves the narrow same-session timing question, it has to stack the Verification: Temporal Validity Card with the Verification: Body / Environment Boundary Card, and add the Calibration & Abstention Card whenever silence, abstention, or fallback behavior matters. A fast loop without those companion cards stays a bounded local-controller result.

2026-03-28 re-audit: co-adaptation is a separate evidence wall

The remaining blind spot was that the page could still let readers treat any online improvement as if it primarily reflected timing quality or long-horizon stability. The primary literature does not support that compression. Orsborn et al. (2014) showed that combined neural and decoder adaptation can itself shape neural representations. Perdikis et al. (2018) and Abu-Rmileh et al. (2019) showed that user learning and classifier adaptation evolve on different timescales in longitudinal EEG BCIs, and that adaptation that is too frequent can hinder subject learning. Wairagkar et al. (2025) and Wilson et al. (2025) then showed that modern speech and cursor loops still rely on per-session retraining, blockwise decoder updates, and explicit open-loop probes to estimate performance without closed-loop correction. Therefore, this site now treats co-adaptation / credit assignment as a separate wall rather than hiding it inside latency or recalibration.

2026-03-28 second re-audit: phase-targeting needs an estimability wall

One more shortcut remained. The page still allowed a reader to think that once a phase-targeted loop reports low latency and some phase error distribution, the main technical burden is already satisfied. The primary literature does not support that shortcut. Zrenner et al. (2020) showed that meaningful phase estimation itself degrades when oscillatory amplitude and SNR are low. Gordon et al. (2021) then showed that prefrontal theta targeting required extra constraints to avoid low-amplitude and phase-reset epochs. Vigué-Guix et al. (2022) achieved reliable trial-to-trial alpha phase locking yet did not obtain a consistent behavioral benefit, which means targeting success and functional effect must be kept separate. Kim et al. (2023) showed across 11 public datasets that higher power and SNR improve prediction accuracy and that waiting for eligible epochs matters more than forcing one cognitive state. Finally, Hougland et al. (2025) showed within-session fluctuations and low test-retest reliability of the optimal mu-phase. Therefore, phase-targeted stimulation on this site is now read through an estimability / targeting / effect / stability stack, not one timing number.

2026-03-31 re-audit: burst-driven neuromodulation needs a controller wall too

Another shortcut remained on the adaptive-DBS side. The page still let a reader treat burst timing or beta-trigger latency as if that were the main technical burden once phase-targeting had already been split more carefully. The newer primary literature does not support that shortcut. Mathiopoulou et al. (2024) showed that subthalamic beta is modulated differently by movement, medication, and stimulation. Stanslaski et al. (2024) showed that single-threshold and dual-threshold aDBS are different control modes with different timescales and therapeutic goals. Oehrn et al. (2024), Olaru et al. (2024), and Mathiopoulou et al. (2025) then showed that entrained gamma, dyskinesia-linked narrowband gamma, and personalized high-versus-low dopaminergic-state markers do not constrain the same symptom axis. Busch et al. (2025) and Cascino et al. (2026) further showed that sensing compatibility, threshold setting, signal artifacts, and patient eligibility remain concrete bottlenecks. Therefore, burst-driven neuromodulation on this site is now read through a biomarker / controller / delivery / effect / deployability stack, not one burst-timing story.

Why fixed thresholds are dangerous

Wilson et al. (2010) showed that for relatively slow BCI indicators such as mu-rhythm amplitude, a small delay of about 10 ms does not necessarily destroy performance, but if the latency/jitter of the entire system is not measured, the output path and display become rate-limiting. Conversely, Belinskaia et al. (2020) showed that with parietal alpha neurofeedback, an additional 250 ms / 500 ms delay worsened the learning effect. Furthermore, in phase-targeting systems such as Mansouri et al. (2018) and Zrenner et al. (2018), the delay should be evaluated as the phase error relative to the frequency of interest, not simply as a ms value.

Reading principles

"Low latency is good" is generally correct, but it cannot immediately be said that "microsecond-level delay is required for all loops" or "1 ms or less is required for all loops." The correct question isin what loop band, what error breaks what.

If you want the row-level route

If you want the one-row operational packet that turns this principle into a public-safe route, continue with the U8-1 closed-loop delay-tolerance route packet. That packet keeps the question at the level of one named loop class, one KPI bundle, and one downgrade rule rather than a universal latency threshold.

Before milliseconds, fix which loop boundary was actually preserved

The weakness of the older timing-only reading was that it could still let a reader say, "the loop was fast, therefore the closed-loop problem is close to solved." That is too weak. Primary literature shows that sensory cortex and higher-order dynamics are continuously reshaped by self-motion, predicted sensory consequences, multisensory navigation cues, respiration, arousal, tactile feedback, circadian timing, glucocorticoid exposure, and metabolic state. Therefore, a low-latency controller is not automatically a boundary-complete controller.

Boundary component What primary literature shows Why timing alone is insufficient
self-motion / optic flow / proprioceptive coupling Saleem et al. (2013) showed that V1 neurons combine visual speed with run speed during navigation. A fast visual loop still differs from the biological loop if locomotion- and proprioception-linked inputs were absent, simulated, or silently simplified.
predicted reafference / sensorimotor mismatch Keller et al. (2012) showed mismatch-sensitive responses in behaving-mouse V1, supporting the idea that expected sensory feedback matters beyond passive stimulation. The loop is not characterized only by delay; it also depends on whether self-generated sensory consequences and mismatch signals were available at all.
vestibular and multisensory navigation cues Ravassard et al. (2013) showed that removing real-world multisensory cues changes hippocampal spatiotemporal selectivity in virtual reality. A low-latency virtual loop can still be a different loop class if vestibular and other navigation cues were missing or remapped.
corollary discharge of self-generated sensory consequences Schneider et al. (2014) showed a motor-to-auditory cortical circuit that suppresses sensory responses during movement. If a system does not disclose whether corollary-discharge-like routes or self-generated sensory predictions were preserved, timing alone cannot tell you whether the sensory loop is comparable.
respiration / arousal / organism-wide physiology Zelano et al. (2016) showed nasal-respiration coupling to human limbic oscillations, and Raut et al. (2025) showed that neural activity, physiology, and behavior share a structured arousal manifold. A brain-only fast controller can still omit organism-wide state variables that co-organize the loop in vivo.
slow endocrine / circadian / metabolic milieu de Quervain et al. (1998) showed glucocorticoid-dependent memory-retrieval impairment, Oei et al. (2007) showed hydrocortisone-linked decreases in human hippocampal and prefrontal retrieval activity, McCauley et al. (2020) plus Barone et al. (2023) showed circadian gating of hippocampal plasticity, and Birnie et al. (2023), Benedict et al. (2004), Reger et al. (2008), and Sherman et al. (2015) showed that corticosteroid rhythm, insulin signaling, and circadian-rhythm consistency can shift hippocampal plasticity, human memory, or hippocampal activity. The same visible input-output loop can still be a different biological loop class if clock phase, steroid state, or feeding / insulin regime were unmatched or left latent.
tactile contact feedback Flesher et al. (2021) showed that adding tactile feedback improves robotic-arm control in a bidirectional BCI. The main issue is not only whether the loop is fast, but which feedback channels were restored and which still remained absent.
movement-linked latent structure Musall et al. (2019) and Stringer et al. (2019) showed that ongoing behavior explains a large fraction of cortical and brainwide neural variance. Without a boundary card, a fast controller can overfit a narrow behavioral contract while still being read as a general closed-loop success.
Operating rule on this site

If the paper does not disclose which sensory, action, interoceptive, self-generated-feedback, and slow internal-milieu routes were retained, substituted, matched, perturbed, or omitted, this site does not promote the result from fast local loop to boundary-complete L3 evidence. The formal public rule is the Verification: Body / Environment Boundary Card; this wiki supplies the timing-side companion logic.

First, divide into 5 loop types

Loop type Typical example What the literature shows Logs that should be left first on this site
state feedback / neurofeedback Alpha neurofeedback: a system that reads out band power and returns visual feedback. Belinskaia et al. (2020) showed that an additional 250/500 ms delay worsens alpha neurofeedback learning; shorter delays were more beneficial for learning. Performance degradation curves for median/P95/P99 feedback latency, display path, and additional delay.
ERP / command BCI P300 speller or event-related control. Wilson et al. (2010) showed that it is necessary to decompose timing and measure hardware, and Mowla et al. (2017) showed that latency jitter lowers classification, so even if it is corrected, the negative effects cannot be completely eliminated. Correspondence with block jitter, stimulus onset measurement, trial-to-trial latency variance, and classification performance.
streaming communication / speech neuroprosthesis It is a system that continuously returns brain-to-text or brain-to-voice as audio or text. Littlejohn et al. (2025) demonstrated streaming brain-to-voice in 80 ms increments, and Wairagkar et al. (2025) demonstrated a loop that returns speech synthesis from raw neural input in less than 10 ms while returning silence for non-speech and overlapping speech. The key metrics here are not only average latency, but also tail latency, audio output path, and silence/abstention. per-step inference latency, cue-to-output latency distribution, audio driver latency, silence / false-speech rate, dropout, recalibration event.
phase-locked stimulation This is a system that delivers TMS/tES in accordance with the EEG phase. Mansouri et al. (2018) and Zrenner et al. (2018) demonstrated real-time phase targeting, but Zrenner et al. (2020), Gordon et al. (2021), Kim et al. (2023), and Hougland et al. (2025) show that the real bottleneck is not latency alone but whether the oscillation is estimable now, how the causal estimate is benchmarked, and whether the optimal phase is stable. Target band and spatial filter, power/SNR gate, no-stim rate, causal-versus-post-hoc benchmark, mean phase offset / circular spread, missed trigger, and any fixed-versus-adaptive phase policy.
burst/state-triggered neuromodulation Adaptive DBS using beta burst. Little et al. (2013) and Tinkhauser et al. (2017) established beta-based feedback, but Mathiopoulou et al. (2024), Stanslaski et al. (2024), Oehrn et al. (2024), Olaru et al. (2024), Busch et al. (2025), Mathiopoulou et al. (2025), and Cascino et al. (2026) show that the main burden is no longer burst timing alone but which biomarker is being controlled, which controller mode is used, and whether sensing and programming remain viable. biomarker family / symptom target, sensing contacts / signal-to-noise, controller mode, update interval / onset duration / ramp policy, false positive/negative, artifact-triggered resets, comparator condition, and rescue/programming burden.
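To make the table above scannable as data, here is an illustrative Python mapping of each loop class to the first logs this site asks for. The class names and log keys are this page's conventions, not a taxonomy from any cited paper.

```python
# Illustrative only: the five loop classes and the first logs this page asks for.
FIRST_LOGS = {
    "state_feedback": ["feedback_latency_median_p95_p99", "display_path",
                       "added_delay_degradation_curve"],
    "erp_command": ["stimulus_onset_measurement", "block_jitter",
                    "trial_latency_variance_vs_classification"],
    "speech_streaming": ["per_step_inference_latency", "cue_to_output_tail",
                         "audio_driver_latency", "silence_false_speech_rate",
                         "recalibration_events"],
    "phase_locked": ["band_and_spatial_filter", "power_snr_gate", "no_stim_rate",
                     "causal_vs_posthoc_benchmark", "circular_phase_error",
                     "missed_triggers"],
    "burst_triggered": ["biomarker_family_symptom_target", "sensing_contacts_snr",
                        "controller_mode", "update_onset_ramp_policy",
                        "false_pos_neg", "comparator", "rescue_programming_burden"],
}
```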

Phase-targeting is estimability-limited, not latency-limited

The older wording on this page already separated phase error from plain milliseconds. That was necessary, but it was not yet sufficient. Current primary literature shows that a phase-targeted loop can fail for at least five different reasons: the target oscillation may not be estimable in the current epoch, the causal estimator may not match the post-hoc benchmark, the circular targeting precision may be too weak, the loop may phase-lock without producing a reliable physiological or behavioral effect, or the best phase may drift within and across sessions. Therefore, this site now reads phase-targeted stimulation through the following stack rather than a single timing figure.

Layer to separate What the primary literature supports What must be logged What it still does not prove
oscillation gate / estimability Zrenner et al. (2020) showed that phase estimability worsens when oscillatory amplitude and SNR are low, Gordon et al. (2021) improved prefrontal theta targeting by excluding low-theta and phase-reset epochs, and Kim et al. (2023) showed across 11 public datasets that high power and SNR are the main practical conditions for better phase prediction. Target band, channel or spatial filter, spectral peak criterion, amplitude/SNR threshold, no-stim or wait rate, and any phase-reset rejection rule. That the loop really stimulated the intended phase in every eligible epoch, or that a functional effect followed.
causal estimator benchmark Mansouri et al. (2018) and Zrenner et al. (2018) made real-time phase-triggering feasible, while Zrenner et al. (2020) and Gordon et al. (2021) showed why the causal estimate has to be benchmarked against a non-causal or post-hoc phase estimate under the same signal class. Causal algorithm family, training window, forecast horizon, artifact blanking rule, post-hoc benchmark procedure, and whether the benchmark was run on non-stimulated or artifact-free matched epochs. That the chosen causal estimator is uniquely best, or that the phase effect is biologically meaningful.
targeting precision Bruegger & Abegg (2021) compared methods using mean phase offset, circular standard deviation, and prediction latency, and Holt et al. (2019) showed that narrower phase bins and repeated phase-consistent pulses materially change effect size. Mean phase offset, circular spread or equivalent circular error metric, phase-locking statistic at trigger, missed-trigger rate, and any phase-bin width or consecutive-cycle rule. That the targeted phase is the most effective phase for the claimed physiological or behavioral endpoint.
functional effect versus targeting success Vigué-Guix et al. (2022) achieved reliable trial-to-trial alpha phase locking in a real-time BCI yet found no consistent reaction-time modulation, showing that accurate targeting and useful behavioral control are different evidence objects. Off-target or random-phase comparator, sham or surrogate comparator when available, effect-size distribution for the downstream endpoint, and the stopped claim if targeting succeeded but the endpoint did not. That a phase-targeted loop improves cognition, therapy, or plasticity simply because phase locking worked.
phase stability and adaptation policy Hougland et al. (2025) showed within-session fluctuations and low test-retest reliability of the optimal mu-phase, which limits the generalizability of fixed-phase targeting across sessions. Whether the preferred phase was fixed or updated, within-session drift audit, across-session reliability, retuning trigger, and whether adaptation changes the claim from fixed-policy targeting to adaptive targeting. That one fixed phase generalizes across people, sessions, or task states without re-validation.
Revision rule on this site

If a phase-targeted loop reports only milliseconds or only a single average phase error, this page does not promote it to validated phase-specific control. The minimum readable object is a declared target band with an estimability gate, a causal-versus-post-hoc benchmark, circular targeting metrics, a functional comparator, and a fixed-versus-adaptive phase policy.
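As a minimal sketch of the circular targeting metrics named above, the function below computes the mean phase offset, circular spread, and mean resultant length from trigger-time phase estimates, using standard circular statistics (circular SD = sqrt(-2 ln R)). The synthetic example data are illustrative only.

```python
import numpy as np

def circular_targeting_metrics(trigger_phases_rad, target_rad=0.0):
    """Mean phase offset and circular spread of phases at stimulation triggers.

    Inputs are assumed to be post-hoc phase estimates at each trigger, in
    radians; R is the mean resultant length.
    """
    offsets = np.exp(1j * (np.asarray(trigger_phases_rad) - target_rad))
    R = np.abs(offsets.mean())                       # mean resultant length
    mean_offset_deg = np.degrees(np.angle(offsets.mean()))
    circ_sd_deg = np.degrees(np.sqrt(-2.0 * np.log(R))) if R > 0 else float("inf")
    return mean_offset_deg, circ_sd_deg, R

# Synthetic example: 200 triggers concentrated near, but not at, the target.
rng = np.random.default_rng(0)
print(circular_targeting_metrics(rng.vonmises(mu=0.2, kappa=8.0, size=200)))
```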

Burst-driven neuromodulation is controller-limited, not just burst-timed

The older wording on this page already said that burst-triggered neuromodulation is slower than phase-locking. That was directionally correct, but it was still too coarse. Current primary literature shows that an adaptive-DBS loop can fail or change meaning for at least five different reasons: the chosen biomarker may track a different symptom axis, the biomarker may be modulated by movement / medication / stimulation state, the controller law may operate on a different timescale, sensing contacts and artifacts may constrain whether the loop can even run, and a biomarker-linked control signal may still fail to show unique clinical superiority over an energy-matched comparator. Therefore, this site now reads burst-driven neuromodulation through the following stack rather than a single burst-timing figure.

Layer to separate What the primary literature supports What must be logged What it still does not prove
biomarker family / symptom target Little et al. (2013) and Tinkhauser et al. (2017) constrain a beta-burst antikinetic route, Olaru et al. (2024) constrains a dyskinesia-linked narrowband-gamma route, Oehrn et al. (2024) used personalized high-versus-low dopaminergic-state markers, and Mathiopoulou et al. (2025) constrains entrained gamma as a prokinetic biomarker candidate. Those are not the same control object. Signal family, frequency band, anatomical source, intended symptom axis, and whether the signal is read as antikinetic beta, dyskinesia-linked gamma, entrained prokinetic gamma, or another personalized state marker. That one adaptive-DBS signal generalizes across bradykinesia, gait impairment, dyskinesia, and medication-state control.
state dependence / controllability Mathiopoulou et al. (2024) showed that movement, dopaminergic medication, and DBS each modulate subthalamic beta differently, while Busch et al. (2025) documented that useful beta-threshold setting depends on patient-specific long-term modulation and can be misread by in-clinic snapshots alone. Medication state, rest versus movement slices, controllability test of the candidate signal, band-width or peak-selection rule, and whether thresholds were derived from clinic-only or chronic home data. That a signal tuned at rest or in one medication state stays equally informative during naturalistic behavior.
controller mode / timescale Stanslaski et al. (2024) showed that ADAPT-PD uses single-threshold control with 250 ms amplitude changes and dual-threshold control with 2.5 min up / 5 min down adjustment plus a programmable 1.2–2 s onset, while Wilkins et al. (2025) used a beta-burst-duration controller with a therapeutic floor, ceiling, and slow ramp policy for gait / freezing-of-gait. Controller family, single- versus dual-threshold or other policy class, update interval, onset duration, floor/ceiling amplitude, ramp rate, and whether one or both hemispheres drive the control law. That two aDBS papers used the same control strategy simply because both were called adaptive or beta-based.
sensing compatibility / artifact burden Stanslaski et al. (2024) reported that participants could exit ADAPT-PD because of signal artifact, inadequate LFP signal, or no acceptable aDBS mode, and Busch et al. (2025) showed no visible beta peak in 3/16 hemispheres, unilateral sensing in 4/8 patients, threshold drift, and outlier distortion during setup. Wilkins et al. (2025) likewise required sense-friendly configurations and slower ramps to reduce stimulation artefacts. Sensing contacts, signal-to-noise, unilateral versus bilateral sensing, excluded hemispheres, artifact-detection rule, threshold reset events, and whether the signal remained usable during movement and stimulation. That the controller would have been available under ordinary contact settings or chronic use without extra debugging and exclusions.
biomarker-linked control versus clinical effect Oehrn et al. (2024) showed improved motor symptoms and quality of life with personalized adaptive DBS in a four-patient pilot, but Wilkins et al. (2025) found that a randomly adapting DBS control with matched therapeutic window and TEED still performed similarly to cDBS and aDBS at group level on several acute metrics, which means biomarker linkage and clinical superiority are separate evidence objects. cDBS comparator, random / inverted / surrogate comparator when available, TEED or duty-cycle matching rule, chosen symptom endpoint, and the stopped claim when the biomarker tracks a state but does not show unique clinical benefit. That better biomarker tracking or a cleaner controller trace automatically produced unique symptom-level superiority.
deployability / programming burden Busch et al. (2025), Cascino et al. (2026), and Dixon et al. (2026) show that home use still depends on programming workflow, remote or manual rescue, eligibility, and continuation. In ADAPT-START, only 9 of 20 consecutive chronic cDBS patients were eligible and 5 remained on chronic aDBS by July 2025. Screened n, exclusion reasons, programming visits, remote or manual optimization route, home slice, continuation, and the manpower / time burden of maintaining the controller. That a controller with an interesting biomarker is already routine, broadly eligible, or low-burden clinical care.
Revision rule on this site

If a burst-driven loop reports only burst duration or only one average timing number, this page does not promote it to validated symptom-linked adaptive control. The minimum readable object is a named biomarker family and symptom target, a state-dependence / controllability audit, a declared controller mode with timescale, a sensing / artifact burden audit, a biomarker-linked comparator, and a deployment slice.
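To show why "controller mode with timescale" is a distinct log item, here is a minimal sketch of one update step of a generic single-threshold amplitude controller. All parameter names and values are placeholders for illustration; this is not the ADAPT-PD implementation or any cited device's control law.

```python
def single_threshold_step(biomarker, stim_amp, *, threshold, amp_floor,
                          amp_ceiling, step, ramp_limit):
    """One update of an illustrative single-threshold amplitude controller.

    Raises stimulation amplitude while the biomarker exceeds threshold and
    lowers it otherwise, clamped to a therapeutic floor/ceiling and a
    per-update ramp limit. Threshold, step, and ramp choices are exactly
    the controller-mode / timescale facts this page asks papers to log.
    """
    delta = step if biomarker > threshold else -step
    delta = max(-ramp_limit, min(ramp_limit, delta))
    return max(amp_floor, min(amp_ceiling, stim_amp + delta))
```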

Co-adaptation must be separated before online gains are interpreted

A remaining weakness at the L3 entry point was that "online performance improved" could still be read as if the same decoder had simply become more durable. Current primary literature does not support that shortcut. In closed-loop BCIs, improvement can come from user-side neural strategy learning, decoder-weight updates or pseudo-label self-training, and application / interaction redesign. If these are mixed, a fast loop is not yet evidence of a stable fixed decoder.

Source of apparent improvement What the primary literature supports What must be logged What it still does not prove
user-side learning Abu-Rmileh et al. (2019) compared a fixed classifier against regular adaptation over four days and showed different within-day versus between-day behaviour, while Perdikis et al. (2018) showed longitudinal subject learning and warned that frequent recalibration can hinder it. Fixed versus updated decoder schedule, practice dose, instruction changes, and within-day versus between-day curves. That the decoder itself was stable, or that gains will survive with no update.
decoder-side adaptation Orsborn et al. (2014) showed that combined neural and decoder adaptation can yield skillful control while reshaping neural representations, and Wilson et al. (2025) updated decoder weights after each closed-loop block while using open-loop probes to estimate performance without closed-loop effects. Update trigger, cadence, pseudo-label or supervision route, open-loop probe blocks, and frozen-comparator performance. That online gains came from a fixed decoder, or that they reflect user learning alone.
application / interaction shaping Perdikis et al. (2018) showed that control-paradigm refinement can facilitate subject learning, while Wairagkar et al. (2025) reported that participant engagement and enunciation influenced synthesis quality and retrained the decoder using previous-session data. Feedback policy, smoothing or evidence-accumulation rules, prompt or task scaffold, session-to-session interface changes, and engagement / fatigue notes. That the neural controller alone improved independent of interface or task redesign.
Revision rule on this site

A same-session online result must now name whether it is a fixed-policy loop or a co-adaptive loop. If the paper mixes user learning, decoder updates, and interface redesign without a frozen comparator or open-loop probe, this site does not promote the gain to fixed-decoder durability or portable deployment evidence.
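As a sketch of what separating the three gain sources could look like in practice, here is an illustrative per-session log record; the field names follow the table above and are this page's conventions, not a schema from any cited paper.

```python
from dataclasses import dataclass, field

@dataclass
class CoAdaptationLog:
    """Illustrative per-session record separating the three gain sources."""
    decoder_frozen: bool
    decoder_updates: list = field(default_factory=list)    # (block, trigger, supervision route)
    open_loop_probes: list = field(default_factory=list)   # probe blocks without closed-loop correction
    user_training_notes: str = ""                          # practice dose, instruction changes
    interface_changes: str = ""                            # smoothing, prompts, task scaffold
```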

2026-03 literature audit: five barriers that appear once a loop first works online

The remaining weakness of the previous version was that it still let readers compress long-horizon closed-loop evidence into a same-session timing problem. Looking at the primary literature for 2014-2025, the scientific bottlenecks after a loop first "moves" are not one axis. Closed-loop BCIs now force at least (1) co-adaptation / credit assignment, (2) output-path timing, (3) fixed-decoder durability, (4) rescue-mode recalibration / remote optimization burden, and (5) eligibility / continuation / clinic-home transfer to be logged separately. Therefore, this site does not raise L3 just because the loop runs online; it asks for the following five barriers as distinct evidence objects.

Wall What the primary literature now supports Revision policy on this page
co-adaptation / credit assignment Orsborn et al. (2014), Perdikis et al. (2018), Abu-Rmileh et al. (2019), Wairagkar et al. (2025), and Wilson et al. (2025) show that online gains can reflect mixed changes in user strategy, decoder weights, and application policy. A loop that improves online is therefore not automatically a durable fixed decoder. Record whether the decoder / thresholds / interaction policy were frozen or updated, when each change occurred, what open-loop or frozen-comparator probe was kept, and what part of the gain is attributed to user learning versus decoder adaptation.
tail latency / output path Littlejohn et al. (2025) showed streaming brain-to-voice in 80 ms steps and reported cue-to-audio timing rather than just decoder timing. Wairagkar et al. (2025) demonstrated sub-10 ms neural-to-voice synthesis while returning silence for non-speech and overlapping speech, which means output-path latency and fallback policy are part of the loop rather than post-processing detail. The average latency of the reasoner is not enough, and we leave the behavior of module-wise latency, cue-to-output tail, audio playback path, and silence/abstention separately.
fixed-decoder durability Wilson et al. (2025) made explicit that accumulating neural changes create periods in which users cannot use a static intracortical BCI reliably, and evaluated one-month operation against fixed-decoder comparators rather than hiding every failure behind adaptive rescue. That means a same-session fast loop and a fixed decoder that still works days later are not the same achievement. Report the fixed decoder interval, time since last supervised calibration, degradation curve under no-update conditions, and when the claim ceiling has to drop from durable fixed-decoder evidence to rescue-mode evidence.
rescue-mode recalibration / remote optimization burden Wilson et al. (2025) also showed multi-timescale unsupervised recalibration, Dixon et al. (2026) reported a machine-learning pipeline capable of remotely optimizing movement-responsive aDBS parameters in a home setting, and Busch et al. (2025) documented biomarker-selection, threshold-definition, and artifact-related maladaptation as programming burdens. Rescue is therefore a separate operating regime, not a free extension of fixed-decoder success. Log whether rescue was manual, unsupervised, or remotely optimized, what data and staff time it required, which parameters changed, how long recovery took, and whether performance after rescue is being compared fairly against the pre-rescue fixed-decoder slice.
eligibility / continuation / naturalistic transfer Oehrn et al. (2024) evaluated chronic adaptive DBS with both in-clinic and at-home recordings. Busch et al. (2025) reported that 6 of 8 patients chose to remain on adaptive DBS after two-week home evaluation, while Cascino et al. (2026) reported that only 9 of 20 consecutive chronic cDBS patients were eligible and 5 remained on chronic aDBS by July 2025. Eligibility and continuation therefore remain separate bottlenecks even after technical proof-of-principle. Not only lab success, but also screened n, exclusion reasons, clinic/home slice, continuation rate, programming visits, and stimulation-duty-cycle changes are recorded as required logs on the deployment side.
Points of criticism here

Therefore, just because "the fast loop worked once" or "the adaptive controller reduced the symptoms a little" does not mean that it can be used for a long time. A same-session online gain is not yet a credit-assigned fixed-policy result; a same-session fast loop is not yet a fixed decoder that still works tomorrow; a rescued loop is not yet an easy-to-program chronic controller; and a programmable chronic controller is not yet a broadly eligible and maintainable home-use route. Only after those barriers are passed separately can we read that we are approaching a deployable closed loop.

Which public card gets stacked when the loop leaves same-session

Evidence slice What it safely supports What it still does not support Public card stack on this site
same-session fixed-policy local loop That the declared subsystem can run online with measured timing under a frozen decoder / interaction policy and an explicit fallback policy. Cross-day durability, boundary completeness, easy clinical deployment. Timing log plus Calibration & Abstention Card when relevant.
same-session co-adaptive local loop That the coupled human + decoder + interface package can be trained online under a declared update policy and credit-assignment log. Fixed-decoder durability, user-independent stability, broad deployment. Timing log plus co-adaptation log plus Calibration & Abstention Card when relevant.
cross-day fixed-decoder loop That a decoder survives a declared no-update interval under declared state annotation and drift conditions. Adaptive rescue benefit, broad home-use scalability, solved embodiment. Temporal Validity Card plus timing log.
rescued / adaptively maintained loop That performance can be recovered under a declared update policy. That the original fixed decoder was durable, or that rescue burden is negligible. Temporal Validity Card plus update / rescue log and Calibration & Abstention Card.
naturalistic chronic therapeutic loop That the loop can remain useful under declared clinic/home and continuation constraints for the screened population. That the route is broadly eligible, easy to program, or body/environment complete by default. Temporal Validity Card plus Body / Environment Boundary Card plus deployment-burden log.
Reading rule

This page now blocks a common shortcut: same-session online is not quietly promoted to co-adaptation-aware, durable, rescued, or deployable. Those are five different evidence slices with different public cards and different failure modes.
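As a compact restatement, the sketch below encodes the card-stack table as data plus a promotion check. Slice and card names mirror the table and are this page's conventions; a trailing "?" marks a card required only when relevant.

```python
CARD_STACK = {
    "same_session_fixed_policy": ["timing_log", "calibration_abstention_card?"],
    "same_session_co_adaptive": ["timing_log", "co_adaptation_log",
                                 "calibration_abstention_card?"],
    "cross_day_fixed_decoder": ["temporal_validity_card", "timing_log"],
    "rescued_adaptive": ["temporal_validity_card", "update_rescue_log",
                         "calibration_abstention_card"],
    "naturalistic_chronic": ["temporal_validity_card",
                             "body_environment_boundary_card",
                             "deployment_burden_log"],
}

def promotable(slice_name: str, cards_present: set) -> bool:
    """True only if every unconditionally required card is present."""
    required = {c for c in CARD_STACK[slice_name] if not c.endswith("?")}
    return required <= cards_present
```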

What is measured end-to-end

Wilson et al.'s (2010) key point is that it is insufficient to measure signal processing latency alone. A closed loop is the entire path from the input to the output. The display, OS, driver, audio system, and stimulator may be different rate-limiting factors.

Interval What you need to know Typical measurement method
Input This is when the sensor actually detected a change. TTL, known pulse, DAQ input, stimulator marker output.
Processing This is how much time it took for preprocessing, estimation, and decision making. software timestamp, block duration, CPU/GPU logs.
Output This is when a display, sound, stimulus, or control signal physically occurs. Photodiode, microphone, loopback, stimulus artifact onset.
Return When the influence of the output is returned to the next input. Redetection, environmental sensor, and body response logs within a closed-loop task.
Average is not enough

In a closed loop, P95/P99/worst-case and trial-to-trial jitter can be more destructive than average delay. Especially in phase-targeting and safety-critical loops, just showing the average value does not provide any reassurance.
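A minimal sketch of the summary this paragraph asks for, assuming `latencies_ms` is an array of per-trial end-to-end latencies in milliseconds; the field names are illustrative.

```python
import numpy as np

def latency_summary(latencies_ms):
    """Median / P95 / P99 / worst-case plus two jitter definitions.

    The tail fields, not the mean, are what phase targets and
    safety budgets care about.
    """
    x = np.asarray(latencies_ms, dtype=float)
    return {
        "median_ms": float(np.median(x)),
        "p95_ms": float(np.percentile(x, 95)),
        "p99_ms": float(np.percentile(x, 99)),
        "worst_ms": float(x.max()),
        "jitter_sd_ms": float(x.std(ddof=1)),
        "jitter_p2p_ms": float(np.ptp(x)),
    }
```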

What LSL and event markers do and don't guarantee

Kothe et al.'s (2025) LSL paper shows that LSL provides offset correction and jitter compensation sufficient for millisecond-scale synchronization in neurobehavioral research. On the other hand, this is software-based synchronization on a LAN and does not automatically guarantee when the physical output of the stimulator or display occurs.

Appelhoff and Stenner (2021) showed that event marking with a USB microcontroller can produce latencies of less than 1 ms. However, this too is primarily marker-path accuracy. Even if the marker is fast, the end-to-end loop that includes the display, audio path, stimulator, and estimator does not necessarily have the same accuracy.
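A minimal sketch of this division of labor, assuming pylsl is installed and an LSL stream of type "EEG" is on the network (the stream type is an assumption for illustration): LSL's time correction aligns software clocks, but the physical onset still needs external verification.

```python
from pylsl import StreamInlet, resolve_byprop

# Resolve one stream; the type "EEG" is an assumption for illustration.
streams = resolve_byprop("type", "EEG", timeout=5.0)
inlet = StreamInlet(streams[0])

sample, ts = inlet.pull_sample(timeout=1.0)
offset = inlet.time_correction()   # estimated remote-to-local clock offset
local_ts = ts + offset             # comparable across streams on this host

# Note: local_ts is still the software-side event time. The physical onset
# on the display, speaker, or stimulator must be verified externally, e.g.
# with a photodiode, microphone, or loopback.
```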

Things to be divided here

  • LSL: Provides a common time base and offset correction across multiple streams.
  • TTL / MCU marker: Improves the accuracy of event marking on the acquisition side.
  • Photodiode / microphone / loopback: Externally verifies the actual output onset.
  • Phase tracker: Separately audits how much phase shift remains at the target frequency.

Abstain, freeze, and safety stop are different things

Mechanism Main purpose Typical trigger What to keep at a minimum
Abstain This is to avoid producing unreliable output when confidence is low. Insufficient classification probability, insufficient phase-estimation reliability, OOD detection. Abstention rate, confidence threshold at the time of abstention, and state after abstention.
hold-last-output / silence fallback This is to maintain continuity without increasing erroneous output during short uncertainties or non-speech intervals. Non-speech interval, decoder blank, short dropout, audio buffer underrun. Trigger rate, maximum duration, false-speech suppression rate, and release delay.
freeze / pause This is for recalibration and confirmation of the cause. Clock-offset increase, packet loss, drift deviation, resynchronization request. Invocation reason, duration, restart conditions, and recalibration details.
safety stop / containment This is to stop a dangerous actuation. P99 latency budget exceeded, abnormal amplitude, stimulation-prohibited phase, output saturation. Stop conditions, number of stops, preceding latency/phase/error, and manual-return conditions.
Do not mix performance and safety issues

"I didn't get it right, so I won't output", "I'll bridge a short gap with silence", "I'll pause because the system seems broken", and "I'll stop because it's dangerous" are completely different in operational terms. If you collapse everything into one "outage", you will not be able to trace the cause during review.
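A minimal sketch of keeping the four behaviors as distinct, separately logged states; the enum values and log fields are illustrative, not from any cited system.

```python
from enum import Enum

class FallbackMode(Enum):
    """Keep the four stop-like behaviors as distinct, separately logged states."""
    ABSTAIN = "low confidence: withhold output"
    HOLD_LAST = "short gap / non-speech: hold last output or stay silent"
    FREEZE = "system suspect: pause for recalibration and diagnosis"
    SAFETY_STOP = "dangerous actuation: stop and require manual return"

def log_fallback(mode: FallbackMode, trigger: str, state_before: dict) -> dict:
    # One record per activation; never collapse the modes into one "outage".
    return {"mode": mode.name, "trigger": trigger, "state_before": state_before}
```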

    The minimum log you want to keep

    Checklist

  • loop class: One of state feedback, ERP/command, speech/streaming, phase-locked, or burst-triggered.
  • co-adaptation regime: State whether the decoder, thresholds, smoothing, evidence accumulation, or interface rules were frozen or updated, and what triggered each change.
  • declared boundary / target subsystem: State whether the loop is speech, grasp, navigation, memory-task, symptom-control, or another subsystem, and state the maximum claim ceiling.
  • retained / substituted sensory and self-generated-feedback routes: List which visual, tactile, auditory, proprioceptive, vestibular, respiration-linked, or predicted reafferent cues were present, simulated, or omitted.
  • retained / substituted action channels: Name the actual plant or actuator, such as cursor, robotic hand, speech synthesizer, avatar, or stimulator, together with its controllable degrees of freedom.
  • interoceptive / arousal logs: Record whether respiration, pupil, HR / HRV, effort, fatigue, or similar organism-wide covariates were logged, manipulated, or left latent.
  • slow internal-milieu logs: Record whether circadian phase or clock time, recent sleep-wake schedule, cortisol / glucocorticoid assay or steroid treatment, feeding / fasting or glucose-insulin regime, and similar slow body-state covariates were controlled, measured, perturbed, or left latent.
  • end-to-end latency: Leave median, P95, P99, and worst-case separately.
  • module-wise latency: Separate input, inference, output, and return path, and note which is rate-limiting.
  • definition of jitter: Specify SD, IQR, or peak-to-peak.
  • clock offset / drift: Leave values before and after LSL and hardware-marker correction.
  • marker verification method: Write which of TTL, MCU, photodiode, microphone, or loopback was used for actual measurement.
  • loop-removal / ablation test: Report what happened when tactile feedback, self-motion cues, predicted sensory consequences, or another decisive route was removed, scrambled, or delayed.
  • credit-assignment probe: Keep fixed-policy or open-loop probe blocks so gains can be compared with and without closed-loop correction or online updates.
  • additional metrics for speech / streaming: Leave cue-to-output tail latency, audio driver latency, silence / hold-last-output rate, and false-speech rate.
  • additional metrics for phase-targeting systems: Target band and spatial filter, power/SNR gate, no-stim rate, causal-versus-post-hoc benchmark, mean phase offset, circular spread or equivalent circular metric, trigger-time phase-locking statistic, missed trigger, and any fixed-versus-adaptive phase policy.
  • additional metrics for burst systems: Name biomarker family and symptom target, sensing contacts and signal-to-noise, controller mode, medication / movement state, floor/ceiling amplitude, update interval / onset duration / ramp policy, false positive/negative, artifact-triggered resets, comparator condition, and any TEED / duty-cycle matching rule.
  • residual omitted loops / abstention boundary: State which body/environment routes remain absent and what stronger claim therefore remains forbidden.
  • abstain / freeze / safety stop: Leave the number of activations, preceding state, and return conditions.
  • fixed decoder interval / training-free horizon: State how long the system was required to run before any supervised or unsupervised update was allowed.
  • user/application training changes: Record practice dose, instruction changes, control-paradigm refinements, prompt or task-scaffold changes, and engagement / fatigue notes across sessions.
  • rescue-mode policy: Record whether unsupervised adaptation, manual reprogramming, or remote optimization was used, which parameters changed, and what manpower/time was required.
  • eligibility / continuation / naturalistic deployment: Leave clinic/home performance difference, screened n, exclusion reasons, continuation, programming visits, and duty cycle.
  • performance degradation curve: Record the point at which performance collapses when delay is artificially added.
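As a closing aid, here is an illustrative skeleton of one session's minimum log using the checklist keys above. Field names are this page's conventions; None marks values to be filled from actual measurement.

```python
loop_log = {
    "loop_class": "phase_locked",
    "co_adaptation_regime": {"decoder": "frozen", "thresholds": "frozen",
                             "interface": "frozen"},
    "end_to_end_latency_ms": {"median": None, "p95": None, "p99": None,
                              "worst": None},
    "module_latency_ms": {"input": None, "inference": None, "output": None,
                          "return": None},
    "jitter": {"definition": "SD", "value_ms": None},
    "marker_verification": ["TTL", "photodiode"],
    "fallback_counts": {"abstain": 0, "hold_last": 0, "freeze": 0,
                        "safety_stop": 0},
    "fixed_decoder_interval_days": None,
    "degradation_curve": [],   # (added_delay_ms, performance) pairs
}
```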

      16 questions when reading L3 arguments

1. Does it say which loop class it deals with? Check whether slow feedback, speech streaming, phase-locked, and aDBS are distinguished rather than lumped into one table.
      2. Does it declare which body/environment boundary it actually used? Check whether the paper fixes the target subsystem and names preserved, substituted, and omitted loops instead of only saying "closed loop."
      3. Are sensory, action, interoceptive, and slow internal-milieu routes disclosed? Look for tactile, proprioceptive, vestibular, respiration-linked, arousal-linked, circadian, glucocorticoid, and metabolic-state channels, not only the main output stream.
      4. Was any decisive loop component removed or scrambled? Check whether feedback-removal or sensory-ablation tests were run, rather than assuming robustness.
      5. Are there module-wise measurements, not just end-to-end? Don't just rely on software timestamps; check which of the input, inference, and output paths are rate-limiting.
      6. For speech / streaming, are silence and output path displayed? Check whether false speech, audio driver, or hold-last-output are hidden.
      7. Is delay mapped to phase error, burst timing, or controller-update timescale? Check whether the paper goes beyond a single ms value when phase targeting or adaptive stimulation policy is what matters.
      8. For phase-targeted loops, does it show the oscillation was estimable before triggering? Check whether power/SNR thresholds, no-stim epochs, and phase-reset rejection were declared rather than assuming every band-passed epoch has meaningful phase.
      9. For phase-targeted loops, does it separate targeting success from functional effect and from phase stability? Check whether the paper reports circular targeting precision, off-target or random-phase comparators, and whether the preferred phase was fixed or drifted across time.
      10. For burst-driven loops, does it name the biomarker family and symptom target? Check whether the paper distinguishes beta, beta-burst duration, entrained gamma, dyskinesia-linked gamma, or another personalized marker rather than saying only "adaptive DBS."
      11. For burst-driven loops, does it disclose controller mode, state dependence, and comparator? Check whether medication / movement dependence, single versus dual threshold or other policy, artifact burden, and cDBS or random / surrogate comparators are shown rather than only burst-trigger timing.
      12. Does it separate user learning, decoder updates, and interface redesign? Check whether the gain could come from co-adaptation rather than from a stable fixed decoder.
      13. Does it separate fixed-decoder durability from adaptive rescue? Check whether the paper shows the no-update slice rather than reporting only the post-update result.
      14. If rescue happened, is the rescue cost shown? Check whether staff time, parameter changes, remote optimization, or unsupervised adaptation are hidden.
      15. Are eligibility, continuation, and clinic/home transfer shown separately from symptom benefit? Check that deployability is not inferred from a small set of successfully programmed cases alone.
      16. Are abstentions, silence fallbacks, freezes, and safety stops separated? Confirm that danger-handling and low-confidence handling are not collapsed into one outage label.

      References

      1. Wilson JA, Mellinger J, Schalk G, Williams JC. A procedure for measuring latencies in brain-computer interfaces. IEEE Trans Biomed Eng. 2010;57(7):1785-1797. doi:10.1109/TBME.2010.2047259
      2. Thompson DE, Warschausky SA, Huggins JE. Classifier-based latency estimation: a novel way to estimate and predict BCI accuracy. J Neural Eng. 2013;10(1):016006. doi:10.1088/1741-2560/10/1/016006
      3. Mowla MR, Huggins JE, Thompson DE. Enhancing P300-BCI performance using latency estimation. Brain Comput Interfaces. 2017;4(3):137-145. doi:10.1080/2326263X.2017.1338010
      4. Belinskaia A, Smetanin N, Lebedev M, Ossadtchi A. Short-delay neurofeedback facilitates training of the parietal alpha rhythm. J Neural Eng. 2020;17(6):066012. doi:10.1088/1741-2552/abc8d7
      5. Mansouri F, Fettes P, Schulze L, et al. A Real-Time Phase-Locking System for Non-invasive Brain Stimulation. Front Neurosci. 2018;12:877. doi:10.3389/fnins.2018.00877
      6. Zrenner C, Desideri D, Belardinelli P, Ziemann U. Real-time EEG-defined excitability states determine efficacy of TMS-induced plasticity in human motor cortex. Brain Stimul. 2018;11(2):374-389. doi:10.1016/j.brs.2017.11.016
      7. Holt AB, Kormann E, Gulberti A, et al. Phase-Dependent Suppression of Beta Oscillations in Parkinson's Disease Patients. J Neurosci. 2019;39(6):1119-1134. doi:10.1523/JNEUROSCI.1913-18.2018
      8. Zrenner C, Galevska D, Nieminen JO, Baur D, Stefanou MI, Ziemann U. The shaky ground truth of real-time phase estimation. Neuroimage. 2020;214:116761. doi:10.1016/j.neuroimage.2020.116761
      9. Gordon PC, Dörre S, Belardinelli P, Stenroos M, Zrenner B, Ziemann U, Zrenner C. Prefrontal Theta-Phase Synchronized Brain Stimulation With Real-Time EEG-Triggered TMS. Front Hum Neurosci. 2021;15:691821. doi:10.3389/fnhum.2021.691821
      10. Bruegger D, Abegg M. Prediction of cortical theta oscillations in humans for phase-locked visual stimulation. J Neurosci Methods. 2021;361:109288. doi:10.1016/j.jneumeth.2021.109288
      11. Vigué-Guix I, Morís Fernández L, Torralba Cuello M, Ruzzoli M, Soto-Faraco S. Can the occipital alpha-phase speed up visual detection through a real-time EEG-based brain-computer interface (BCI)? Eur J Neurosci. 2022;55(11-12):3224-3240. doi:10.1111/ejn.14931
      12. Kim B, Erickson BA, Fernandez-Nunez G, Rich R, Mentzelopoulos G, Vitale F, Medaglia JD. EEG Phase Can Be Predicted with Similar Accuracy across Cognitive States after Accounting for Power and Signal-to-Noise Ratio. eNeuro. 2023;10(9):ENEURO.0050-23.2023. doi:10.1523/ENEURO.0050-23.2023
      13. Hougland JR, Kirchhoff M, Vetter DE, Ahola O, Jooß A, Humaidan D, Ziemann U. Fluctuations in the optimal sensorimotor mu-rhythm phase associated with high corticospinal excitability during TMS-EEG. Brain Stimul. 2025;18(6):1843-1851. doi:10.1016/j.brs.2025.09.019
      14. Little S, Pogosyan A, Neal S, et al. Adaptive deep brain stimulation in advanced Parkinson disease. Ann Neurol. 2013;74(3):449-457. doi:10.1002/ana.23951
      15. Tinkhauser G, Pogosyan A, Little S, et al. The modulatory effect of adaptive deep brain stimulation on beta bursts in Parkinson's disease. Brain. 2017;140(4):1053-1067. doi:10.1093/brain/awx010
      16. Mathiopoulou V, Lofredi R, Feldmann LK, et al. Modulation of subthalamic beta oscillations by movement, dopamine, and deep brain stimulation in Parkinson's disease. npj Parkinsons Dis. 2024;10:77. doi:10.1038/s41531-024-00693-3
      17. Stanslaski S, Summers RLS, Tonder L, et al. Sensing data and methodology from the Adaptive DBS Algorithm for Personalized Therapy in Parkinson's Disease (ADAPT-PD) clinical trial. npj Parkinsons Dis. 2024;10:174. doi:10.1038/s41531-024-00772-5
      18. Olaru M, et al. Motor network gamma oscillations in chronic home recordings predict dyskinesia in Parkinson's disease. Brain. 2024;147:2038-2052. doi:10.1093/brain/awae004
      19. Appelhoff S, Stenner T. In COM we trust: Feasibility of USB-based event marking. Behav Res Methods. 2021;53(6):2450-2455. doi:10.3758/s13428-021-01571-z
      20. Kothe C, Shirazi SY, Stenner T, et al. The lab streaming layer for synchronized multimodal recording. Imaging Neurosci. 2025;3:IMAG.a.136. doi:10.1162/IMAG.a.136
      21. Keller GB, Bonhoeffer T, Hubener M. Sensorimotor mismatch signals in primary visual cortex of the behaving mouse. Neuron. 2012;74(5):809-815. doi:10.1016/j.neuron.2012.03.040
      22. Saleem AB, Ayaz A, Jeffery KJ, Harris KD, Carandini M. Integration of visual motion and locomotion in mouse visual cortex. Nat Neurosci. 2013;16(12):1864-1869. doi:10.1038/nn.3567
      23. Ravassard P, Kees A, Willers B, et al. Multisensory control of hippocampal spatiotemporal selectivity. Science. 2013;340(6138):1342-1346. doi:10.1126/science.1232655
      24. Schneider DM, Nelson A, Mooney R. A synaptic and circuit basis for corollary discharge in the auditory cortex. Nature. 2014;513(7517):189-194. doi:10.1038/nature13724
      25. Zelano C, Jiang H, Zhou G, et al. Nasal respiration entrains human limbic oscillations and modulates cognitive function. J Neurosci. 2016;36(49):12448-12467. doi:10.1523/JNEUROSCI.2586-16.2016
      26. Musall S, Kaufman MT, Juavinett AL, Gluf S, Churchland AK. Single-trial neural dynamics are dominated by richly varied movements. Nat Neurosci. 2019;22:1677-1686. doi:10.1038/s41593-019-0502-4
      27. Stringer C, Pachitariu M, Steinmetz N, et al. Spontaneous behaviors drive multidimensional, brainwide activity. Science. 2019;364(6437):eaav7893. doi:10.1126/science.aav7893
      28. Flesher SN, Downey JE, Weiss JM, et al. A brain-computer interface that evokes tactile sensations improves robotic arm control. Science. 2021;372(6544):831-836. doi:10.1126/science.abd0380
      29. Raut RV, Rosenthal ZP, Wang X, et al. Arousal as a universal embedding for spatiotemporal brain dynamics. Nature. 2025;647:454-461. doi:10.1038/s41586-025-09544-4
      30. Littlejohn KT, Dabagia M, Ladwig A, et al. A streaming brain-to-voice neuroprosthesis to restore naturalistic communication. Nat Neurosci. 2025. doi:10.1038/s41593-025-01905-6
      31. Wairagkar M, Card NS, Singer-Clark T, et al. An instantaneous voice-synthesis neuroprosthesis. Nature. 2025. doi:10.1038/s41586-025-09127-3
      32. Mathiopoulou V, Habets J, Feldmann LK, et al. Gamma entrainment induced by deep brain stimulation as a biomarker for motor improvement with neuromodulation. Nat Commun. 2025;16:2956. doi:10.1038/s41467-025-58132-7
      33. Wilkins KB, Melbourne JA, Akella P, et al. Beta burst-driven adaptive deep brain stimulation for gait impairment and freezing of gait in Parkinson's disease. Brain Commun. 2025. PMCID: PMC12268161
      34. Wilson GH, Stein EA, Kamdar F, et al. Long-term unsupervised recalibration of intracortical brain-computer interfaces using a hidden Markov model. Nat Biomed Eng. 2025. doi:10.1038/s41551-025-01536-z
      35. Oehrn CR, Roediger J, Diehl A, et al. Chronic adaptive deep brain stimulation versus conventional stimulation in Parkinson's disease: a blinded randomized feasibility trial. Nat Med. 2024. doi:10.1038/s41591-024-03196-z
      36. Cascino S, Roediger J, Oehrn C, et al. Chronic adaptive deep brain stimulation in Parkinson's disease: ADAPT-START findings and programming principles. npj Parkinsons Dis. 2026. doi:10.1038/s41531-026-01269-z
      37. Dixon TC, Strandquist G, Zeng A, et al. Movement-responsive deep brain stimulation for Parkinson’s disease using a remotely optimized neural decoder. Nat Biomed Eng. 2026;10:110-124. doi:10.1038/s41551-025-01438-0
      38. Busch JL, Kaplan J, Behnke JK, et al. Chronic adaptive deep brain stimulation for Parkinson’s disease: clinical outcomes and programming strategies. npj Parkinsons Dis. 2025;11:264. doi:10.1038/s41531-025-01124-7
      39. Orsborn AL, Moorman HG, Overduin SA, Shanechi MM, Dimitrov DF, Carmena JM. Closed-loop decoder adaptation shapes neural plasticity for skillful neuroprosthetic control. Neuron. 2014;82(6):1380-1393. doi:10.1016/j.neuron.2014.04.048
      40. Perdikis S, Tonin L, Saeedi S, et al. The Cybathlon BCI race: successful longitudinal mutual learning with two tetraplegic users. PLoS Biol. 2018;16(5):e2003787. doi:10.1371/journal.pbio.2003787
      41. Abu-Rmileh A, Zakkay E, Shmuelof L, Shriki O. Co-adaptive training improves efficacy of a multi-day EEG-based motor imagery BCI training. Front Hum Neurosci. 2019;13:362. doi:10.3389/fnhum.2019.00362
      42. Lin CY, Lu CF, Jao CW, Wang PS, Wu YT. Toward consistency between humans and classifiers: improved performance of a real-time brain-computer interface using a mutual learning system. Expert Syst Appl. 2023;226:120205. doi:10.1016/j.eswa.2023.120205

      Where to go back next

If you want to go back to the overall design of L3, please use Verification Platform; if you want to return to EEG and synchronization practices, please use Introduction to EEG; and if you want to return to Roadmap I1/I8, please use Technology Roadmap.
