The shortest conclusion
A closed loop is a system in which the output changes the next input. However, more than one timescale is involved. The dominant time scales and failure modes differ for alpha neurofeedback, P300/ERP BCI, streaming speech neuroprostheses, phase-locked stimulation, and adaptive DBS. Therefore, it is dangerous to declare a common 1 ms or 10 ms threshold as the correct answer for the whole site.
On this page, instead of asking "how fast is enough" in the abstract, we first fix which loop type we are dealing with, which delay breaks that loop, and what was actually measured in hardware. Event-marker acceleration, LSL synchronization, phase tracking, and stopping rules are separate layers.
This page now keeps timing logs separate from body/environment boundary logs. A loop can be fast and still remain boundary-incomplete if the paper does not say which sensory, action, interoceptive, self-generated-feedback, and slow internal-milieu routes were preserved, substituted, matched, perturbed, or omitted. On this site, low latency without that disclosure does not rise above a task-specific local controller or surrogate-body result.
On this site, once a closed-loop claim leaves the narrow same-session timing question, it has to stack the Verification: Temporal Validity Card with the Verification: Body / Environment Boundary Card, and add the Calibration & Abstention Card whenever silence, abstention, or fallback behavior matters. A fast loop without those companion cards stays a bounded local-controller result.
The remaining blind spot was that the page could still let readers treat any online improvement as if it primarily reflected timing quality or long-horizon stability. The primary literature does not support that compression. Orsborn et al. (2014) showed that combined neural and decoder adaptation can itself shape neural representations. Perdikis et al. (2018) and Abu-Rmileh et al. (2019) showed that user learning and classifier adaptation evolve on different timescales in longitudinal EEG BCIs, and that adaptation that is too frequent can hinder subject learning. Wairagkar et al. (2025) and Wilson et al. (2025) then showed that modern speech and cursor loops still rely on per-session retraining, blockwise decoder updates, and explicit open-loop probes to estimate performance without closed-loop correction. Therefore, this site now treats co-adaptation / credit assignment as a separate wall rather than hiding it inside latency or recalibration.
One more shortcut remained. The page still allowed a reader to think that once a phase-targeted loop reports low latency and some phase error distribution, the main technical burden is already satisfied. The primary literature does not support that shortcut. Zrenner et al. (2020) showed that meaningful phase estimation itself degrades when oscillatory amplitude and SNR are low. Gordon et al. (2021) then showed that prefrontal theta targeting required extra constraints to avoid low-amplitude and phase-reset epochs. Vigué-Guix et al. (2022) achieved reliable trial-to-trial alpha phase locking yet did not obtain a consistent behavioral benefit, which means targeting success and functional effect must be kept separate. Kim et al. (2023) showed across 11 public datasets that higher power and SNR improve prediction accuracy and that waiting for eligible epochs matters more than forcing one cognitive state. Finally, Hougland et al. (2025) showed within-session fluctuations and low test-retest reliability of the optimal mu-phase. Therefore, phase-targeted stimulation on this site is now read through an estimability / targeting / effect / stability stack, not one timing number.
Another shortcut remained on the adaptive-DBS side. The page still let a reader treat burst timing or beta-trigger latency as if that were the main technical burden once phase-targeting had already been split more carefully. The newer primary literature does not support that shortcut. Mathiopoulou et al. (2024) showed that subthalamic beta is modulated differently by movement, medication, and stimulation. Stanslaski et al. (2024) showed that single-threshold and dual-threshold aDBS are different control modes with different timescales and therapeutic goals. Oehrn et al. (2024), Olaru et al. (2024), and Mathiopoulou et al. (2025) then showed that entrained gamma, dyskinesia-linked narrowband gamma, and personalized high-versus-low dopaminergic-state markers do not constrain the same symptom axis. Busch et al. (2025) and Cascino et al. (2026) further showed that sensing compatibility, threshold setting, signal artifacts, and patient eligibility remain concrete bottlenecks. Therefore, burst-driven neuromodulation on this site is now read through a biomarker / controller / delivery / effect / deployability stack, not one burst-timing story.
Why fixed thresholds are dangerous
Wilson et al. (2010) showed that for relatively slow BCI indicators such as mu-rhythm amplitude, a small delay of about 10 ms does not necessarily destroy performance, but if the latency/jitter of the entire system is not measured, the output path and display become rate-limiting. Conversely, Belinskaia et al. (2020) showed that with parietal alpha neurofeedback, an additional 250 ms / 500 ms delay worsened the learning effect. Furthermore, in phase-targeting systems such as Mansouri et al. (2018) and Zrenner et al. (2018), the delay should be evaluated as the phase error relative to the frequency of interest, not simply as a millisecond value.
"Low latency is good" is generally correct, but it cannot immediately be said that "microsecond-level delay is required for all loops" or "1 ms or less is required for all loops." The correct question isin what loop band, what error breaks what.
If you want the one-row operational packet that turns this principle into a public-safe route, continue with the U8-1 closed-loop delay-tolerance route packet. That packet keeps the question at the level of one named loop class, one KPI bundle, and one downgrade rule rather than a universal latency threshold.
Before milliseconds, fix which loop boundary was actually preserved
The weakness of the older timing-only reading was that it could still let a reader say, "the loop was fast, therefore the closed-loop problem is close to solved." That is too weak. Primary literature shows that sensory cortex and higher-order dynamics are continuously reshaped by self-motion, predicted sensory consequences, multisensory navigation cues, respiration, arousal, tactile feedback, circadian timing, glucocorticoid exposure, and metabolic state. Therefore, a low-latency controller is not automatically a boundary-complete controller.
| Boundary component | What primary literature shows | Why timing alone is insufficient |
|---|---|---|
| self-motion / optic flow / proprioceptive coupling | Saleem et al. (2013) showed that V1 neurons combine visual speed with run speed during navigation. | A fast visual loop still differs from the biological loop if locomotion- and proprioception-linked inputs were absent, simulated, or silently simplified. |
| predicted reafference / sensorimotor mismatch | Keller et al. (2012) showed mismatch-sensitive responses in behaving-mouse V1, supporting the idea that expected sensory feedback matters beyond passive stimulation. | The loop is not characterized only by delay; it also depends on whether self-generated sensory consequences and mismatch signals were available at all. |
| vestibular and multisensory navigation cues | Ravassard et al. (2013) showed that removing real-world multisensory cues changes hippocampal spatiotemporal selectivity in virtual reality. | A low-latency virtual loop can still be a different loop class if vestibular and other navigation cues were missing or remapped. |
| corollary discharge of self-generated sensory consequences | Schneider et al. (2014) showed a motor-to-auditory cortical circuit that suppresses sensory responses during movement. | If a system does not disclose whether corollary-discharge-like routes or self-generated sensory predictions were preserved, timing alone cannot tell you whether the sensory loop is comparable. |
| respiration / arousal / organism-wide physiology | Zelano et al. (2016) showed nasal-respiration coupling to human limbic oscillations, and Raut et al. (2025) showed that neural activity, physiology, and behavior share a structured arousal manifold. | A brain-only fast controller can still omit organism-wide state variables that co-organize the loop in vivo. |
| slow endocrine / circadian / metabolic milieu | de Quervain et al. (1998) showed glucocorticoid-dependent memory-retrieval impairment, Oei et al. (2007) showed hydrocortisone-linked decreases in human hippocampal and prefrontal retrieval activity, McCauley et al. (2020) plus Barone et al. (2023) showed circadian gating of hippocampal plasticity, and Birnie et al. (2023), Benedict et al. (2004), Reger et al. (2008), and Sherman et al. (2015) showed that corticosteroid rhythm, insulin signaling, and circadian-rhythm consistency can shift hippocampal plasticity, human memory, or hippocampal activity. | The same visible input-output loop can still be a different biological loop class if clock phase, steroid state, or feeding / insulin regime were unmatched or left latent. |
| tactile contact feedback | Flesher et al. (2021) showed that adding tactile feedback improves robotic-arm control in a bidirectional BCI. | The main issue is not only whether the loop is fast, but which feedback channels were restored and which still remained absent. |
| movement-linked latent structure | Musall et al. (2019) and Stringer et al. (2019) showed that ongoing behavior explains a large fraction of cortical and brainwide neural variance. | Without a boundary card, a fast controller can overfit a narrow behavioral contract while still being read as a general closed-loop success. |
If the paper does not disclose which sensory, action, interoceptive, self-generated-feedback, and slow internal-milieu routes were retained, substituted, matched, perturbed, or omitted, this site does not promote the result from fast local loop to boundary-complete L3 evidence. The formal public rule is the Verification: Body / Environment Boundary Card; this wiki supplies the timing-side companion logic.
First, divide into five loop types
| Loop type | Typical example | What the literature shows | Logs that should be left first on this site |
|---|---|---|---|
| state feedback / neurofeedback | A system that monitors alpha power and returns visual feedback. | Belinskaia et al. (2020) showed that an additional 250/500 ms delay worsens alpha neurofeedback learning; shorter delays were more beneficial for learning. | Performance-degradation curves against median/P95/P99 feedback latency, display path, and added delay. |
| ERP / command BCI | P300 speller or event-related control. | Wilson et al. (2010) showed that timing must be decomposed and measured in hardware, and Mowla et al. (2017) showed that latency jitter lowers classification accuracy and that even after correction the negative effects cannot be completely eliminated. | Block jitter, measured stimulus onset, trial-to-trial latency variance, and their correspondence with classification performance. |
| streaming communication / speech neuroprosthesis | A system that continuously streams brain-to-text or brain-to-voice output as text or audio. | Littlejohn et al. (2025) demonstrated streaming brain-to-voice in 80 ms increments, and Wairagkar et al. (2025) demonstrated a loop that synthesizes speech from raw neural input in under 10 ms while returning silence for non-speech and overlapping speech. The key metrics are not only average latency but also tail latency, the audio output path, and silence/abstention behavior. | Per-step inference latency, cue-to-output latency distribution, audio driver latency, silence / false-speech rate, dropout, recalibration events. |
| phase-locked stimulation | A system that delivers TMS/tES locked to the EEG phase. | Mansouri et al. (2018) and Zrenner et al. (2018) demonstrated real-time phase targeting, but Zrenner et al. (2020), Gordon et al. (2021), Kim et al. (2023), and Hougland et al. (2025) show that the real bottleneck is not latency alone but whether the oscillation is estimable now, how the causal estimate is benchmarked, and whether the optimal phase is stable. | Target band and spatial filter, power/SNR gate, no-stim rate, causal-versus-post-hoc benchmark, mean phase offset / circular spread, missed triggers, and any fixed-versus-adaptive phase policy. |
| burst/state-triggered neuromodulation | Adaptive DBS triggered by beta bursts. | Little et al. (2013) and Tinkhauser et al. (2017) established beta-based feedback, but Mathiopoulou et al. (2024), Stanslaski et al. (2024), Oehrn et al. (2024), Olaru et al. (2024), Busch et al. (2025), Mathiopoulou et al. (2025), and Cascino et al. (2026) show that the main burden is no longer burst timing alone but which biomarker is being controlled, which controller mode is used, and whether sensing and programming remain viable. | Biomarker family / symptom target, sensing contacts / signal-to-noise, controller mode, update interval / onset duration / ramp policy, false positives/negatives, artifact-triggered resets, comparator condition, and rescue/programming burden. |
Phase-targeting is estimability-limited, not latency-limited
The older wording on this page already separated phase error from plain milliseconds. That was necessary, but it was not yet sufficient. Current primary literature shows that a phase-targeted loop can fail for at least five different reasons: the target oscillation may not be estimable in the current epoch, the causal estimator may not match the post-hoc benchmark, the circular targeting precision may be too weak, the loop may phase-lock without producing a reliable physiological or behavioral effect, or the best phase may drift within and across sessions. Therefore, this site now reads phase-targeted stimulation through the following stack rather than a single timing figure.
| Layer to separate | What the primary literature supports | What must be logged | What it still does not prove |
|---|---|---|---|
| oscillation gate / estimability | Zrenner et al. (2020) showed that phase estimability worsens when oscillatory amplitude and SNR are low, Gordon et al. (2021) improved prefrontal theta targeting by excluding low-theta and phase-reset epochs, and Kim et al. (2023) showed across 11 public datasets that high power and SNR are the main practical conditions for better phase prediction. | Target band, channel or spatial filter, spectral peak criterion, amplitude/SNR threshold, no-stim or wait rate, and any phase-reset rejection rule. | That the loop really stimulated the intended phase in every eligible epoch, or that a functional effect followed. |
| causal estimator benchmark | Mansouri et al. (2018) and Zrenner et al. (2018) made real-time phase-triggering feasible, while Zrenner et al. (2020) and Gordon et al. (2021) showed why the causal estimate has to be benchmarked against a non-causal or post-hoc phase estimate under the same signal class. | Causal algorithm family, training window, forecast horizon, artifact blanking rule, post-hoc benchmark procedure, and whether the benchmark was run on non-stimulated or artifact-free matched epochs. | That the chosen causal estimator is uniquely best, or that the phase effect is biologically meaningful. |
| targeting precision | Bruegger & Abegg (2021) compared methods using mean phase offset, circular standard deviation, and prediction latency, and Holt et al. (2019) showed that narrower phase bins and repeated phase-consistent pulses materially change effect size. | Mean phase offset, circular spread or equivalent circular error metric, phase-locking statistic at trigger, missed-trigger rate, and any phase-bin width or consecutive-cycle rule. | That the targeted phase is the most effective phase for the claimed physiological or behavioral endpoint. |
| functional effect versus targeting success | Vigué-Guix et al. (2022) achieved reliable trial-to-trial alpha phase locking in a real-time BCI yet found no consistent reaction-time modulation, showing that accurate targeting and useful behavioral control are different evidence objects. | Off-target or random-phase comparator, sham or surrogate comparator when available, effect-size distribution for the downstream endpoint, and the stopped claim if targeting succeeded but the endpoint did not. | That a phase-targeted loop improves cognition, therapy, or plasticity simply because phase locking worked. |
| phase stability and adaptation policy | Hougland et al. (2025) showed within-session fluctuations and low test-retest reliability of the optimal mu-phase, which limits the generalizability of fixed-phase targeting across sessions. | Whether the preferred phase was fixed or updated, within-session drift audit, across-session reliability, retuning trigger, and whether adaptation changes the claim from fixed-policy targeting to adaptive targeting. | That one fixed phase generalizes across people, sessions, or task states without re-validation. |
If a phase-targeted loop reports only milliseconds or only a single average phase error, this page does not promote it to validated phase-specific control. The minimum readable object is a declared target band with an estimability gate, a causal-versus-post-hoc benchmark, circular targeting metrics, a functional comparator, and a fixed-versus-adaptive phase policy.
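To make the targeting-precision row above concrete, here is a minimal sketch of the circular metrics it names (mean phase offset and circular spread), assuming post-hoc benchmark phase estimates are available for each delivered trigger. The function and the simulated jitter values are illustrative, not any published pipeline.

```python
import numpy as np

def circular_targeting_metrics(target_phase_rad: float,
                               achieved_phases_rad: np.ndarray):
    """Mean phase offset and circular spread of achieved stimulation phases.

    achieved_phases_rad: post-hoc benchmark phase at each delivered trigger.
    """
    # Wrap per-trigger errors into (-pi, pi].
    errors = np.angle(np.exp(1j * (achieved_phases_rad - target_phase_rad)))
    resultant = np.abs(np.mean(np.exp(1j * errors)))      # mean resultant length R
    mean_offset = np.angle(np.mean(np.exp(1j * errors)))  # circular mean of the error
    circ_std = np.sqrt(-2.0 * np.log(resultant))          # circular standard deviation
    return np.degrees(mean_offset), np.degrees(circ_std)

# Hypothetical example: 200 triggers aimed at the negative alpha peak (pi rad).
rng = np.random.default_rng(0)
achieved = np.pi + rng.normal(0.0, 0.4, size=200)  # ~23 deg jitter around target
offset_deg, spread_deg = circular_targeting_metrics(np.pi, achieved)
print(f"mean phase offset {offset_deg:+.1f} deg, circular spread {spread_deg:.1f} deg")
```

A report of only milliseconds cannot be reconstructed into these quantities afterwards, which is why the circular metrics have to be logged at trigger time.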
Burst-driven neuromodulation is controller-limited, not just burst-timed
The older wording on this page already said that burst-triggered neuromodulation is slower than phase-locking. That was directionally correct, but it was still too coarse. Current primary literature shows that an adaptive-DBS loop can fail or change meaning for at least five different reasons: the chosen biomarker may track a different symptom axis, the biomarker may be modulated by movement / medication / stimulation state, the controller law may operate on a different timescale, sensing contacts and artifacts may constrain whether the loop can even run, and a biomarker-linked control signal may still fail to show unique clinical superiority over an energy-matched comparator. Therefore, this site now reads burst-driven neuromodulation through the following stack rather than a single burst-timing figure.
| Layer to separate | What the primary literature supports | What must be logged | What it still does not prove |
|---|---|---|---|
| biomarker family / symptom target | Little et al. (2013) and Tinkhauser et al. (2017) constrain a beta-burst antikinetic route, Olaru et al. (2024) constrains a dyskinesia-linked narrowband-gamma route, Oehrn et al. (2024) used personalized high-versus-low dopaminergic-state markers, and Mathiopoulou et al. (2025) constrains entrained gamma as a prokinetic biomarker candidate. Those are not the same control object. | Signal family, frequency band, anatomical source, intended symptom axis, and whether the signal is read as antikinetic beta, dyskinesia-linked gamma, entrained prokinetic gamma, or another personalized state marker. | That one adaptive-DBS signal generalizes across bradykinesia, gait impairment, dyskinesia, and medication-state control. |
| state dependence / controllability | Mathiopoulou et al. (2024) showed that movement, dopaminergic medication, and DBS each modulate subthalamic beta differently, while Busch et al. (2025) documented that useful beta-threshold setting depends on patient-specific long-term modulation and can be misread by in-clinic snapshots alone. | Medication state, rest versus movement slices, controllability test of the candidate signal, band-width or peak-selection rule, and whether thresholds were derived from clinic-only or chronic home data. | That a signal tuned at rest or in one medication state stays equally informative during naturalistic behavior. |
| controller mode / timescale | Stanslaski et al. (2024) showed that ADAPT-PD uses single-threshold control with 250 ms amplitude changes and dual-threshold control with 2.5 min up / 5 min down adjustment plus a programmable 1.2–2 s onset, while Wilkins et al. (2025) used a beta-burst-duration controller with a therapeutic floor, ceiling, and slow ramp policy for gait / freezing-of-gait. | Controller family, single- versus dual-threshold or other policy class, update interval, onset duration, floor/ceiling amplitude, ramp rate, and whether one or both hemispheres drive the control law. | That two aDBS papers used the same control strategy simply because both were called adaptive or beta-based. |
| sensing compatibility / artifact burden | Stanslaski et al. (2024) reported that participants could exit ADAPT-PD because of signal artifact, inadequate LFP signal, or no acceptable aDBS mode, and Busch et al. (2025) showed no visible beta peak in 3/16 hemispheres, unilateral sensing in 4/8 patients, threshold drift, and outlier distortion during setup. Wilkins et al. (2025) likewise required sense-friendly configurations and slower ramps to reduce stimulation artefacts. | Sensing contacts, signal-to-noise, unilateral versus bilateral sensing, excluded hemispheres, artifact-detection rule, threshold reset events, and whether the signal remained usable during movement and stimulation. | That the controller would have been available under ordinary contact settings or chronic use without extra debugging and exclusions. |
| biomarker-linked control versus clinical effect | Oehrn et al. (2024) showed improved motor symptoms and quality of life with personalized adaptive DBS in a four-patient pilot, but Wilkins et al. (2025) found that a randomly adapting DBS control with matched therapeutic window and TEED still performed similarly to cDBS and aDBS at group level on several acute metrics, which means biomarker linkage and clinical superiority are separate evidence objects. | cDBS comparator, random / inverted / surrogate comparator when available, TEED or duty-cycle matching rule, chosen symptom endpoint, and the stopped claim when the biomarker tracks a state but does not show unique clinical benefit. | That better biomarker tracking or a cleaner controller trace automatically produced unique symptom-level superiority. |
| deployability / programming burden | Busch et al. (2025), Cascino et al. (2026), and Dixon et al. (2026) show that home use still depends on programming workflow, remote or manual rescue, eligibility, and continuation. In ADAPT-START, only 9 of 20 consecutive chronic cDBS patients were eligible and 5 remained on chronic aDBS by July 2025. | Screened n, exclusion reasons, programming visits, remote or manual optimization route, home slice, continuation, and the manpower / time burden of maintaining the controller. | That a controller with an interesting biomarker is already routine, broadly eligible, or low-burden clinical care. |
If a burst-driven loop reports only burst duration or only one average timing number, this page does not promote it to validated symptom-linked adaptive control. The minimum readable object is a named biomarker family and symptom target, a state-dependence / controllability audit, a declared controller mode with timescale, a sensing / artifact burden audit, a biomarker-linked comparator, and a deployment slice.
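To make the controller-mode distinction concrete, the sketch below implements one step of a generic single-threshold amplitude controller with floor, ceiling, and ramp limits. All parameters and the tick interval are hypothetical illustrations, not the ADAPT-PD or any device implementation; a dual-threshold mode would add a second threshold and much slower up/down timescales, which is exactly why the controller family must be named.

```python
def single_threshold_step(beta_power: float, current_ma: float, *,
                          threshold: float, floor_ma: float, ceiling_ma: float,
                          ramp_ma_per_step: float) -> float:
    """One update of a generic single-threshold amplitude controller.

    Raises stimulation toward the ceiling while the biomarker exceeds the
    threshold, lowers it toward the floor otherwise, limited by the ramp rate.
    """
    target = ceiling_ma if beta_power > threshold else floor_ma
    step = max(-ramp_ma_per_step, min(ramp_ma_per_step, target - current_ma))
    return current_ma + step

# Hypothetical run: each control tick (e.g. every 250 ms) applies one bounded step.
current = 1.0
for beta in [0.2, 0.9, 0.9, 0.9, 0.3, 0.1]:  # illustrative biomarker trace
    current = single_threshold_step(beta, current, threshold=0.5,
                                    floor_ma=0.5, ceiling_ma=2.5,
                                    ramp_ma_per_step=0.2)
    print(f"beta={beta:.1f} -> stim={current:.1f} mA")
```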
Co-adaptation must be separated before online gains are interpreted
A remaining weakness at the L3 entry point was that "online performance improved" could still be read as if the same decoder had simply become more durable. Current primary literature does not support that shortcut. In closed-loop BCIs, improvement can come from user-side neural strategy learning, decoder-weight updates or pseudo-label self-training, and application / interaction redesign. If these are mixed, a fast loop is not yet evidence of a stable fixed decoder.
| Source of apparent improvement | What the primary literature supports | What must be logged | What it still does not prove |
|---|---|---|---|
| user-side learning | Abu-Rmileh et al. (2019) compared a fixed classifier against regular adaptation over four days and showed different within-day versus between-day behaviour, while Perdikis et al. (2018) showed longitudinal subject learning and warned that frequent recalibration can hinder it. | Fixed versus updated decoder schedule, practice dose, instruction changes, and within-day versus between-day curves. | That the decoder itself was stable, or that gains will survive with no update. |
| decoder-side adaptation | Orsborn et al. (2014) showed that combined neural and decoder adaptation can yield skillful control while reshaping neural representations, and Wilson et al. (2025) updated decoder weights after each closed-loop block while using open-loop probes to estimate performance without closed-loop effects. | Update trigger, cadence, pseudo-label or supervision route, open-loop probe blocks, and frozen-comparator performance. | That online gains came from a fixed decoder, or that they reflect user learning alone. |
| application / interaction shaping | Perdikis et al. (2018) showed that control-paradigm refinement can facilitate subject learning, while Wairagkar et al. (2025) reported that participant engagement and enunciation influenced synthesis quality and retrained the decoder using previous-session data. | Feedback policy, smoothing or evidence-accumulation rules, prompt or task scaffold, session-to-session interface changes, and engagement / fatigue notes. | That the neural controller alone improved independent of interface or task redesign. |
A same-session online result must now name whether it is a fixed-policy loop or a co-adaptive loop. If the paper mixes user learning, decoder updates, and interface redesign without a frozen comparator or open-loop probe, this site does not promote the gain to fixed-decoder durability or portable deployment evidence.
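One low-cost way to keep these sources separable is an explicit event log. The sketch below is a minimal, hypothetical structure (not taken from any cited paper) that records decoder updates, interface changes, and open-loop probe scores by block, so that a later gain can be checked against what changed and when.

```python
from dataclasses import dataclass, field

@dataclass
class CoAdaptationLog:
    """Minimal credit-assignment log for one closed-loop session."""
    events: list = field(default_factory=list)

    def decoder_update(self, block: int, route: str):
        self.events.append(("decoder_update", block, route))

    def interface_change(self, block: int, description: str):
        self.events.append(("interface_change", block, description))

    def open_loop_probe(self, block: int, score: float):
        # Probe blocks estimate performance without closed-loop correction.
        self.events.append(("open_loop_probe", block, score))

log = CoAdaptationLog()
log.decoder_update(block=3, route="pseudo-label self-training")
log.open_loop_probe(block=4, score=0.71)
log.interface_change(block=5, description="longer evidence-accumulation window")
# A gain reported after block 5 can now be attributed rather than compressed
# into "online performance improved".
```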
2026-03 literature audit: five barriers that appear once a loop first works online
The remaining weakness of the previous version was that it still let readers compress long-horizon closed-loop evidence into a same-session timing problem. Looking at the primary literature for 2014-2025, the scientific bottlenecks after a loop first "moves" are not one axis. Closed-loop BCIs now force at least (1) co-adaptation / credit assignment, (2) output-path timing, (3) fixed-decoder durability, (4) rescue-mode recalibration / remote optimization burden, and (5) eligibility / continuation / clinic-home transfer to be logged separately. Therefore, this site does not raise L3 just because the loop runs online; it asks for the following five barriers as distinct evidence objects.
| Wall | What the primary literature now supports | Revision policy on this page |
|---|---|---|
| co-adaptation / credit assignment | Orsborn et al. (2014), Perdikis et al. (2018), Abu-Rmileh et al. (2019), Wairagkar et al. (2025), and Wilson et al. (2025) show that online gains can reflect mixed changes in user strategy, decoder weights, and application policy. A loop that improves online is therefore not automatically a durable fixed decoder. | Record whether the decoder / thresholds / interaction policy were frozen or updated, when each change occurred, what open-loop or frozen-comparator probe was kept, and what part of the gain is attributed to user learning versus decoder adaptation. |
| tail latency / output path | Littlejohn et al. (2025) showed streaming brain-to-voice in 80 ms steps and reported cue-to-audio timing rather than just decoder timing. Wairagkar et al. (2025) demonstrated sub-10 ms neural-to-voice synthesis while returning silence for non-speech and overlapping speech, which means output-path latency and fallback policy are part of the loop rather than post-processing details. | Average decoder latency is not enough: log module-wise latency, the cue-to-output tail, the audio playback path, and silence/abstention behavior separately. |
| fixed-decoder durability | Wilson et al. (2025) made explicit that accumulating neural changes create periods in which users cannot use a static intracortical BCI reliably, and evaluated one-month operation against fixed-decoder comparators rather than hiding every failure behind adaptive rescue. That means a same-session fast loop and a fixed decoder that still works days later are not the same achievement. | Report the fixed decoder interval, time since last supervised calibration, degradation curve under no-update conditions, and when the claim ceiling has to drop from durable fixed-decoder evidence to rescue-mode evidence. |
| rescue-mode recalibration / remote optimization burden | Wilson et al. (2025) also showed multi-timescale unsupervised recalibration, Dixon et al. (2026) reported a machine-learning pipeline capable of remotely optimizing movement-responsive aDBS parameters in a home setting, and Busch et al. (2025) documented biomarker-selection, threshold-definition, and artifact-related maladaptation as programming burdens. Rescue is therefore a separate operating regime, not a free extension of fixed-decoder success. | Log whether rescue was manual, unsupervised, or remotely optimized, what data and staff time it required, which parameters changed, how long recovery took, and whether performance after rescue is being compared fairly against the pre-rescue fixed-decoder slice. |
| eligibility / continuation / naturalistic transfer | Oehrn et al. (2024) evaluated chronic adaptive DBS with both in-clinic and at-home recordings. Busch et al. (2025) reported that 6 of 8 patients chose to remain on adaptive DBS after two-week home evaluation, while Cascino et al. (2026) reported that only 9 of 20 consecutive chronic cDBS patients were eligible and 5 remained on chronic aDBS by July 2025. Eligibility and continuation therefore remain separate bottlenecks even after technical proof-of-principle. | Record not only lab success but also screened n, exclusion reasons, clinic/home slice, continuation rate, programming visits, and stimulation-duty-cycle changes as required deployment-side logs. |
Therefore, just because "the fast loop worked once" or "the adaptive controller reduced the symptoms a little" does not mean that it can be used for a long time. A same-session online gain is not yet a credit-assigned fixed-policy result; a same-session fast loop is not yet a fixed decoder that still works tomorrow; a rescued loop is not yet an easy-to-program chronic controller; and a programmable chronic controller is not yet a broadly eligible and maintainable home-use route. Only after those barriers are passed separately can we read that we are approaching a deployable closed loop.
Which public card gets stacked when the loop leaves same-session
| Evidence slice | What it safely supports | What it still does not support | Public card stack on this site |
|---|---|---|---|
| same-session fixed-policy local loop | That the declared subsystem can run online with measured timing under a frozen decoder / interaction policy and an explicit fallback policy. | Cross-day durability, boundary completeness, easy clinical deployment. | Timing log plus Calibration & Abstention Card when relevant. |
| same-session co-adaptive local loop | That the coupled human + decoder + interface package can be trained online under a declared update policy and credit-assignment log. | Fixed-decoder durability, user-independent stability, broad deployment. | Timing log plus co-adaptation log plus Calibration & Abstention Card when relevant. |
| cross-day fixed-decoder loop | That a decoder survives a declared no-update interval under declared state annotation and drift conditions. | Adaptive rescue benefit, broad home-use scalability, solved embodiment. | Temporal Validity Card plus timing log. |
| rescued / adaptively maintained loop | That performance can be recovered under a declared update policy. | That the original fixed decoder was durable, or that rescue burden is negligible. | Temporal Validity Card plus update / rescue log and Calibration & Abstention Card. |
| naturalistic chronic therapeutic loop | That the loop can remain useful under declared clinic/home and continuation constraints for the screened population. | That the route is broadly eligible, easy to program, or body/environment complete by default. | Temporal Validity Card plus Body / Environment Boundary Card plus deployment-burden log. |
This page now blocks a common shortcut: same-session online is not quietly promoted to co-adaptation-aware, durable, rescued, or deployable. Those are five different evidence slices with different public cards and different failure modes.
What is measured end-to-end
Wilson et al.'s (2010) key point is that measuring signal-processing latency alone is insufficient. A closed loop is the entire path from input to output; the display, OS, driver, audio system, or stimulator may each become the rate-limiting factor.
| Interval | What you need to know | Typical measurement method |
|---|---|---|
| Input | When the sensor actually detected a change. | TTL, known pulse, DAQ input, stimulator marker output. |
| Processing | How much time preprocessing, estimation, and decision making took. | Software timestamps, block duration, CPU/GPU logs. |
| Output | When the display, sound, stimulus, or control signal physically occurs. | Photodiode, microphone, loopback, stimulus-artifact onset. |
| Return | When the influence of the output reaches the next input. | Re-detection, environmental sensors, and body-response logs within the closed-loop task. |
In a closed loop, P95/P99/worst-case and trial-to-trial jitter can be more destructive than average delay. Especially in phase-targeting and safety-critical loops, just showing the average value does not provide any reassurance.
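As a sketch of how such a decomposition can be reported, the code below computes per-interval and end-to-end median/P95/P99, jitter, and worst case from paired hardware timestamps. The timestamp arrays and their distributions here are simulated placeholders; in practice they come from TTL edges, photodiodes, or microphones as in the table above.

```python
import numpy as np

def latency_report(t_input: np.ndarray, t_decision: np.ndarray,
                   t_output: np.ndarray) -> None:
    """Per-interval and end-to-end latency statistics from hardware timestamps.

    t_input:    sensor-side event times (e.g. TTL / DAQ edge), in seconds.
    t_decision: software decision timestamps for the same trials.
    t_output:   physically measured output times (photodiode, microphone).
    """
    for name, lat in [("processing", t_decision - t_input),
                      ("output path", t_output - t_decision),
                      ("end-to-end", t_output - t_input)]:
        q50, q95, q99 = np.percentile(lat, [50, 95, 99])
        print(f"{name:>11}: median {q50*1e3:6.1f} ms  P95 {q95*1e3:6.1f} ms  "
              f"P99 {q99*1e3:6.1f} ms  jitter(SD) {lat.std()*1e3:5.1f} ms  "
              f"worst {lat.max()*1e3:6.1f} ms")

# Hypothetical timestamps for 1000 trials (heavy-tailed, like real pipelines):
rng = np.random.default_rng(1)
t_in = np.cumsum(rng.uniform(0.5, 1.5, 1000))
t_dec = t_in + rng.gamma(4.0, 0.002, 1000)   # ~8 ms processing, heavy tail
t_out = t_dec + rng.gamma(9.0, 0.002, 1000)  # ~18 ms display/audio path
latency_report(t_in, t_dec, t_out)
```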
What LSL and event markers do and don't guarantee
Kothe et al.'s (2025) LSL paper shows that LSL provides offset correction and jitter compensation sufficient for millisecond-scale synchronization in neurobehavioral research. On the other hand, this is software-based synchronization over a LAN and does not automatically guarantee when the physical output of the stimulator or display occurs.
Appelhoff and Stenner (2021) showed that event marking with a USB microcontroller can achieve latencies under 1 ms. However, this too is primarily marker-path accuracy. Even if the marker is fast, the end-to-end loop that includes the display, audio path, stimulator, and estimator does not necessarily reach the same accuracy.
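A minimal pylsl sketch of what that software-level correction looks like: a remote marker timestamp is mapped into the local clock via time_correction(). The stream type and timeouts are illustrative assumptions, and nothing here observes when the display or stimulator physically acted — that still needs a photodiode or loopback measurement.

```python
# Assumes a "Markers" LSL stream is already running somewhere on the LAN.
from pylsl import StreamInlet, resolve_byprop, local_clock

streams = resolve_byprop("type", "Markers", timeout=5.0)
inlet = StreamInlet(streams[0])

offset = inlet.time_correction(timeout=2.0)    # remote -> local clock offset
sample, remote_ts = inlet.pull_sample(timeout=5.0)
local_ts = remote_ts + offset                  # marker time in the local clock
print(f"marker {sample} at local t={local_ts:.4f} s, "
      f"software age {(local_clock() - local_ts)*1e3:.1f} ms")
```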
"I am not confident, so I won't output," "I will bridge short gaps with silence," "I will hold because the system seems broken," and "I will stop because it is dangerous" are operationally completely different. If you collapse them all into one "outage," you will not be able to trace the cause during review. The table below keeps these four modes separate, and the sketch after it shows one way to log them as distinct states.
If you want to go back to the overall design of L3, use Verification Platform; for EEG and synchronization practices, use Introduction to EEG; and for Roadmap I1/I8, use Technology Roadmap.
What must be kept separate here
Abstain, freeze and safety stop are different things
| Mode | Main purpose | Typical trigger | What to keep at a minimum |
|---|---|---|---|
| abstain | Avoid unreliable output when confidence is low. | Insufficient classification probability, insufficient phase-estimation reliability, OOD detection. | Abstention rate, confidence threshold at abstention, state after abstention. |
| hold-last-output / silence fallback | Maintain continuity without increasing erroneous output during short uncertainties or non-speech intervals. | Non-speech interval, decoder blank, short dropout, audio buffer underrun. | Trigger rate, maximum duration, false-speech suppression rate, release delay. |
| freeze / pause | Pause for recalibration and confirmation of the cause. | Clock-offset increase, packet loss, drift deviation, resynchronization request. | Invocation reason, duration, restart conditions, recalibration details. |
| safety stop / containment | Stop a dangerous actuation. | P99 latency budget exceeded, abnormal amplitude, stimulation-prohibited phase, output saturation. | Stop conditions, number of stops, latency/phase/error immediately before the stop, manual return conditions. |
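A minimal sketch of keeping these four modes as distinct, separately logged states rather than one undifferentiated outage. The confidence threshold and the dropout window are illustrative assumptions, not recommended values; the point is only that safety conditions override recoverable ones and that each transition carries its own reason.

```python
from enum import Enum, auto

class LoopState(Enum):
    RUN = auto()
    ABSTAIN = auto()       # low confidence: withhold output this step
    HOLD_SILENCE = auto()  # bridge short gaps with last output / silence
    FREEZE = auto()        # pause for recalibration / cause confirmation
    SAFETY_STOP = auto()   # dangerous actuation: stop, manual return only

def next_state(confidence: float, dropout_steps: int, sync_ok: bool,
               p99_budget_ok: bool, amplitude_ok: bool) -> LoopState:
    """Order matters: safety conditions override recoverable ones."""
    if not (p99_budget_ok and amplitude_ok):
        return LoopState.SAFETY_STOP
    if not sync_ok:
        return LoopState.FREEZE
    if 0 < dropout_steps <= 5:   # illustrative short-gap window
        return LoopState.HOLD_SILENCE
    if confidence < 0.6:         # illustrative confidence threshold
        return LoopState.ABSTAIN
    return LoopState.RUN

# Each transition is logged with its own reason, never merged into one "outage".
print(next_state(confidence=0.9, dropout_steps=0, sync_ok=True,
                 p99_budget_ok=False, amplitude_ok=True))  # -> SAFETY_STOP
```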
The minimum logs to keep
Checklist
16 questions when reading L3 arguments
References
Where to go back next