
Wiki: Terminology guide from measurement to modeling

Don't memorize terms as isolated points; connect them through the flow.

Mind Uploading Research Project

Public page. Updated: 2026-04-04. Learning guide (updated with the 2026-04-04 companion-card split)

How to use this page

Read this first to avoid getting lost

This page is a wiki for understanding measurement terms such as EEG, model terms such as ESI and DCM, and operational terms such as BIDS and QC by connecting them in the flow of "observation → organization → estimation → verification." Rather than memorizing individual words, the aim is first to learn which word belongs at which stage and where an argument has to stop.

  • You can understand EEG, QC, BIDS, ESI, DCM, and SCM in one flow.
  • We will use audit items to isolate the reason why observation alone does not equate to reconstruction.
  • Inverse-problem claims are now split into field-formation visibility, forward-model or conductivity burden, solver-family uncertainty, and validation class, so "better ESI" is not read as one progress bar.
  • ESI, DCM, and SCM do not fail in the same way; each one needs a different stop rule.
  • DCM is not one progress bar either; candidate-model comparison, scalable search, processing sensitivity, and reliability windows are different audit axes.
  • A single route card is no longer enough when a claim spans several rows or stages; Fusion Card, Human Proxy Composition Card, and State-Continuity Bridge Card now stay separate as companion cards rather than one generic metadata bucket.
  • More modalities, same-brain linkage, same-subject language, and connectome constraints do not bypass route cards; fusion, human-proxy composition, state continuity, and identifiability remain separate audits.
  • A stronger design is not just ``more data''; the route now also asks which identifiability objective chose the next condition, whether omitted-mechanism error was exposed, and what minimum-sufficiency stop rule would have been enough.
  • Glossary definitions are reordered by working order and by strength of evidence.
Best for
People who see a mix of measurement words and model words, and people who want to understand a glossary in a flow.
Reading time
10-15 minutes
Accuracy note
The flow shown here is organized for understanding. Although there are back-and-forths and exceptions in actual research, it is important not to confuse observation and estimation, and estimation and verification.

Relatively clear at this stage

What we know now

  • Measurement, preprocessing, estimation, and verification have different roles and different words are used for each.
  • The observed signal is not the brain state as it is, and estimation involves uncertainty and candidate model dependence.
  • BIDS and QC are not an added bonus; they are the backbone of comparability.
  • Solver names do not determine claim strength; validation ladders and route cards do.
  • For inverse problems, field visibility, conductivity sensitivity, solver uncertainty, and validation class are different questions; progress in one does not erase the others.
  • Whole-brain or faster DCM improves tractability, but does not erase candidate-model dependence, processing sensitivity, or reliability limits set by priors, scan duration, and sample size.
  • Route cards and companion cards answer different questions: route cards type one measurement or estimation route, whereas companion cards type the relation among several routes or across sequential stages.
  • More data is not automatically a stronger design; identifiability objective, omitted-mechanism stress, and minimum-sufficiency stop rules answer different questions.
  • Same-session multimodal, same-subject human-proxy, and same-brain language do not by themselves solve fusion validity, human-proxy composition, or state continuity.

Still unresolved beyond this point

What we still do not know

  • The extent to which non-invasive measurements alone can restore sufficient internal state for WBE remains an open question.
  • Which modeling combinations will ultimately be most effective is still being studied.
  • The extent to which causality can be identified using observational data alone varies greatly depending on the intervention design.
  • Which DCM route-card bundle should become the site's default benchmark across task fMRI, resting-state fMRI, and MEG remains unsettled.
  • Which inverse-problem route or validation ladder generalizes beyond focal or clinical benchmark regimes remains unresolved.
  • Which companion-card bundle should become the site's default front-door language for claims that mix multimodal fusion, living-human proxy bundles, and sequential same-brain bridges remains unsettled.

Learn the basics

Check the basics in the wiki

What the wiki is for

The wiki is a learning aid. For the project's official current synthesis, success criteria, and operating rules, always return to the public pages.

The shortest map

The words on this site can be roughly divided into four levels: observe, organize, estimate, and verify. Even if a word seems difficult, confusion is reduced if you first identify which stage the discussion is in.

2026-03 correction to the beginner route

The older beginner route on this site grouped ESI, DCM, and SCM together too loosely as "modeling words." That was too weak. On this site, ESI is read through a validation ladder, DCM through candidate-model disclosure and model recovery, and SCM through intervention conditions and equivalence-class narrowing.

2026-03-26 correction to the beginner route

A second beginner overread also remained: more sensors, same-brain linkage, or a connectome prior can sound as if the candidate set were almost closed. On this site, that is still too strong. Same-session multimodal work needs a Fusion Card, sequential same-brain or cross-day claims need a State-Continuity Bridge Card, and connectome-constrained predictors still need an Identifiability Card. The detailed rule lives in Wiki: From observation to estimation.

2026-03-27 correction to the inverse-problem route

One more beginner overread still remained: inverse-problem progress could still sound like one continuous bar. The primary literature does not support that shortcut. Ahlfors et al. (2010), Goldenholz et al. (2009), and Piastra et al. (2021) show that field formation and head-model detail already limit what reaches the sensors. Vorwerk et al. (2024) and Vorwerk et al. (2026) show that conductivity uncertainty and estimation still materially move the result. Luria et al. (2024), Tong et al. (2025), and Feng et al. (2025) improve how candidate sets and uncertainty are exposed inside a stated inverse family. Mikulan et al. (2020), Pascarella et al. (2023), Unnwongse et al. (2023), and Hao et al. (2025) then validate different source regimes and error questions. Therefore, on this site, inverse papers are no longer read as one ladder.

2026-03-28 correction to the beginner route

One more technical overread still remained: after naming an ambiguity, readers could still think the next step is simply to add more data or one more modality. The primary literature does not support that shortcut. Raue et al. (2011) showed that non-identifiability is resolved by experimental design under suitable conditions or by model reduction matched to the information content of the data. Chis et al. (2016) then showed that sloppiness is not identifiability, so design should optimize identifiability criteria rather than only compress one proxy uncertainty score. White et al. (2016) showed that apparently complementary experiments can instead make omitted mechanisms relevant and increase model discrepancy. Gevertz & Kareva (2024) and Liu et al. (2025) then showed that identifiability analysis and active learning can derive a minimally sufficient schedule rather than an open-ended collection plan. In neuroscience, Beiran & Litwin-Kumar (2025) showed that a small targeted recording set can remove degeneracy in connectome-constrained networks, and Langdon & Engel (2025) showed that preserving causal interactions among task variables can recover behaviorally relevant computation that correlation-only reductions miss. Therefore, on this site, the safer beginner rule is not "collect more" but "name the surviving ambiguity, state which identifiability objective chose the next condition, test whether the new condition exposed omitted-mechanism error, and say what minimum-sufficiency stop rule would end collection." The longer rule lives in Wiki: From observation to estimation and Verification: experiment-design leverage.

2026-04-01 correction to the beginner route

The remaining beginner weakness was subtler: this page already stopped readers from treating DCM as generic causal wording, but it still left room to read recent DCM papers as if they formed one monotonic ladder of causal strength. The primary literature does not support that shortcut. Penny et al. (2004) and Rosa et al. (2012) strengthen candidate-model comparison and family search. Frässle et al. (2021) and Wu et al. (2024) strengthen tractability and scaling. But Frässle et al. (2016), Almgren et al. (2020), Zhang et al. (2024), and Ma et al. (2024) show that reliability, priors, processing policy, scan duration, and sample size still materially move the result. Therefore, on this site, DCM advances are read by axis, not as one progress bar.

2026-04-04 correction to the measurement-model boundary

The next weakness was not inside one route, but between routes. This page already taught that observation, estimation, and verification are different stages, yet it still left one high-cost ambiguity: readers could move from same-session multimodal, same-subject proxy-rich, or same-brain sequential wording to the vague idea that measurement itself became stronger. The current primary literature does not support that compression. Kothe et al. (2025), Vafaii et al. (2024), Chen et al. (2025), Bolt et al. (2025), and Epp et al. (2025) show why synchronized or coupled modalities still do not define one temporal object or one biological quantity by default. Li et al. (2025), Bøgh et al. (2024), Morgan et al. (2024), Amiri et al. (2023), and Manasova et al. (2026) show why several living-human rows still differ in quantity type, operating point, complete-case availability, and disagreement topology. Bosch et al. (2022), MICrONS Consortium et al. (2025), Gallego et al. (2020), Van De Ville et al. (2021), Karpowicz et al. (2025), Wilson et al. (2025), and Wairagkar et al. (2025) show why specimen identity or stable use across time still does not tell you which carried object remained the same. Therefore, this page now separates route cards from companion cards instead of letting them blur together under one generic verification label.

View in 4 levels

Stage Terms that often appear here What is being done
1. Observation EEG, MEG, fMRI, ECoG First measure the signals coming out of the brain and body.
2. Organize QC, preprocessing, BIDS Check for noise and defects and arrange the data into a form that others can follow.
3. Estimation Inverse problem, ESI, DCM, SCM Ask how far the brain's state and causal structure can be estimated from observations, and which route card fixes the ceiling.
4. Verification Benchmark, baseline, pre-registration, model card Check whether the estimate or model really holds in a comparable manner.

Three companion cards that still had to be split

This page now uses a stricter distinction. A route card describes one measurement or estimation route and its ceiling. A companion card describes the relation among several routes or stages when one route no longer explains the claim by itself. The scientific reason is simple: current primary literature does not support treating multimodal, proxy-rich, and same-brain as if they were one generic upgrade in evidence strength.

Claim pattern that still gets overread Why the shortcut is scientifically unsafe Primary-literature stop rule Companion card required on this site
Same-session multimodal / atlas-informed claim Shared timestamps, a shared factor, and one externally grounded biological quantity are different achievements. Same-session or atlas-informed wording does not by itself prove one temporal object or one quantity bridge. Kothe et al. (2025), Vafaii et al. (2024), Chen et al. (2025), Bolt et al. (2025), and Epp et al. (2025) show why temporal-kernel mismatch, shared-vs-specific structure, autonomic coupling, and quantity-bridge failure remain separate burdens. Fusion Card
Several living-human proxy rows in one argument Several real human routes can still differ in quantity type, evidence role, operating point, complete-case geometry, and disagreement topology. Listing them together does not yet make one same-subject state sample. Li et al. (2025), Bøgh et al. (2024), Morgan et al. (2024), Amiri et al. (2023), and Manasova et al. (2026) show why route-local repeatability, method-family non-equivalence, restricted complete-case slices, and disagreement in hard groups remain separate audits. Human Proxy Composition Card
Same-subject / same-brain sequential bridge Specimen identity, local structure-function linkage, or stable interface use do not by themselves tell you which object stayed the same across time, regime, or tissue transformation. Bosch et al. (2022), MICrONS Consortium et al. (2025), Gallego et al. (2020), Van De Ville et al. (2021), Karpowicz et al. (2025), Wilson et al. (2025), and Wairagkar et al. (2025) show why stable use can still depend on alignment, recalibration, local witness objects, or short-horizon support. State-Continuity Bridge Card
One operational distinction to keep visible

If the claim fails because one route hid its own assumptions, the missing object is a route card. If the claim fails because several routes or stages were silently fused into one argument, the missing object is a companion card. This page now keeps those failure modes separate on purpose.

1. Observation: First get the signal

EEG and MEG do not look directly inside the brain; they measure signals that can be observed from outside. The important point here is that what you observe is not the same as what is really happening in the brain.

Term In one word
EEG A method for quickly measuring potential differences on the scalp. It has high temporal resolution but is easily blurred spatially.
MEG A method for measuring magnetic fields. It is complementary to EEG, but it is expensive and its equipment constraints are significant.
fMRI A method for measuring changes in blood flow. It localizes well spatially but has slow temporal resolution.
ECoG An invasive measurement taken near the brain surface. It is highly accurate, but its applicable range is strongly restricted.

2. Organize: Don't believe the signal as it is

The observed signals include blinks, muscle activity, body movements, equipment noise, and more. The next step is therefore QC and preprocessing. This is not cosmetic polishing; it is a record of what information was kept and what was removed.

Words used here

  • QC: record missingness, noise, artifacts, and exclusion reasons in quantitative form.
  • Preprocessing: set and document reference schemes, filters, artifact removal, and so on.
  • BIDS: a standard for organizing data and metadata so that others can trace them.

If you skip this step, even a high-performance model reported later will not provide comparable evidence.
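As a toy illustration of how QC and BIDS keep exclusions traceable, the sketch below checks hypothetical EEG filenames against a simplified BIDS-style pattern and records why each excluded file was dropped. The pattern and filenames are invented for illustration; this is not the official BIDS specification or validator.

```python
import re

# Toy sketch, NOT the official BIDS validator: a simplified filename
# pattern plus a QC log that records why each file was excluded.
BIDS_EEG = re.compile(r"^sub-[A-Za-z0-9]+_task-[A-Za-z0-9]+_eeg\.(edf|vhdr|set)$")

def qc_filenames(filenames):
    """Split filenames into kept and excluded, keeping exclusion reasons."""
    kept, excluded = [], []
    for name in filenames:
        if BIDS_EEG.match(name):
            kept.append(name)
        else:
            # The reason stays on record instead of silently dropping the file.
            excluded.append((name, "filename does not follow the BIDS pattern"))
    return kept, excluded

kept, excluded = qc_filenames(
    ["sub-01_task-rest_eeg.edf", "subject1_resting.edf"]  # hypothetical files
)
```

The point is not the regex itself but the shape of the output: what was removed, and why, remains visible to the next person.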

3. Estimation: How much can we tell from observations

We want to estimate brain activity and causal structure from the organized signals. This is where inverse problems, ESI, DCM, and SCM come into play. However, it must be remembered at this stage that an estimate is only an estimate: uncertainty and candidate-model dependence remain.

Term What it adds What still has to be disclosed
Inverse problem This is the general family of routes that estimate hidden causes from externally observed signals. The solution is not unique by default, so field visibility, forward-model or conductivity burden, solver uncertainty, and validation class remain part of the result.
ESI A concrete inverse workflow that combines a head model, source prior, and estimation rule to produce candidate source configurations. One polished map is not enough; disclose field visibility, forward-model burden, cross-solver or posterior spread, and the validation class or source regime that was actually tested.
DCM A framework for comparing candidate generative circuit models and asking which one better explains the observation. The result still depends on the candidate model space, priors, family comparison, recovery, and external validation.
SCM A language for making interventions and counterfactuals explicit. With observational data alone, equivalence classes often remain, so intervention design still determines how strong the causal claim can become.
Inverse-problem gate What question it answers Representative primary literature What it still does not close
Gate 1: Field-formation visibility Does the target source class generate a usable scalp field under the actual orientation, extent, anatomy, and head-model detail? Ahlfors et al. (2010); Goldenholz et al. (2009); Piastra et al. (2021) A visible source class can still remain poorly localized, poorly identified, or weakly validated.
Gate 2: Forward-model / conductivity burden How much do skull or tissue conductivity and geometry assumptions move localization, depth, magnitude, or orientation? Vorwerk et al. (2024); Vorwerk et al. (2026) Reducing conductivity-driven spread does not by itself collapse solver degeneracy or prove source recovery in every regime.
Gate 3: Solver-family / posterior uncertainty Does the method expose alternative source configurations, intervals, or extended-source uncertainty instead of one polished point map? Luria et al. (2024); Tong et al. (2025); Feng et al. (2025) Better uncertainty exposure does not repair missing observability, wrong head models, or unmatched validation classes.
Gate 4: Validation class / source regime Which error question was actually tested: known stimulation site, focal-source board, simultaneous invasive concordance, or clinical ictal localization? Mikulan et al. (2020); Pascarella et al. (2023); Unnwongse et al. (2023); Hao et al. (2025) Validation success in one regime is not a universal winner for focal, extended, spontaneous, and deep-source recovery together.
Confusions that easily occur here

Observing an EEG is not the same as uniquely reconstructing brain states. Furthermore, being correct in a correlational prediction is not the same as knowing the causal structure.
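The non-uniqueness can be shown with toy numbers. In the sketch below, two different candidate source configurations produce exactly the same sensor reading, so the observation alone cannot choose between them. The leadfield gains are invented for illustration, not taken from any real head model.

```python
# Toy numbers, not a real head model: one sensor reading can be produced
# by many different source configurations, which is why the EEG/MEG
# inverse problem has no unique solution by default.
leadfield = (0.6, 0.3)  # hypothetical gains from two sources to one sensor

def sensor_reading(sources):
    """Forward model: weighted sum of source amplitudes at the sensor."""
    return sum(gain * s for gain, s in zip(leadfield, sources))

config_a = (1.0, 0.0)  # all activity in source 1
config_b = (0.5, 1.0)  # activity split differently across both sources

# Both candidate configurations explain the same observation equally well,
# so extra assumptions (priors, head model, solver) must break the tie.
assert abs(sensor_reading(config_a) - sensor_reading(config_b)) < 1e-12
```

This is why the gates above treat field visibility, forward-model burden, solver uncertainty, and validation class as parts of the result rather than as optional footnotes.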

Supplementary information for 2026-03

DCM is a comparison of candidate generative models, and SCM is a language that facilitates describing interventions and counterfactuals. Causal equivalence classes often remain from observational data alone, so it is necessary to read the candidate model space, family comparison, external validation, and the presence or absence of intervention data separately. For more information, see Wiki: From observation to estimation.
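A minimal synthetic simulation can make the gap between observational correlation and intervention concrete. Below, a hidden common cause makes A and B strongly correlated in observational data, yet forcing A to a value, an intervention in SCM language, leaves B unchanged. The generative model and numbers are invented for illustration only.

```python
import random

# Synthetic toy, not an SCM library: a hidden common cause H drives both
# A and B. Observationally A and B correlate strongly, but intervening on
# A (do(A = a)) leaves B untouched, so correlation alone cannot settle
# whether A causes B.
random.seed(0)

def observe(n):
    pairs = []
    for _ in range(n):
        h = random.gauss(0, 1)                    # hidden common cause
        pairs.append((h + random.gauss(0, 0.1),   # A listens to H
                      h + random.gauss(0, 0.1)))  # B also listens to H
    return pairs

def intervene(n, a_fixed):
    # do(A = a_fixed): B still depends only on the hidden cause.
    return [(a_fixed, random.gauss(0, 1) + random.gauss(0, 0.1))
            for _ in range(n)]

obs = observe(1000)
ma = sum(a for a, _ in obs) / len(obs)
mb = sum(b for _, b in obs) / len(obs)
cov = sum((a - ma) * (b - mb) for a, b in obs) / len(obs)
va = sum((a - ma) ** 2 for a, _ in obs) / len(obs)
vb = sum((b - mb) ** 2 for _, b in obs) / len(obs)
obs_corr = cov / (va * vb) ** 0.5  # close to 1 in observational data

do_mean_b = sum(b for _, b in intervene(1000, a_fixed=5.0)) / 1000
# B's mean stays near 0 even though A was forced to 5.
```

Observational data alone leaves this confounded structure inside the causal equivalence class; the intervention is what separates it.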

Read ESI by four gates, not solver name

Michel & Brunet (2019) summarize ESI as a multi-step pipeline rather than a one-word method. On top of that, Ahlfors et al. (2010), Goldenholz et al. (2009), and Piastra et al. (2021) show that field formation is already selective before inversion begins, Vorwerk et al. (2024) and Vorwerk et al. (2026) show that conductivity assumptions still move the result, and Luria et al. (2024), Tong et al. (2025), and Feng et al. (2025) show why uncertainty has to be exposed rather than hidden. Finally, Mikulan et al. (2020), Pascarella et al. (2023), Unnwongse et al. (2023), and Hao et al. (2025) validate different source regimes. On this site, a claim that says only "we used ESI" still does not say enough.

Read DCM by candidate-model rule, not causal wording

Penny et al. (2004) established that DCM inference is relative to the compared model set, Rosa et al. (2012) showed how post-hoc model-space search can be expanded, and Frässle et al. (2021) plus Wu et al. (2024) pushed whole-brain and faster estimation. Those are advances in tractability, not automatic solutions to identifiability. On this site, DCM therefore remains a model-conditioned causal hypothesis unless candidate space, observed-subsystem closure / latent-confound audit, node-definition policy, sampling / transformation sensitivity, recovery, reliability, and validation are disclosed.

DCM axis What it actually strengthens What it still does not close
Candidate-model comparison / family search Penny et al. (2004); Rosa et al. (2012). Stronger comparison among explicitly declared competitors. It does not prove that omitted nodes, edges, priors, or model families were absent or irrelevant.
Scaling / tractability Frässle et al. (2021); Wu et al. (2024). Larger or faster search within a declared DCM family. It does not turn the graph into preprocessing-invariant, node-invariant, or competitor-complete causal truth.
Processing / first-level design robustness Almgren et al. (2020); Zhang et al. (2024). Stronger disclosure of how GSR, GLM design, contrast definition, and thresholding change the inferred edges and parameter certainty. It does not let one reasonable pipeline stand in for pipeline-robust effective connectivity.
Reliability window Frässle et al. (2016); Ma et al. (2024). A bounded statement about how stable the result remains under named priors, session structure, scan duration, and sample size. It does not show that the same graph will survive different sites, longer horizons, weaker scans, or changed processing policies.
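The candidate-set relativity above can be sketched with a deliberately non-neural toy: a BIC-style comparison between two declared models ranks them against each other, but says nothing about mechanisms outside the declared set. Synthetic data and trivial models, not DCM's actual variational machinery.

```python
import math

# Synthetic toy, not DCM itself: model comparison only ranks the
# candidates that were declared; the winner is not proof that the true
# mechanism is inside the candidate set at all.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]  # roughly y = 2x, with noise

def bic(rss, n_params, n):
    """Lower is better: a fit term plus a complexity penalty."""
    return n * math.log(rss / n) + n_params * math.log(n)

# Candidate 1: y = a*x, least-squares slope through the origin.
a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
rss1 = sum((y - a * x) ** 2 for x, y in zip(xs, ys))

# Candidate 2: y = c, a constant mean.
c = sum(ys) / len(ys)
rss2 = sum((y - c) ** 2 for y in ys)

better = "y = a*x" if bic(rss1, 1, len(xs)) < bic(rss2, 1, len(xs)) else "y = c"
# "better" names the winner *within this declared pair only*.
```

The same logic scales up: a DCM winner is a statement about the declared family, which is why candidate-space disclosure sits first among the audit axes.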
Where the full rule lives

This page is the beginner map. If you need the full DCM / effective-connectivity submission rule, including latent-confound audit, node-definition policy, and abstention boundary, continue to Wiki: From observation to estimation.

4. Verification: How to trust estimates

The final question is, "Can other people confirm this estimation or model under the same conditions?" This is where words like Benchmark, Baseline, Preregistration, and Model Card come into play.

Term What it is needed for
Benchmark Fix what will be compared and what indicators will be used to score.
Baseline Fixes a starting point against which improvement claims can be measured.
Pre-registration Prevents conditions from being changed after the fact.
Model card Publishes weaknesses, failure cases, leakage countermeasures, and compute conditions alongside the score.
Experiment-design leverage Name which surviving ambiguity the next measurement or perturbation targets, why it was chosen by the stated identifiability objective, and what minimum-sufficiency stop rule would end further collection.
Route card When ESI, connectivity, or DCM is used, we disclose the assumptions, validation class, abstention boundary, and what the result still does not identify.
Companion card When a claim spans several routes or stages, we disclose whether the unresolved burden is fusion, human-proxy composition, or state continuity, instead of hiding that relation behind words such as multimodal, same-subject, or same-brain.
Verification now asks why the next condition was chosen

On this site, verification is no longer only a place to list what was measured. When ambiguity remains, the stronger workflow must also explain which ambiguity class survived, which identifiability objective selected the next condition, whether the new condition exposed omitted-mechanism error, and what minimum-sufficiency design would have been enough. Otherwise, even a careful benchmark can still look like open-ended data accumulation rather than an ambiguity-breaking design.
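One way to make "the identifiability objective chose the next condition" concrete is a discriminative-design toy: among the available conditions, schedule the one where the surviving candidate models disagree most. The candidate models and conditions below are invented for illustration.

```python
# Invented candidates and conditions, for illustration only: let an
# identifiability objective (here, maximum disagreement among surviving
# candidates) choose the next measurement condition instead of
# defaulting to "collect more of the same".
candidates = {
    "model_A": lambda x: 2.0 * x,       # linear response
    "model_B": lambda x: 2.0 * x * x,   # quadratic response
}
# Both candidates agree exactly at x = 1.0, so re-measuring there is wasted.
conditions = [0.5, 1.0, 3.0]

def disagreement(x):
    """Spread of the surviving candidates' predictions at condition x."""
    preds = [model(x) for model in candidates.values()]
    return max(preds) - min(preds)

next_condition = max(conditions, key=disagreement)
# x = 3.0 separates the candidates most (predictions 6.0 vs 18.0), so it
# is what an identifiability-driven design would schedule next.
```

A minimum-sufficiency stop rule then ends collection once the surviving candidates no longer disagree beyond measurement noise at any reachable condition.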

References

  1. Pernet, C. R., Appelhoff, S., Gorgolewski, K. J., et al. (2019). EEG-BIDS, an extension to the brain imaging data structure for electroencephalography. Scientific Data, 6, 103. doi:10.1038/s41597-019-0104-8
  2. Michel, C. M., & Brunet, D. (2019). EEG source imaging: a practical review of the analysis steps. Frontiers in Neurology, 10, 325. doi:10.3389/fneur.2019.00325
  3. Ahlfors, S. P., Han, J., Belliveau, J. W., & Hämäläinen, M. S. (2010). Sensitivity of MEG and EEG to source orientation. Brain Topography, 23(3), 227-232. doi:10.1007/s10548-010-0154-x
  4. Goldenholz, D. M., Ahlfors, S. P., Hämäläinen, M. S., Sharon, D., Ishitobi, M., Vaina, L. M., & Stufflebeam, S. M. (2009). Mapping the signal-to-noise-ratios of cortical sources in magnetoencephalography and electroencephalography. Human Brain Mapping, 30(4), 1077-1086. doi:10.1002/hbm.20571
  5. Piastra, M. C., Nüßing, A., Vorwerk, J., Clerc, M., Engwer, C., & Wolters, C. H. (2021). A comprehensive study on electroencephalography and magnetoencephalography sensitivity to cortical and subcortical sources. Human Brain Mapping, 42(4), 978-992. doi:10.1002/hbm.25272
  6. Mikulan, E., Russo, S., Bares, M., et al. (2020). Simultaneous human intracerebral stimulation and HD-EEG, ground-truth for source localization methods. Scientific Data, 7, 127. doi:10.1038/s41597-020-0467-x
  7. Pascarella, A., Mikulan, E., Sciacchitano, F., et al. (2023). An in-vivo validation of ESI methods with focal sources. NeuroImage, 277, 120219. doi:10.1016/j.neuroimage.2023.120219
  8. Unnwongse, K., Achakulvisut, T., Wu, J. Y., et al. (2023). Direct validation of EEG source imaging by intracranial electric stimulation in human patients. Brain Communications, 5(2), fcad023. doi:10.1093/braincomms/fcad023
  9. Vorwerk, J., Wolters, C. H., & Baumgarten, D. (2024). Global sensitivity of EEG source analysis to tissue conductivity uncertainties. Frontiers in Human Neuroscience, 18, 1335212. doi:10.3389/fnhum.2024.1335212
  10. Luria, G., Viani, S., Pascarella, A., et al. (2024). The SESAMEEG package: a probabilistic tool for source localization and uncertainty quantification in M/EEG. Frontiers in Human Neuroscience, 18, 1359753. doi:10.3389/fnhum.2024.1359753
  11. Tong, P. F., Yang, H., Ding, X., et al. (2025). Debiased Estimation and Inference for Spatial-Temporal EEG/MEG Source Imaging. IEEE Transactions on Medical Imaging. doi:10.1109/TMI.2024.3506596
  12. Hao, S., Zhao, H., Feng, Z., et al. (2025). HD-EEG source imaging with simultaneous SEEG recording in drug-resistant epilepsy. Epilepsia, 66(11), 4451-4464. doi:10.1111/epi.18552
  13. Feng, Z., Mishne, G., Hashemi, A., et al. (2025). Block-Champagne: Imaging extended E/MEG source activation with empirical Bayesian uncertainty quantification. IEEE Transactions on Medical Imaging. doi:10.1109/TMI.2025.3642620
  14. Vorwerk, J., Köhler, T., Güllmar, D., et al. (2026). Potential of EEG and EEG/MEG skull conductivity estimation to improve source analysis in presurgical evaluation of epilepsy. Journal of Neural Engineering, 23(1), 016007. doi:10.1088/1741-2552/ae2f01
  15. Penny, W. D., Stephan, K. E., Mechelli, A., & Friston, K. J. (2004). Comparing dynamic causal models. NeuroImage, 22(3), 1157-1172. doi:10.1016/j.neuroimage.2004.03.026
  16. Rosa, M. J., Friston, K., & Penny, W. (2012). Post-hoc selection of dynamic causal models. Journal of Neuroscience Methods, 208(1), 66-78. doi:10.1016/j.jneumeth.2012.04.013
  17. Frässle, S., Paulus, F. M., Krach, S., & Jansen, A. (2016). Test-retest reliability of effective connectivity in the face perception network. Human Brain Mapping, 37(2), 730-744. doi:10.1002/hbm.23061
  18. Frässle, S., Manjaly, Z. M., Do, C. T., Kasper, L., Pruessmann, K. P., & Stephan, K. E. (2021). Whole-brain estimates of directed connectivity for human connectomics. NeuroImage, 225, 117491. doi:10.1016/j.neuroimage.2020.117491
  19. Wu, H., Hu, X., & Zeng, Y. (2024). A fast dynamic causal modeling regression method for fMRI. NeuroImage, 304, 120954. doi:10.1016/j.neuroimage.2024.120954
  20. Almgren, H., Van de Steen, F., Razi, A., Friston, K., & Marinazzo, D. (2020). The effect of global signal regression on DCM estimates of noise and effective connectivity from resting state fMRI. NeuroImage, 208, 116435. doi:10.1016/j.neuroimage.2019.116435
  21. Zhang, S., Jung, K., Langner, R., Florin, E., Eickhoff, S. B., & Popovych, O. V. (2024). Impact of data processing varieties on DCM estimates of effective connectivity from task-fMRI. Human Brain Mapping, 45(8), e26751. doi:10.1002/hbm.26751
  22. Ma, L., Braun, S. E., Steinberg, J. L., Bjork, J. M., Martin, C. E., Keen II, L. D., & Moeller, F. G. (2024). Effect of scanning duration and sample size on reliability in resting state fMRI dynamic causal modeling analysis. NeuroImage, 292, 120604. doi:10.1016/j.neuroimage.2024.120604
  23. Raue, A., Kreutz, C., Maiwald, T., Klingmüller, U., & Timmer, J. (2011). Addressing parameter identifiability by model-based experimentation. IET Systems Biology, 5(2), 120-130. doi:10.1049/iet-syb.2010.0061
  24. Chis, O.-T., Villaverde, A. F., Banga, J. R., & Balsa-Canto, E. (2016). On the relationship between sloppiness and identifiability. Mathematical Biosciences, 282, 147-161. doi:10.1016/j.mbs.2016.10.009
  25. White, A., Tolman, M., Thames, H. D., Withers, H. R., Mason, K. A., & Transtrum, M. K. (2016). The Limitations of Model-Based Experimental Design and Parameter Estimation in Sloppy Systems. PLoS Computational Biology, 12(12), e1005227. doi:10.1371/journal.pcbi.1005227
  26. Gevertz, J. L., & Kareva, I. (2024). Minimally sufficient experimental design using identifiability analysis. npj Systems Biology and Applications, 10(1), 2. doi:10.1038/s41540-023-00325-1
  27. Beiran, M., & Litwin-Kumar, A. (2025). Prediction of neural activity in connectome-constrained recurrent networks. Nature Neuroscience, 28, 2561-2574. doi:10.1038/s41593-025-02080-4
  28. Langdon, C., & Engel, T. A. (2025). Latent circuit inference from heterogeneous neural responses during cognitive tasks. Nature Neuroscience, 28, 665-675. doi:10.1038/s41593-025-01869-7
  29. Liu, X., Wanika, L., Chappell, M. J., & Branke, J. (2025). Efficient data collection for establishing practical identifiability via active learning. Computational and Structural Biotechnology Journal, 27, 4992-5006. doi:10.1016/j.csbj.2025.10.058
  30. Kothe, C., Shirazi, S. Y., Stenner, T., Medine, D., Boulay, C., Grivich, M. I., Artoni, F., Mullen, T., Delorme, A., & Makeig, S. (2025). The lab streaming layer for synchronized multimodal recording. Imaging Neuroscience. doi:10.1162/IMAG.a.136
  31. Vafaii, H., Mandino, F., Desrosiers-Grégoire, G., et al. (2024). Multimodal measures of spontaneous brain activity reveal both common and divergent patterns of cortical functional organization. Nature Communications, 15, 383. doi:10.1038/s41467-023-44363-z
  32. Chen, J. E., Lewis, L. D., Coursey, S. E., et al. (2025). Simultaneous EEG-PET-MRI identifies temporally coupled and spatially structured brain dynamics across wakefulness and NREM sleep. Nature Communications, 16, 8887. doi:10.1038/s41467-025-64414-x
  33. Bolt, T. S., van den Brink, R. L., Song, C., et al. (2025). Autonomic physiological coupling of the global fMRI signal. Nature Neuroscience, 28, 1001-1014. doi:10.1038/s41593-025-01945-y
  34. Epp, S. M., Castrillon, G., Yuan, B., Andrews-Hanna, J., Preibisch, C., & Riedl, V. (2025). BOLD signal changes can oppose oxygen metabolism across the human cortex. Nature Neuroscience. doi:10.1038/s41593-025-02132-9
  35. Li, X., Zhu, X.-H., Li, Y., et al. (2025). Quantitative mapping of key glucose metabolic rates in the human brain using dynamic deuterium magnetic resonance spectroscopic imaging. PNAS Nexus, 4(3), pgaf072. doi:10.1093/pnasnexus/pgaf072
  36. Bøgh, N., Vaeggemose, M., Schulte, R. F., et al. (2024). Repeatability of deuterium metabolic imaging of healthy volunteers at 3 T. European Radiology Experimental, 8, 9. doi:10.1186/s41747-024-00426-4
  37. Morgan, C. A., Thomas, D. L., Shao, X., et al. (2024). Measurement of blood-brain barrier water exchange rate using diffusion-prepared and multi-echo arterial spin labelling: Comparison of quantitative values and age dependence. NMR in Biomedicine, 37(12), e5256. doi:10.1002/nbm.5256
  38. Amiri, M., Hermann, B., Märtens, B., et al. (2023). Multimodal prediction of residual consciousness in the intensive care unit: the CONNECT-ME study. Brain, 146(2), 645-661. doi:10.1093/brain/awac335
  39. Manasova, D., Hermann, B., Calligaris, C., et al. (2026). Multimodal multicentre investigation of diagnostic and prognostic markers in disorders of consciousness. Brain. doi:10.1093/brain/awaf412
  40. Bosch, C., Ackels, T., Pacureanu, A., et al. (2022). Functional and multiscale 3D structural investigation of brain tissue through correlative in vivo physiology, synchrotron microtomography and volume electron microscopy. Nature Communications, 13, 2923. doi:10.1038/s41467-022-30199-6
  41. MICrONS Consortium, Bae, J. A., Lee, W.-C. A., et al. (2025). Functional connectomics spanning multiple areas of mouse visual cortex. Nature, 640, 435-447. doi:10.1038/s41586-025-08790-w
  42. Gallego, J. A., Perich, M. G., Chowdhury, R. H., et al. (2020). Long-term stability of cortical population dynamics underlying consistent behavior. Nature Neuroscience, 23, 260-270. doi:10.1038/s41593-019-0555-4
  43. Van De Ville, D., Farouj, Y., Preti, M. G., Liégeois, R., & Amico, E. (2021). When makes you unique: Temporality of the human brain fingerprint. Science Advances, 7(42), eabj0751. doi:10.1126/sciadv.abj0751
  44. Karpowicz, B. M., Ali, Y. H., Wimalasena, L. N., et al. (2025). Stabilizing brain-computer interfaces through alignment of latent dynamics. Nature Communications, 16, 4662. doi:10.1038/s41467-025-59652-y
  45. Wilson, G. H., Stein, E. A., Kamdar, F., et al. (2025). Long-term unsupervised recalibration of cursor-based intracortical brain-computer interfaces using a hidden Markov model. Nature Biomedical Engineering. doi:10.1038/s41551-025-01536-z
  46. Wairagkar, M., Card, N. S., Singer-Clark, T., et al. (2025). An instantaneous voice-synthesis neuroprosthesis. Nature, 644, 145-152. doi:10.1038/s41586-025-09127-3

What has been learned from this process and what is still unknown

What we know What we still don't know
Which stage of work each term belongs to. Which model will ultimately explain consciousness and identity adequately.
How to read without confusing observation, estimation, and verification. Whether non-invasive measurement alone can yield sufficient information for WBE.
Why BIDS and QC are part of the method rather than an external add-on. Which multimodal integration is ultimately best.
Why inverse-problem papers must be separated into visibility, forward-model burden, solver uncertainty, and validation class. Which inverse route or validation ladder generalizes beyond focal or clinical benchmark regimes.
Why route cards and companion cards must be separated before multimodal, proxy-rich, or same-brain language is read strongly. Which combination of Fusion, Human Proxy Composition, State-Continuity Bridge, and Identifiability cards should become the site's default front-door bundle for cross-stack claims.

Where to go back next

Use Glossary to return to short definitions, Introduction to EEG to revisit the role of EEG, and Verification infrastructure to proceed to comparable verification.