From Protocol to High-Impact Publication: Research with NeuroTrax Digital Neurometrics

NeuroTrax Science Team and Glen M. Doniger, PhD

Designing studies with cognitive endpoints presents a formidable challenge. Researchers need tools that are both scientifically rigorous and operationally feasible. Traditional neuropsychological testing introduces challenges related to variability in administration, labor-intensive manual scoring, and difficulty scaling across sites.

NeuroTrax addresses this by providing a standardized digital cognitive assessment platform that has been used in hundreds of peer-reviewed studies and clinical trials. It serves not only as a measurement tool but also as infrastructure that supports valid, reproducible, and ultimately publishable cognitive data. Indeed, NeuroTrax digital neurometrics have been validated in a variety of populations and research contexts.

A seminal study demonstrated the utility of NeuroTrax as a neuromarker that differentiates among cognitive health, mild cognitive impairment (MCI), and Alzheimer’s disease [1]. The computerized measures showed strong discriminant validity comparable to, and in some cases exceeding, traditional paper-based neuropsychological tests. Importantly, NeuroTrax discriminant validity has been shown to be robust to depressive symptoms [2] and socioethnic factors [3], highlighting its utility in heterogeneous research cohorts.

Subsequent studies demonstrated the validity of the platform in diverse clinical populations. One study found that NeuroTrax attention and executive function metrics were highly correlated with the gold-standard Conners’ CPT-II in adults with ADHD [4]. Another revealed that response-time-related measures (e.g., information processing speed) are particularly relevant for discriminating persons with multiple sclerosis (PwMS) from healthy individuals [5]. Follow-up work in PwMS validated the platform against traditional testing and identified an optimal cutoff for predicting impairment on the neuropsychological battery [6]. An analysis of over 5,000 individuals using machine learning analytics demonstrated that NeuroTrax scores distinguish neurological conditions with high accuracy (e.g., multiple sclerosis: AUC=0.94; traumatic brain injury: AUC=0.93) [7]. Findings like these establish NeuroTrax as a robust and clinically relevant alternative to traditional testing, with the added advantages of standardization, automation, and scalability. Further, with tools like NeuroTrax, researchers can move beyond a single summary score to multi-domain, highly detailed cognitive profiles essential to understanding disease phenotype and, potentially, etiology.

A common use case for NeuroTrax is interventional studies, in which detection of subtle cognitive changes is critical. Such longitudinal studies require consistency across repeated measures. The platform supports high test-retest reliability through standardized administration, practice sessions, and alternate test forms that reduce learning effects. Test-retest reliability of the digital measures has been established in several studies [8], including a psychometric study of Navy divers [9]. In addition to core validity and reliability, NeuroTrax incorporates built-in detection of high negative response bias [10]. Together, these features safeguard scientific rigor, ensuring that observed changes reflect true cognitive variation rather than methodological noise or examinee bias.

Beyond scientific validity, NeuroTrax addresses practical research challenges. Compared to traditional approaches, incorporating the platform enables faster study setup and training, minimal participant burden, and scalable deployment across sites. This is particularly important in multisite and global studies, where logistical challenges may be the greatest barrier.

The growing corpus of NeuroTrax-based publications reflects a shift in cognitive research. More investigators are selecting tools that are not only validated but also scalable and efficient. Digital neuromarkers like NeuroTrax allow research teams to design more tightly controlled studies, generate cleaner datasets, and produce findings that are publishable in top peer-reviewed journals.

If your study includes cognitive endpoints, your choice of measurement tool will directly influence your results. NeuroTrax provides a validated, scalable approach to cognitive assessment that supports high-quality research from protocol to publication.

References:

[1] Dwolatzky, T., Whitehead, V., Doniger, G.M., Simon, E.S., Schweiger, A., Jaffe, D., and Chertkow, H. (2003). Validity of a novel computerized cognitive battery for mild cognitive impairment. BMC Geriatrics, 3:4. DOI: 10.1186/1471-2318-3-4

[2] Doniger, G.M., Dwolatzky, T., Zucker, D.M., Chertkow, H., Crystal, H., Schweiger, A., and Simon, E.S. (2006). Computerized cognitive testing battery identifies mild cognitive impairment and mild dementia even in the presence of depressive symptoms. American Journal of Alzheimer’s Disease and Other Dementias, 21(1), 28–36. DOI: 10.1177/153331750602100105

[3] Doniger, G.M., Jo, M.-Y., and Crystal, H.A. (2009). Computerized cognitive assessment of mild cognitive impairment in urban African Americans. American Journal of Alzheimer’s Disease and Other Dementias, 24(5), 396–403. DOI: 10.1177/1533317509342982

[4] Schweiger, A., Abramovitch, A., Doniger, G.M., and Simon, E.S. (2007). A clinical construct validity study of a novel computerized battery for the diagnosis of ADHD in young adults. Journal of Clinical and Experimental Neuropsychology, 29(1), 100–111. DOI: 10.1080/13803390500519738

[5] Achiron, A., Doniger, G.M., Harel, Y., Appleboim-Gavish, N., Lavie, M., and Simon, E.S. (2007). Prolonged response times characterize cognitive performance in multiple sclerosis. European Journal of Neurology, 14(10), 1102–1108. DOI: 10.1111/j.1468-1331.2007.01909.x

[6] Golan, D., Wilken, J., Doniger, G.M., Fratto, T., Kane, R., Srinivasan, J., Zarif, M., Bumstead, B., Buhse, M., Fafard, L., Topalli, I., and Gudesblatt, M. (2019). Validity of a multi-domain computerized cognitive assessment battery for patients with multiple sclerosis. Multiple Sclerosis and Related Disorders, 30, 154–162. DOI: 10.1016/j.msard.2019.01.051

[7] Mishan-Shamay, H., Doniger, G., Chalom, E., Simon, E., and Unger, R. (2013). A machine-learning approach for integration of computerized cognitive data in the neuropsychological assessment of older adults. Alzheimer’s & Dementia, 9, P635–P636. DOI: 10.1016/j.jalz.2013.05.1292

[8] Schweiger, A., Doniger, G.M., Dwolatzky, T., Jaffe, D., and Simon, E.S. (2003). Reliability of a novel computerized neuropsychological battery for mild cognitive impairment. Acta Neuropsychologica, 1(4), 407–413. GICID: 01.3001.0001.0603

[9] Melton, J.L. (2005). NEDU Technical Report 06-10. Navy Experimental Diving Unit, Panama City, FL.

[10] Bar-Hen, M., Doniger, G.M., Golzad, M., Geva, N., and Schweiger, A. (2015). Empirically derived algorithm for performance validity assessment embedded in a widely used neuropsychological battery: Validation among TBI patients in litigation. Journal of Clinical and Experimental Neuropsychology, 37(10), 1086–1097. DOI: 10.1080/13803395.2015.1078294