Highlights
– AI-RAPNO outlines advances in paediatric-specific AI for tumour segmentation, response quantification, and prognostication while emphasising unique paediatric challenges such as small datasets and heterogeneous imaging protocols.
– The initiative recommends standardised imaging protocols, robust external validation, model interpretability, and infrastructure (data curation, federated learning) to enable trustworthy clinical translation within the RAPNO response framework.
– Priority applications include automated volumetric segmentation aligned to RAPNO metrics, multimodal integration (imaging, molecular, clinical), and use of synthetic controls to support trial efficiency, but regulatory, ethical, and operational barriers remain.
Background and disease burden
Paediatric brain tumours are the leading cause of cancer-related death in children in high-income settings and represent a heterogeneous group of neoplasms with diverse histopathology, molecular subtypes, and clinical behaviours. Accurate and reproducible imaging-based response assessment is critical for clinical management, risk-adapted therapy, and trial endpoints. The Response Assessment in Pediatric Neuro-Oncology (RAPNO) criteria were developed to provide standardised definitions of response and progression tailored to the paediatric population, seeking to harmonise endpoint reporting across trials.
Artificial intelligence (AI), particularly deep learning, can markedly reduce manual burden, increase reproducibility of quantitative imaging biomarkers (for example, tumour volume and contrast enhancement), and integrate complex multimodal data. However, paediatric neuro-oncology has features that complicate direct adoption of adult-derived AI systems: lower incidence leading to smaller datasets, developmental changes in brain morphology, a wider range of tumour types with distinct imaging phenotypes, and variable imaging protocols across centres and time.
Study design and scope of AI-RAPNO
The AI-RAPNO effort (detailed in two companion papers published in The Lancet Oncology) is a policy-oriented, multidisciplinary initiative that critically reviews the current state of AI methods for paediatric neuro-oncology (part 1) and lays out challenges, opportunities, and implementation recommendations for clinical translation (part 2). The work synthesises published literature, technical advances in segmentation and prognostic modelling, and expert consensus from the RAPNO community to generate actionable guidance for researchers, trialists, regulators, and clinical teams.
Because AI-RAPNO is a consensus and policy review rather than a primary clinical trial, its ‘endpoints’ are practical: identifying high-priority use cases, defining validation expectations, and proposing infrastructure and regulatory pathways that would allow AI tools to be integrated safely into the RAPNO framework and clinical trials.
Key findings and technical advances
1. Task-specific AI tools for paediatric imaging
Recent technical work demonstrates that deep learning models can achieve high performance for tumour segmentation and volumetric quantification when trained on appropriately curated paediatric datasets. Advances include 3D convolutional neural networks (CNNs) and attention-based architectures tailored to multimodal MR sequences (T1, post-contrast T1, T2, FLAIR). Paediatric-specific models outperform adult-trained models on many paediatric tumour types because they learn age- and tumour-specific morphologic patterns.
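As an illustration of the architectural building blocks involved, the sketch below (assuming PyTorch, with an intentionally tiny network) stacks the four MR sequences as input channels of a 3D convolutional network that emits per-voxel class logits; published paediatric models use far deeper U-Net and attention variants trained on curated cohorts.

```python
# Minimal sketch of a 3D convolutional segmentation network, assuming PyTorch.
# The four MR sequences (T1, post-contrast T1, T2, FLAIR) are stacked as input
# channels; real paediatric models use far deeper U-Net/attention variants.
import torch
import torch.nn as nn

class TinySeg3D(nn.Module):
    def __init__(self, in_channels: int = 4, n_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, n_classes, kernel_size=1),  # per-voxel class logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# One co-registered study: batch x 4 sequences x depth x height x width.
volume = torch.randn(1, 4, 64, 64, 64)
logits = TinySeg3D()(volume)            # shape: (1, 2, 64, 64, 64)
mask = logits.argmax(dim=1)             # predicted tumour mask
```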
2. Alignment with RAPNO metrics
AI enables automated derivation of metrics central to RAPNO, including bidimensional measures, volumetric tumour burden, and contrast-enhancing fraction. Automated volumetry can decrease inter-reader variability and reduce time per study. Integration of AI-derived metrics into RAPNO reporting templates would facilitate standardised endpoint capture across trials and longitudinal clinical care.
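A minimal sketch of how such metrics fall out of a segmentation, assuming NumPy and known voxel spacing: tumour volume in millilitres and the contrast-enhancing fraction from binary masks. The function and mask names are illustrative, and bidimensional RAPNO measures would additionally require in-plane diameter estimation.

```python
# Sketch of RAPNO-aligned quantities derived from a binary tumour mask and a
# post-contrast enhancement mask; voxel spacing is assumed known (mm).
import numpy as np

def rapno_metrics(tumour_mask: np.ndarray,
                  enhancing_mask: np.ndarray,
                  spacing_mm: tuple[float, float, float]) -> dict:
    voxel_ml = np.prod(spacing_mm) / 1000.0          # mm^3 -> mL
    tumour_voxels = tumour_mask.sum()
    return {
        "tumour_volume_ml": float(tumour_voxels * voxel_ml),
        # Fraction of the tumour that enhances after contrast.
        "enhancing_fraction": float(
            (tumour_mask & enhancing_mask).sum() / max(tumour_voxels, 1)
        ),
    }

# Toy example: a 10x10x10-voxel tumour, half of it enhancing, 1 mm isotropic.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[20:30, 20:30, 20:30] = True
enh = np.zeros_like(mask)
enh[20:30, 20:30, 20:25] = True
print(rapno_metrics(mask, enh, (1.0, 1.0, 1.0)))  # ~1.0 mL, fraction 0.5
```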
3. Multimodal integration and prognostication
Beyond segmentation, multimodal AI models combining imaging with genomics, clinical status (age, symptoms), and treatment exposures can improve prediction of progression-free survival, treatment response, and late effects. These integrative models have potential both for individualised risk stratification and for enriching or stratifying clinical trial cohorts.
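To make the fusion concrete, a hedged sketch assuming the lifelines package: an AI-derived volumetric feature is combined with clinical covariates in a Cox proportional-hazards model. All feature names and outcome distributions here are invented for illustration only.

```python
# Sketch of multimodal prognostication: imaging-derived tumour volume combined
# with clinical covariates in a Cox model, assuming the `lifelines` package.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "tumour_volume_ml": rng.gamma(2.0, 5.0, n),    # AI-derived volumetry
    "age_years": rng.uniform(1, 17, n),
    "high_risk_molecular": rng.integers(0, 2, n),  # e.g. a mutation/fusion flag
})
# Synthetic outcomes for illustration only.
df["pfs_months"] = rng.exponential(24 / (1 + 0.05 * df["tumour_volume_ml"]))
df["progressed"] = rng.integers(0, 2, n)

cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()  # hazard ratios per covariate
```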
4. Synthetic controls and trial efficiency
AI can support creation of high-quality historical or synthetic control arms using well-annotated registries and harmonised imaging-derived endpoints. When carefully validated, these approaches may reduce the number of patients required for randomised trials in rare paediatric tumours, accelerate evaluation of novel agents, and minimise exposure to ineffective therapies.
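One common ingredient of such control arms is propensity-score matching of registry patients to trial participants. The sketch below, assuming scikit-learn and entirely synthetic covariates, illustrates greedy 1:1 nearest-neighbour matching on the estimated propensity score.

```python
# Illustrative sketch of building a matched historical control arm via
# propensity scores (scikit-learn); covariates and cohorts are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Covariates used for matching: age (years), baseline tumour volume (mL).
trial = rng.normal([8.0, 12.0], [4.0, 6.0], size=(40, 2))      # trial patients
registry = rng.normal([9.0, 15.0], [4.5, 7.0], size=(400, 2))  # historical pool

X = np.vstack([trial, registry])
y = np.r_[np.ones(len(trial)), np.zeros(len(registry))]        # 1 = trial arm
ps = LogisticRegression().fit(X, y).predict_proba(X)[:, 1]     # propensity
ps_trial, ps_reg = ps[:len(trial)], ps[len(trial):]

# Greedy 1:1 nearest-neighbour matching on the propensity score.
available = np.ones(len(registry), dtype=bool)
matches = []
for p in ps_trial:
    idx = np.where(available)[0]
    best = idx[np.argmin(np.abs(ps_reg[idx] - p))]
    available[best] = False
    matches.append(best)
print(f"matched {len(matches)} historical controls")
```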
5. Technical and validation gaps
Important limitations persist: small and fragmented datasets, heterogeneity in MRI acquisition (field strength, sequence parameters), limited external validation, and a tendency to report optimistic internal performance without accompanying calibration or decision-impact studies. Model explainability is often inadequate for clinical acceptance, and few prospective studies demonstrate clinical utility or improved outcomes when AI is used to guide care.
Challenges to clinical implementation
Data heterogeneity and scarcity
Paediatric neuro-oncology studies are typically small and spread across many institutions. Differences in MRI protocols, contrast dosing and timing, and institution-specific post-processing challenge model generalisability. Centralised data sharing is constrained by privacy laws and variable consent, especially for older cohorts.
Model generalisability and robustness
Models trained in one centre often degrade when applied to external data. Robustness testing across scanners, vendors, and acquisition protocols is essential; this includes stress-testing for age-related anatomical variability and for post-operative imaging where scarring, blood products, or hardware can confound segmentation.
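In practice, such stress-testing often reduces to auditing overlap metrics per acquisition subgroup. A minimal sketch, assuming NumPy and pandas, with hypothetical vendor and field-strength metadata and synthetic masks:

```python
# Sketch of a robustness audit: per-subgroup Dice overlap stratified by
# scanner vendor and field strength; metadata columns are hypothetical.
import numpy as np
import pandas as pd

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# One row per study: predicted and reference masks plus acquisition metadata.
rng = np.random.default_rng(2)
rows = []
for vendor in ("vendor_A", "vendor_B"):
    for tesla in (1.5, 3.0):
        for _ in range(10):
            truth = rng.random((16, 16, 16)) > 0.7
            pred = truth ^ (rng.random(truth.shape) > 0.95)  # noisy prediction
            rows.append({"vendor": vendor, "field_T": tesla,
                         "dice": dice(pred, truth)})

report = pd.DataFrame(rows).groupby(["vendor", "field_T"])["dice"].agg(["mean", "std"])
print(report)  # flag subgroups whose mean Dice falls below the pooled figure
```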
Regulatory and ethical considerations
AI tools used for response assessment in trials or clinical decision-making likely meet definitions of medical devices and must satisfy regulatory pathways (for example, the US FDA's Software as a Medical Device [SaMD] pathway). Requirements include transparency about training data, performance across subgroups, and post-market surveillance. Ethical issues include data sovereignty, informed consent for secondary data use, and potential bias affecting underrepresented tumour subgroups.
Integration into clinical workflows
Beyond algorithmic performance, deployment requires integration with PACS, radiology reporting systems, and trial case report forms. Radiologist acceptance depends on clear, interpretable outputs and evidence that AI reduces workload or improves diagnostic accuracy without introducing new risks.
Recommendations from AI-RAPNO
1. Standardise and harmonise imaging acquisition
Develop and promote paediatric-optimised MRI protocols for common tumour types and timepoints (diagnosis, early response, surveillance) to reduce acquisition variability. When harmonisation is infeasible, document acquisition metadata and apply post hoc harmonisation methods.
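As a flavour of what post hoc harmonisation involves, the sketch below applies a simplified location-scale adjustment per site in NumPy. Full ComBat additionally pools site effects with empirical Bayes, so this is illustrative rather than a substitute.

```python
# Simplified location-scale harmonisation of an imaging feature across sites,
# in the spirit of ComBat; this NumPy sketch is illustrative only.
import numpy as np

def harmonise(values: np.ndarray, sites: np.ndarray) -> np.ndarray:
    grand_mean, grand_std = values.mean(), values.std()
    out = np.empty_like(values, dtype=float)
    for site in np.unique(sites):
        sel = sites == site
        # Re-centre and re-scale each site to the pooled distribution.
        out[sel] = (values[sel] - values[sel].mean()) / (values[sel].std() + 1e-8)
        out[sel] = out[sel] * grand_std + grand_mean
    return out

sites = np.array(["A"] * 50 + ["B"] * 50)
vals = np.r_[np.random.normal(10, 2, 50), np.random.normal(14, 3, 50)]
print(harmonise(vals, sites).std())  # the shift in site means is removed
```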
2. Curate high-quality annotated datasets
Establish multi-institutional registries with standardised annotations mapped to RAPNO endpoints. Use common annotation protocols, multi-reader consensus, and versioned datasets that can support benchmarking and reproducible model development.
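A minimal illustration of multi-reader consensus, assuming NumPy: per-voxel majority voting across reader masks. Established alternatives such as STAPLE instead weight readers by their estimated accuracy.

```python
# Sketch of multi-reader consensus annotation by per-voxel majority vote;
# methods such as STAPLE additionally weight readers by estimated accuracy.
import numpy as np

def consensus(masks: list[np.ndarray]) -> np.ndarray:
    stack = np.stack(masks).astype(np.uint8)           # readers x volume
    return stack.sum(axis=0) >= (len(masks) + 1) // 2  # strict majority

rng = np.random.default_rng(3)
reference = rng.random((32, 32, 32)) > 0.6
readers = [reference ^ (rng.random(reference.shape) > 0.97) for _ in range(3)]
print(consensus(readers).sum(), reference.sum())  # close agreement expected
```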
3. Emphasise external validation and calibration
Require multi-institution, multi-vendor external validation as part of model reporting. Present calibration metrics and subgroup analyses (eg, age strata, tumour subtype). Prospective clinical validation and randomised implementation trials should be prioritised where feasible.
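A sketch of the calibration reporting this implies, assuming scikit-learn: a reliability curve (predicted versus observed event fraction per risk bin) alongside the Brier score, here on synthetic, deliberately well-calibrated predictions.

```python
# Sketch of calibration reporting for a progression-risk model, assuming
# scikit-learn: reliability curve plus Brier score on synthetic predictions.
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(4)
y_prob = rng.uniform(0, 1, 500)                          # model-predicted risk
y_true = (rng.uniform(0, 1, 500) < y_prob).astype(int)   # well-calibrated toy

frac_pos, mean_pred = calibration_curve(y_true, y_prob, n_bins=10)
for p, f in zip(mean_pred, frac_pos):
    print(f"predicted {p:.2f} -> observed {f:.2f}")      # should track closely
print("Brier score:", brier_score_loss(y_true, y_prob))
```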
4. Support federated and privacy-preserving learning
To overcome data-sharing constraints, invest in federated learning and other privacy-preserving approaches that allow model training across institutions without centralising raw data. Establish data governance frameworks and consent templates aligned with paediatric research ethics.
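The core of federated averaging (FedAvg), the most common such approach, is easy to state: sites train locally and share only parameters, which a coordinator averages weighted by cohort size. A minimal NumPy sketch with illustrative weights:

```python
# Minimal sketch of federated averaging (FedAvg): each site trains locally and
# shares only model weights, which a coordinator averages by sample count.
import numpy as np

def fedavg(site_weights: list[np.ndarray], site_sizes: list[int]) -> np.ndarray:
    total = sum(site_sizes)
    return sum(w * (n / total) for w, n in zip(site_weights, site_sizes))

# Three institutions with different cohort sizes; weights are illustrative.
w_a, w_b, w_c = (np.array([0.2, 1.1]), np.array([0.3, 0.9]), np.array([0.1, 1.3]))
global_w = fedavg([w_a, w_b, w_c], [120, 45, 30])
print(global_w)  # weighted toward the largest contributing site
```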
5. Define regulatory and reporting standards
Engage regulators early to clarify expectations for AI tools used for response assessment. Encourage adoption of reporting standards such as TRIPOD-AI and CLAIM, and require public release of model weights and code when ethically and legally feasible to support independent evaluation.
Expert commentary and caveats
AI-RAPNO represents a pragmatic roadmap bridging technical innovation and clinical need. The enthusiasm for automated volumetry and multimodal prognostication must be tempered by rigorous evaluation: models should demonstrate that they alter clinician behaviour in ways that benefit patients, improve trial efficiency, or produce more reliable endpoints. Adoption will depend on clear demonstration of safety, equity across subgroups, and cost-effectiveness within real-world workflows.
Limitations of the review include rapid evolution of the field — new architectures and federated initiatives are emerging continually — and variability in the maturity of evidence across use cases. Stakeholders should treat the recommendations as evolving best practice rather than rigid mandates.
Conclusion and future directions
AI has the potential to transform response assessment in paediatric neuro-oncology by providing reproducible, quantitative, and multimodal biomarkers that align with RAPNO criteria. Realising this potential requires coordinated investment in standardised imaging, curated datasets, external validation, regulatory engagement, and clinical implementation research. With these elements in place, AI-driven tools can enhance trial design, enable personalised care, and ultimately improve outcomes for children with brain tumours.
Funding and ClinicalTrials.gov
AI-RAPNO outputs are the product of the Response Assessment in Pediatric Neuro-Oncology (RAPNO) community and are reported in two companion policy papers in The Lancet Oncology. Readers should consult the original publications for detailed funding disclosures and author declarations. No specific ClinicalTrials.gov registration is associated with the AI-RAPNO policy review itself; recommended prospective validation studies should be registered in standard trial registries.
References
1. Kann BH, Vossough A, Brüningk SC, Familiar AM, Aboian M, Linguraru MG, Yeom KW, Chang SM, Hargrave D, Mirsky D, Storm PB, Huang RY, Resnick AC, Weller M, Mueller S, Prados M, Peet AC, Villanueva-Meyer JE, Bakas S, Fangusaro J, Nabavizadeh A, Kazerooni AF; Response Assessment in Pediatric Neuro-Oncology (RAPNO) group. Artificial Intelligence for Response Assessment in Pediatric Neuro-Oncology (AI-RAPNO), part 1: review of the current state of the art. Lancet Oncol. 2025 Nov;26(11):e597-e606. doi: 10.1016/S1470-2045(25)00484-X. PMID: 41167227.
2. Kazerooni AF, Familiar AM, Aboian M, Brüningk SC, Vossough A, Linguraru MG, Huang RY, Hargrave D, Peet AC, Resnick AC, Storm PB, Mirsky D, Yeom KW, Weller M, Prados M, Chang SM, Mueller S, Villanueva-Meyer JE, Bakas S, Fangusaro J, Kann BH, Nabavizadeh A; Response Assessment in Pediatric Neuro-Oncology (RAPNO) group. Artificial Intelligence for Response Assessment in Pediatric Neuro-Oncology (AI-RAPNO), part 2: challenges, opportunities, and recommendations for clinical translation. Lancet Oncol. 2025 Nov;26(11):e607-e618. doi: 10.1016/S1470-2045(25)00489-9. PMID: 41167228.
3. Menze BH, Jakab A, Bauer S, et al. The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS). IEEE Trans Med Imaging. 2015;34(10):1993-2024. doi:10.1109/TMI.2014.2377694.
4. Ostrom QT, Gittleman H, Fulop J, et al. CBTRUS Statistical Report: Primary brain and other central nervous system tumors diagnosed in the United States, 2013-2017. Neuro Oncol. 2020;22(Suppl 2):iv1–iv96. doi:10.1093/neuonc/noaa200.
5. U.S. Food & Drug Administration. Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device Action Plan. FDA; 2021. Available at: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligencemachine-learning-ai-ml-based-software-medical-device (accessed 2025).

