Curia: A Frontier Foundation Model for 3D Imaging, Opening a New Era of AI in Radiology
September 9, 2025 – We are thrilled to announce the public release of Curia, a groundbreaking multi-modal foundation model poised to transform radiological image interpretation and accelerate progress in AI for healthcare. Curia is, to our knowledge, the strongest foundation model specifically designed for precision radiology, with a focus on cross-sectional imaging (CT and MRI slices).
Radiological image interpretation is fundamental to countless clinical diagnoses, yet the traditional approach of building task-specific AI for every imaging modality, disease, and radiological feature is simply not scalable. Foundation models offer a powerful solution by training on vast, uncurated sets of unlabeled data, learning comprehensive features that can be applied to a wide array of downstream tasks.
What Makes Curia Different?
Unlike existing models that are often trained on narrower, task-specific datasets, Curia was trained on the entire cross-sectional imaging output of the Centre d’Imagerie du Nord (CIN) in Paris, France, over several years. The CIN has been a long-term partner of Raidium, and all imaging data have been collected in accordance with the highest data privacy standards. This massive corpus comprises 150,000 exams totaling 130 TB of real-world data, including over 200 million CT and MRI slices, from head to toe. This extensive and uniform training on clinical routine images provides Curia with a deep, transferable understanding of complex anatomy and pathology, mitigating biases that can limit generalizability.

Unprecedented Performance and Emergent Properties
We built a new curated external validation benchmark comprising 19 diverse tasks spanning multiple modalities and diseases. Curia meets or surpasses the performance of both radiologists and recently published foundation models (including BiomedCLIP, BiomedParse, and MedImageInsight). Our evaluations demonstrate several clinically significant emergent properties:
Unparalleled Performance: Curia consistently outperforms other foundation models across numerous tasks. Curia-L achieves near-perfect accuracy of 98.39% in organ classification on CT scans, significantly outperforming BiomedCLIP (88.19%) and MedImageInsight (84.96%).

Cross-Modality Generalization: Despite being trained without explicit pairing of CT and MRI data, Curia exhibits a remarkable ability to generalize features from one modality to another. For example, Curia-L shows a smaller accuracy drop (-9.17 percentage points) when evaluated on out-of-distribution MRI data, compared to the far larger drops of MedImageInsight (-35.51 percentage points) and BiomedCLIP (-43.01 percentage points).

Few-Shot Learning: Curia demonstrates strong few-shot learning capabilities (the ability to learn to recognize patterns and make predictions with a very small number of training examples), achieving high accuracy with a minimal number of labeled examples. This is particularly advantageous in the medical field, where large, expertly annotated datasets are often resource-intensive to create.

Clinical Impact: Curia delivers performance comparable to, or even exceeding, the accuracy of senior resident radiologists on benchmark tasks across various specialties. On average, Curia-L achieves an 89.3% prediction rate, compared to an average 79.1% prediction rate for radiologists.
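The few-shot setting above can be sketched as a nearest-centroid probe on frozen embeddings: average the handful of labeled embedding vectors per class, then assign a query to the class with the most similar centroid. This is a minimal illustration under assumed toy data, not Curia's evaluation code; the 3-D vectors and organ labels below merely stand in for real embedding outputs.

```python
import math

def centroid(vectors):
    """Element-wise mean of a list of equal-length embedding vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def few_shot_classify(query, support):
    """support maps each label to a handful of example embeddings;
    the query is assigned to the label with the nearest centroid."""
    centroids = {label: centroid(vecs) for label, vecs in support.items()}
    return max(centroids, key=lambda label: cosine(query, centroids[label]))

# Toy 3-D "embeddings": two labeled examples per class are enough here.
support = {
    "liver":  [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
    "kidney": [[0.1, 0.9, 0.2], [0.0, 0.8, 0.3]],
}
print(few_shot_classify([0.85, 0.15, 0.05], support))  # -> liver
```

With a strong frozen encoder, even this simplest of classifiers can perform well from a few labels, which is what makes few-shot evaluation a useful probe of embedding quality.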

Benchmarking Against State-of-the-Art Models
We rigorously compared Curia against leading foundation models in radiology using our newly developed, real-world data benchmark, the first of its kind for foundation models. This benchmark, with its 19 distinct radiological tasks spanning both CT and MRI modalities and covering most anatomical regions, provides a standardized and rigorous method for evaluating a model's general performance. It encompasses a wide spectrum of diseases that radiologists encounter, including disorders related to aging, emergencies, infectious diseases, and oncological conditions.
In our benchmark, we compared Curia with state-of-the-art foundation models:
MedImageInsight: An open-source visual embedding model by Microsoft, trained on multi-modal medical data from various domains (radiology, histology, pathology, dermatology, ophthalmology), for a total of 3.8M images.
BiomedCLIP: A ViT-B model trained with contrastive learning on 15M (image, text) pairs extracted from PubMed.

The Power of Curia in Clinical Practice
Curia's capabilities extend beyond just interpreting single images. One of its most significant use cases is co-registration, a process that aligns different medical images to provide a more comprehensive view. Curia consistently outperforms other models in complex registration tasks, including CT-to-CT, MRI-to-MRI, and even the more challenging cross-modality CT-to-MRI alignments. For instance, Curia-L achieved the highest mean Dice Similarity Coefficient (DSC) for CT-to-CT registration (81%) and MRI-to-MRI registration (86%). This ability to align images from different modalities is crucial for tasks like surgical planning and tracking disease progression.
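The Dice Similarity Coefficient cited above is the standard overlap metric for evaluating registration and segmentation. As a minimal sketch (not the benchmark implementation), it can be computed on flattened binary masks:

```python
def dice_coefficient(mask_a, mask_b):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks given as
    flattened sequences of 0s and 1s."""
    intersection = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    # Convention: two empty masks overlap perfectly.
    return 2.0 * intersection / total if total else 1.0

a = [1, 1, 1, 0, 0, 0]
b = [0, 1, 1, 1, 0, 0]
print(dice_coefficient(a, b))  # 2*2 / (3+3) = 0.666...
```

A DSC of 1.0 means perfect overlap and 0.0 means none, so the reported 81% (CT-to-CT) and 86% (MRI-to-MRI) mean scores indicate substantial anatomical agreement after alignment.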
In addition to registration, Curia helps predict survival in cancer patients. Going deeper than a simple binary classification of benign or malignant tumors, a Cox regression model fitted on Curia's features can help predict a patient's survival time. This capability is a significant advancement, as the model's predictions are more accurate than conventional tumor staging methods.
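A standard way to check that model-derived risk scores rank patients better than staging is Harrell's concordance index (C-index), the usual evaluation metric for Cox models. Below is a minimal pure-Python sketch with hypothetical toy data, not Curia's survival pipeline:

```python
from itertools import combinations

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: the fraction of comparable patient pairs in which
    the higher-risk patient has the shorter observed survival time.
    events[i] is 1 if death was observed, 0 if the patient was censored."""
    concordant = comparable = 0.0
    for i, j in combinations(range(len(times)), 2):
        if times[i] > times[j]:          # order the pair so i has the earlier time
            i, j = j, i
        if not events[i]:
            continue                     # earlier time censored: pair not comparable
        if times[i] == times[j]:
            continue                     # skip time ties in this minimal sketch
        comparable += 1
        if risk_scores[i] > risk_scores[j]:
            concordant += 1
        elif risk_scores[i] == risk_scores[j]:
            concordant += 0.5            # tied risk counts as half concordant
    return concordant / comparable

times  = [5, 8, 12, 20]          # hypothetical survival times in months
events = [1, 1, 0, 1]            # 1 = event observed, 0 = censored
risks  = [2.1, 1.5, 0.7, 0.3]    # hypothetical model-derived risk scores
print(concordance_index(times, events, risks))  # -> 1.0 (perfectly concordant)
```

A C-index of 0.5 corresponds to random ranking and 1.0 to perfect ranking; "more accurate than tumor staging" means the feature-based Cox model achieves the higher C-index of the two.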
Finally, Curia's proficiency in anatomy classification is a foundational strength. The model achieves near-perfect accuracy (98.40%) in organ classification on CT scans, significantly outperforming BiomedCLIP (88.19%) and MedImageInsight (84.96%). This strong performance extends to MRI data as well, with Curia-L achieving an accuracy of 89.11%. This fundamental understanding of anatomy is critical for a wide range of diagnostic tasks and serves as a building block for more complex applications.
To accelerate progress in the field, we are releasing Curia-B, a model that performs strongly across all tasks. We also plan to release the 19-task external validation benchmark.
Ready to explore Curia? Here’s where to start:
Download Curia-B code from HuggingFace
Read our preprint on arXiv
Contact us to access the Curia-L model.

Looking Ahead: On the Path Towards General Purpose Radiology Models and Products
While Curia represents a significant leap forward, our work continues. Future developments will focus on incorporating rich, multimodal data from electronic health records and textual reports, aiming for an even deeper contextual understanding of relevant medical knowledge and enabling conversational interactions with the model using natural language.
Curia provides a robust foundation and a standardized benchmark for future research, paving the way for more powerful, versatile, and data-efficient AI tools that aim to enhance diagnostic accuracy and assist clinical workflows, ultimately improving patient outcomes.
Finally, while the Curia model itself is a significant technical achievement, we recognize that its true impact lies in its clinical application. To bridge the gap between research and routine practice, we have developed a novel AI-native, promptable PACS viewer. This interface is designed to seamlessly integrate Curia into established physician workflows, enabling radiologists to interact with the model through visual and textual prompts. By providing a concrete platform for using the model, this viewer lays the groundwork for covering all radiological workflows in the future, making the power of foundation models accessible for radiology practice.
Follow us for more updates as we continue to push the boundaries of AI in precision radiology.
Thank you to our co-authors: Jean Du Terrail (.omics), Dr Mariam Moshiri (Department of Radiology and Radiological Science, Medical University of South Carolina), Dr Laurent Dercle (Department of Radiology, Columbia University Irving Medical Center), Dr Tom Boeken (Department of Vascular and Oncological Interventional Radiology, Hôpital Européen Georges Pompidou AP-HP, Université Paris-Cité, HEKA, INRIA, INSERM PARCC U 970), Dr Jules Gregory (HEKA, INRIA, INSERM PARCC U 970), Pr Maxime Ronot (HEKA, INRIA, INSERM PARCC U 970), Dr François Legou (Centre Cardiologique du Nord), Dr Pascal Roux (Centre Cardiologique du Nord), and Pr Marc Sapoval (Department of Vascular and Oncological Interventional Radiology, Hôpital Européen Georges Pompidou, AP-HP). Also, thanks to our partners CIN, Jean Zay (IDRIS + GENCI), and Leonardo (CINECA, EuroHPC).
