Foundation models are AI models trained on large, broad datasets that can perform multiple tasks, such as detecting several health conditions in a single scan.
Large medtech firms like GE Healthcare, technology firms like Nvidia and Amazon Web Services, startups, and radiology practices are introducing their own models.
HOPPR, an AI startup, trained a foundation model on a large dataset of chest X-rays, spanning hundreds of health conditions.
But the company isn't taking that model directly to the FDA. Instead, it's offering the model to other medtech companies that want to use it as the basis for their own fine-tuned tools.
Aidoc, like HOPPR, sees foundation models as a launching point for more specific radiology tools.
At last year's Radiological Society of North America conference, Aidoc debuted its CARE foundation model, a vision-language model built on CT and X-ray images along with supporting clinical information such as notes, labs and vitals.
Radiology Partners has built two models that it is piloting with radiologists.
The first tool, Mosaic Reporting, uses large language models and voice recognition to structure radiology reports and reduce time spent dictating notes. The second, Mosaic Drafting, combines a large language model with a large vision model to interpret images and pre-draft X-ray reports.