Developed by Cedars-Sinai and the Smidt Heart Institute, the EchoCLIP algorithm draws on a dataset of more than a million echocardiograms and could enhance the accuracy of echocardiogram interpretation.
Summary: Cedars-Sinai and the Smidt Heart Institute have developed EchoCLIP, an AI algorithm trained on over a million echocardiograms to provide clinician-level evaluations of heart function. This model can identify patients across different studies, recognize clinically important changes, and assess cardiac devices. EchoCLIP, described in Nature Medicine, demonstrates how large datasets and AI can enhance medical imaging interpretations. It integrates computer vision and natural language processing to augment cardiologists’ capabilities.
Key Takeaways:
- The novel foundation model integrates computer vision interpretation of echocardiogram images with natural language processing to augment cardiologists’ interpretation of echocardiograms.
- According to the researchers, EchoCLIP is one of the largest AI models trained specifically on echocardiography, utilizing over a million cardiac ultrasound videos to achieve high accuracy in image interpretation.
- The EchoCLIP foundation model has the capability to identify individual patients across different studies and timepoints, recognizing important clinical changes and assisting in longitudinal patient management.
- The AI model can not only assess general heart function but also accurately identify and evaluate implanted intracardiac devices and surgical interventions.
Artificial intelligence experts at Cedars-Sinai and the Smidt Heart Institute created a dataset with more than 1 million echocardiograms, or cardiac ultrasound videos, and their corresponding clinical interpretations.
Using this database, they created EchoCLIP, a powerful machine learning algorithm that can “interpret” echocardiogram images and assess key findings.
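EchoCLIP's name and description point to a CLIP-style design, in which a video encoder and a text encoder are trained jointly so that matched echocardiogram/report pairs embed close together while mismatched pairs are pushed apart. Below is a minimal, hypothetical sketch of that contrastive objective in PyTorch; the embedding dimensions, temperature value, and function names are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a CLIP-style contrastive objective pairing
# echo-video embeddings with report-text embeddings.
import torch
import torch.nn.functional as F

def clip_style_loss(video_emb: torch.Tensor,
                    text_emb: torch.Tensor,
                    temperature: float = 0.07) -> torch.Tensor:
    """Symmetric contrastive loss over a batch of matched (video, report) pairs."""
    v = F.normalize(video_emb, dim=-1)   # unit-length video embeddings
    t = F.normalize(text_emb, dim=-1)    # unit-length text embeddings
    logits = v @ t.T / temperature       # pairwise cosine similarities
    targets = torch.arange(v.shape[0])   # the i-th video matches the i-th report
    # Pull matched pairs together and push mismatched pairs apart, in both directions.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# Toy usage: a batch of 4 video/report embedding pairs of dimension 512.
video_emb = torch.randn(4, 512)
text_emb = torch.randn(4, 512)
print(clip_style_loss(video_emb, text_emb))
```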
Clinician-Level Evaluations
The design and evaluation of EchoCLIP, described in a manuscript published in Nature Medicine, suggest that an EchoCLIP interpretation of a patient’s echocardiogram provides clinician-level evaluation of heart function, assesses past surgeries and devices, and may assist clinicians in identifying patients in need of treatment.
The EchoCLIP foundation model also can identify the same patient across multiple videos, studies, and timepoints, as well as recognize clinically important changes in a patient’s heart.
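In principle, this kind of re-identification can work by comparing study-level embeddings: recordings of the same heart should land near each other in embedding space even when taken years apart. The sketch below is a hypothetical illustration of that idea; the similarity threshold and 512-dimensional embeddings are assumptions for demonstration, not published values.

```python
# Hypothetical patient re-identification via embedding similarity:
# two studies of the same heart should embed close together.
import torch
import torch.nn.functional as F

def same_patient(emb_a: torch.Tensor, emb_b: torch.Tensor,
                 threshold: float = 0.8) -> bool:
    """Flag two study embeddings as the same patient when their
    cosine similarity exceeds a chosen threshold (an assumed value here)."""
    sim = F.cosine_similarity(emb_a, emb_b, dim=0).item()
    return sim >= threshold

# Toy usage with random stand-ins for encoder outputs from two studies.
study_2021 = torch.randn(512)
study_2023 = torch.randn(512)
print(same_patient(study_2021, study_2023))
```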
“To our knowledge, this is the largest model trained on echocardiography images,” says corresponding author David Ouyang, MD, a faculty member in the department of cardiology in the Smidt Heart Institute and in the division of artificial intelligence in medicine, in a release. “Many previous AI models for echocardiograms are only trained on tens of thousands of examples. In contrast, EchoCLIP’s uniquely strong performance in image interpretation is a result of its training on almost tenfold more data than existing models.”
Potential of Large-Scale Data in AI
Ouyang adds, “Our results suggest that large datasets of medical imaging and expert-adjudicated interpretations can serve as the basis for training medical foundation models, which are a form of generative artificial intelligence.”
He says this advanced foundation model could soon help cardiologists assess echocardiograms by generating preliminary assessments of cardiac measurements, identifying changes over time, and recognizing common disease states.
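One common retrieval-style way for a vision-language model to produce such preliminary assessments is to rank candidate report phrases against an echo embedding and return the best match. The following is an illustrative sketch under that assumption; the phrase templates, embedding size, and scoring are not EchoCLIP’s published method.

```python
# Hedged sketch: estimating a measurement by retrieving the candidate
# report phrase whose embedding best matches the video embedding.
import torch
import torch.nn.functional as F

# Candidate phrases spanning plausible ejection-fraction values (assumed templates).
phrases = [f"left ventricular ejection fraction is {ef}%" for ef in range(10, 80, 5)]

def retrieve_best_phrase(video_emb: torch.Tensor,
                         phrase_embs: torch.Tensor,
                         phrases: list[str]) -> str:
    """Return the candidate phrase whose embedding is most similar to the video's."""
    sims = F.cosine_similarity(video_emb.unsqueeze(0), phrase_embs, dim=-1)
    return phrases[int(sims.argmax())]

# Toy stand-ins for the embeddings trained encoders would produce.
video_emb = torch.randn(512)
phrase_embs = torch.randn(len(phrases), 512)
print(retrieve_best_phrase(video_emb, phrase_embs, phrases))
```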
The team of investigators built a dataset of 1,032,975 cardiac ultrasound videos and corresponding expert interpretations to develop EchoCLIP.
Study Takeaways
Key takeaways from the study include:
- EchoCLIP displayed strong performance when assessing cardiac function using heart images.
- The foundation model could identify implanted intracardiac devices, such as pacemakers, mitral valve repairs, and implanted aortic valves, from echocardiogram images.
- EchoCLIP accurately identified unique patients across studies, recognized clinically important changes such as a patient having undergone heart surgery, and enabled the development of a preliminary text interpretation of echocardiogram images.
“Foundation models are one of the newest areas within generative AI, but most models do not have enough medical data to be useful in the healthcare arena,” says Christine M. Albert, MD, MPH, chair of the department of cardiology in the Smidt Heart Institute and the Lee and Harold Kapelovitz Distinguished Chair in Cardiology, in a release.
Albert, who was not involved in the Nature Medicine study, says in a release, “This novel foundation model integrates computer vision interpretation of echocardiogram images with natural language processing to augment cardiologists’ interpretation of echocardiograms.”
Photo 208431771 © BiancoBlue | Dreamstime.com