Artificial intelligence (AI) helps sonographers produce echocardiograms more quickly and efficiently, with better-quality images and less operator fatigue, according to the first prospective randomized controlled trial of AI-assisted echocardiography.
The Japanese study used Us2.ai software, developed from an 11-country research platform and supported by the Singapore Agency for Science, Technology and Research. This system and another newly developed AI system, PanEcho — developed at the Yale School of Medicine in New Haven, Connecticut, and the University of Texas at Austin — can automatically analyze a wide range of cardiac structures, functions, and echocardiographic views. Studies of the two systems were presented at the American Heart Association (AHA) Scientific Sessions 2024.
“This is what happens when you introduce computer scientists to cardiologists,” said David Ouyang, MD, a cardiologist at Cedars-Sinai Medical Center in Los Angeles, California. “It really allows for exciting new technologies to be developed.” Ouyang was not involved with either of the studies presented, but he led a previous study of another AI platform, EchoNet-Dynamic, developed at Stanford University in Stanford, California, which outperformed human interpretation of echocardiograms for left ventricular ejection fraction.
AI’s Speed and Precision
Echocardiography — the most common form of cardiac imaging — is “the ideal place to use AI,” said Ouyang. “It covers the full spectrum of disease; we use it in very sick patients as well as healthy patients for screening.” Echocardiography is cheap and portable and involves no radiation, but variability among observers in interpretation and in image quality is its Achilles’ heel, he explained.
AI aims to reduce the variability in interpretation and improve image quality. It can also increase the number of exams produced in a given day, according to Nobuyuki Kagiyama, MD, PhD, a researcher with Juntendo University in Tokyo, Japan. He presented a study that involved four sonographers working at a single center in Japan, where echocardiography is performed at a higher rate per capita than in the United States.
AI can also help overcome the “main bottleneck: limited access to highly skilled personnel” who interpret images and videos, said Gregory Holste, a PhD candidate in electrical engineering at the University of Texas at Austin, a researcher in the Cardiovascular Data Science Lab at Yale, and an investigator for a validation study of PanEcho. The PanEcho model was trained on more than 1.2 million videos comprising 50 million images, and it identified five views from which abnormalities can be detected while maintaining a high rate of accuracy.
“This is a way for AI to actually simplify echo acquisition,” said Holste, and “could have major implications in settings with limited access to highly trained technicians or sonographers.” Because fewer views are needed, it could “enable automated cardiovascular healthcare that would otherwise be inaccessible to these populations.”
Beyond the Hype
These studies demonstrate the value of AI to a core technology in healthcare. The randomized controlled trial conducted in Japan examined 14 measurement tasks and showed that AI returned values for almost all of them; the generated values fell within the range of the physician’s final report in 85%-99% of cases, although the rate was lower for one measurement. Blinded reviewers rated image quality as excellent for 31% of non-AI images and 41% of images produced with the help of AI; most of the rest were rated as good. However, the study was limited by its small number of sonographers (four) and its short duration (38 working days).
The validation study of PanEcho showed a similar rate of accuracy, with 39 measurements highly likely to be reported accurately by the system. The AI model was validated against a later cohort from the same health system in which it was developed, as well as against public echocardiographic datasets from Stanford University. Validation showed that the results generalize to different patient populations, although PanEcho has not been tested in a trial.
The Japanese study was prospective and validated externally, but the AI model had a small training dataset, Ouyang pointed out. In contrast, the PanEcho study had a retrospective design, was validated mainly internally, and was based on a large training dataset.
Another difference is closed-source versus open-source software. The Japanese study used the closed-source Us2.ai software, which was provided free of charge for the trial. The developers of PanEcho plan to release their code openly to help others develop AI for echocardiography.
“I applaud the investigators for saying that they are going to release the code and the weights, which is important for open science,” Ouyang said.