EchoFocus

EchoFocus is an AI method for echocardiography that diagnoses disease and measures cardiac function from the videos comprising an echocardiogram study.
Its name refers to the fact that EchoFocus skips view classification, instead relying on attention mechanisms to determine which echo views to prioritize when making specific predictions.

Install

Project dependencies are declared in pyproject.toml and uv.lock; a requirements.txt is also provided for pip users.

Requirements: Python 3 and either uv or pip.

Quickstart (uv)

Using uv:

uv sync # sync dependencies
uv run echofocus.py --help

Quickstart (pip)

python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
python echofocus.py --help

Usage

Extract Video Embeddings

EchoFocus works on top of the video embeddings generated by PanEcho. To use EchoFocus, you must first generate video embeddings for your echo studies and store them in HDF5 files. See the embed directory for scripts and guidelines on how to accomplish this.
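
As a quick sanity check, you can list the contents of a generated embedding file with the standard HDF5 command-line tools. This is a minimal sketch: the file name study_001.h5 is a hypothetical placeholder, and the internal layout will depend on how you ran the embedding scripts.

# List all groups and datasets in an embedding file (requires hdf5-tools)
h5ls -r study_001.h5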

From there, you can use EchoFocus to train models, generate study embeddings, generate new predictions, and analyze video importance, as described in the sections below.

Load a Pre-Trained Model

Pre-trained models are made available via the Releases page. Once downloaded, they can be used through calls to echofocus.py as described below.
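
For example, a checkpoint could be fetched from the Releases page and then referenced by model name. This is a hedged sketch: the asset name EchoFocus_Measure.ckpt below is a hypothetical placeholder rather than a confirmed release artifact, so check the Releases page for the actual file names.

# Download a pre-trained model from the Releases page (asset name is hypothetical)
curl -L -o EchoFocus_Measure.ckpt \
    https://github.com/cavalab/echofocus/releases/latest/download/EchoFocus_Measure.ckpt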

Train Models

To train a model, you must first create a configuration file named config.json. An example file called config-example.json is included as a template for you to use; it defines the full set of configuration fields. A purely illustrative sketch is shown below.
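
Every key in this sketch is a hypothetical placeholder rather than the repository's actual schema; consult config-example.json for the real field names.

# Write a purely illustrative config.json (all keys are hypothetical placeholders)
cat > config.json <<'EOF'
{
    "data_dir": "/path/to/embeddings",
    "output_dir": "results/",
    "batch_size": 16,
    "learning_rate": 1e-4,
    "epochs": 50
}
EOF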

Once config.json is in place, you can train models by calling python echofocus.py train and specifying a model name, a dataset, and a task:

python echofocus.py train \
    --model_name [model_name] \
    --dataset [dataset] \
    --task [measure,chd,fyler]
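
For instance, using names that appear elsewhere in this README (the pairing of dataset and task here is illustrative, not a recommended configuration):

python echofocus.py train \
    --model_name EchoFocus_Measure \
    --dataset outside \
    --task measure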

Generate Study Embeddings

To generate study level embeddings, use python echofocus.py embed. For example:

python echofocus.py embed \
    --dataset outside \
    --model_name EchoFocus_Measure

This would generate study embeddings using the EchoFocus_Measure model on the “outside” dataset.

Explain Model Outputs

EchoFocus supports model explanations that attribute importance to individual videos in the echo study using integrated gradients. This is accomplished by calling python echofocus.py explain with the appropriate arguments.
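
The flags for explain are not documented here; the sketch below assumes it mirrors the embed interface, which is an assumption rather than documented behavior.

# Assumes explain accepts the same --dataset/--model_name flags as embed (unverified)
python echofocus.py explain \
    --dataset outside \
    --model_name EchoFocus_Measure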

Cite

Platon Lukyanenko, Sunil Ghelani, Yuting Yang, Bohan Jiang, Timothy Miller, David Harrild, Nao Sasaki, Francesca Sperotto, Danielle Sganga, John Triedman, Andrew J. Powell, Tal Geva, William G. La Cava, Joshua Mayourian (2026). Automated Echocardiographic Detection of Congenital Heart Disease Using Artificial Intelligence. Preprint: medrxiv.org

Contact

This work is a joint project of the Congenital Heart AI Lab (CHAI Lab) and the Cava Lab at Boston Children’s Hospital, affiliated with Harvard Medical School.

To get help with the repository, open an issue. Pull request contributions are very welcome.

Maintainers

Acknowledgments

The authors would like to acknowledge Boston Children’s Hospital’s High-Performance Computing Resources cluster Enkefalos 3 (E3), made available for conducting the research reported in this publication.

This work was supported in part by the Kostin Innovation Fund, a Thrasher Research Fund Early Career Award, NIH/NHLBI T32HL007572, and NIH/NHLBI 2U01HL098147-12.