<div align="center"> <h1> GraphEcho: Graph-Driven Unsupervised Domain Adaptation for Echocardiogram Video Segmentation </h1> </div> <div align="center"> <a href="https://arxiv.org/abs/2309.11145"> <img src="https://img.shields.io/badge/%F0%9F%93%96-ICCV_2023-8A2BE2.svg?style=flat-square"> </a> <a href="https://xmengli.github.io/"> <img src="https://img.shields.io/badge/%F0%9F%9A%80-xmed_Lab-ed6c00.svg?style=flat-square"> </a> <a href="https://github.com/XiaoweiXu/CardiacUDA-dataset"> <img src="https://img.shields.io/badge/%F0%9F%9A%80-XiaoweiXu's_Github-blue.svg?style=flat-square"> </a> </div>

:hammer: PostScript

  :smile: This project is the PyTorch implementation of [our paper](https://arxiv.org/abs/2309.11145);

  :laughing: Our experimental platform is configured with <u>one RTX 3090 (CUDA >= 11.0)</u>;

  :blush: Currently, this code is available for the public datasets <u>CAMUS and EchoNet</u>;

  :smiley: For the code related to the dataset CardiacUDA:

      :eyes: The code is now available at:       ..\datasets\cardiac_uda.py

  :heart_eyes: For access to the dataset CardiacUDA:

      :eyes: Please follow the link to access our dataset: https://github.com/XiaoweiXu/CardiacUDA-dataset

:computer: Installation

  1. You need to build the relevant environment first; please refer to: requirements.yaml

  2. Install the environment:

    conda env create -f requirements.yaml
    

:blue_book: Data Preparation

1. EchoNet & CAMUS

2. CardiacUDA

  1. Please access the dataset through: [XiaoweiXu's Github](https://github.com/XiaoweiXu/CardiacUDA-dataset)
  2. Follow the instructions there and download the dataset.
  3. After the download finishes, unzip the datasets.
  4. Modify your code in both:
    ..\datasets\cardiac_uda.py
    and update the infos dict and the dataset path (a minimal sketch follows this list) in
    ..\train_cardiac_uda.py
    # The layer of the infos dict should be :
    # dict{
    #     center_name: {
    #                  file: {
    #                        views_images: {image_path},
    #                        views_labels: {label_path},}}}
    

:feet: Training

  1. In this framework, once the parameters are configured in the files train_cardiac_uda.py and train_camus_echo.py, you only need to use the command:

    python train_cardiac_uda.py
    

    And

    python train_camus_echo.py
    
  2. You are also able to start distributed training.

    • Note: Please set the number of GPUs you need and their ids in the parameter "enable_GPUs_id"; a hedged sketch of this setting is shown below.
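
    A minimal sketch of the idea (assuming "enable_GPUs_id" is a list of CUDA device indices consumed by the training script; the variable handling below is illustrative, not the repository's actual interface):

      import torch
      import torch.nn as nn

      # Hypothetical: the GPU ids to train on, mirroring "enable_GPUs_id".
      enable_GPUs_id = [0, 1]

      device = torch.device(f'cuda:{enable_GPUs_id[0]}' if torch.cuda.is_available() else 'cpu')
      model = nn.Conv2d(3, 8, kernel_size=3)  # stand-in for the segmentation network

      if torch.cuda.is_available() and len(enable_GPUs_id) > 1:
          # Replicate the model across the selected GPUs for data-parallel training.
          model = nn.DataParallel(model.to(device), device_ids=enable_GPUs_id)
      else:
          model = model.to(device)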

:rocket: Code Reference
:rocket: Updates: Ver 1.0 (PyTorch)
:rocket: Project created by Jiewen Yang: jyangcu@connect.ust.hk