<div align=center> <h1> GraphEcho: Graph-Driven Unsupervised Domain Adaptation for Echocardiogram Video Segmentation </h1> </div>

<div align=center>
<a href="https://arxiv.org/abs/2309.11145"> <img src="https://img.shields.io/badge/%F0%9F%93%96-ICCV_2023-8A2BE2.svg?style=flat-square"> </a>
<a href="https://xmengli.github.io/"> <img src="https://img.shields.io/badge/%F0%9F%9A%80-xmed_Lab-ed6c00.svg?style=flat-square"> </a>
<a href="https://github.com/XiaoweiXu/CardiacUDA-dataset"> <img src="https://img.shields.io/badge/%F0%9F%9A%80-XiaoweiXu's_Github-blue.svg?style=flat-square"> </a>
</div>

## :hammer: PostScript
:smile: This project is the PyTorch implementation of the [paper](https://arxiv.org/abs/2309.11145);
:laughing: Our experimental platform is configured with <u>one RTX 3090 (CUDA >= 11.0)</u>;
:blush: Currently, this code is available for the public datasets <u>CAMUS and EchoNet</u>;
:smiley: For the code related to the CardiacUDA dataset:

:eyes: The code is now available at: `..\datasets\cardiac_uda.py`

:heart_eyes: For access to the CardiacUDA dataset itself:

:eyes: Please follow this link to access our dataset: https://github.com/XiaoweiXu/CardiacUDA-dataset
## :computer: Installation
- You need to build the relevant environment first; please refer to: **requirements.yaml**

- Install the environment:

  ```
  conda env create -f requirements.yaml
  ```

- We recommend using Anaconda to establish an independent virtual environment, with Python >= 3.8.3;
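As a quick sanity check after installation, the minimal sketch below (not part of the repository) confirms that PyTorch can see the GPU:

```python
# Minimal environment check: verify the PyTorch build and CUDA visibility.
import torch

print(torch.__version__)                 # version installed via requirements.yaml
print(torch.cuda.is_available())         # should be True on an RTX 3090 with CUDA >= 11.0
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0)) # e.g. "NVIDIA GeForce RTX 3090"
```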
## :blue_book: Data Preparation
### 1. EchoNet & CAMUS
- This project provides a use case for the echocardiogram video segmentation task;

- The hyperparameter settings for the datasets can be found in train.py, where you can modify them;

- Because the composition of the datasets differs significantly between tasks, the data-loading code is not shared between them;
#### 1.1. Download the CAMUS dataset
:speech_balloon: For details on CAMUS, please refer to: https://www.creatis.insa-lyon.fr/Challenge/camus/index.html/.
- Download & unzip the dataset. The CAMUS dataset is organized into /testing & /training.
- The code for loading the CAMUS dataset is located at `..\datasets\camus.py`; modify the dataset path in `..\train_camus_echo.py`.

- New version: we have updated infos.npy in our newly released code.
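For orientation, here is a hedged sketch of reading one CAMUS recording directly with SimpleITK; the root path and the patient/file names are illustrative assumptions, and the actual loading logic lives in `..\datasets\camus.py`:

```python
# Hypothetical sketch: read one CAMUS half-sequence and its ground truth.
# CAMUS_ROOT and the file names below are assumptions; adapt to your layout.
import SimpleITK as sitk

CAMUS_ROOT = "/path/to/CAMUS/training"   # where you unzipped /training
patient = "patient0001"

# CAMUS ships MetaImage (.mhd/.raw) files readable with SimpleITK.
seq = sitk.GetArrayFromImage(
    sitk.ReadImage(f"{CAMUS_ROOT}/{patient}/{patient}_4CH_sequence.mhd"))
gt = sitk.GetArrayFromImage(
    sitk.ReadImage(f"{CAMUS_ROOT}/{patient}/{patient}_4CH_ED_gt.mhd"))

print(seq.shape, gt.shape)               # (frames, H, W) and (H, W)
```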
#### 1.2. Download the EchoNet dataset
:speech_balloon: For details on EchoNet, please refer to: https://echonet.github.io/dynamic/.
- Download & unzip the dataset. The EchoNet dataset consists of: /Videos, FileList.csv & VolumeTracings.csv.
- The code for loading the EchoNet dataset is located at `..\datasets\echo.py`; modify the dataset path in `..\train_camus_echo.py`.
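Similarly, a hedged sketch of inspecting EchoNet with pandas and OpenCV; the root path is an assumption, and the real loader is `..\datasets\echo.py`:

```python
# Hypothetical sketch: list the EchoNet videos and read one of them.
# ECHONET_ROOT is an assumption; point it at your unzipped dataset.
import cv2
import pandas as pd

ECHONET_ROOT = "/path/to/EchoNet-Dynamic"

file_list = pd.read_csv(f"{ECHONET_ROOT}/FileList.csv")
tracings = pd.read_csv(f"{ECHONET_ROOT}/VolumeTracings.csv")
print(len(file_list), "videos;", len(tracings), "tracing rows")

# Read the frames of the first listed video (depending on the release,
# the FileName column may or may not already include the .avi extension).
name = str(file_list.iloc[0]["FileName"])
if not name.endswith(".avi"):
    name += ".avi"
cap = cv2.VideoCapture(f"{ECHONET_ROOT}/Videos/{name}")
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(frame)
cap.release()
print(f"{name}: {len(frames)} frames")
```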
### 2. CardiacUDA
- Please access the dataset through: [XiaoweiXu's Github](https://github.com/XiaoweiXu/CardiacUDA-dataset)

- Follow the instructions there and download the dataset.

- After the download finishes, unzip the datasets.

- Modify your code in `..\datasets\cardiac_uda.py`, and set the infos and dataset paths in `..\train_cardiac_uda.py`. The layout of the infos dict should be:

  ```
  dict{
      center_name: {
          file: {
              views_images: {image_path},
              views_labels: {label_path},
          }
      }
  }
  ```
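To make the expected structure concrete, below is a hedged sketch that builds such an infos dict and saves it as infos.npy; the on-disk layout under CARDIAC_UDA_ROOT is an assumption, so adapt the traversal to how you stored the downloaded data:

```python
# Hypothetical sketch: build the nested infos dict described above.
# The directory layout (center/file/images|labels) is an assumption.
import os
import numpy as np

CARDIAC_UDA_ROOT = "/path/to/CardiacUDA"

infos = {}
for center_name in sorted(os.listdir(CARDIAC_UDA_ROOT)):      # e.g. one hospital/center
    center_dir = os.path.join(CARDIAC_UDA_ROOT, center_name)
    infos[center_name] = {}
    for file in sorted(os.listdir(center_dir)):               # one echo study
        file_dir = os.path.join(center_dir, file)
        infos[center_name][file] = {
            "views_images": {"image_path": os.path.join(file_dir, "images")},
            "views_labels": {"label_path": os.path.join(file_dir, "labels")},
        }

np.save("infos.npy", infos)   # read back later with np.load(..., allow_pickle=True)
```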
## :feet: Training
- In this framework, once the parameters are configured in train_cardiac_uda.py and train_camus_echo.py, you only need to run:

  ```
  python train_cardiac_uda.py
  ```

  or

  ```
  python train_camus_echo.py
  ```
- You can also start distributed training.

- Note: please set the number of GPUs you need, and their ids, in the parameter "enable_GPUs_id".
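For illustration only, this is how a GPU-id list such as "enable_GPUs_id" is commonly consumed in PyTorch; the actual wiring lives in the training scripts, so treat this as an assumption about the mechanism rather than the repository's code:

```python
# Illustrative sketch: place a model on the first listed GPU and, if more
# than one id is given, replicate it across them for data-parallel training.
import torch
import torch.nn as nn

enable_GPUs_id = [0, 1]                    # ids of the cards you want to use

model = nn.Conv2d(3, 8, kernel_size=3)     # stand-in for the real network
model = model.to(torch.device(f"cuda:{enable_GPUs_id[0]}"))
if len(enable_GPUs_id) > 1:
    model = nn.DataParallel(model, device_ids=enable_GPUs_id)
```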
## :rocket: Code Reference
- https://github.com/huawei-noah/Efficient-AI-Backbones/tree/master/vig_pytorch
- https://github.com/chengchunhsu/EveryPixelMatters