# (ECCV 2024) Open-Vocabulary Camouflaged Object Segmentation
<p align="center">
    <a href='https://arxiv.org/abs/2311.11241'>
        <img src='https://img.shields.io/badge/Paper-PDF-red?style=flat&logo=arXiv&logoColor=red' alt='arXiv PDF'>
    </a>
    <img src="https://img.shields.io/github/last-commit/lartpang/OVCamo">
    <img src="https://img.shields.io/github/release-date/lartpang/OVCamo">
    <br/>
    <img src='https://github.com/lartpang/OVCamo/assets/26847524/d2c474f2-4bde-455c-af71-e0761e57a574' alt='logo'>
</p>

```bibtex
@inproceedings{OVCOS_ECCV2024,
  title={Open-Vocabulary Camouflaged Object Segmentation},
  author={Pang, Youwei and Zhao, Xiaoqi and Zuo, Jiaming and Zhang, Lihe and Lu, Huchuan},
  booktitle={ECCV},
  year={2024},
}
```
> [!NOTE]
> Details of the proposed OVCamo dataset can be found in the document for our dataset.
## Prepare Dataset
- Prepare the training and testing splits: see the document for our dataset for details.
- Set the training and testing splits in the yaml file `env/splitted_ovcamo.yaml` (two hedged sketches for working with this file follow the list):
  - `OVCamo_TR_IMAGE_DIR`: image directory of the training set.
  - `OVCamo_TR_MASK_DIR`: mask directory of the training set.
  - `OVCamo_TR_DEPTH_DIR`: depth map directory of the training set. The depth maps of the training set, which were generated by us, can be downloaded from <https://github.com/lartpang/OVCamo/releases/download/dataset-v1.0/depth-train-ovcoser.zip>.
  - `OVCamo_TE_IMAGE_DIR`: image directory of the testing set.
  - `OVCamo_TE_MASK_DIR`: mask directory of the testing set.
  - `OVCamo_CLASS_JSON_PATH`: path of the json file `class_info.json` storing class information of the proposed OVCamo.
  - `OVCamo_SAMPLE_JSON_PATH`: path of the json file `sample_info.json` storing sample information of the proposed OVCamo.
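The training depth maps ship as a release archive. As a convenience, a minimal sketch using only the Python standard library to download and unpack it might look like the following; the destination directory `data/ovcamo` is an assumption, so point `OVCamo_TR_DEPTH_DIR` at wherever you actually extract it.

```python
# Minimal sketch (not part of the repository): download and unpack the
# training depth maps released with OVCamo. The destination directory
# "data/ovcamo" is an assumption; set OVCamo_TR_DEPTH_DIR to the result.
import urllib.request
import zipfile
from pathlib import Path

URL = "https://github.com/lartpang/OVCamo/releases/download/dataset-v1.0/depth-train-ovcoser.zip"
dest = Path("data/ovcamo")
dest.mkdir(parents=True, exist_ok=True)

archive = dest / "depth-train-ovcoser.zip"
urllib.request.urlretrieve(URL, str(archive))  # download the release asset
with zipfile.ZipFile(archive) as zf:
    zf.extractall(dest)  # unzip next to the other dataset folders
print(f"Extracted depth maps under {dest}")
```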
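Before launching training, it can help to verify that every documented key is actually set. Here is a minimal sketch, assuming `pyyaml` is installed and that `env/splitted_ovcamo.yaml` is a flat key-to-path mapping (its exact layout may differ):

```python
# Minimal sketch (assumes pyyaml: pip install pyyaml). Checks that
# env/splitted_ovcamo.yaml defines every documented key and that each
# configured path exists on disk. The flat key->path layout is an assumption.
from pathlib import Path

import yaml

REQUIRED_KEYS = (
    "OVCamo_TR_IMAGE_DIR",
    "OVCamo_TR_MASK_DIR",
    "OVCamo_TR_DEPTH_DIR",
    "OVCamo_TE_IMAGE_DIR",
    "OVCamo_TE_MASK_DIR",
    "OVCamo_CLASS_JSON_PATH",
    "OVCamo_SAMPLE_JSON_PATH",
)

with open("env/splitted_ovcamo.yaml", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

for key in REQUIRED_KEYS:
    if key not in cfg:
        raise SystemExit(f"{key} is missing from env/splitted_ovcamo.yaml")
    if not Path(cfg[key]).exists():
        print(f"warning: {key} points to a non-existent path: {cfg[key]}")
print("All dataset keys are present.")
```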
## Training/Inference
- Install dependencies: `pip install -r requirements.txt`.
  - The versions of `torch` and `torchvision` are listed in the comments of `requirements.txt`.
- Run the script to:
  - train the model: `python .\main.py --config .\configs\ovcoser.py --model-name OVCoser`;
  - run inference with the model: `python .\main.py --config .\configs\ovcoser.py --model-name OVCoser --evaluate --load-from <path of the local .pth file>`.
## Evaluate the Pretrained Model
- Download the pretrained model.
- Run the script: `python .\main.py --config .\configs\ovcoser.py --model-name OVCoser --evaluate --load-from model.pth`.
## Evaluate Our Results

- Download our results and unzip them into `<path>/ovcoser-ovcamo-te`.
- Run the script: `python .\evaluate.py --pre <path>/ovcoser-ovcamo-te` (an illustrative sketch of this kind of evaluation follows below).
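`evaluate.py` is the official scorer and should be used for reported numbers. Purely for intuition, a hedged sketch that compares prediction maps against ground-truth masks with mean absolute error (one common segmentation metric, not necessarily the paper's full metric suite) could look like this; the directory paths and the assumption that predictions and masks share file names are mine, not the repository's:

```python
# Illustrative sketch only, not the official evaluate.py. Computes the
# mean absolute error (MAE) between predicted maps and ground-truth masks,
# assuming predictions and masks share file names. Requires numpy and Pillow.
from pathlib import Path

import numpy as np
from PIL import Image

pred_dir = Path("ovcoser-ovcamo-te")      # unzipped results (assumed name)
mask_dir = Path("data/ovcamo/test/mask")  # OVCamo_TE_MASK_DIR (assumed path)

maes = []
for pred_path in sorted(pred_dir.glob("*.png")):
    mask_path = mask_dir / pred_path.name
    pred = np.asarray(Image.open(pred_path).convert("L"), dtype=np.float64) / 255
    mask = np.asarray(Image.open(mask_path).convert("L"), dtype=np.float64) / 255
    if pred.shape != mask.shape:  # resize the prediction to the mask's size
        pred_img = Image.open(pred_path).convert("L").resize(mask.shape[::-1])
        pred = np.asarray(pred_img, dtype=np.float64) / 255
    maes.append(np.abs(pred - mask).mean())

print(f"MAE over {len(maes)} images: {np.mean(maes):.4f}")
```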
## LICENSE
- Code: MIT LICENSE
- Dataset: <p xmlns:cc="http://creativecommons.org/ns#" xmlns:dct="http://purl.org/dc/terms/"><a property="dct:title" rel="cc:attributionURL" href="https://github.com/lartpang/OVCamo">OVCamo</a> by <span property="cc:attributionName">Youwei Pang, Xiaoqi Zhao, Jiaming Zuo, Lihe Zhang, Huchuan Lu</span> is licensed under <a href="https://creativecommons.org/licenses/by-nc-sa/4.0/?ref=chooser-v1" target="_blank" rel="license noopener noreferrer" style="display:inline-block;">Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International<img style="height:22px!important;margin-left:3px;vertical-align:text-bottom;" src="https://mirrors.creativecommons.org/presskit/icons/cc.svg?ref=chooser-v1" alt=""><img style="height:22px!important;margin-left:3px;vertical-align:text-bottom;" src="https://mirrors.creativecommons.org/presskit/icons/by.svg?ref=chooser-v1" alt=""><img style="height:22px!important;margin-left:3px;vertical-align:text-bottom;" src="https://mirrors.creativecommons.org/presskit/icons/nc.svg?ref=chooser-v1" alt=""><img style="height:22px!important;margin-left:3px;vertical-align:text-bottom;" src="https://mirrors.creativecommons.org/presskit/icons/sa.svg?ref=chooser-v1" alt=""></a></p>