This is the official PyTorch implementation of our paper: **Conditional Extreme Value Theory for Open Set Video Domain Adaptation**.

Paper link: https://dl.acm.org/doi/10.1145/3469877.3490600
## Requirements
- Python 3.6, PyTorch 0.4, CUDA 10.0+
- The full list of dependencies can be found in `requirements.txt`
## Datasets
Experiments are conducted on two datasets: UCF-HMDB and UCF-Olympic. The downloaded files need to be stored in `./dataset`.
Pre-extracted features and data lists can be downloaded below:
- Features
  - UCF: download
  - HMDB: download
  - Olympic: training | validation
- Data lists
  - UCF-Olympic
    - UCF: training list | validation list
    - Olympic: training list | validation list
  - UCF-HMDB
    - UCF: training list | validation list
    - HMDB: training list | validation list
## Datasets Split
For the open-set domain adaptation task, we keep only the source samples whose labels belong to the known classes 0, 1, ..., C and remove all source samples with labels C+1, C+2, .... We also relabel every target sample from an unknown class (C+1, C+2, ...) to the single unknown class C+1. To perform this split, follow the steps below, using Olympic → UCF as an example:
- Rename the data list files ("org" means "original"; these files back up the original lists):
  - `dataset/olympic/list_olympic_train_ucf_olympic-feature.txt` → `dataset/olympic/list_olympic_train_ucf_olympic-feature_org.txt`
  - `dataset/ucf101/list_ucf101_train_ucf_olympic-feature.txt` → `dataset/ucf101/list_ucf101_train_ucf_olympic-feature_org.txt`
  - `dataset/ucf101/list_ucf101_val_ucf_olympic-feature.txt` → `dataset/ucf101/list_ucf101_val_ucf_olympic-feature_org.txt`
- In the script `open_set_data.py`, follow the comments and set the variables as needed.
- Run `open_set_data.py`; it produces three processed list files.
- According to the number of known classes you choose, remove the lines for unknown classes from `data/classInd_ucf_olympic.txt`. Remember to keep a copy of the original file.
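The split performed by `open_set_data.py` can be sketched as below. This is an illustrative sketch only: the constant `C` and the assumed line format `<path> <num_frames> <label>` are hypothetical, not taken from the repository.

```python
# Minimal sketch of the open-set split; the actual logic lives in
# open_set_data.py. The line format "<path> <num_frames> <label>" is an
# assumption for illustration.
C = 5  # hypothetical: highest known class index, so known labels are 0..C

def split_source(lines):
    """Source domain: keep only samples whose label is a known class (0..C)."""
    return [line for line in lines if int(line.split()[-1]) <= C]

def relabel_target(lines):
    """Target domain: map every unknown label (> C) to the single class C+1."""
    out = []
    for line in lines:
        path, frames, label = line.split()
        if int(label) > C:
            label = str(C + 1)
        out.append(f"{path} {frames} {label}")
    return out
```

Note that the source list only shrinks (unknown samples are dropped), while the target list keeps all samples and merges its unknown classes into one label.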
## Dataset Selection and Hyper-parameter Selection
In the main script `main.py`, there are two blocks of code: BLOCK 1 and BLOCK 2. BLOCK 1 selects the dataset (set the names of the three processed list files and the modified class file), and BLOCK 2 sets the hyper-parameters (the best hyper-parameters for each task are given in the paper):
```python
########## BLOCK 1: Change Here for Different Datasets ##########
args.class_file = "data/classInd_ucf_olympic.txt"
args.train_source_list = "dataset/olympic/list_olympic_train_ucf_olympic-feature.txt"
args.train_target_list = "dataset/ucf101/list_ucf101_train_ucf_olympic-feature.txt"
args.val_list = "dataset/ucf101/list_ucf101_val_ucf_olympic-feature.txt"
########## END OF BLOCK 1 ##########

########## BLOCK 2: Change Here for Different Hyper-Parameters ##########
args.lambda_ = 0.214       # H->U 0.7  | U->O 0.19  | O->U 0.214
args.adv_param = 5         # H->U 0.10 | U->O 1.83  | O->U 5
args.EVT_threshold = 0.3   # H->U 0.45 | U->O 0.565 | O->U 0.3
########## END OF BLOCK 2 ##########
```
## Get Started
Once you have completed the steps above, run the code:

```shell
python main.py
```
After you complete an adaptation experiment (e.g., Olympic → UCF), remember to remove the three processed list files and restore the backups by dropping the `_org` suffix from their filenames. Then, for the next adaptation task (e.g., UCF → Olympic), repeat the Datasets Split steps for that task.
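The cleanup between experiments can be automated with a small helper like the one below. This helper is hypothetical (not part of the repository); it assumes the backups follow the `*_org.txt` naming convention described above.

```python
# Hypothetical helper that resets the dataset lists between experiments:
# it deletes each processed list and restores the corresponding "_org"
# backup found under the dataset root.
from pathlib import Path

def restore_backups(root="dataset"):
    for backup in Path(root).rglob("*_org.txt"):
        original = backup.with_name(backup.name.replace("_org.txt", ".txt"))
        if original.exists():
            original.unlink()      # remove the processed list file
        backup.rename(original)    # restore the original list
```

For example, `restore_backups("dataset")` would restore every backed-up list under `./dataset` before you re-run the split for the next task.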