iq-dota-use-case

This is a study of object detector performance degradation under different levels of compression applied to the input images. The core inference model code comes from DOTA_models; see the notes section below.
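
As an illustration of the kind of degradation studied here, the sketch below re-encodes an image at several JPEG quality levels with Pillow. It is a minimal sketch, not the experiment code itself; the use of JPEG, the quality values and the file names are assumptions.

# Minimal sketch (not the experiment code): re-encode an image at
# several JPEG quality levels to obtain increasingly compressed inputs.
# The input file name and quality values are illustrative assumptions.
from PIL import Image

image = Image.open("input.png").convert("RGB")
for quality in (90, 70, 50, 30, 10):  # higher value = less compression
    image.save(f"input_q{quality}.jpg", format="JPEG", quality=quality)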


To reproduce the experiments:

  1. git clone git@publicgitlab.satellogic.com:iqf/iq-dota-use-case
  2. cd iq-dota-use-case
  3. Build the docker image with make build.
  4. To execute the experiments:
    • make dockershell (*)
    • Run make download from outside the docker, or just ./download.sh from inside it. This will download the trained models and preprocess the data.
    • Inside the docker terminal execute /miniconda3/envs/iqfenv/bin/python ./iqf-usecase.py
  5. Start the MLflow server with make mlflow (*); a short sketch for querying it from Python is shown after this list.
  6. Notebook examples can be launched and executed with make notebookshell NB_PORT=[your_port] (**)
  7. To access the notebook from a browser on your local machine:
    • If the experiments are launched on a remote server, create a tunnel from your local machine: ssh -N -f -L localhost:[your_port]:localhost:[your_port] [remote_user]@[remote_ip]. Otherwise skip this step.
    • Then, in your browser, access: localhost:[your_port]/?token=IQF
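
Results of the runs can then be inspected through MLflow. As a minimal sketch, assuming the server started by make mlflow listens on http://localhost:5000 and a recent MLflow version is installed, the experiments can be listed from Python:

# Minimal sketch: query the tracking server started by "make mlflow".
# The URI (http://localhost:5000) is an assumption; use the port
# configured in the Makefile.
import mlflow
from mlflow.tracking import MlflowClient

mlflow.set_tracking_uri("http://localhost:5000")
client = MlflowClient()
for experiment in client.search_experiments():  # list_experiments() on older MLflow
    print(experiment.experiment_id, experiment.name)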

Notes

DOTA_models

We provide the config files, TFRecord files and label_map file used for training on DOTA with SSD and R-FCN; the trained models have been uploaded to Baidu Drive.
Note that our code is tested on the official TensorFlow models repository (commit fe2f8b01c6) with tf-nightly-gpu (1.5.0.dev20171102), CUDA 8.0 and cuDNN 6.0 on Ubuntu 16.04.1 LTS.

Installation

Preparing inputs

The TensorFlow Object Detection API reads data using the TFRecord file format. The raw DOTA data set is located here. To download, extract and convert it to TFRecords, run the command below:

# From tensorflow/models/object_detection/
python create_dota_tf_record.py \
    --data_dir=/your/path/to/dota/train \
    --indexfile=train.txt \
    --output_name=dota_train.record \
    --label_map_path=data/dota_label_map.pbtxt

The directory structure under "data_dir" is:

data_dir
    ├── images
    ├── labelTxt
    └── indexfile

Here the indexfile (e.g. train.txt or test.txt) contains the full paths of all images to convert. Its format is shown below, and a sketch for generating one follows the listing.

/your/path/to/dota/train/images/P2033__1__0___0.png
/your/path/to/dota/train/images/P2033__1__0___595.png
...
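
If you need to build such an indexfile yourself, a minimal sketch is shown below; the data_dir path, the glob pattern and the output name train.txt are assumptions.

# Minimal sketch: write an indexfile with the full path of every image
# under data_dir/images. The path and output name are assumptions.
import glob
import os

data_dir = "/your/path/to/dota/train"
image_paths = sorted(glob.glob(os.path.join(data_dir, "images", "*.png")))
with open(os.path.join(data_dir, "train.txt"), "w") as index_file:
    index_file.write("\n".join(image_paths) + "\n")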

The output TFRecord is also written under "data_dir"; you can find it in data_dir/tf_records/.
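
As a quick sanity check on the generated file, the records can be counted from Python; this is an illustrative sketch using the TensorFlow 1.x API mentioned above, and the record path is an assumption.

# Minimal sketch: count the examples in the generated TFRecord
# (TensorFlow 1.x API; the record path is an assumption).
import tensorflow as tf

record_path = "/your/path/to/dota/train/tf_records/dota_train.record"
num_examples = sum(1 for _ in tf.python_io.tf_record_iterator(record_path))
print("examples in record:", num_examples)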

Training

A local training job can be run with the following command:

# From tensorflow/models/object_detection/
python train.py \
    --logtostderr \
    --pipeline_config_path=${PATH_TO_YOUR_PIPELINE_CONFIG} \
    --train_dir=${PATH_TO_TRAIN_DIR}

The pipeline config file for the DOTA data set can be found at models/model/rfcn_resnet101_dota.config or models/model/ssd608_inception_v2_dota608.config. You need to replace some paths in it with your own (typically the fine_tune_checkpoint, input_path and label_map_path entries).

Here we train R-FCN with an image size of 1024×1024 and SSD with an image size of 608×608. Please refer to DOTA_devkit/ImgSplit.py to split the images and labels into patches; a conceptual tiling sketch is shown below. The trained models can be downloaded here.
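
The sketch below only illustrates the idea of tiling a large scene into fixed-size, overlapping patches; it does not reproduce ImgSplit.py, which also splits the label files, and the patch size, overlap and file names are assumptions.

# Conceptual sketch of tiling a large image into overlapping patches.
# It does NOT replace DOTA_devkit/ImgSplit.py (which also handles the
# labels); patch size, overlap and file names are illustrative assumptions.
from PIL import Image

subsize, gap = 1024, 200  # patch size and overlap (example values)
image = Image.open("P2033.png")
width, height = image.size
step = subsize - gap
for top in range(0, max(height - gap, 1), step):
    for left in range(0, max(width - gap, 1), step):
        box = (left, top, min(left + subsize, width), min(top + subsize, height))
        image.crop(box).save(f"P2033__1__{left}___{top}.png")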

Evaluation

You can use the pre-trained models to test images. Modify the paths in getresultfromtfrecord.py and then run it with the following command:

# From tensorflow/models/object_detection/
python getresultfromtfrecord.py

You will then obtain 15 files (one per DOTA object category) in the specified folder. For DOTA, you can submit your results to the Task2 - Horizontal Evaluation Server. Make sure your submission is in the correct format; an illustrative example is shown below.
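
As an illustration only (the official DOTA submission guidelines are authoritative), each of the 15 files is expected to contain one detection per line with the image id, the confidence score and the horizontal box coordinates, roughly as follows; the values are made up.

P0003 0.982 214.0 362.0 258.0 401.0
P0003 0.541 880.0 143.0 931.0 190.0
...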