
An oriented object detection framework based on TensorRT

<details open> <summary> <font color="green">Supported network:</font> </summary> </details> <details open> <summary> <font color="orange">TensorRT engine file:</font> </summary>

| Model | Backbone | Param | Size | Input | GFLOPs | FPS | mAP | TensorRT |
| ----- | -------- | ----- | ---- | ----- | ------ | --- | --- | -------- |
| FCOSR-lite | Mobilenet v2 | 6.9M | 51.63MB | 1024×1024 | 101.25 | 7.64 (NX) | 74.30 | code: ABCD |
| FCOSR-tiny | Mobilenet v2 | 3.52M | 23.2MB | 1024×1024 | 35.89 | 10.68 (NX) | 73.93 | code: ABCD |

</details> <br>

This implementation is modified from TensorRT/efficientdet. <br> Supported running modes: fix, whole, server (based on ZeroMQ, with wrappers)

The inference framework is shown below.

*(figure: inference framework)*

Detection result:

*(figure: detection result)*

Log:

*(figure: log output)*

Recommended system environment:

Install

```shell
pip install Cython
pip install -r requirements.txt
```

Note: for the DOTA_devkit installation, see INSTALL.md.

Test result on Jetson AGX Xavier

DOTA 1.0 test set

| name | size | patch size | gap | patches | det objects | det time (s) |
| ---- | ---- | ---------- | --- | ------- | ----------- | ------------ |
| P0031.png | 5343×3795 | 1024 | 200 | 35 | 1197 | 2.75 |
| P0051.png | 4672×5430 | 1024 | 200 | 42 | 309 | 2.38 |
| P0112.png | 6989×4516 | 1024 | 200 | 54 | 184 | 3.02 |
| P0137.png | 5276×4308 | 1024 | 200 | 35 | 66 | 1.95 |
| P1004.png | 7001×3907 | 1024 | 200 | 45 | 183 | 2.52 |
| P1125.png | 7582×4333 | 1024 | 200 | 54 | 28 | 2.95 |
| P1129.png | 4093×6529 | 1024 | 200 | 40 | 70 | 2.23 |
| P1146.png | 5231×4616 | 1024 | 200 | 42 | 64 | 2.29 |
| P1157.png | 7278×5286 | 1024 | 200 | 63 | 184 | 3.47 |
| P1378.png | 5445×4561 | 1024 | 200 | 42 | 83 | 2.32 |
| P1379.png | 4426×4182 | 1024 | 200 | 30 | 686 | 1.78 |
| P1393.png | 6072×6540 | 1024 | 200 | 64 | 893 | 3.63 |
| P1400.png | 6471×4479 | 1024 | 200 | 48 | 348 | 2.63 |
| P1402.png | 4112×4793 | 1024 | 200 | 30 | 293 | 1.68 |
| P1406.png | 6531×4182 | 1024 | 200 | 40 | 19 | 2.19 |
| P1415.png | 4894×4898 | 1024 | 200 | 36 | 190 | 1.99 |
| P1436.png | 5136×5156 | 1024 | 200 | 42 | 39 | 2.31 |
| P1448.png | 7242×5678 | 1024 | 200 | 63 | 51 | 3.41 |
| P1457.png | 5193×4658 | 1024 | 200 | 42 | 382 | 2.33 |
| P1461.png | 6661×6308 | 1024 | 200 | 64 | 27 | 3.45 |
| P1494.png | 4782×6677 | 1024 | 200 | 48 | 70 | 2.61 |
| P1500.png | 4769×4386 | 1024 | 200 | 36 | 92 | 1.96 |
| P1772.png | 5963×5553 | 1024 | 200 | 49 | 28 | 2.70 |
| P1774.png | 5352×4281 | 1024 | 200 | 35 | 291 | 1.95 |
| P1796.png | 5870×5822 | 1024 | 200 | 49 | 308 | 2.74 |
| P1870.png | 5942×6059 | 1024 | 200 | 56 | 135 | 3.04 |
| P2043.png | 4165×3438 | 1024 | 200 | 20 | 1479 | 1.49 |
| P2329.png | 7950×4334 | 1024 | 200 | 60 | 83 | 3.26 |
| P2641.png | 7574×5625 | 1024 | 200 | 63 | 269 | 3.41 |
| P2642.png | 7039×5551 | 1024 | 200 | 63 | 451 | 3.50 |
| P2643.png | 7568×5619 | 1024 | 200 | 63 | 249 | 3.40 |
| P2645.png | 4605×3442 | 1024 | 200 | 24 | 357 | 1.42 |
| P2762.png | 8074×4359 | 1024 | 200 | 60 | 127 | 3.23 |
| P2795.png | 4495×3981 | 1024 | 200 | 30 | 65 | 1.64 |
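The patch counts above follow from the split settings shown later in the configuration (subsize 1024, gap 200). As a sketch, assuming the common DOTA-style sliding-window scheme with stride = subsize − gap and the last window clamped to the image border:

```python
import math

def num_patches(width, height, subsize=1024, gap=200):
    """Estimate how many sliding-window patches a split produces.

    Assumes windows of `subsize` pixels, consecutive windows
    overlapping by `gap` pixels (stride = subsize - gap), with the
    last window clamped to the image border. This mirrors the usual
    DOTA split scheme; the framework's exact splitter may differ.
    """
    stride = subsize - gap
    cols = math.ceil(max(width - subsize, 0) / stride) + 1
    rows = math.ceil(max(height - subsize, 0) / stride) + 1
    return cols * rows

# Example: P0031.png is 5343x3795 and yields 35 patches in the table.
```

The formula reproduces the patch counts in the table for every listed image.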

How to use

We use a YAML configuration file in place of numerous command-line arguments.

```shell
# Small pictures inference mode (fix)
python infer_multi.py config/fix_mode/fcosr_tiny_nx.yaml
# Big whole picture inference mode
python infer_whole.py config/whole_mode/fcosr_tiny_agx_whole.yaml
```

Server mode (ZeroMQ wrapper):

```shell
# server
python infer_zmq_server.py config/zmq_server/fcosr_tiny_zmq_server.yaml
# client
python infer_zmq_client.py
```

A client running demo:

*(video: client demo)*

An example configuration file:

```yaml
mode: 'whole'  # supported modes: fix, whole, server
port: 10000 # only used in server mode
model:
  engine_file: '/home/nvidia/Desktop/FCOSR/model/epoch_36_16_lite_nx.trt' # TensorRT engine file path
  labels: 'labels.txt' # class names file
io: # only used in whole/fix mode
  input_dir: '/home/nvidia/DOTA_TEST/images/' # image folder path
  output_dir: 'result' # output folder
preprocess: # preprocessing configuration
  num_process: 8 # number of worker processes
  queue_length: 40
  normalization: # normalization parameters
    enable: 1 # switch
    mean:
      - 123.675
      - 116.28
      - 103.53
    std:
      - 58.395
      - 57.12
      - 57.375
  split:    # split configuration, only used in whole/server mode
    subsize: 1024
    gap: 200
postprocess: # postprocessing configuration
  num_process: 6 # number of worker processes
  queue_length: 40
  nms_threshold: 0.1 # poly NMS threshold
  score_threshold: 0.1 # poly NMS score threshold
  max_det_num: 2000
  draw_image: # visualization configuration, only used in whole/fix mode
    enable: 0 # switch
    num: 20 # int or 'all'
```
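The `normalization` block configures a per-channel `(pixel - mean) / std` transform; the listed values are the common ImageNet statistics. A minimal sketch of that step (pure Python; the framework's actual preprocessing pipeline is an assumption beyond what the config shows):

```python
# Per-channel statistics copied from the config above (RGB order assumed).
MEAN = (123.675, 116.28, 103.53)
STD = (58.395, 57.12, 57.375)

def normalize_pixel(rgb, mean=MEAN, std=STD):
    """Apply (x - mean) / std per channel, as enabled by
    `normalization.enable: 1` in the config."""
    return tuple((c - m) / s for c, m, s in zip(rgb, mean, std))

# A pixel equal to the mean maps to zero in every channel.
```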

Run Mode

| Mode | Specified attributes | Ignored attributes | Description |
| ---- | -------------------- | ------------------ | ----------- |
| fix | io, draw_image | split, port | process fixed-size images |
| whole | io, draw_image, split | port | process a big whole image |
| server | port, split | io, draw_image | receive images over ZeroMQ, then process them like whole mode |
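The mode table can be expressed as a small validation helper. This is a sketch only: the attribute names mirror the table (in the YAML, `split` and `draw_image` actually live under `preprocess`/`postprocess`), and how the real framework treats ignored attributes is an assumption.

```python
# Which config attributes each run mode reads vs. ignores,
# mirroring the Run Mode table above (names are illustrative).
MODE_ATTRS = {
    "fix":    {"uses": {"io", "draw_image"},          "ignores": {"split", "port"}},
    "whole":  {"uses": {"io", "draw_image", "split"}, "ignores": {"port"}},
    "server": {"uses": {"port", "split"},             "ignores": {"io", "draw_image"}},
}

def ignored_attrs(mode, config_keys):
    """Return the config keys that the given mode would ignore."""
    if mode not in MODE_ATTRS:
        raise ValueError(f"unsupported mode: {mode!r}")
    return sorted(set(config_keys) & MODE_ATTRS[mode]["ignores"])
```

For example, a server-mode config that also sets `io` and `draw_image` would have those two attributes ignored.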