SAR-JEPA: A Joint-Embedding Predictive Architecture for SAR ATR

This repository contains the code and weights for the paper:

Predicting Gradient is Better: Exploring Self-Supervised Learning for SAR ATR with a Joint-Embedding Predictive Architecture

Baidu Netdisk: https://pan.baidu.com/s/14sRPSCygTKMelSy4ZkqRzw?pwd=jeq8 (extraction code: jeq8)

Datasets

| Dataset    | Size   | #Targets | #Scenes | Res. (m) | Band | Polarization | Description                                  |
|------------|--------|----------|---------|----------|------|--------------|----------------------------------------------|
| MSAR       | 28,499 | >4       | >6      | 1        | C    | Quad         | Ground and sea target detection dataset      |
| SAR-Ship   | 39,729 | >1       | >4      | 3~25     | C    | Quad         | Ship detection dataset in complex scenes     |
| SARSim     | 21,168 | 7        | 3       | 0.3      | X    | Single       | Vehicle simulation dataset                   |
| SAMPLE     | 5,380  | 10       | 1       | 0.3      | X    | Single       | Vehicle simulation and measured dataset      |
| MSTAR      | 5,216  | 10       | 1       | 0.3      | X    | Single       | Fine-grained vehicle classification dataset  |
| FUSAR-Ship | 9,830  | 10       | >5      | 1.1~1.7  | C    | Double       | Fine-grained ship classification dataset     |
| SAR-ACD    | 2,537  | 6        | 3       | 1        | C    | Single       | Fine-grained aircraft classification dataset |
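
If the unlabeled pre-training corpus combines several of these datasets, one simple way to feed them to main_pretrain.py is to concatenate per-dataset image folders. The sketch below is only an assumption about the data layout (the folder names, dummy class subfolders, dataset subset, and single-channel transform are all hypothetical), not the paper's exact pipeline:

    import torchvision.transforms as T
    from torch.utils.data import ConcatDataset
    from torchvision.datasets import ImageFolder

    # hypothetical layout: data/<dataset>/<class_or_dummy>/*.png
    transform = T.Compose([
        T.Grayscale(num_output_channels=1),          # SAR chips are single-channel
        T.RandomResizedCrop(224, scale=(0.2, 1.0)),
        T.ToTensor(),
    ])
    names = ["MSAR", "SAR-Ship", "SARSim", "SAMPLE"]  # example subset
    pretrain_set = ConcatDataset(
        [ImageFolder(f"data/{n}", transform=transform) for n in names]
    )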

Pre-training

Our code is based on LoMaR (which builds on MAE and MaskFeat), and its environment setup follows LoMaR. First, build the relative position encoding (rpe_ops) extension:

cd rpe_ops/
python setup.py install --user

For pre-training with the default settings, run:

CUDA_VISIBLE_DEVICES=0,1,2,3  python -m torch.distributed.launch --nproc_per_node=4 --master_port=25642  main_pretrain.py --data_path ${IMAGENET_DIR}

Our main changes are in model_lomar.py, where multi-scale SAR gradient features are constructed as the prediction targets:

        # Four GF extractors share the histogram bins and cell size but use
        # increasing kernel sizes (5/9/13/17) to capture multi-scale gradients.
        self.sarfeature1 = GF(nbins=self.nbins, pool=self.cell_sz, kensize=5,
                              img_size=self.img_size, patch_size=self.patch_size)
        self.sarfeature2 = GF(nbins=self.nbins, pool=self.cell_sz, kensize=9,
                              img_size=self.img_size, patch_size=self.patch_size)
        self.sarfeature3 = GF(nbins=self.nbins, pool=self.cell_sz, kensize=13,
                              img_size=self.img_size, patch_size=self.patch_size)
        self.sarfeature4 = GF(nbins=self.nbins, pool=self.cell_sz, kensize=17,
                              img_size=self.img_size, patch_size=self.patch_size)
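
As a rough illustration of the idea only, not the repository's implementation, a speckle-robust ratio-gradient histogram extractor could look like the following sketch (every internal detail below is an assumption):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GradientHistogramSketch(nn.Module):
        """Illustrative stand-in for GF (hypothetical, not the repo's code)."""

        def __init__(self, nbins=9, pool=4, kensize=5):
            super().__init__()
            self.nbins, self.pool = nbins, pool
            # local averaging kernel; a larger kensize suppresses more speckle
            k = torch.ones(1, 1, kensize, kensize) / (kensize ** 2)
            self.register_buffer("avg", k)
            self.pad = kensize // 2

        def forward(self, x):  # x: (B, 1, H, W) positive SAR amplitude
            m = F.conv2d(x, self.avg, padding=self.pad) + 1e-6
            # log-ratio of shifted local means: a gradient that is robust
            # to the multiplicative speckle noise in SAR imagery
            gx = torch.log(m.roll(1, dims=-1)) - torch.log(m)
            gy = torch.log(m.roll(1, dims=-2)) - torch.log(m)
            mag = torch.sqrt(gx ** 2 + gy ** 2)
            ori = torch.atan2(gy, gx)  # orientation in (-pi, pi]
            edges = torch.linspace(-torch.pi, torch.pi, self.nbins + 1,
                                   device=x.device)
            # magnitude-weighted orientation histogram, pooled per cell
            hist = [
                F.avg_pool2d(mag * ((ori >= edges[i]) & (ori < edges[i + 1])).float(),
                             self.pool)
                for i in range(self.nbins)
            ]
            return torch.cat(hist, dim=1)  # (B, nbins, H/pool, W/pool)

With four such extractors at increasing kensize, as in the snippet above, the per-patch outputs can be concatenated to form a multi-scale regression target for the masked regions.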

Fine-tuning with pre-trained checkpoints

Our few-shot learning is based on Dassl. You may need to install it and use our modified tools.py and transforms.py for SAR images. Then you can run MIM_finetune.sh and MIM_linear.sh.
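
For reference, loading pre-trained weights into a ViT backbone before fine-tuning usually follows the MAE recipe. The sketch below is an assumption (the timm model name, in_chans=1, the checkpoint filename, and the "model" key are all hypothetical; key names may need remapping for the released weights):

    import torch
    import timm

    # hypothetical backbone and checkpoint; adjust to the released weights
    model = timm.create_model("vit_base_patch16_224", num_classes=10, in_chans=1)
    ckpt = torch.load("sar_jepa_pretrain.pth", map_location="cpu")
    state = ckpt.get("model", ckpt)    # MAE-style checkpoints nest under "model"
    msg = model.load_state_dict(state, strict=False)
    print("missing:", msg.missing_keys)  # the new classifier head is expected here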

Contact us

If you have any questions, please contact us at lwj2150508321@sina.com.

If you find our work helpful, please consider citing:

@article{li2023predicting,
  title={Predicting Gradient is Better: Exploring Self-Supervised Learning for {SAR} {ATR} with a Joint-Embedding Predictive Architecture},
  author={Li, Weijie and Wei, Yang and Liu, Tianpeng and Hou, Yuenan and Liu, Yongxiang and Liu, Li},
  journal={arXiv preprint arXiv:2311.15153},
  url={https://arxiv.org/abs/2311.15153},
  year={2024}
}