# Deep Network IP Protection
Official PyTorch implementation of the papers:

- DeepIPR: Deep Neural Network Ownership Verification with Passports
- Rethinking Deep Neural Network Ownership Verification: Embedding Passports to Defeat Ambiguity Attacks

Released on September 16, 2019. Updated on September 25, 2022.
## Updates

- Fixed bugs.
- Added training and attack bash scripts (see `training.sh` and `attacking.sh`).
- Added a flipping attack (see `flip_attack.py`).
- Added an ImageNet experiment.
- Our framework on GAN IP protection was accepted at CVPR 2021, see here.
- Our framework on RNN IP protection was accepted at AACL-IJCNLP 2022, see here.
- Our framework on multi-modal IP protection was accepted in Pattern Recognition 2022, see here.
## Description

<p align="justify"> With the rapid development of deep neural networks (DNNs), there is an urgent need to protect trained DNN models from being illegally copied, redistributed, or abused without respect for the intellectual property of their legitimate owners. This work proposes novel passport-based DNN ownership verification schemes that are both robust to network modifications and resilient to ambiguity attacks. The gist of embedding digital passports is to design and train DNN models in such a way that the model's performance on its original task deteriorates significantly when forged passports are presented (see Figure 1). In other words, genuine passports are not only verified by looking for predefined signatures, but also reasserted by the unyielding performance of the DNN model. </p>

<p align="center"> <img src="Ex2.gif" width="25%"> <img src="Ex1.gif" width="25%"> </p>

<p align="center"> Figure 1: Example of ResNet performance on CIFAR10 under (left) a random attack and (right) an ambiguity attack </p>

## How to run
For compatibility, please run the code with Python 3.6 and PyTorch 1.2. Please also see `requirements.txt` or the `Dockerfile` to check out the environment.

If you wish to use a real image as the passport, please ensure that you have a pre-trained model before training the passport layer.

To see all available arguments, run the scripts with `--help`. The examples below run with the default arguments.
### To train a normal model (without a passport)

Skip `--train-passport`, as follows:

```bash
python train_v1.py
```
### To train a V1 model (scheme 1 passport)

Run with `--train-passport`, as follows:

```bash
python train_v1.py --train-passport --pretrained-path path/to/pretrained.pth
```
### To train a V2 model (scheme 2 passport)

Skip `--train-private`; it is true by default.

```bash
python train_v23.py --pretrained-path path/to/pretrained.pth
```
### To train a V3 model (scheme 3 passport)

Run with `--train-backdoor`, as follows:

```bash
python train_v23.py --train-backdoor --pretrained-path path/to/pretrained.pth
```
## Dataset

Most of the datasets are downloaded automatically, except for the trigger-set data. To download the default trigger-set data, kindly refer to https://github.com/adiyoss/WatermarkNN

Please also refer to `dataset.py` to understand how the data are loaded.
## Attack

`passport_attack_1.py`, `passport_attack_2.py`, and `passport_attack_3.py` are the scripts to run fake attacks 1, 2, and 3 respectively, as mentioned in our paper. Please refer to `--help` for how to set up the arguments.
## How to embed a passport & signature into a desired layer

All passport configs are stored in `passport_configs/`.

To set a passport layer for AlexNet or ResNet18, simply change `false` to `true` or to a string.

If a string is used as the signature, please make sure that its length fits within the number of channels of the specific layer: each ASCII character takes 8 bits (one bit per channel), so a layer with 256 channels allows at most 256 bits, i.e., 32 ASCII characters. If the signature is shorter than that, the remaining bits are set randomly.

The example below configures AlexNet with the last 3 layers as passport layers, i.e., we embed a random signature into the 4th and 5th layers and embed `this is my signature` into the last (6th) layer.
```json
{
  "0": false,
  "2": false,
  "4": true,
  "5": true,
  "6": "this is my signature"
}
```
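As a rough illustration of the arithmetic above, the sketch below encodes an ASCII signature as one sign bit per channel and pads the rest with random signs. The function name `signature_to_bits` and the +1/-1 encoding are our own illustrative choices, not the repository's actual API.

```python
import random

def signature_to_bits(signature: str, num_channels: int) -> list:
    """Encode an ASCII signature as +1/-1 sign bits, one bit per channel.

    Illustrative sketch only: the real repo stores its signatures differently.
    """
    assert len(signature) * 8 <= num_channels, "signature too long for this layer"
    bits = []
    for ch in signature:
        for i in range(7, -1, -1):  # 8 bits per ASCII character, MSB first
            bits.append(1 if (ord(ch) >> i) & 1 else -1)
    # Pad the remaining channels with random sign bits, as described above.
    while len(bits) < num_channels:
        bits.append(random.choice([1, -1]))
    return bits

bits = signature_to_bits("this is my signature", 256)
assert len(bits) == 256  # 20 characters -> 160 fixed bits + 96 random bits
```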
For the passports in our experiments, we randomly choose 20 images from the test data. Passports for an intermediate layer are the activation maps of these 20 images computed from the pretrained model.
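One way to obtain such activation-map passports is with a PyTorch forward hook, as in the minimal sketch below. The toy model, layer choice, and dictionary key are our own illustrative assumptions, not the repository's code.

```python
import torch
import torch.nn as nn

# Toy stand-in model; the repo uses AlexNet/ResNet18 instead.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, padding=1),
)

captured = {}

def hook(module, inputs, output):
    # Save the intermediate activation map to use as the passport.
    captured["passport"] = output.detach()

handle = model[0].register_forward_hook(hook)  # tap the first conv layer

images = torch.randn(20, 3, 32, 32)  # stand-in for 20 test images
with torch.no_grad():
    model(images)
handle.remove()

# captured["passport"] now holds a (20, 8, 32, 32) activation tensor.
```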
## Citation

If you find this work useful for your research, please cite:
```bibtex
@article{Deepipr,
  title={DeepIPR: Deep Neural Network Ownership Verification with Passports},
  author={Fan, Lixin and Ng, Kam Woh and Chan, Chee Seng and Yang, Qiang},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022},
  volume={44},
  number={10},
  pages={6122--6139},
  doi={10.1109/TPAMI.2021.3088846}
}
```
## Feedback

Suggestions and opinions on this work (both positive and negative) are greatly welcomed. Please contact the authors by email: lixinfan at webank.com, kamwoh at gmail.com, or cs.chan at um.edu.my.
## License and Copyright

This project is open source under the BSD-3 license (see the LICENSE file).

©2019 WeBank and Universiti Malaya.