Deep Learning Based Automatic Modulation Recognition: Models, Datasets, and Challenges
Source code for the paper "Deep Learning Based Automatic Modulation Recognition: Models, Datasets, and Challenges", which is published in Digital Signal Processing.
Representative and up-to-date models in the AMR field are implemented on four different datasets (RML2016.10a, RML2016.10b, RML2018.01a, HisarMod2019.1), providing a unified reference for interested researchers.
The article is available here: Deep Learning Based Automatic Modulation Recognition: Models, Datasets, and Challenges
If you have any questions, please contact us by e-mail: zhangxx8023@gmail.com
Abstract
Automatic modulation recognition (AMR) detects the modulation scheme of received signals for further signal processing without needing prior information, and provides an essential function when such information is missing. Recent breakthroughs in deep learning (DL) have laid the foundation for developing high-performance DL-AMR approaches for communications systems. Compared with traditional modulation detection methods, DL-AMR approaches have achieved promising performance, including high recognition accuracy and low false alarms, due to the strong feature extraction and classification abilities of deep neural networks. Despite this promising potential, DL-AMR approaches also raise concerns about complexity and explainability, which affect their practical deployment in wireless communications systems. This paper presents a review of current DL-AMR research, with a focus on appropriate DL models and benchmark datasets. We further provide comprehensive experiments comparing state-of-the-art models for single-input-single-output (SISO) systems from both accuracy and complexity perspectives, and propose to apply DL-AMR in the new multiple-input-multiple-output (MIMO) scenario with precoding. Finally, existing challenges and possible future research directions are discussed.
Content
Experimental comparison for SISO system
Accuracy
Fig. 1 Recognition accuracy comparison of the state-of-the-art models on (a) RML2016.10a, (b) RML2016.10b, (c) RML2018.01a, and (d) HisarMod2019.1.
Parameter Comparison
Table 1 Model size and complexity comparison on the four datasets (A: RML2016.10a, B: RML2016.10b, C: RML2018.01a, D: HisarMod2019.1).
Confusion matrix
Fig. 2 Confusion matrices. A, B, and C represent the confusion matrices obtained on RML2016.10a, RML2016.10b, and RML2018.01a, respectively. The indices 1-14 denote CNN1, CNN2, MCNET, IC-AMCNET, ResNet, DenseNet, GRU, LSTM, DAE, MCLDNN, CLDNN, CLDNN2, CGDNet, and PET-CGDNN.
Dataset
Table 2 Main open AMR datasets for SISO systems.
| Dataset | Link | Notes |
| --- | --- | --- |
| RML2016.10a, RML2016.10b, RML2018.01a | RML | If the RML2018.01a dataset is too large, you can use SubsampleRML2018.py to subsample it and obtain a smaller subset for experimentation (a dataset-loading sketch is shown below the table). |
| HisarMod2019.1 | HisarMod | In our experiments, the dataset was converted from .CSV files to a .MAT file, which can be found in Link. |
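For reference, a minimal loading sketch for the RML2016.10a pickle file is shown below. The file name, the 80/20 split, and the label encoding are placeholders rather than the exact training protocol used in the paper; the key layout ((modulation, SNR) tuples mapping to arrays of shape (N, 2, 128)) follows the public RML2016.10a release.

```python
import pickle
import numpy as np

# Placeholder path to your local copy of the RML2016.10a pickle file.
DATA_PATH = "RML2016.10a_dict.pkl"

with open(DATA_PATH, "rb") as f:
    # Keys are (modulation, SNR) tuples; values are arrays of shape (N, 2, 128).
    data = pickle.load(f, encoding="latin1")

mods = sorted({mod for mod, snr in data.keys()})
X, y, snrs = [], [], []
for (mod, snr), samples in data.items():
    X.append(samples)
    y.extend([mods.index(mod)] * len(samples))
    snrs.extend([snr] * len(samples))
X = np.vstack(X)          # (num_samples, 2, 128) I/Q frames
y = np.array(y)
snrs = np.array(snrs)

# Simple random train/test split (the 80/20 ratio is an assumption).
idx = np.random.permutation(len(X))
split = int(0.8 * len(X))
X_train, X_test = X[idx[:split]], X[idx[split:]]
y_train, y_test = y[idx[:split]], y[idx[split:]]
```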
Related Papers
Environment
These models are implemented in Keras; the environment configuration is listed below (a minimal sanity-check model follows the list):
- Python 3.6.10
- TensorFlow-gpu 1.14.0
- Keras-gpu 2.2.4
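As a quick check that this environment is set up correctly, a minimal Keras model for (2, 128) I/Q frames can be built as follows. This is an illustrative sketch, not one of the benchmarked architectures; the layer sizes are placeholders, and labels are assumed to be one-hot encoded (e.g. via keras.utils.to_categorical).

```python
# Minimal sanity-check model for (2, 128) I/Q frames (illustrative only).
from keras.models import Model
from keras.layers import Input, Reshape, Conv2D, Flatten, Dense, Dropout

num_classes = 11  # 11 modulation classes in RML2016.10a

inp = Input(shape=(2, 128))
x = Reshape((2, 128, 1))(inp)                          # add a channel axis for Conv2D
x = Conv2D(64, (1, 8), padding="same", activation="relu")(x)
x = Conv2D(32, (2, 8), padding="valid", activation="relu")(x)
x = Flatten()(x)
x = Dense(128, activation="relu")(x)
x = Dropout(0.5)(x)
out = Dense(num_classes, activation="softmax")(x)

model = Model(inputs=inp, outputs=out)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```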
Remarks
You will need to download the appropriate dataset and change the file path in the code to point to it. There is no guarantee that the code will run successfully under other environment configurations, and there may be performance differences due to different hardware conditions.
About DAE: in the author's open-source code, the decoder uses a TimeDistributed layer, whereas our initial implementation unfolded the data and used a fully connected layer to reconstruct the input; this difference is noted here (see the source code for DAE). We have since updated the DAE source code and the experimental results on our website to use the TimeDistributed layer as the decoder.
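To make the difference concrete, the two decoder variants can be sketched in Keras as below. The input shape and layer sizes are placeholders, not the exact DAE configuration used in the paper.

```python
from keras.models import Model
from keras.layers import Input, LSTM, TimeDistributed, Dense, Flatten, Reshape

inp = Input(shape=(128, 2))                  # time-major I/Q frame (placeholder shape)
enc = LSTM(32, return_sequences=True)(inp)   # encoder output: (timesteps, features)

# Variant 1: TimeDistributed decoder (as in the original author's code) --
# the same Dense layer reconstructs each time step independently.
dec_td = TimeDistributed(Dense(2))(enc)

# Variant 2: flattened decoder (our initial implementation) --
# unfold the sequence and reconstruct the whole input with one Dense layer.
dec_fc = Flatten()(enc)
dec_fc = Dense(128 * 2)(dec_fc)
dec_fc = Reshape((128, 2))(dec_fc)

autoencoder_td = Model(inp, dec_td)
autoencoder_fc = Model(inp, dec_fc)
autoencoder_td.compile(optimizer="adam", loss="mse")
autoencoder_fc.compile(optimizer="adam", loss="mse")
```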
Acknowledgement
Our code is partly based on the work of leena201818. Thanks to leena201818 and wzjialang for their great work!
Citation
Please cite the referenced literature if it is helpful to your work. If our work is helpful to your research, please cite:
@article{ZHANG2022103650,
  title   = {Deep Learning Based Automatic Modulation Recognition: Models, Datasets, and Challenges},
  author  = {Fuxin Zhang and Chunbo Luo and Jialang Xu and Yang Luo and FuChun Zheng},
  journal = {Digital Signal Processing},
  year    = {2022},
  doi     = {10.1016/j.dsp.2022.103650}
}