TS-SRGAN
Super-Resolution of Sentinel-2 Images at 10m Resolution without Supervised Images

Sentinel-2 satellites provide free optical remote sensing images with a spatial resolution of up to 10m, but this level of spatial detail is insufficient for many applications, so it is worth improving the spatial resolution of Sentinel-2 images through super-resolution (SR). Currently, the most effective SR models are based on deep learning, especially the generative adversarial network (GAN). GAN-based models must be trained on LR-HR image pairs, which are mainly obtained in two ways. One is to generate LR images by BiCubic downscaling of HR images, but a domain gap exists between the generated LR images and real images; the other is to use the same or different satellites to capture the LR and HR images separately, but such LR-HR pairs may be misaligned. In this paper, a two-step super-resolution generative adversarial network (TS-SRGAN) model is proposed. The first step uses a GAN to train the degradation model: without supervised HR images, only the 10m resolution images provided by the Sentinel-2 satellites are used to generate degraded images that lie in the same domain as real LR images, from which near-natural LR-HR image pairs are constructed. The second step designs a super-resolution generative adversarial network with strengthened perceptual features to enhance the perceptual quality of the generated images. The results outperform state-of-the-art models on no-reference image quality assessment (NR-IQA) metrics such as NIQE, BRISQUE and PIQE, and a visual comparison of the generated images further confirms the effectiveness of TS-SRGAN.

Keywords: super-resolution; generative adversarial network; Sentinel-2
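The abstract's first pairing strategy, BiCubic downscaling of an HR image to synthesize its LR counterpart, can be sketched as follows. This is an illustrative example, not the paper's code; the function name `make_lr_hr_pair` and the synthetic image standing in for an HR patch are assumptions for demonstration.

```python
# Sketch: build an LR-HR training pair by BiCubic downscaling, the baseline
# strategy the abstract notes suffers from a domain gap with real LR images.
import numpy as np
from PIL import Image

def make_lr_hr_pair(hr: Image.Image, scale: int = 4):
    """Return (lr, hr), where lr is hr BiCubic-downscaled by `scale`."""
    w, h = hr.size
    lr = hr.resize((w // scale, h // scale), resample=Image.BICUBIC)
    return lr, hr

# Synthetic 256x256 RGB image standing in for a 10m-resolution HR patch.
hr = Image.fromarray(np.random.randint(0, 256, (256, 256, 3), dtype=np.uint8))
lr, _ = make_lr_hr_pair(hr, scale=4)
print(lr.size)  # (64, 64)
```

TS-SRGAN's first step replaces this hand-crafted downscaling with a GAN-learned degradation model, so that the generated LR images match the domain of real Sentinel-2 imagery.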