
This repository has been ⛔️ DEPRECATED. Please see our more recent work:

Band-Adaptive Spectral-Spatial Feature Learning Neural Network for Hyperspectral Image Classification [paper] [Code]

Deep Learning for Land-cover Classification in Hyperspectral Images

Hyperspectral images are images captured in multiple bands of the electromagnetic spectrum. This project focuses on developing deep artificial neural networks for robust land-cover classification in hyperspectral images. Land-cover classification is the task of assigning to every pixel a class label that represents the type of land cover present at that pixel's location. It is an image segmentation/scene-labeling task, as the following diagram illustrates.

<hr> <img src="https://github.com/KGPML/Hyperspectral/blob/master/images/landcover-classification.png?raw=True" width="600"> <hr>

This page describes our experiments with the performance of Multi-Layer Perceptrons and Convolutional Neural Networks on the task of land-cover classification in hyperspectral images. Currently we perform pixel-wise classification.

<hr>

Dataset
=======

We have performed our experiments on the Indian Pines Dataset. The following are the particulars of the dataset:

<hr>

Input data format
=================

Each pixel is described by an NxN patch centered at that pixel. N denotes the size of the spatial context used for making the inference about a given pixel.

The input data was divided into a training set (75%) and a test set (25%).
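The patch-based input preparation above can be sketched as follows. This is a minimal illustration, not the repository's own code (which lives in `IndianPines_DataSet_Preparation_Without_Augmentation.ipynb`); the array shapes, the patch size N = 5, and the zero-padding at the borders are all assumptions made for the example.

```python
import random
import numpy as np

def extract_patch(cube, row, col, N):
    """Return the N x N spatial patch (all bands) centered at (row, col)."""
    half = N // 2
    # Zero-pad the spatial borders so edge pixels also get a full N x N context.
    padded = np.pad(cube, ((half, half), (half, half), (0, 0)), mode="constant")
    return padded[row:row + N, col:col + N, :]

rng = np.random.default_rng(0)
cube = rng.random((145, 145, 220))   # Indian Pines is 145 x 145 with 220 bands
N = 5                                # illustrative patch size

patch = extract_patch(cube, 0, 0, N)
print(patch.shape)                   # (5, 5, 220)

# 75% / 25% train-test split over all pixel coordinates.
coords = [(r, c) for r in range(cube.shape[0]) for c in range(cube.shape[1])]
random.Random(0).shuffle(coords)
split = int(0.75 * len(coords))
train_coords, test_coords = coords[:split], coords[split:]
print(len(train_coords), len(test_coords))
```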

Hardware used

The neural networks were trained on a machine with dual Intel Xeon E5-2630 v2 CPUs, 32 GB of RAM, and an NVIDIA Tesla K20c GPU.

<hr>

Multi-Layer Perceptron

A Multi-Layer Perceptron (MLP) is an artificial neural network with one or more hidden layers of neurons. An MLP can model highly non-linear functions between the input and output, and it forms the basis of deep neural network (DNN) models.

Architecture of Multi-Layer Perceptron used

`input - [affine - relu] x 3 - affine - softmax`

(Schematic representation below)

<img src="https://github.com/KGPML/Hyperspectral/blob/master/images/architecture-MLP.png?raw=True" width="400">

N denotes the size of the input patch.
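A minimal numpy sketch of the forward pass through this architecture (input - [affine - relu] x 3 - affine - softmax). The hidden-layer sizes and weight initialization here are illustrative assumptions; the repository's actual TensorFlow model is defined in `IndianPinesMLP.py`.

```python
import numpy as np

def affine(x, W, b):
    return x @ W + b

def relu(x):
    return np.maximum(x, 0.0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
N, bands, classes = 5, 220, 16             # 16 land-cover classes in Indian Pines
dims = [N * N * bands, 500, 350, 150, classes]  # hidden sizes are assumptions

params = [(rng.standard_normal((din, dout)) * 0.01, np.zeros(dout))
          for din, dout in zip(dims[:-1], dims[1:])]

x = rng.random((4, dims[0]))               # a batch of 4 flattened patches
h = x
for W, b in params[:-1]:                   # [affine - relu] x 3
    h = relu(affine(h, W, b))
W, b = params[-1]
probs = softmax(affine(h, W, b))           # final affine - softmax
print(probs.shape)                         # (4, 16): class probabilities per pixel
```

Each row of `probs` is a probability distribution over the 16 classes, and the predicted label for a pixel is its argmax.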

<hr>

Specifics of the learning algorithm

The following are the details of the learning algorithm used:

<hr>

Performance

<img src="https://github.com/KGPML/Hyperspectral/blob/master/images/accuracy-bar-MLP.png?raw=True" width="500">

Decoded land-cover maps generated for different input patch sizes:

<img src="https://github.com/KGPML/Hyperspectral/blob/master/images/performance-MLP.png?raw=True" width="800"> <hr>

Convolutional Neural Network

Convolutional Neural Networks (CNNs or ConvNets) are a special category of artificial neural networks designed for processing data with a grid-like structure. The ConvNet architecture is based on sparse interactions and parameter sharing, and it is highly effective for efficient learning of spatial invariances in images. There are four kinds of layers in a typical ConvNet architecture: convolutional (conv), pooling (pool), fully-connected (affine) and rectified linear unit (ReLU). Each convolutional layer transforms one set of feature maps into another set of feature maps by convolution with a set of filters.

Architecture of Convolutional Neural Network used

`input - [conv - relu - maxpool] x 2 - [affine - relu] x 2 - affine - softmax`

(Schematic representation below)

<img src="https://github.com/KGPML/Hyperspectral/blob/master/images/architecture-CNN.png?raw=True" width="400">

N denotes the size of the input patch.
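The shape bookkeeping through the two conv - relu - maxpool stages can be sketched as below. The patch size N, the filter counts, the kernel size, and the padding are illustrative assumptions, not the hyperparameters of the repository's model (which is defined in `IndianPinesCNN.py`).

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial size of a feature map after a convolution."""
    return (size + 2 * pad - kernel) // stride + 1

def pool_out(size, window=2, stride=2):
    """Spatial size of a feature map after max-pooling."""
    return (size - window) // stride + 1

N, bands = 21, 220            # assumed input: an N x N patch with 220 bands
size, channels = N, bands

for n_filters in (500, 100):                   # two conv - relu - maxpool stages
    size = conv_out(size, kernel=3, pad=1)     # 3x3 conv, padding preserves size
    size = pool_out(size)                      # 2x2 maxpool roughly halves it
    channels = n_filters
    print(f"after stage: {size} x {size} x {channels}")

flat = size * size * channels                  # flattened input to the affine layers
print("affine input size:", flat)
```

With these assumed settings, a 21 x 21 x 220 patch becomes 10 x 10 x 500 after the first stage and 5 x 5 x 100 after the second, so the first affine layer sees a 2500-dimensional vector.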

<hr>

Specifics of the learning algorithm

The following are the details of the learning algorithm used:

<hr>

Performance

<img src="https://github.com/KGPML/Hyperspectral/blob/master/images/accuracy-bar-CNN.png?raw=True" width="500">

Decoded land-cover maps generated for different input patch sizes:

<img src="https://github.com/KGPML/Hyperspectral/blob/master/images/performance-CNN.jpg?raw=True" width="800"> <hr> <hr>

Description of the repository

<hr>

Setting up the experiment

To make sure all the code runs smoothly, you should have the following directory subtree under your current working directory:

```
|-- IndianPines_DataSet_Preparation_Without_Augmentation.ipynb
|-- Decoder_Spatial_CNN.ipynb
|-- Decoder_Spatial_MLP.ipynb
|-- IndianPinesCNN.ipynb
|-- CNN_feed.ipynb
|-- MLP_feed.ipynb
|-- credibility.ipynb
|-- IndianPinesCNN.py
|-- IndianPinesMLP.py
|-- Spatial_dataset.py
|-- patch_size.py
|-- Data
|   |-- Indian_pines_gt.mat
|   |-- Indian_pines.mat
```


Outputs will be displayed in the notebooks.

<hr>

Acknowledgement

This repository was developed by Anirban Santara, Ankit Singh, Pranoot Hatwar and Kaustubh Mani under the supervision of Prof. Pabitra Mitra during June-July 2016 at the Department of Computer Science and Engineering, Indian Institute of Technology Kharagpur, India. The project was funded by the Space Applications Centre, Indian Space Research Organization (SAC-ISRO).