Abstract– In hyperspectral remote sensing, data are collected in numerous (hundreds to thousands) narrow wavebands in one or more regions of the electromagnetic spectrum, so very large volumes of data are acquired. Important problems in hyperspectral image processing are dimension reduction, target detection, target identification and target classification. In this paper we review current activity in target classification and the most commonly used methodologies and techniques for dimension reduction, target detection and target identification. Hyperspectral image processing is a complex process that depends on various factors. We also review the problems faced by some of these methods and, with a view to overcoming them, discuss current techniques, open problems and prospects for data analysis. The main focus is on advanced image data analysis and classification techniques used to improve accuracy. Additionally, some important issues relating to classification performance are discussed.

 

Keywords— Hyperspectral Image; target detection; dimensionality reduction; Independent Component Analysis; Principal Component Analysis; Projection Pursuit.

I. INTRODUCTION

Hyperspectral data analysis is organized as feature selection/extraction followed by information extraction; both feature selection/extraction and information extraction can use either supervised or unsupervised methods13. Unsupervised methods identify patterns of interest in the image data and do not require prior knowledge, whereas supervised methods use prior knowledge of the target characteristics, so care must be taken when selecting them. Although there are numerous unsupervised methods, only the most commonly used ones are discussed here.

A. Projection Pursuit (PP)

Unlike most target detection algorithms, which require statistical models such as the linear mixture model, PP projects a high-dimensional data set into a low-dimensional space while retaining the information of interest. It uses a projection index to search for interesting projections. Since targets are small compared with their surrounding background, they can be viewed as pixels that appear as outliers of the background distribution. To find the optimal projections, a revised Projection Pursuit Evolutionary Algorithm (PPEA) is used, in which a zero-detection thresholding technique is introduced for the purpose of target detection14. A new Legendre-index technique for anomaly detection based on projection pursuit has also been proposed; it is able to detect anomalies with a degree of separation from the normal distribution given by the gray level of the pixel, and the RX algorithm is additionally used to detect isolated outliers15.

B. Principal Component Analysis (PCA)

PCA is a multivariate method commonly used to reduce data redundancy and dimensionality13. It is an unsupervised method; if the user is interested in a phenomenon or variable that causes subtle differences in target reflectance in specific bands, it is not the best method for feature selection13. The authors of 16 describe a hierarchical PCA algorithm that can effectively reduce hyperspectral data to their intrinsic dimensionality: the image is broken into various parts, PCA is performed on each part separately, and the results are then combined. Hierarchical PCA therefore provides similar information content to traditional PCA, and on further investigation it was established that the classification accuracy of the hierarchical method is also very close to that of the traditional PCA method16. Even though PCA is widely used, it suffers from high computational cost, a large memory requirement and low efficacy in dealing with high-dimensional data17. The contribution of a small target is limited to the variance of the image frames; in an image frame with much higher variance, small targets may not appear after PCA analysis. A solution to this issue was addressed in 18 using Independent Component Analysis (ICA) for unsupervised target detection; ICA can be used for classification, feature extraction and target detection in hyperspectral images18. The goals of PCA are to (1) extract the most important information from the data table; (2) compress the size of the data set by keeping only this important information; (3) simplify the description of the data set; and (4) analyze the structure of the observations and the variables22.
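
As a minimal sketch of PCA-based dimension reduction of a hyperspectral cube (assuming scikit-learn is available; the cube shape, component count and names are illustrative, not taken from 13 or 16):

    import numpy as np
    from sklearn.decomposition import PCA

    def reduce_with_pca(cube, n_components=10):
        """Project a (rows, cols, bands) cube onto its first principal components."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        pca = PCA(n_components=n_components)
        scores = pca.fit_transform(X)                             # pixels projected onto the principal axes
        return scores.reshape(rows, cols, n_components), pca.explained_variance_ratio_

The explained variance ratio indicates how much of the total image variance is kept, which also illustrates the limitation noted above: small targets contributing little variance may vanish from the leading components.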

C. Independent Component Analysis (ICA)

Independent Component Analysis (ICA) is a popular technique for unsupervised classification6. Introduced in the early 1980s, it is a multivariate data analysis method in which, given a linear mixture of statistically independent components, these components are recovered by solving for an unmixing matrix. Whereas PCA finds the transform of the observed data that de-correlates the observed variables using second-order statistics (i.e. a transform based on the eigenvectors of the covariance matrix), ICA utilizes higher-order statistics to find projections of the data in which the components are independent, a stronger statement than uncorrelated19. Most target detection algorithms use a priori target information, which is seldom available. ICA aims at finding components that are statistically independent, or as independent as possible, and since it does not require a priori target information, this technique has the potential to be used for target detection applications5. The major advantage of ICA is its ability to classify objects with unknown spectral signatures in an unknown image scene, but its very high computational complexity impedes its application to high-dimensional data analysis. The common approach is to use principal component analysis (PCA) to reduce the data dimensionality before applying ICA classification6.
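
In the same spirit, a minimal sketch of the common PCA-then-ICA approach described above, using scikit-learn's FastICA; the numbers of components and the names are illustrative assumptions, not the settings of 5, 6 or 18.

    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    def ica_components(cube, n_pca=20, n_ica=10, seed=0):
        """PCA first to cut dimensionality and cost, then FastICA to recover independent components."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        X_pca = PCA(n_components=n_pca).fit_transform(X)          # reduce dimensionality before ICA
        ica = FastICA(n_components=n_ica, random_state=seed, max_iter=1000)
        S = ica.fit_transform(X_pca)                              # statistically independent component scores
        return S.reshape(rows, cols, n_ica)                       # each slice is a candidate component map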

 

                                                                                                  
II. CHALLENGES IN HYPERSPECTRAL IMAGE ANALYSIS

The spectral data in a hyperspectral image can be used to identify known and unknown objects on the basis of their spectral signatures. The spectral information measured for the same material differs due to differences in material composition, atmospheric propagation and sensor noise. These variations in the spectral signature of the same material make image analysis challenging1. It is therefore critical to account for spectral variations in the target identification problem for accurate identification of targets.

Some problems of hyperspectral remote sensing are listed below:

A. Processing and Visualization Problem

Hyperspectral images contain far more spectral bands than can be displayed with a standard red, green and blue (RGB) display. Color matching functions (CMFs) are one method for this: they specify how much of the three primary colors must be mixed to create the color sensation of monochromatic light at a particular wavelength, so that the original spectrum can be reproduced7. A disadvantage of the CMF approach is the decreased sensitivity of human vision at the edges of the visible spectrum7. Principal component analysis (PCA) is also used to reduce hyperspectral data dimensionality by assigning the first three principal components to RGB9, and using wavelets to de-noise the spectra before applying PCA can further improve visualization10. The disadvantages of PCA displays include the difficulty of interpreting the displayed image, because the displayed colors typically do not represent the natural colors of the features; the colors change drastically depending on the data and do not correlate strongly with data variation; and the standard saturation used in PCA display leads to simultaneous contrast problems, while the computational complexity is high7.
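
A minimal sketch of the PCA-to-RGB visualization mentioned above: the first three principal components are contrast-stretched to [0, 1] and stacked as a false-color composite. The percentile stretch and the names are illustrative assumptions, not the exact procedure of 9 or 10.

    import numpy as np
    from sklearn.decomposition import PCA

    def pca_false_color(cube):
        """Map the first three principal components of a (rows, cols, bands) cube to an RGB image."""
        rows, cols, bands = cube.shape
        scores = PCA(n_components=3).fit_transform(cube.reshape(-1, bands).astype(float))
        rgb = np.empty_like(scores)
        for i in range(3):                                        # stretch each component to [0, 1]
            lo, hi = np.percentile(scores[:, i], (2, 98))
            rgb[:, i] = np.clip((scores[:, i] - lo) / (hi - lo + 1e-12), 0.0, 1.0)
        return rgb.reshape(rows, cols, 3)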

B. Data Handling Issues

Users of hyperspectral data must have the capability to store and handle very large data sets; they require high-performance computers with large storage capacity13.

C. Data Redundancy Problem

This often refers to the fact that the information contained in each band of a hyperspectral image is not unique; on the contrary, many bands are very similar or redundant. Hyperspectral data redundancy can be visualized through the covariance or correlation between bands13.
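
As a small illustration (not taken from 13), the inter-band correlation matrix can be computed directly; off-diagonal values close to 1 indicate redundant bands that carry nearly the same information.

    import numpy as np

    def band_correlation(cube):
        """Correlation matrix between the spectral bands of a (rows, cols, bands) cube."""
        X = cube.reshape(-1, cube.shape[-1]).astype(float)
        return np.corrcoef(X, rowvar=False)                       # (bands, bands); values near 1 mean redundancy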

D. The Curse of Dimensionality

As the number of bands in an image increases, the number of observations required to train a classifier increases exponentially if the classification accuracy is to be maintained13.

III. TARGET DETECTION

Target detection in hyperspectral images is important in many applications, including search and rescue operations, defense systems, mineral exploration, border security, agricultural crops and several other anthropogenic and natural objects/phenomena. For these purposes, several target detection algorithms have been proposed over the years; a classification of target detection methods and a review of them is given in 1. The authors of 4 describe the fundamental structure of hyperspectral data and explain how these data influence the signal models used for the development and theoretical analysis of detection algorithms4.

However, it is not clear which of these algorithms performs best on real targets, and moreover, which of them carry complementary information and should be fused together. For this purpose, and going beyond 1, eight signature-based hyperspectral target detection algorithms, namely the Generalized Likelihood Ratio Test (GLRT), the Adaptive Coherence Estimator (ACE), the Signed Adaptive Coherence Estimator (SACE), the Adaptive Matched Subspace Detector (AMSD) (the use of adaptive algorithms deals quite effectively with the problem of unknown backgrounds4), Constrained Energy Minimization (CEM), the Matched Filter (MF), Orthogonal Subspace Projection (OSP) and the Hybrid Unstructured Detector (HUD), together with three anomaly detectors, namely RX, Maxmin and Diffdet, were tested and compared. Among the signature-based target detectors, the three best performing algorithms with complementary information were identified, and these were finally fused using four different fusion algorithms11.

SACE, CEM and AMSD were found to be the better-performing algorithms, with AMSD showing the best performance, especially when the sub-pixel target area was close to at least half of the pixel area. However, AMSD requires modelling the background endmembers (an endmember is a pure spectrum or pure material; pure pixels, each having a characteristic spectral signature, are often referred to as endmembers), which increases the computational complexity11.

This study showed that AMSD, SACE and CEM had strengths and weaknesses in different regions and complemented each other; hence, these algorithms were fused with the sum, product, MFF and hybrid fusion methods11.

It has also been observed that the detection performance of CEM is relatively poor3. To solve the low detection efficiency of the Constrained Energy Minimization (CEM) method for hyperspectral remote sensing imagery, the author of 20 first presents two improved detection methods: principal component CEM (PCCEM) and matrix taper CEM (MTCEM). Based on these two methods, a more optimized Two-Time Detection (TTD) method is then proposed, in which the targets of interest in the hyperspectral image are first detected using the PCCEM and MTCEM methods. Experiments show that the detection performance of the PCCEM and MTCEM algorithms varies with the image data, so these methods are not robust detectors. Regardless of the kind of image used, the TTD method obtains good target detection results that are superior to the above methods, and it performs robustly20.

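For reference, a minimal sketch of the basic CEM detector discussed above (not the PCCEM, MTCEM or TTD variants of 20): the filter minimizes the output energy of the scene subject to a unit response to the known target signature d. The regularization term and names are illustrative assumptions.

    import numpy as np

    def cem_detector(cube, d, eps=1e-6):
        """Constrained Energy Minimization on a (rows, cols, bands) cube with target signature d (bands,)."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        R = (X.T @ X) / X.shape[0] + eps * np.eye(bands)          # sample (auto)correlation matrix
        Rinv_d = np.linalg.solve(R, d)
        w = Rinv_d / (d @ Rinv_d)                                 # filter with unit gain on the target signature
        return (X @ w).reshape(rows, cols)                        # detection score per pixel
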
The authors of 21 showed that there is no "best hyperspectral detection algorithm" for all images and targets. They noted the significant effect that spatial distribution has on detector performance, and showed that the RBTA can be used to select the proper detectors from among several detectors without any need for ground truth. However, point targets can influence their neighboring pixels, due either to the PSF or to the target spreading across more than one pixel. To account for this potential source of inaccuracy, they introduced the improved RBTA (IRBTA), whose exact method of use depends on the target size. In addition, they showed that when detectors calculate the mean for estimating the pixel signature value, no ground truth is needed to find the best estimate. The concept was tested through the selection of the best detectors from among stochastic algorithms for target detection, that is, the constrained energy minimization (CEM), generalized likelihood ratio test (GLRT) and adaptive coherence estimator (ACE) algorithms, using the dataset and scoring methodology of the Rochester Institute of Technology (RIT) Target Detection Blind Test project. The results showed that these concepts predicted the best algorithms for the particular images and targets provided by the website21.

The authors of 12 made a comparative study of target detection algorithms in HSI applied to crop scenarios in Colombia, using images acquired by the hyperspectral satellite sensor Hyperion. The tested algorithms were ACE, CEM, MF, SAM and OSP. The results show that the ACE algorithm has the best performance, with detection probabilities PD > 90% for diverse HSI and agricultural targets in both synthetic and real images, followed by the CEM and MF algorithms, which exhibit acceptable performance with average detection probabilities of PD = 80%. In contrast, the OSP and SAM algorithms detect targets with an average PD = 45%; however, their number of false alarms (FA) is high and their performance decreases12.

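A minimal sketch of the ACE statistic referred to above, in its common formulation with the background mean and covariance estimated globally from the image; the regularization term and names are illustrative assumptions, not the exact setup of 11, 12 or 21.

    import numpy as np

    def ace_detector(cube, target, eps=1e-6):
        """Adaptive Coherence Estimator score for each pixel of a (rows, cols, bands) cube."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        mu = X.mean(axis=0)
        Xc, s = X - mu, target.astype(float) - mu                 # center pixels and target signature
        cov_inv = np.linalg.inv(np.cov(Xc, rowvar=False) + eps * np.eye(bands))
        num = (Xc @ cov_inv @ s) ** 2
        den = (s @ cov_inv @ s) * np.einsum('ij,jk,ik->i', Xc, cov_inv, Xc)
        return (num / (den + 1e-12)).reshape(rows, cols)          # values near 1 indicate a close match
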
Target detection from hyperspectral images using ICA and other algorithms based on spectral modeling may be of immense interest5. The author of 5 compares ICA with four spectral matching algorithms, namely Orthogonal Subspace Projection (OSP), Constrained Energy Minimization (CEM)3, the Spectral Angle Mapper (SAM) and the Spectral Correlation Mapper (SCM), and with four anomaly detection algorithms, namely the OSP anomaly detector (OSPAD), the Reed–Xiaoli anomaly detector (RXD), the Uniform Target Detector (UTD) and a combination of the Reed–Xiaoli anomaly detector and the Uniform Target Detector (RXD–UTD). The experiments were conducted using a set of synthetic and AVIRIS hyperspectral images containing aircraft as military targets. A comparison of the true positive and false positive rates of the target detections obtained from ICA and the other algorithms indicates the superior performance of ICA5 over the other algorithms3.

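For completeness, a minimal sketch of the global Reed–Xiaoli (RX) anomaly detector mentioned above: each pixel is scored by its Mahalanobis distance from the scene mean, so no target signature is required. The regularization term and names are assumptions.

    import numpy as np

    def rx_anomaly(cube, eps=1e-6):
        """Global RX anomaly score: Mahalanobis distance of each pixel from the scene background."""
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        Xc = X - X.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(Xc, rowvar=False) + eps * np.eye(bands))
        return np.einsum('ij,jk,ik->i', Xc, cov_inv, Xc).reshape(rows, cols)   # large scores = anomalies
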
IV. PROBLEM IDENTIFICATION

Identifying the theoretical computational problems in hyperspectral image processing is itself a big problem. The theoretical computational problems are: getting the data into a suitable format, digging into the data, performing the analysis in a High Performance Computing (HPC) way, removing sensor noise and atmospheric noise, and identifying the endmembers/targets. All of these problems can be tackled with parallel computing as part of the HPC workflow.

A. Methodology

The methodology involves the following steps (a sketch chaining them follows this list):

· Independent Component Analysis (ICA) is used instead of Principal Component Analysis (PCA)24-26.

· Noise reduction (removing high-frequency noise caused by cloud cover; atmospheric correction is an important prerequisite step to enhance and improve the identification of the spectral signatures of different objects or materials and their compositions8).

· Data spaces (dimensionality reduction). How much dimensionality reduction is needed? If data-mining/data-cube concepts are used, there is no need for further data reduction; this is where computational skills come into the picture, to access any part of the cube. A variety of dimension reduction techniques exist: principal component analysis (PCA), minimum noise fraction, locally linear embedding, independent component analysis, the discrete wavelet transform, etc.17.

· Data mining (data reduction).

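A minimal, hedged sketch of how these steps might be chained; the median smoothing is only a crude stand-in for real noise reduction and atmospheric correction (it is not the method of 8), and the PCA/ICA settings and the RX-style scoring are illustrative assumptions rather than the pipeline of 24-26.

    import numpy as np
    from scipy.ndimage import median_filter
    from sklearn.decomposition import PCA, FastICA

    def simple_pipeline(cube, n_pca=20, n_ica=10, eps=1e-6):
        """Illustrative chain: crude spectral smoothing -> PCA+ICA reduction -> anomaly scores."""
        rows, cols, bands = cube.shape
        smoothed = median_filter(cube.astype(float), size=(1, 1, 3))    # smooth along the spectral axis
        X = smoothed.reshape(-1, bands)
        X_red = FastICA(n_components=n_ica, random_state=0, max_iter=1000).fit_transform(
            PCA(n_components=n_pca).fit_transform(X))                   # dimensionality reduction step
        Xc = X_red - X_red.mean(axis=0)
        cov_inv = np.linalg.inv(np.cov(Xc, rowvar=False) + eps * np.eye(n_ica))
        return np.einsum('ij,jk,ik->i', Xc, cov_inv, Xc).reshape(rows, cols)   # RX-style anomaly score
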
V. LITERATURE GAP AND CONCLUSION

All the Hyperspectral Image (HSI) processing algorithms discussed were built on their own assumptions and thus have limitations. The basic theory and the most canonical works were discussed along with the most recent advances in each aspect of hyperspectral image processing; both the most classic and the most advanced work were introduced, and from this work directions in current research trends are revealed2.

An idealized universal hyperspectral image processing system has not been developed yet. The method used for HSI processing should be a function of the problem type: target detection, material mapping, material identification, mapping of details and surface properties, material classification, atmospheric correction, etc., from remote sensing data.

To identify a target, we first need to reduce the number of bands, from 224 bands to a suitable size of 10-20 bands; to perform this, Independent Component Analysis is used instead of Principal Component Analysis (PCA). Next, the noise has to be reduced (removing high-frequency noise caused by cloud cover). With data spaces (dimensionality reduction) and a proper selection of a suite of data mining (data reduction) techniques, it is possible to reduce data dimensionality and data redundancy and to extract unique information from HS images13.

After all these operations, the target detection methods that are used are not always reliable, especially for hyperspectral data, since they depend on image statistics only; hence we need to find better target detectors, although this requires much effort. Fusion methods produce more precise results.

It is also found that hybrid methods such as genetic algorithms, swarm techniques and Bayesian formulations23, despite promising research results, are not often used in the processing of hyperspectral data. Hybrid methods23 can be tried with physical models and other scene-dependent properties of the area. New approaches such as processing hyperspectral imagery with neural networks and machine learning techniques open possibilities for improvement in this topic.

                The primary disadvantages are cost and complexity.
Fast computers, sensitive detectors, and large data storage capacities are
needed for analyzing hyperspectral data. Significant data storage capacity is
necessary since hyperspectral cubes are large, multidimensional datasets,
potentially exceeding hundreds of megabytes.
All of these factors greatly increase the cost of acquiring and processing
hyperspectral data.
