Multilinear principal component analysis
Multilinear principal component analysis (MPCA) is a mathematical procedure that uses multiple orthogonal transformations to convert a set of multidimensional objects into another set of multidimensional objects of lower dimensions. There is one orthogonal transformation for each dimension (mode); each transformation aims to capture as high a variance as possible, accounting for as much of the variability in the data as possible, subject to the constraint of mode-wise orthogonality. MPCA is a multilinear extension of principal component analysis (PCA) and a basic algorithm in multilinear subspace learning. Its origin can be traced back to the Tucker decomposition in the 1960s, and it is closely related to the higher-order singular value decomposition (HOSVD) and to the best rank-(R1, R2, ..., RN) approximation of higher-order tensors.
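As an illustration of the mode-wise transformations described above, the following sketch (plain NumPy; the variable names are illustrative, not from any MPCA library) applies one orthogonal matrix per mode of a 3-way array, reducing each of its dimensions in turn:

```python
import numpy as np

# A toy 3-way data tensor of size 6 x 5 x 4.
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 5, 4))

# One orthogonal transformation per mode, truncated to a lower dimension:
# here 6->3, 5->2, 4->2. QR of a random matrix yields orthonormal columns.
factors = [np.linalg.qr(rng.standard_normal((dim, r)))[0]
           for dim, r in zip(X.shape, (3, 2, 2))]

# Apply each factor along its mode: contract mode n of X with U_n (In x Pn).
Y = X
for n, U in enumerate(factors):
    Y = np.moveaxis(np.moveaxis(Y, n, -1) @ U, -1, n)

print(Y.shape)  # the lower-dimensional tensor: (3, 2, 2)
```

In MPCA these factor matrices are not random but chosen to retain as much of the data's variability as possible.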

The algorithm

MPCA performs feature extraction by determining a multilinear projection that captures most of the variation in the original tensorial input. As in PCA, MPCA works on centered data. The MPCA solution follows the alternating least squares (ALS) approach: it is iterative in nature and proceeds by decomposing the original problem into a series of projection subproblems, one per mode. Each subproblem is a classical PCA problem, which can be solved easily.
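A minimal sketch of this ALS procedure in NumPy (the names `mpca` and `mode_multiply` are assumptions for illustration, not from any established library): for each mode in turn, the centered data are projected in all other modes, the mode-wise covariance is formed, and its leading eigenvectors become that mode's orthogonal factor.

```python
import numpy as np

def mode_multiply(tensor, matrix, mode):
    """Contract axis `mode` of `tensor` (size In) with `matrix` (In x Pn)."""
    moved = np.moveaxis(tensor, mode, -1)
    return np.moveaxis(moved @ matrix, -1, mode)

def mpca(samples, ranks, n_iters=5):
    """ALS sketch of MPCA.

    samples: array of shape (M, I1, ..., IN) holding M tensor samples.
    ranks:   target dimension Pn for each mode n.
    Returns one orthonormal factor matrix (In x Pn) per mode.
    """
    samples = samples - samples.mean(axis=0)   # MPCA works on centered data
    n_modes = samples.ndim - 1
    # Initialise each mode's factor with truncated identity columns.
    factors = [np.eye(dim)[:, :r] for dim, r in zip(samples.shape[1:], ranks)]
    for _ in range(n_iters):
        for n in range(n_modes):
            # Project every sample in all modes except mode n.
            proj = samples
            for m in range(n_modes):
                if m != n:
                    proj = mode_multiply(proj, factors[m], mode=1 + m)
            # Mode-n covariance accumulated over the M samples.
            mats = np.moveaxis(proj, 1 + n, 1).reshape(
                proj.shape[0], proj.shape[1 + n], -1)
            cov = np.einsum('mij,mkj->ik', mats, mats)
            # Classical PCA subproblem: keep the top-Pn eigenvectors.
            _, eigvecs = np.linalg.eigh(cov)
            factors[n] = eigvecs[:, ::-1][:, :ranks[n]]
    return factors
```

A sample is then mapped to its low-dimensional tensorial feature by applying `mode_multiply` with each factor in turn.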

While PCA with orthogonal transformations produces uncorrelated features/variables, this is not the case for MPCA: owing to the nature of the tensor-to-tensor transformation, MPCA features are in general not uncorrelated, even though the transformation in each mode is orthogonal. In contrast, uncorrelated MPCA (UMPCA) generates uncorrelated multilinear features.

Feature selection

MPCA produces tensorial features. For conventional use, vectorial features are often preferred; for example, most classifiers in the literature take vectors as input. Furthermore, because there are correlations among MPCA features, a further selection step often improves performance. Supervised (discriminative) MPCA feature selection has been used for object recognition, while unsupervised MPCA feature selection has been employed in visualization tasks.
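As a toy illustration of unsupervised selection, one simple stand-in (the name `select_features` and the variance criterion are assumptions for this sketch, not the specific schemes referenced above) flattens each sample's tensorial feature into a vector and keeps the entries with the largest variance across samples:

```python
import numpy as np

def select_features(cores, k):
    """Flatten tensorial features and keep the k highest-variance entries.

    cores: array of shape (M, P1, ..., PN) of projected (tensorial) features.
    Returns (M, k) vectorial features and the chosen flat indices.
    """
    flat = cores.reshape(cores.shape[0], -1)       # tensor -> vector per sample
    order = np.argsort(flat.var(axis=0))[::-1]     # rank entries by variance
    keep = order[:k]
    return flat[:, keep], keep
```

The resulting vectors can be fed directly to any conventional classifier; supervised variants would instead rank entries by a discriminative criterion such as class separability.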

Extensions

Various extensions of MPCA have been developed:
  • Uncorrelated MPCA (UMPCA)
  • Boosting + MPCA
  • Robust MPCA (RMPCA)

Resources

The source of this article is wikipedia, the free encyclopedia. The text of this article is licensed under the GFDL.