Element Rearrangement for Tensor-Based Subspace Learning

  • Shuicheng Yan,
  • Dong Xu,
  • Thomas S. Huang,
  • Shih-Fu Chang

IEEE Conference on Computer Vision and Pattern Recognition, 2007 (CVPR '07).


The success of tensor-based subspace learning depends heavily on high correlations among the column vectors of the mode-k flattened matrix. In this work, we study the problem of rearranging elements within a tensor in order to maximize these correlations, so that information redundancy in tensor data can be more extensively removed by existing tensor-based dimensionality reduction algorithms. An efficient iterative algorithm is proposed to tackle this essentially integer optimization problem. In each step, the tensor structure is refined with a spatially-constrained Earth Mover's Distance procedure that incrementally rearranges tensors to become more similar to their low-rank approximations, which have high correlation among features along certain tensor dimensions. Monotonic convergence of the algorithm is proven using an auxiliary function analogous to that used for proving convergence of the Expectation-Maximization algorithm. In addition, we present an extension of the algorithm for conducting supervised subspace learning with tensor data. Experiments in both unsupervised and supervised subspace learning demonstrate the effectiveness of our proposed algorithms in improving data compression performance and classification accuracy.
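
To make the iterative scheme concrete, the sketch below is a simplified stand-in, not the authors' method or released code: it alternates computing a truncated-SVD low-rank approximation of a 2-D data matrix with permuting rows (via SciPy's `linear_sum_assignment`) so that the permuted data better matches that approximation. The paper's spatially-constrained Earth Mover's Distance rearrangement of individual tensor elements is replaced here by a plain, unconstrained row permutation, purely for illustration.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment


def low_rank(X, r):
    """Best rank-r approximation of X in Frobenius norm (truncated SVD)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]


def rearrange(X, r, n_iter=20):
    """Toy stand-in for the paper's element rearrangement.

    Each iteration permutes the rows of X so that X moves closer, in
    Frobenius norm, to its current low-rank approximation.  An
    unconstrained assignment replaces the spatially-constrained EMD
    step described in the paper.
    """
    errors = []
    for _ in range(n_iter):
        X_r = low_rank(X, r)
        # cost[i, j]: squared distance between row i of X and row j of X_r,
        # so the optimal assignment minimizes ||P X - X_r||_F^2 over row
        # permutations P.
        cost = ((X[:, None, :] - X_r[None, :, :]) ** 2).sum(axis=2)
        rows, cols = linear_sum_assignment(cost)
        X_new = np.empty_like(X)
        X_new[cols] = X[rows]  # send row i of X to target position cols[i]
        X = X_new
        errors.append(np.linalg.norm(X - low_rank(X, r)))
    return X, errors


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((64, 32))
    _, errors = rearrange(X, r=5)
    print(errors[:3], errors[-3:])  # reconstruction error is non-increasing
```

Even in this simplified form, the recorded error is non-increasing for a reason analogous to the paper's auxiliary-function argument: the permutation step cannot increase the distance to the current low-rank target (the identity permutation is always feasible), and recomputing the best rank-r approximation of the permuted data can only decrease it further.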