Registered Data

[02285] New Trends in Tensor Networks and Tensor Optimization

  • Session Time & Room :
    • 02285 (1/2) : 4D (Aug.24, 15:30-17:10) @A208
    • 02285 (2/2) : 4E (Aug.24, 17:40-19:20) @A208
  • Type : Proposal of Minisymposium
  • Abstract : Tensors have proven to be a powerful tool for capturing multiple interactions and inherent hierarchies in data sets arising across scientific and engineering communities. This minisymposium brings together recent advances in tensor network analysis and large-scale tensor optimization. The topics of interest include, but are not limited to:
    • new advances in tensor networks for machine learning,
    • tensorial time series analysis and deep learning,
    • tensor-regularized generalization in reinforcement learning,
    • structural tensor analysis and applications,
    • multilinear PageRank and data clustering.
  • Organizer(s) : Qibin Zhao, Yannan Chen, Andong Wang
  • Classification : 90C90
  • Minisymposium Program :
    • 02285 (1/2) : 4D @A208 [Chair: Andong Wang]
      • [03267] Efficient Machine Learning with Tensor Networks
        • Format : Online Talk on Zoom
        • Author(s) :
          • Qibin Zhao (RIKEN AIP)
        • Abstract : Tensor Networks (TNs) are factorizations of high-dimensional tensors into networks of many low-dimensional tensors, which have been studied in quantum physics, high-performance computing, and applied mathematics. In recent years, TNs have been increasingly investigated and applied to machine learning and signal processing, owing to their advantages in handling large-scale, high-dimensional problems, compressing deep neural network models, and enabling efficient learning algorithms. This talk presents recent progress in TN technology applied to machine learning, covering basic principles and algorithms, novel approaches to unsupervised learning, tensor completion, multi-modal learning, and various applications in DNNs, CNNs, RNNs, etc.
      • [05506] Accelerated Doubly Stochastic Gradient Descent for Tensor CP Decomposition
        • Format : Talk at Waseda University
        • Author(s) :
          • Chunfeng Cui (Beihang University)
        • Abstract : In this talk, we focus on the doubly stochastic gradient descent (SGD) method for computing the canonical polyadic decomposition (CPD) of tensors. This method not only exploits the block structure of CPD but also enables us to handle large-scale tensors. Building on momentum acceleration and variance-reduction techniques, we propose several accelerated variants, including heavy-ball acceleration, inertial acceleration, and variance reduction. We also establish the global convergence and convergence rates of the proposed methods.
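        • Sketch : As a toy, hypothetical illustration of the stochastic-gradient idea only (not the authors' doubly stochastic algorithm, which also samples blocks and includes variance-reduced variants), entry-sampled SGD with heavy-ball momentum for a 3-way CP model might look like:

```python
import numpy as np

def cp_sgd(T, rank, n_iters=40000, lr=0.01, beta=0.8, seed=0):
    """Entry-sampled SGD with heavy-ball momentum for a 3-way CP model.
    Each step samples one entry T[i, j, k] and updates only the three
    factor rows that entry touches."""
    rng = np.random.default_rng(seed)
    A = [0.5 * rng.random((d, rank)) for d in T.shape]  # factor matrices
    V = [np.zeros_like(a) for a in A]                   # momentum buffers
    for _ in range(n_iters):
        i, j, k = (rng.integers(d) for d in T.shape)
        # residual of the CP model at the sampled entry
        resid = np.sum(A[0][i] * A[1][j] * A[2][k]) - T[i, j, k]
        grads = (resid * A[1][j] * A[2][k],
                 resid * A[0][i] * A[2][k],
                 resid * A[0][i] * A[1][j])
        for m, idx in zip(range(3), (i, j, k)):
            V[m][idx] = beta * V[m][idx] - lr * grads[m]
            A[m][idx] += V[m][idx]
    return A

# recover a small synthetic rank-2 tensor
rng = np.random.default_rng(1)
F = [rng.random((6, 2)) for _ in range(3)]
T = np.einsum('ir,jr,kr->ijk', *F)
A = cp_sgd(T, rank=2)
rel_err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', *A) - T) / np.linalg.norm(T)
```
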
      • [03184] Tensor Network Structure Search
        • Format : Talk at Waseda University
        • Author(s) :
          • Chao Li (RIKEN)
        • Abstract : In this talk, we present a novel problem related to model selection for tensor networks, which we call tensor network structure search (TN-SS). TN-SS aims to find the optimal tensor network structure for a given dataset and task by exploring a large space of possible network structures. We propose several promising solutions to the TN-SS problem, including evolutionary algorithms, stochastic search, and alternating enumeration. Our methods are designed to efficiently explore the space of tensor network structures and identify the most promising candidates based on their performance on the given task.
      • [04779] Towards Multi-modes Outlier Robust Tensor Ring Decomposition
        • Format : Talk at Waseda University
        • Author(s) :
          • Yuning Qiu (Guangdong University of Technology)
        • Abstract : The outlier assumption in conventional robust tensor decomposition often does not hold, since high-order tensors are prone to corruption by outliers along more than one mode. To mitigate this weakness, we propose a novel outlier-robust tensor decomposition (ORTD) model that recovers low-rank tensors corrupted by multi-mode outliers. To guarantee statistical performance, we rigorously derive a non-asymptotic upper bound on the estimation error of the proposed ORTD model.
    • 02285 (2/2) : 4E @A208 [Chair: Xianping Wu]
      • [05508] Singular Value Decomposition of Dual Matrices and its Application to Traveling Wave Identification in the Brain
        • Format : Talk at Waseda University
        • Author(s) :
          • Tong Wei (Fudan University)
          • Weiyang Ding (Fudan University)
          • Yimin Wei (Fudan University)
        • Abstract : Matrix factorization in dual number algebra, a hypercomplex system, has recently been applied to kinematics, mechanisms, and other fields. We develop an approach to identify spatiotemporal patterns in the brain, such as traveling waves, using the singular value decomposition of dual matrices. Theoretically, we propose the compact dual singular value decomposition (CDSVD) of dual complex matrices with explicit expressions, as well as a necessary and sufficient condition for its existence. Furthermore, based on the CDSVD, we report the optimal solution to the best rank-k approximation under a newly defined quasi-metric in the dual complex number system. The CDSVD is also related to the dual Moore-Penrose generalized inverse. Numerically, comparisons with other available algorithms indicate the lower computational cost of our proposed CDSVD. Next, we run experiments on simulated time-series data and a road monitoring video to demonstrate the beneficial effect of the infinitesimal parts of dual matrices in spatiotemporal pattern identification. Finally, we apply this approach to large-scale brain fMRI data, identify three kinds of traveling waves, and validate the consistency between our analytical results and current knowledge of cerebral cortex function.
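        • Sketch : The dual-number algebra underlying the talk can be illustrated at the scalar level: a dual number a + bε satisfies ε² = 0, so the "infinitesimal" part carries first-order information alongside the "real" part (the CDSVD itself, not shown here, extends the SVD to matrices over this algebra):

```python
class Dual:
    """Dual number a + b*eps with eps**2 == 0. The infinitesimal part b
    propagates first-order information, e.g. a derivative, alongside the
    real part a."""
    def __init__(self, re, eps=0.0):
        self.re, self.eps = re, eps
    def __add__(self, other):
        return Dual(self.re + other.re, self.eps + other.eps)
    def __mul__(self, other):
        # (a + b*eps)(c + d*eps) = ac + (ad + bc)*eps, since eps^2 = 0
        return Dual(self.re * other.re,
                    self.re * other.eps + self.eps * other.re)

# f(x) = x*x + x seeded with x = 3 + 1*eps yields f(3) in .re and f'(3) in .eps
x = Dual(3.0, 1.0)
y = x * x + x
# y.re == 12.0, y.eps == 7.0
```
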
      • [03043] Multilinear Pseudo-PageRank for Hypergraph Partitioning
        • Format : Talk at Waseda University
        • Author(s) :
          • Yannan Chen (South China Normal University)
        • Abstract : In this talk, we establish the higher-order pseudo-PageRank model, formulated as a multilinear system with nonnegativity constraints. The coefficient tensor of the multilinear system is a kind of Laplacian tensor of the uniform hypergraph, and no dangling corrections are involved. A tensor splitting algorithm is then used to solve the higher-order pseudo-PageRank problem, whose solutions exist but may not be unique. Numerical experiments illustrate that the proposed higher-order pseudo-PageRank method is powerful and effective for hypergraph partitioning problems.
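        • Sketch : For intuition, here is a fixed-point iteration for the closely related multilinear PageRank model x = αP[x, x] + (1 − α)v with a stochastic coefficient tensor (a hypothetical stand-in: the talk's pseudo-PageRank model instead uses a Laplacian-type coefficient tensor and a tensor splitting algorithm):

```python
import numpy as np

def multilinear_pagerank(P, v, alpha=0.4, tol=1e-12, max_iter=2000):
    """Fixed-point iteration x <- alpha * P[x, x] + (1 - alpha) * v for a
    3rd-order tensor P with P[i, j, k] >= 0 and sum_i P[i, j, k] = 1.
    For alpha < 1/2 the map is a contraction on the probability simplex,
    so the solution is unique and the iterates stay stochastic."""
    x = v.copy()
    for _ in range(max_iter):
        x_new = alpha * np.einsum('ijk,j,k->i', P, x, x) + (1 - alpha) * v
        if np.linalg.norm(x_new - x, 1) < tol:
            break
        x = x_new
    return x_new

# random 4-node example: normalize a nonnegative tensor over its first index
rng = np.random.default_rng(0)
P = rng.random((4, 4, 4))
P /= P.sum(axis=0, keepdims=True)
v = np.full(4, 0.25)  # uniform teleportation vector
x = multilinear_pagerank(P, v)
```
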
      • [04276] Tensorial Time Series Prediction via Tensor Neural Differential Equations
        • Format : Talk at Waseda University
        • Author(s) :
          • Mingyuan Bai (RIKEN AIP)
        • Abstract : The recent decade has witnessed a surge of models and applications in multi-dimensional, i.e., tensorial, time series analysis. The entanglement among different aspects of the data, i.e., its modes, appeals to both academia and industry, and raises a number of challenges for modeling and analysis. To address these challenges, we introduce tensor neural differential equations for tensorial time series analysis, including tensor neural ordinary differential equations, tensor neural controlled differential equations, etc.
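        • Sketch : A toy, hypothetical illustration of the continuous-time viewpoint: a matrix-valued state evolved under a bilinear ODE dX/dt = AXB by explicit Euler steps (a tensor neural ODE would replace A and B with learned components and use an adaptive solver):

```python
import numpy as np

def matrix_ode_euler(X0, A, B, t=1.0, steps=1000):
    """Explicit Euler integration of the matrix ODE dX/dt = A @ X @ B,
    a minimal stand-in for the vector-field evaluation inside a tensor
    neural ODE (where A and B would be learned)."""
    X, dt = X0.copy(), t / steps
    for _ in range(steps):
        X = X + dt * (A @ X @ B)
    return X

# sanity check: A = 0.5*I, B = I reduces to dX/dt = 0.5*X, so X(1) = e^0.5 * X0
X0 = np.eye(2)
X1 = matrix_ode_euler(X0, 0.5 * np.eye(2), np.eye(2))
```
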
      • [03838] A gradient projection method for semi-supervised hypergraph clustering problems
        • Format : Talk at Waseda University
        • Author(s) :
          • Jingya Chang (Guangdong University of Technology)
        • Abstract : We use a hypergraph-related tensor to construct an orthogonality-constrained optimization model for semi-supervised hypergraph clustering problems, which is solved by a retraction method. A nonmonotone curvilinear search is implemented to guarantee reduction in the objective function value. Experiments on synthetic hypergraphs and on hypergraphs built from real data demonstrate the effectiveness of our method.
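        • Sketch : The retraction idea can be illustrated on a generic orthogonality-constrained problem, here maximizing trace(XᵀAX) with a Riemannian gradient step and a QR retraction (a hypothetical simplification: the talk's method additionally employs a nonmonotone curvilinear search and a hypergraph-derived objective):

```python
import numpy as np

def stiefel_descent(A, k, lr=0.05, n_iters=2000, seed=0):
    """Minimize f(X) = -trace(X.T @ A @ X) subject to X.T @ X = I.
    Each step projects the Euclidean gradient onto the tangent space of
    the Stiefel manifold and retracts back onto it via a QR factorization."""
    rng = np.random.default_rng(seed)
    X, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(n_iters):
        G = -2.0 * A @ X                     # Euclidean gradient of f
        S = X.T @ G
        grad = G - X @ (S + S.T) / 2.0       # tangent-space projection
        X, _ = np.linalg.qr(X - lr * grad)   # QR retraction
    return X

# the minimizer spans the top-k eigenspace of a symmetric matrix A
rng = np.random.default_rng(1)
B = rng.standard_normal((8, 8))
A = (B + B.T) / 2.0
X = stiefel_descent(A, k=2)
```
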