Abstract : Tensor decompositions are a fundamental tool in the data sciences for extracting interpretable patterns, removing or reducing noise, and providing reduced-dimension or low-complexity models for tensor data. In recent years, significant progress has been made in proposing and understanding new constrained tensor models that aid interpretability or satisfy known constraints on the data. In this minisymposium, we present some of the state-of-the-art approaches to interpretable constrained tensor decompositions, including efficient inference algorithms with convergence guarantees, efficient implementations of these algorithms compatible with modern hardware, and applications of these models to challenging data analysis problems across several domains.
Organizer(s) : Axel Marmoret, Daniel M. Dunlavy, Jeremy E. Cohen
[03825] Implicit balancing in penalized low-rank approximations
Format : Talk at Waseda University
Author(s) :
Jeremy E. Cohen (CREATIS, CNRS)
Abstract : Tensor decompositions are an important tool in machine learning, particularly due to their interpretability which makes them well-adapted to solving inverse problems for a wide range of applications. Additional constraints and penalizations are often imposed to enhance the interpretability of these models. Because of scale-invariance in tensor decompositions, one may show that adding regularization terms, which are scale-dependent, induces an implicit balancing of the factor matrices. By explicitly imposing balancing during an optimization algorithm, I show that it is possible to improve the precision and speed of that algorithm for both nonnegative PARAFAC and Tucker decompositions.
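For intuition, here is a minimal numpy sketch of the scale-invariance argument (an illustration only, not the algorithm presented in the talk; the helper name `balance_cp_factors` is hypothetical): rescaling the columns of CP factors component by component leaves the reconstructed tensor unchanged, and equalizing the column norms at their geometric mean minimizes a sum of squared-norm penalties.

```python
import numpy as np

def balance_cp_factors(factors, eps=1e-12):
    """Rescale each rank-1 component so all factor columns share the same
    norm (their geometric mean). The product of column norms per component
    is preserved, so the reconstructed tensor is unchanged, while a sum of
    squared column norms is minimized -- the balancing effect that
    scale-dependent penalties induce implicitly.
    """
    norms = np.stack([np.linalg.norm(A, axis=0) for A in factors])  # (n_modes, rank)
    target = np.exp(np.log(norms + eps).mean(axis=0))               # geometric mean per component
    return [A * (target / (n + eps)) for A, n in zip(factors, norms)]

# toy usage: three random nonnegative factors of a rank-5 CP model
rng = np.random.default_rng(0)
factors = [rng.random((d, 5)) for d in (20, 30, 40)]
balanced = balance_cp_factors(factors)
```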
[02791] Hierarchical and neural nonnegative tensor factorizations
Format : Online Talk on Zoom
Author(s) :
Jamie Haddock (Harvey Mudd College)
Joshua Vendrow (Massachusetts Institute of Technology)
Deanna Needell (University of California, Los Angeles)
Abstract : Nonnegative matrix factorization (NMF) has found many applications, including topic modeling and document analysis. Hierarchical NMF (HNMF) variants are able to learn topics at various levels of granularity and illustrate their hierarchical relationships. Recently, nonnegative tensor factorization (NTF) methods have been applied in a similar fashion to handle data sets with complex, multi-modal structure. Hierarchical NTF (HNTF) methods have been proposed; however, these methods do not naturally generalize their matrix-based counterparts. Here, we propose a new HNTF model which directly generalizes an HNMF model as a special case, and provide a supervised extension. Our experimental results show that this model more naturally illuminates the topic hierarchy than previous HNMF and HNTF methods. We also describe training methods for HNTF models built upon a neural network structure.
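As background for the hierarchical construction, a minimal two-level hierarchical NMF sketch using scikit-learn (a generic illustration of re-factorizing learned topic representations, not the authors' HNTF or neural model; matrix sizes and ranks are arbitrary):

```python
import numpy as np
from sklearn.decomposition import NMF

# Level 1 learns fine-grained topics; level 2 factorizes the document-topic
# weights to group fine topics into coarser super-topics.
rng = np.random.default_rng(0)
X = rng.random((100, 500))                  # documents x terms

level1 = NMF(n_components=12, init="nndsvda", max_iter=500)
W1 = level1.fit_transform(X)                # documents x fine topics
H1 = level1.components_                     # fine topics x terms

level2 = NMF(n_components=4, init="nndsvda", max_iter=500)
W2 = level2.fit_transform(W1)               # documents x super-topics
S = level2.components_                      # super-topics x fine topics

# S[i, j] measures how strongly fine topic j belongs to super-topic i.
```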
[03781] Nonnegative canonical tensor decomposition with linear constraints: nnCANDELINC
Format : Talk at Waseda University
Author(s) :
Derek DeSantis (Los Alamos National Labs)
Boian Alexandrov (Los Alamos National Labs)
Gianmarco Manzini (Los Alamos National Labs)
Erik Skau (Los Alamos National Labs)
Abstract : Nonnegative tensor factorization produces sparse and easily understandable latent features. However, the Canonical Polyadic Decomposition (CPD) struggles with tensors that have rank-deficient factors. The PARALIND algorithm addresses this for real-valued tensors by first computing a Tucker decomposition (TD) and then performing a CPD on the TD core to extract latent features. This work explores the theory behind the nonnegative version of PARALIND. We demonstrate nnPARALIND on several real and synthetic examples, highlighting its sensitivity and discussing when it can be applied.
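The two-stage compress-then-decompose idea can be sketched with tensorly (a simplified stand-in for the nonnegative PARALIND/CANDELINC algorithm studied in the talk; ranks and dimensions are arbitrary):

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import non_negative_tucker, non_negative_parafac

# Stage 1: compress with a nonnegative Tucker decomposition.
# Stage 2: run a nonnegative CPD on the small core.
rng = np.random.default_rng(0)
X = tl.tensor(rng.random((30, 40, 50)))

core, tucker_factors = non_negative_tucker(X, rank=(5, 5, 5), n_iter_max=200)
weights, core_factors = non_negative_parafac(core, rank=5, n_iter_max=500)

# Lift the core's CP factors back to the original space.
cp_factors = [U @ V for U, V in zip(tucker_factors, core_factors)]
X_hat = tl.cp_to_tensor((weights, cp_factors))
print("relative error:", float(tl.norm(X - X_hat) / tl.norm(X)))
```

Because the CPD is performed on the small core, the expensive step scales with the Tucker ranks rather than the original dimensions, which is the point of the CANDELINC construction.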
[04213] Joint Data Fusion and Blind Unmixing using Nonnegative Tensor Decomposition
Format : Online Talk on Zoom
Author(s) :
Clémence Prévost (University of Lille, CNRS Centrale Lille, UMR 9189 CRIStAL F-59000 Lille, France)
Valentin Leplat (Skoltech Center for Artificial Intelligence Technology (CAIT) Moscow, Russia)
Abstract : We present a new method for jointly solving hyperspectral super-resolution and spectral unmixing of the unknown super-resolution image. Our method relies on (1) the nonnegative block-term decomposition, (2) joint tensor factorization with multiplicative updates, and (3) the formulation of optimization problems with the beta-divergence. We propose a family of algorithms adaptable to various noise statistics. Experiments show that our approach competes favorably with the state-of-the-art on both problems under various noise statistics.
Keywords: block-term decomposition, β-divergence, blind spectral unmixing, hyperspectral super-resolution.
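The multiplicative beta-divergence updates referenced in the abstract take the following well-known form in the matrix (NMF) case; this is a generic sketch for context, not the authors' block-term tensor algorithm, and the function name is hypothetical:

```python
import numpy as np

def mu_beta_nmf(V, rank, beta=1.0, n_iter=200, eps=1e-12, seed=0):
    """Multiplicative updates for beta-divergence NMF.
    beta=2: Euclidean, beta=1: Kullback-Leibler, beta=0: Itakura-Saito.
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W, H = rng.random((m, rank)) + eps, rng.random((rank, n)) + eps
    for _ in range(n_iter):
        WH = W @ H + eps
        H *= (W.T @ (WH ** (beta - 2) * V)) / (W.T @ WH ** (beta - 1) + eps)
        WH = W @ H + eps
        W *= ((WH ** (beta - 2) * V) @ H.T) / (WH ** (beta - 1) @ H.T + eps)
    return W, H

# toy usage on random nonnegative data
W, H = mu_beta_nmf(np.random.default_rng(1).random((50, 80)), rank=4, beta=1.0)
```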
[04776] A quadratically convergent proximal algorithm for nonnegative tensor decomposition
Format : Talk at Waseda University
Author(s) :
Nico Vervliet (KU Leuven)
Andreas Themelis (Kyushu University)
Panagiotis Patrinos (KU Leuven)
Lieven De Lathauwer (KU Leuven)
Abstract : The canonical polyadic decomposition is key in a variety of applications in signal processing and data analysis. While this decomposition is unique under mild conditions, including prior knowledge such as nonnegativity often helps to interpret the components. We derive a proximal Gauss-Newton-type algorithm for nonnegative tensor factorization. We show global convergence to local minima, as well as a $Q$-quadratic convergence rate to global optima in the exact case.
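For context, the nonnegativity constraint enters through a proximal step, which for the indicator of the nonnegative orthant is simply the projection max(0, ·). The sketch below (hypothetical helper name, tensorly conventions assumed) takes a plain projected gradient step on one CP factor; the talk's algorithm instead uses a Gauss-Newton direction, which is not implemented here:

```python
import numpy as np
import tensorly as tl
from tensorly.tenalg import khatri_rao

def projected_gradient_step(X, factors, mode, step=1e-3):
    """One projected-gradient step on a single CP factor under a
    nonnegativity constraint (prox of the indicator = max(0, .))."""
    KR = khatri_rao(factors, skip_matrix=mode)
    mttkrp = tl.unfold(X, mode) @ KR
    gram = np.ones((KR.shape[1], KR.shape[1]))
    for i, f in enumerate(factors):
        if i != mode:
            gram *= f.T @ f                   # Hadamard product of Gram matrices
    grad = factors[mode] @ gram - mttkrp      # gradient of 0.5 * ||X - CP(factors)||^2
    return np.maximum(0.0, factors[mode] - step * grad)
```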
[03079] PARAFAC2-based coupled matrix and tensor factorizations with constraints
Format : Online Talk on Zoom
Author(s) :
Carla Schenker
Xiulin Wang (Dalian University of Technology)
Evrim Acar (Simula Metropolitan Center for Digital Engineering)
Abstract : There is an emerging need to jointly analyze time-evolving data sets together with static data in many areas such as social networks and omics data analysis. PARAFAC2-based coupled matrix and tensor factorizations are a promising approach in that direction, since PARAFAC2 is capable of capturing time-evolving patterns. We present a flexible algorithmic framework for such factorizations which facilitates linear couplings and various constraints on all modes, including the evolving mode in the PARAFAC2 model.
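For reference, the PARAFAC2 model underlying these coupled factorizations can be written as follows (notation assumed for illustration; the coupling shown shares the static-mode factor A with a side matrix Y):

```latex
% Frontal slices X_k of the time-evolving tensor, coupled with a static Y:
\[
  X_k \approx A \,\mathrm{diag}(\mathbf{d}_k)\, B_k^{\top},
  \qquad B_k = P_k B, \quad P_k^{\top} P_k = I,
  \qquad Y \approx A V^{\top}.
\]
% The constraint B_k = P_k B enforces PARAFAC2's constant cross-product
% condition B_k^T B_k = B^T B, which lets the patterns B_k evolve over k
% while keeping the model essentially unique.
```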
[03791] Constrained Tucker Decompositions and Conservation Principles for Direct Numerical Simulation Data Compression
Format : Talk at Waseda University
Author(s) :
Daniel Dunlavy (Sandia National Laboratories)
Abstract : Low-rank tensor decompositions applied to numerical simulation data for compression and reduced-order modeling often focus only on minimizing global error norms between data and the low-rank model. We present an approach for computing goal-oriented low-rank tensor decompositions that incorporates problem-specific quantities of interest (QoIs) through general nonlinear constraints added to the tensor model loss function. Results for compression of direct numerical simulation data from multiple applications are presented to demonstrate the utility of this approach.
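A formulation consistent with the abstract (notation assumed) augments the usual Tucker least-squares loss with nonlinear constraints on quantities of interest:

```latex
% Goal-oriented Tucker fit: minimize the global error while matching
% quantities of interest q_i of the data, e.g., conserved physical quantities.
\[
  \min_{\mathcal{G},\, U^{(1)},\dots,U^{(N)}}
    \bigl\lVert \mathcal{X} - \mathcal{G} \times_1 U^{(1)} \times_2 \cdots \times_N U^{(N)} \bigr\rVert_F^2
  \quad \text{s.t.} \quad
  q_i\bigl(\mathcal{G} \times_1 U^{(1)} \times_2 \cdots \times_N U^{(N)}\bigr) = q_i(\mathcal{X}),
  \quad i = 1,\dots,m.
\]
```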
[03819] Incremental Nonnegative Tucker Decomposition with Block-coordinate Descent and Recursive Approaches
Format : Online Talk on Zoom
Author(s) :
Rafal Zdunek (Wroclaw University of Science and Technology)
Krzysztof Fonal (Wroclaw University of Science and Technology)
Abstract : We extend the standard model of Nonnegative Tucker Decomposition (NTD) to an incremental or online version, assuming volatility of observed multi-way data along one mode. Two computational approaches are proposed: one is based on a recursive update model, and the other uses the concept of the block-Kaczmarz method. Experimental results on various datasets and streaming data demonstrate the high efficiency of both algorithmic approaches with respect to the baseline NTD methods.
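To illustrate the online setting, a minimal sketch (illustrative only, not the paper's recursive or block-Kaczmarz updates; function name hypothetical, tensorly unfolding conventions assumed): with the core and the non-evolving factors frozen, the factor row for a newly arriving slice solves a small nonnegative least-squares problem.

```python
import numpy as np
import tensorly as tl
from tensorly.tenalg import kronecker
from scipy.optimize import nnls

def new_evolving_row(core, factors, new_slice, mode=0):
    """Row of the evolving-mode factor for a newly observed data slice,
    with the core and all other factors held fixed."""
    # Design matrix: mode-n unfolding of the core times the Kronecker
    # product of the non-evolving factors (tensorly's unfolding identity).
    M = tl.unfold(core, mode) @ kronecker(factors, skip_matrix=mode).T
    row, _ = nnls(M.T, np.ravel(new_slice))
    return row
```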
[05162] Speeding up Nonnegative Low-rank Approximations with Parallelism and Randomization
Format : Online Talk on Zoom
Author(s) :
Koby Hayashi (Georgia Institute of Technology)
Abstract : Many algorithms for constrained matrix and tensor factorization follow an alternating updating scheme, in which the factor matrices are updated one at a time using some update rule. Computationally, many update rules require the same bulk computations; that is, the most expensive parts of the computation needed to perform an update are shared between update rules. Using this observation, we design a Framework for Alternating Updating NMF and NTF (FAUN). FAUN defines the bulk computations needed throughout an AU-type algorithm, allowing us to focus on optimizing these computations for multiple methods. Based on this, we implement a distributed, highly parallel code called PLANC which optimizes computation and communication and allows for the use of various update rules such as Hierarchical Alternating Least Squares, Nonnegative Least Squares, ADMM, and Nesterov-type updates. More recently, we have explored the use of randomization in FAUN. As in the parallel case, we can largely decouple the randomized aspects of the algorithms from the update rules.
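In CP-type NTF, the shared bulk computations are the MTTKRP and the Hadamard product of factor Gram matrices; the sketch below shows two standard update rules consuming the same inputs (an illustration of the idea, not PLANC's distributed implementation; function names are hypothetical):

```python
import numpy as np
import tensorly as tl
from tensorly.tenalg import khatri_rao

def bulk_computations(X, factors, mode):
    """The expensive quantities every alternating update rule needs."""
    mttkrp = tl.unfold(X, mode) @ khatri_rao(factors, skip_matrix=mode)
    gram = np.ones((factors[0].shape[1],) * 2)
    for i, f in enumerate(factors):
        if i != mode:
            gram *= f.T @ f
    return mttkrp, gram

def mu_update(factor, mttkrp, gram, eps=1e-12):
    # multiplicative update: one rule consuming the shared inputs
    return factor * mttkrp / (factor @ gram + eps)

def hals_update(factor, mttkrp, gram, eps=1e-12):
    # hierarchical alternating least squares: column-by-column rule,
    # consuming exactly the same shared inputs
    F = factor.copy()
    for r in range(F.shape[1]):
        F[:, r] = np.maximum(eps, F[:, r] + (mttkrp[:, r] - F @ gram[:, r]) / gram[r, r])
    return F
```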
[03142] A probabilistic nonnegative tensor factorization method for tumor microenvironment analysis
Format : Online Talk on Zoom
Author(s) :
Neriman Tokcan (University of Massachusetts Boston)
Abstract : The tumor microenvironment (TME) comprises the intricate environment surrounding cancer cells, such as stromal, immune, and extracellular elements. We have devised a probabilistic nonnegative tensor factorization approach to model the TME of Hodgkin Lymphoma. We have also incorporated a statistical pipeline to produce robust and biologically relevant factors. Our methodology will enhance our understanding of the TME by identifying cellular heterogeneity and intrinsic pathways driving tumor growth and immune evasion.
Abstract : We study the best low-rank Tucker decomposition of symmetric tensors. The motivating application is decomposing higher-order multivariate moments, which have special structure and are important to various data science problems. We advocate for a straightforward projected gradient descent (PGD) method and the higher-order eigenvalue decomposition (HOEVD) approximation as computation schemes. Most importantly, we develop scalable adaptations of the basic PGD and HOEVD methods to decompose sample moment tensors. With the help of implicit and streaming techniques, we avoid the overhead cost of building and storing the moment tensor. Such reductions make computing the Tucker decomposition feasible for large data instances in high dimensions. Numerical experiments demonstrate the efficiency of the algorithms and the applicability of moment tensor decompositions to real-world datasets. Finally, we study the convergence on the Grassmannian manifold and prove that the update sequence derived by the PGD solver achieves first- and second-order criticality.
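The basic PGD scheme can be sketched as follows (an illustration of the vanilla method with an explicitly formed tensor; the scalable variants in the talk avoid building the moment tensor at all, which this sketch does not; function name hypothetical):

```python
import numpy as np
import tensorly as tl
from tensorly.tenalg import multi_mode_dot

def pgd_symmetric_tucker(M, rank, n_iter=100, step=0.01, seed=0):
    """Basic PGD for symmetric Tucker: maximize ||M x_1 U^T ... x_d U^T||_F^2
    over orthonormal U, retracting with a QR factorization after each step."""
    d, n = M.ndim, M.shape[0]
    rng = np.random.default_rng(seed)
    U, _ = np.linalg.qr(rng.standard_normal((n, rank)))
    for _ in range(n_iter):
        # apply U^T on all modes but the first, then form the gradient
        P = tl.unfold(multi_mode_dot(M, [U.T] * (d - 1), modes=list(range(1, d))), 0)
        grad = 2 * d * P @ (P.T @ U)          # Euclidean gradient (M symmetric)
        U, _ = np.linalg.qr(U + step * grad)  # retraction to orthonormal columns
    core = multi_mode_dot(M, [U.T] * d, modes=list(range(d)))
    return U, core
```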
[02021] A tensor factorization model of multilayer network interdependence
Format : Online Talk on Zoom
Author(s) :
Izabel Aguiar (Stanford University)
Dane Taylor (University at Buffalo the State University of New York)
Johan Ugander (Stanford University)
Abstract : We use the nonnegative Tucker decomposition (NNTuck) with KL-divergence as an expressive factor model for multilayer networks that naturally generalizes existing stochastic block models of multilayer networks. The NNTuck provides a factor-based perspective on layer dependence, enabling linear-algebraic techniques for analyzing dependence in specific layers. We propose a definition of layer dependence based on a likelihood ratio test and evaluate the NNTuck on synthetic and real-world data.
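Schematically, the NNTuck model can be written as follows (notation assumed for illustration):

```latex
% The adjacency tensor of an n-node, L-layer multilayer network is modeled
% by a nonnegative Tucker decomposition
\[
  \mathcal{A} \;\approx\; \mathcal{Y} \times_1 U \times_2 V \times_3 W,
  \qquad \mathcal{Y},\, U,\, V,\, W \ \geq 0,
\]
% with node factors U, V and a layer factor W encoding interlayer
% dependence. The fit uses KL-divergence (equivalently, a Poisson
% likelihood), and layer dependence is assessed by a likelihood ratio test
% between nested choices of the layer rank of W.
```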