Registered Data

[02178] Efficient computational methods for data matrices: exploiting sparsity and structure

  • Session Time & Room :
    • 02178 (1/2) : 5B (Aug.25, 10:40-12:20) @E508
    • 02178 (2/2) : 5C (Aug.25, 13:20-15:00) @E508
  • Type : Proposal of Minisymposium
  • Abstract : This mini-symposium focuses on new applications from data analysis where sparse and structured matrices arise, as opposed to well-studied applications in scientific and engineering simulation. In addition to these new applications, we shall highlight algorithmic advances for these objects with a focus on techniques like randomisation, parallelisation, and use of modern computer hardware. The overall goal of this gathering is to help researchers see connections among these newer data-driven application areas and identify new computational building blocks that might benefit multiple domains.
  • Organizer(s) : Richard Vuduc, Srinivas Eswar
  • Sponsor : This session is sponsored by the SIAM Activity Group on Supercomputing.
  • Classification : 65F50, 65F55, 68W10, 68W20
  • Minisymposium Program :
    • 02178 (1/2) : 5B @E508 [Chair: Richard Vuduc]
      • [04733] Structured Matrices in Unsupervised Cross-Validation
        • Format : Talk at Waseda University
        • Author(s) :
          • Srinivas Eswar (Argonne National Laboratory)
        • Abstract : We consider matrix low-rank approximation algorithms in the setting of unsupervised cross-validation. Unlike most structured matrix computations, a cross-validation user has more control over sparsity, with different choices implying different performance trade-offs. For example, controlling the rank of these matrices results in a trade-off between efficiency and cross-validation accuracy. This talk surveys these choices and trade-offs. This presentation will also introduce the minisymposium.
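        • Sketch : A minimal sketch of the rank-versus-accuracy trade-off in entry-holdout ("unsupervised") cross-validation. The synthetic data, the holdout scheme, and the simple zero-fill fitting below are illustrative assumptions, not the speaker's method.

          ```python
          import numpy as np

          rng = np.random.default_rng(0)

          # Synthetic low-rank-plus-noise data matrix (a hypothetical stand-in).
          m, n, true_rank = 100, 80, 5
          A = rng.standard_normal((m, true_rank)) @ rng.standard_normal((true_rank, n))
          A += 0.1 * rng.standard_normal((m, n))

          # Hold out a random 10% of the entries for cross-validation.
          mask = rng.random((m, n)) < 0.9            # True = training entry
          A_train = np.where(mask, A, 0.0)           # crude zero-fill of held-out entries

          # Larger rank fits the training entries better but costs more work,
          # and past the true rank it starts fitting noise: the CV trade-off.
          U, s, Vt = np.linalg.svd(A_train, full_matrices=False)
          for k in (2, 5, 10, 20):
              Ak = (U[:, :k] * s[:k]) @ Vt[:k]       # rank-k approximation
              err = np.linalg.norm((A - Ak)[~mask]) / np.linalg.norm(A[~mask])
              print(f"rank {k:2d}: held-out relative error = {err:.3f}")
          ```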
      • [04283] Randomized Algorithms for Rank Structured Matrices
        • Format : Talk at Waseda University
        • Author(s) :
          • Per-Gunnar Martinsson (University of Texas at Austin)
        • Abstract : The talk describes randomized algorithms for computing a data-sparse representation of a rank-structured matrix (HSS, HODLR, H-matrix, ...). The algorithms are black box in that they interact with the target matrix only through its action on vectors, making them ideal for tasks such as forming Schur complements or matrix-matrix multiplication. When the target matrix can be applied in O(N) operations, the compression as a whole typically has linear complexity as well.
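        • Sketch : A minimal sketch of the black-box access model: the randomized rangefinder below touches the matrix only through its action on blocks of vectors. Full HSS/HODLR compression applies this idea blockwise with structured random test matrices; the example matrix and parameters are assumptions for illustration.

          ```python
          import numpy as np

          def randomized_range(matvec, n, k, p=10, rng=None):
              """Capture the range of an n x n operator from k+p matvecs only:
              Y = A @ Omega for a Gaussian test matrix Omega, then orthonormalize."""
              rng = rng or np.random.default_rng()
              Omega = rng.standard_normal((n, k + p))
              Y = matvec(Omega)                      # the only access to A
              Q, _ = np.linalg.qr(Y)
              return Q

          rng = np.random.default_rng(1)
          n, k = 200, 15
          A = rng.standard_normal((n, k)) @ rng.standard_normal((k, n))  # rank k
          Q = randomized_range(lambda X: A @ X, n, k, rng=rng)
          B = Q.T @ A                                # black-box: A's transpose action on Q
          print(np.linalg.norm(A - Q @ B) / np.linalg.norm(A))  # ~ machine precision
          ```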
      • [03473] Scalable Data Analytics using Sparse Matrices
        • Format : Talk at Waseda University
        • Author(s) :
          • Giulia Guidi (Cornell University)
        • Abstract : Massively parallel systems are vital for processing large data and play a critical role in data analysis. However, programming high-performance computing systems poses productivity and scalability challenges. We focus here on advances in genome sequencing and the associated flood of genomic data, which present computational challenges and require new approaches. This work demonstrates the feasibility of writing parallel code for irregular genomic computation through a sparse matrix abstraction for de novo long-read genome assembly.
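        • Sketch : A minimal sketch of the sparse matrix abstraction for overlap detection: with a reads-by-k-mers sparse matrix A, the SpGEMM product A @ A.T counts shared k-mers between read pairs, the candidate overlaps an assembler starts from. The toy reads and k = 3 are illustrative assumptions.

          ```python
          import numpy as np
          from scipy import sparse

          reads = ["ACGTAC", "GTACGG", "TTTTAA"]     # toy long reads
          k = 3
          kmers = sorted({r[i:i + k] for r in reads for i in range(len(r) - k + 1)})
          col = {km: j for j, km in enumerate(kmers)}

          # A[i, j] = number of times read i contains k-mer j.
          rows, cols = [], []
          for i, r in enumerate(reads):
              for j in range(len(r) - k + 1):
                  rows.append(i)
                  cols.append(col[r[j:j + k]])
          A = sparse.csr_matrix((np.ones(len(rows)), (rows, cols)),
                                shape=(len(reads), len(kmers)))

          # SpGEMM: entry (i, j) counts k-mers shared by reads i and j.
          print((A @ A.T).toarray())
          ```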
      • [04632] Structure of Fisher information matrices in deep learning
        • Format : Talk at Waseda University
        • Author(s) :
          • Rio Yokota (Tokyo Institute of Technology)
        • Abstract : Fisher information matrices have many applications in deep learning, such as continual learning, generalization metrics, hyperparameter prediction, model sparsification, and optimization. The Fisher information matrix has a Kronecker product structure, which can be exploited to accelerate its computation. This talk will cover the various applications of Fisher information matrices, their relation to Hessian and Gauss-Newton matrices, and the theory behind their approximation through Kronecker factorization.
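        • Sketch : A minimal sketch of why the Kronecker structure pays off: if a Fisher block factors as F = A kron G, then F^(-1) vec(V) = vec(G^(-1) V A^(-1)), so only the two small factors are ever inverted. The sizes and random SPD factors below are illustrative assumptions.

          ```python
          import numpy as np

          rng = np.random.default_rng(2)

          def spd(n):                                # random symmetric positive definite
              M = rng.standard_normal((n, n))
              return M @ M.T + n * np.eye(n)

          A, G = spd(4), spd(3)                      # small K-FAC-style factors
          F = np.kron(A, G)                          # 12 x 12; never needed in practice

          v = rng.standard_normal(12)
          V = v.reshape(4, 3).T                      # unvec (column-major convention)

          # (A kron G)^(-1) vec(V) = vec(G^(-1) V A^(-1)): two small solves.
          x_kfac = (np.linalg.solve(G, V) @ np.linalg.inv(A)).T.reshape(-1)
          x_direct = np.linalg.solve(F, v)           # reference: one big solve
          print(np.allclose(x_kfac, x_direct))       # True
          ```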
    • 02178 (2/2) : 5C @E508 [Chair: Srinivas Eswar]
      • [04795] Exploiting Supernodal Structures in Sparse All-Pairs Shortest Path Computation
        • Format : Talk at Waseda University
        • Author(s) :
          • Piyush K Sao (Oak Ridge National Laboratory)
          • Prasun Gera (Cerebras)
          • Hao Lu (Oak Ridge National Laboratory)
          • Ramki Kannan (Oak Ridge National Laboratory)
          • Richard W Vuduc (Georgia Institute of Technology)
          • Thomas Potok (Oak Ridge National Laboratory)
        • Abstract : We introduce a novel approach for efficiently solving all-pairs shortest path (APSP) problems on sparse graphs. We leverage techniques from sparse Cholesky factorization, including fill-in-reducing ordering, supernodal traversal, and elimination-tree parallelism, for APSP computation. Our method uses semi-ring notation to express graph algorithms in linear algebraic form and employs BLAS level-III semi-ring operations. Our parallel prototype implementation significantly outperforms a non-sparsity-exploiting Floyd-Warshall algorithm and competes with Dijkstra's algorithm for specific sparse graph classes.
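        • Sketch : A minimal dense sketch of the semi-ring formulation: replacing (+, x) with (min, +) turns matrix multiplication into a path-relaxation step, and repeated squaring yields APSP. The talk's contribution, exploiting sparsity via supernodes and elimination trees, is not shown; the toy graph is an assumption.

          ```python
          import numpy as np

          INF = np.inf

          def minplus_matmul(A, B):
              """(min, +) semi-ring product: C[i, j] = min_k A[i, k] + B[k, j]."""
              return np.min(A[:, :, None] + B[None, :, :], axis=1)

          def apsp(D):
              """APSP by repeated semi-ring squaring (dense; no sparsity used)."""
              steps = 1
              while steps < len(D) - 1:
                  D = minplus_matmul(D, D)           # paths of up to 2*steps edges
                  steps *= 2
              return D

          # Toy weighted digraph: INF = no edge, zeros on the diagonal.
          D = np.array([[0.0, 3, INF, 7],
                        [8, 0.0, 2, INF],
                        [5, INF, 0.0, 1],
                        [2, INF, INF, 0.0]])
          print(apsp(D))
          ```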
      • [04585] Optimizations of H-matrix-vector Multiplication for Many-core Processors
        • Format : Talk at Waseda University
        • Author(s) :
          • Tetsuya Hoshino (Nagoya University)
          • Akihiro Ida (Japan Agency for Marine-Earth Science and Technology)
          • Toshihiro Hanawa (The University of Tokyo)
        • Abstract : Hierarchical matrices (H-matrices) can robustly approximate the dense matrices that appear in the boundary element method (BEM). To accelerate the solving of linear systems in the BEM, we must speed up hierarchical matrix–vector multiplication (HiMV). This presentation discusses optimization methodologies of HiMV for modern multi/many-core CPUs: an H-matrix storage method for efficient memory access, a method that avoids write contentions, an inter-thread load-balancing method, and blocking and sub-matrix sorting methods for cache efficiency.
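        • Sketch : A minimal one-level sketch of an H-matrix-vector product: dense diagonal blocks plus off-diagonal blocks stored as low-rank factors U V^T, applied in two cheap steps. The storage layout, write-contention avoidance, and load balancing from the talk are not modeled; sizes and ranks below are assumptions.

          ```python
          import numpy as np

          rng = np.random.default_rng(3)

          n, h, k = 200, 100, 6                      # matrix size, block size, rank
          D1, D2 = rng.standard_normal((h, h)), rng.standard_normal((h, h))
          U12, V12 = rng.standard_normal((h, k)), rng.standard_normal((h, k))
          U21, V21 = rng.standard_normal((h, k)), rng.standard_normal((h, k))

          def hmatvec(x):
              """One-level HiMV: O(h^2 + h*k) per block instead of O(n^2) total."""
              x1, x2 = x[:h], x[h:]
              y1 = D1 @ x1 + U12 @ (V12.T @ x2)      # low-rank block in two steps
              y2 = U21 @ (V21.T @ x1) + D2 @ x2
              return np.concatenate([y1, y2])

          # Verify against the explicitly assembled dense matrix.
          A = np.block([[D1, U12 @ V12.T], [U21 @ V21.T, D2]])
          x = rng.standard_normal(n)
          print(np.allclose(hmatvec(x), A @ x))      # True
          ```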
      • [04849] Distributed Graph Neural Network for Billion-Scale Graphs
        • Format : Online Talk on Zoom
        • Author(s) :
          • Israt Nisa (AWS AI)
        • Abstract : GNN models are widely used in recommendation, fraud detection, and search. Real-world graphs from social media and co-purchase networks are large and heterogeneous, posing challenges for scaling training and inference. We demonstrate GraphStorm, a GNN framework that utilizes a distributed hybrid CPU/GPU architecture for efficient graph partitioning, asynchronous computation and data loading, data locality, and hardware utilization (CPU, GPU, network, PCIe).
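        • Sketch : A minimal single-process sketch of the kernel such frameworks distribute: one graph-convolution layer H' = ReLU(A_hat @ H @ W), i.e., an SpMM against the normalized adjacency followed by a dense GEMM. The random graph, sizes, and normalization are illustrative assumptions, not GraphStorm's API.

          ```python
          import numpy as np
          from scipy import sparse

          rng = np.random.default_rng(4)

          n_nodes, f_in, f_out = 1000, 32, 16
          A = sparse.random(n_nodes, n_nodes, density=0.005, format="csr",
                            random_state=4)
          A = A + A.T + sparse.eye(n_nodes)          # symmetrize, add self-loops
          deg = np.asarray(A.sum(axis=1)).ravel()
          D_inv_sqrt = sparse.diags(1.0 / np.sqrt(deg))
          A_hat = D_inv_sqrt @ A @ D_inv_sqrt        # normalized adjacency

          H = rng.standard_normal((n_nodes, f_in))   # node features
          W = rng.standard_normal((f_in, f_out))     # layer weights
          H_next = np.maximum(A_hat @ (H @ W), 0.0)  # SpMM + GEMM + ReLU
          print(H_next.shape)                        # (1000, 16)
          ```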
      • [05081] A Framework to Exploit Data Sparsity in Tile Low-Rank Cholesky Factorization
        • Format : Online Talk on Zoom
        • Author(s) :
          • Rabab Mohammad Alomairy (King Abdullah University of Science and Technology)
          • Qinglei Cao (University of Tennessee)
          • Yu Pei (University of Tennessee)
          • George Bosilca (University of Tennessee)
          • Hatem Ltaief (King Abdullah University of Science and Technology)
          • David Keyes (King Abdullah University of Science and Technology)
          • Jack Dongarra (University of Tennessee)
        • Abstract : We accelerate the computation of 3D unstructured mesh deformation based on radial basis function interpolations by exploiting the rank-structured property of the matrix. As we increase the accuracy threshold to satisfy the application requirements, the original dense operator gets further compressed and may become sparse enough to switch from the dense solver to a sparse direct solver. This talk highlights how PaRSEC redistributes the matrix, mitigates data movement overheads, and copes with load imbalance.
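        • Sketch : A minimal sketch of the tile low-rank idea: each tile is replaced by truncated-SVD factors when that meets the accuracy threshold, and kept dense otherwise. The 1D exponential kernel below (whose far-off-diagonal tiles are exactly rank 1) and the tile size are illustrative assumptions; PaRSEC's runtime aspects are not modeled.

          ```python
          import numpy as np

          rng = np.random.default_rng(5)

          def compress_tile(T, tol):
              """Return (U, V) with T ~ U @ V.T if the truncated SVD meets the
              relative Frobenius tolerance tol and saves memory; else return T."""
              U, s, Vt = np.linalg.svd(T, full_matrices=False)
              tail = np.concatenate([np.sqrt(np.cumsum(s[::-1] ** 2))[::-1], [0.0]])
              k = int(np.argmax(tail <= tol * tail[0]))  # smallest admissible rank
              if k < min(T.shape) // 2:
                  return U[:, :k] * s[:k], Vt[:k].T
              return T

          # Covariance-like matrix from a 1D exponential kernel on sorted points.
          pts = np.sort(rng.random(256))
          K = np.exp(-np.abs(pts[:, None] - pts[None, :]))
          nb = 64                                    # tile size
          for i, j in [(0, 0), (0, 3)]:              # diagonal vs far-off-diagonal tile
              out = compress_tile(K[i*nb:(i+1)*nb, j*nb:(j+1)*nb], tol=1e-8)
              kind = f"low-rank, k={out[0].shape[1]}" if isinstance(out, tuple) else "dense"
              print(f"tile ({i},{j}): {kind}")
          ```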