Registered Data

[CT058]


  • Session Time & Room
    • CT058 (1/1) : 1C @F310 [Chair: Maggie Cheng]
  • Classification
    • CT058 (1/1) : Harmonic analysis in several variables (42B) / Approximations and expansions (41A) / Numerical methods in optimal control (49M)

[02073] Graph convolutional networks for graph signal processing

  • Session Time & Room : 1C (Aug.21, 13:20-15:00) @F310
  • Type : Contributed Talk
  • Abstract : We propose novel graph convolution models for analyzing graph-structured time series data. Graph convolutional networks (GCNs) are a generalization of convolutional neural networks from regular grid data to irregular graph data. The major building block of a GCN is the filter. Graph filters are designed for graph convolution in the spatial and spectral domains. We also propose novel graph wavelet transform methods to be used jointly with graph convolution filters, which can further improve the results. (An illustrative sketch follows the author list.)
  • Classification : 42BXX, Machine learning, graph signal processing
  • Format : Talk at Waseda University
  • Author(s) :
    • Jia He (Illinois Institute of Technology)
    • Maggie Cheng (Illinois Institute of Technology)
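  • Illustration : The abstract describes graph convolution built from graph filters. As a point of reference only, and not the filters proposed in the talk, the sketch below implements one standard graph convolution layer using the symmetrically normalized adjacency matrix; the toy graph, feature dimensions, and ReLU nonlinearity are assumptions made for the example.

    import numpy as np

    def normalized_adjacency(A):
        """Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}."""
        A_hat = A + np.eye(A.shape[0])
        d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
        return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    def gcn_layer(A_norm, X, W):
        """One graph convolution layer: aggregate neighboring node features, then mix channels."""
        return np.maximum(A_norm @ X @ W, 0.0)  # ReLU nonlinearity

    # toy example: 4-node path graph, 3 input channels, 2 output channels
    A = np.array([[0., 1., 0., 0.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.],
                  [0., 0., 1., 0.]])
    rng = np.random.default_rng(0)
    X = rng.standard_normal((4, 3))   # node feature matrix (one graph signal per channel)
    W = rng.standard_normal((3, 2))   # learnable filter weights
    print(gcn_layer(normalized_adjacency(A), X, W).shape)  # (4, 2)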

[01472] The Arithmetic Mean iterative methods for solving brain glioma growth models

  • Session Time & Room : 1C (Aug.21, 13:20-15:00) @F310
  • Type : Contributed Talk
  • Abstract : A brain tumour is the uncontrolled growth of brain cells, and its most malignant form is known as glioma. In this work, the formulation and implementation of the Arithmetic Mean iterative methods for solving glioma growth models are presented. Numerical results and a convergence analysis are included to verify the performance of the proposed methods. (An illustrative sketch of the classical Arithmetic Mean iteration follows the author list.)
  • Classification : 41A55, 45A05, 45B05, 65D32, 65F10
  • Format : Talk at Waseda University
  • Author(s) :
    • Mohana Sundaram Muthuvalu (Universiti Teknologi PETRONAS)
    • Jumat Sulaiman (Universiti Malaysia Sabah)
    • Elayaraja Aruchunan (Universiti Malaya)
    • Majid Ali (Universiti Sains Malaysia)
    • Ramoshweu Solomon Lebelo (Vaal University of Technology)
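  • Illustration : The classical two-part Arithmetic Mean (AM) iteration solves A x = b for a splitting A = H1 + H2 by solving two shifted systems per sweep and averaging the results. The sketch below is this generic textbook form applied to a toy tridiagonal system; the splitting, the acceleration parameter rho, and the test problem are assumptions and do not reproduce the authors' formulation for the glioma growth models.

    import numpy as np

    def arithmetic_mean_iteration(A, b, H1, H2, rho=1.0, tol=1e-8, max_iter=10000):
        """Two-part AM iteration for A x = b with the splitting A = H1 + H2."""
        n = len(b)
        I = np.eye(n)
        x = np.zeros(n)
        for k in range(max_iter):
            z1 = np.linalg.solve(rho * I + H1, (rho * I - H2) @ x + b)
            z2 = np.linalg.solve(rho * I + H2, (rho * I - H1) @ x + b)
            x_new = 0.5 * (z1 + z2)   # arithmetic mean of the two half-sweeps
            if np.linalg.norm(x_new - x, np.inf) < tol:
                return x_new, k + 1
            x = x_new
        return x, max_iter

    # toy test: 1D Laplacian (tridiagonal), with the diagonal split evenly between the parts
    n = 20
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    H1 = np.eye(n) - np.eye(n, k=-1)   # D/2 + L (lower part)
    H2 = np.eye(n) - np.eye(n, k=1)    # D/2 + U (upper part)
    b = np.ones(n)
    x, iters = arithmetic_mean_iteration(A, b, H1, H2)
    print(iters, np.linalg.norm(A @ x - b, np.inf))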

[00542] Approximations of quasi-linear elliptic optimal control problems under variational and virtual discretizations

  • Session Time & Room : 1C (Aug.21, 13:20-15:00) @F310
  • Type : Contributed Talk
  • Abstract : This talk will discuss virtual and variational discretizations for the numerical approximation of optimal control problems governed by a quasi-linear elliptic equation with distributed control. A conforming virtual element method is employed for the discretization of the state and co-state equations that appear in the model problem. The numerical approximation of the control variable is based on two different discretizations: variational and virtual. In the variational approach, the discrete space associated with the control is not discretized explicitly, whereas, for the virtual discretization, the discrete spaces are taken as virtual element spaces that include linear polynomials and non-polynomial functions over the polygonal mesh, and a discretize-then-optimize approach is used for the computation of the control. With the help of certain projection operators, optimal a priori error estimates are established for the control, state, and co-state variables in suitable norms. Numerical experiments are presented on general polygonal meshes to illustrate the performance of the proposed scheme and verify the theoretical convergence rate. (A representative model problem is sketched after the author list.)
  • Classification : 49M29, 49M41, 65K15, 90C46
  • Format : Talk at Waseda University
  • Author(s) :
    • Anil Kumar (BITS Pilani KK Birla Goa Campus, Goa (India))
    • Jai Tushar (BITS Pilani KK Birla Goa Campus, Goa (India))
    • Sarvesh Kumar (Indian Institute of Space Science and Technology, Thiruvananthapuram)
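  • Illustration : A representative problem of the class described in the abstract is the distributed control problem below; the precise nonlinearity, box constraints, and regularization weight are assumptions and are not taken from the talk.

    \begin{align*}
      \min_{u_a \le u \le u_b}\; J(y,u) &= \tfrac{1}{2}\,\|y - y_d\|_{L^2(\Omega)}^2
        + \tfrac{\lambda}{2}\,\|u\|_{L^2(\Omega)}^2, \\
      \text{subject to}\quad -\nabla\cdot\bigl(\mu(y)\,\nabla y\bigr) &= f + u
        \ \text{ in } \Omega, \qquad y = 0 \ \text{ on } \partial\Omega,
    \end{align*}

    with the state and co-state equations discretized by virtual elements and the control u handled by either a variational or a virtual discretization.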

[00986] Approximation results for Gradient Descent trained Shallow Neural Networks

  • Session Time & Room : 1C (Aug.21, 13:20-15:00) @F310
  • Type : Contributed Talk
  • Abstract : Neural networks show strong performance for function approximation, but provable guarantees typically rely on hand-picked weights and are therefore not fully practical. The aim of using a small number of weights in approximation results is at odds with the over-parametrization by very wide, or even infinitely wide, networks in contemporary optimization results. The talk reconciles approximation and optimization results and provides approximation bounds that are guaranteed for gradient-descent-trained neural networks. (An illustrative training sketch follows the author list.)
  • Classification : 41A46, 65K10, 68T07
  • Author(s) :
    • Gerrit Welper (University of Central Florida)
    • Russell Gentile (n/a)
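  • Illustration : As a minimal illustration of the setting only, the sketch below trains a shallow (one-hidden-layer) ReLU network with plain full-batch gradient descent to approximate a one-dimensional function. The width, learning rate, target function, and initialization are assumptions, and the sketch does not reproduce the approximation bounds discussed in the talk.

    import numpy as np

    rng = np.random.default_rng(0)
    n, width, lr, steps = 200, 256, 1e-2, 3000

    x = np.linspace(0.0, 1.0, n).reshape(-1, 1)
    y = np.sin(2 * np.pi * x)                      # target function to approximate

    W1 = rng.standard_normal((1, width)) / np.sqrt(width)
    b1 = np.zeros(width)
    W2 = rng.standard_normal((width, 1)) / np.sqrt(width)

    for _ in range(steps):
        h = np.maximum(x @ W1 + b1, 0.0)           # hidden layer (ReLU)
        err = h @ W2 - y                           # residual of the network output
        gW2 = h.T @ err / n                        # gradients of the mean squared error
        gh = (err @ W2.T) * (h > 0)
        W2 -= lr * gW2
        W1 -= lr * (x.T @ gh / n)
        b1 -= lr * gh.sum(axis=0) / n

    mse = np.mean((np.maximum(x @ W1 + b1, 0.0) @ W2 - y) ** 2)
    print("final mean squared error:", float(mse))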

[02401] A low-degree normalized B-spline-like representation for Hermite osculatory interpolation problems

  • Session Time & Room : 1C (Aug.21, 13:20-15:00) @F310
  • Type : Contributed Talk
  • Abstract : This talk deals with Hermite osculatory interpolating splines. For a partition of a real interval endowed with a refinement consisting of dividing each subinterval into two smaller subintervals, we consider a space of smooth splines with super-smoothness at the vertices of the initial partition and of the lowest possible degree. A normalized B-spline-like representation for the considered spline space is provided. In addition, several quasi-interpolation operators based on blossoming and control polynomials are developed. Numerical tests are presented and compared with recent works to illustrate the performance of the proposed approach. (An illustrative Hermite interpolation sketch follows the author list.)
  • Classification : 41A15
  • Author(s) :
    • Mohamed BOUSHABI (Abdelmalek Essaadi University, LaSAD, ENS, 93030 Tetouan, Morocco)
    • Salah Eddargani (University of Rome Tor Vergata)
    • María José Ibáñez (University of Granada)
    • Abdellah Lamnii (Abdelmalek Essaadi University, LaSAD, ENS, 93030 Tetouan, Morocco)
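  • Illustration : A minimal example of the underlying osculatory (Hermite) interpolation problem, matching prescribed values and first derivatives at the vertices of a partition. The sketch uses SciPy's standard C^1 cubic Hermite spline; the knots and test function are assumptions, and the talk's low-degree B-spline-like basis with super-smoothness on the refined partition is not implemented here.

    import numpy as np
    from scipy.interpolate import CubicHermiteSpline

    x = np.linspace(0.0, 1.0, 6)                 # vertices of the initial partition
    f = np.sin(2 * np.pi * x)                    # prescribed function values
    df = 2 * np.pi * np.cos(2 * np.pi * x)       # prescribed first derivatives

    s = CubicHermiteSpline(x, f, df)             # interpolates both f and f' at the knots

    t = np.linspace(0.0, 1.0, 101)
    print("max error at test points:", np.max(np.abs(s(t) - np.sin(2 * np.pi * t))))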