Registered Data

[CT122]


  • Session Time & Room
    • CT122 (1/1) : 5D @E711 [Chair: Jian-Zhou Zhang]
  • Classification
    • CT122 (1/1) : Artificial intelligence (68T)

[00020] Image Functions Approximated by CNN

  • Session Time & Room : 5D (Aug.25, 15:30-17:10) @E711
  • Type : Contributed Talk
  • Abstract : Convolutional Neural Networks (CNNs) have been widely used for image understanding. However, it remains an open problem to prove that image functions can be approximated by CNNs. In this work, it is proved that an image function can be approximated by a CNN, on the basis of the axiom of choice in set theory and an uncountable number of training data, from the viewpoint of image decomposition. (An illustrative code sketch follows this entry.)
  • Classification : Artificial neural networks and deep learning
  • Format : Talk at Waseda University
  • Author(s) :
    • Jian-Zhou Zhang (Sichuan University)
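
A purely illustrative, empirical counterpart to the existence result above (not the paper's set-theoretic construction): the sketch below fits a small convolutional network to a simple image function, here the mean pixel intensity of synthetic images. The architecture, data, and hyperparameters are arbitrary assumptions.

    # Illustrative sketch only: empirically fit a small CNN to a simple image
    # function (mean pixel intensity). This is NOT the paper's set-theoretic
    # construction; architecture and hyperparameters are arbitrary assumptions.
    import torch
    import torch.nn as nn

    class TinyCNN(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.Conv2d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(8, 1),
            )

        def forward(self, x):
            return self.net(x).squeeze(-1)

    model = TinyCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(500):
        imgs = torch.rand(64, 1, 16, 16)        # synthetic "images"
        target = imgs.mean(dim=(1, 2, 3))       # the image function to approximate
        loss = nn.functional.mse_loss(model(imgs), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"final MSE: {loss.item():.4f}")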

[00141] Multiscale Perturbed Gradient Descent: Chaotic Regularization and Heavy-Tailed Limits

  • Session Time & Room : 5D (Aug.25, 15:30-17:10) @E711
  • Type : Contributed Talk
  • Abstract : Recent studies have shown that gradient descent (GD) can achieve improved generalization when its dynamics exhibit chaotic behavior. However, to obtain the desired effect, the step-size must be chosen sufficiently large, a choice that is problem-dependent and can be difficult in practice. In this talk, we introduce multiscale perturbed GD (MPGD), a novel optimization framework in which the GD recursion is augmented with chaotic perturbations that evolve via an independent dynamical system. We analyze MPGD from three angles: (i) Building on recent advances in rough path theory, we show that, under appropriate assumptions, as the step-size decreases, the MPGD recursion converges weakly to a stochastic differential equation (SDE) driven by a heavy-tailed Lévy-stable process. (ii) By making connections to recently developed generalization bounds for heavy-tailed processes, we derive a generalization bound for the limiting SDE and relate the worst-case generalization error over the trajectories of the process to the parameters of MPGD. (iii) We analyze the implicit regularization effect brought by the dynamical regularization and show that, in the weak-perturbation regime, MPGD introduces terms that penalize the Hessian of the loss function. Empirical results are provided to demonstrate the advantages of MPGD. (An illustrative code sketch follows this entry.)
  • Classification : 68T07, Machine learning, optimization, stochastic differential equations
  • Format : Talk at Waseda University
  • Author(s) :
    • Soon Hoe Lim (Nordita, KTH Royal Institute of Technology and Stockholm University)
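
A minimal, generic sketch of the MPGD recursion described above: plain gradient descent augmented with a perturbation generated by an independently evolving chaotic system, here a logistic map applied to a quadratic loss. The choice of chaotic system, the coupling, and the scaling exponent are illustrative assumptions, not the authors' exact specification.

    # Illustrative MPGD-style sketch: GD plus a perturbation from an independent
    # chaotic system (a logistic map). The choice of system, scaling exponent,
    # and loss are assumptions for illustration, not the authors' exact scheme.
    import numpy as np

    def grad(theta):
        # gradient of a simple quadratic loss f(theta) = 0.5 * ||theta||^2
        return theta

    rng = np.random.default_rng(0)
    theta = rng.normal(size=2)
    z = rng.uniform(size=2)      # state of the independent chaotic system
    eta = 0.05                   # step-size
    alpha = 0.75                 # perturbation scaling exponent (assumed)

    for k in range(1000):
        z = 4.0 * z * (1.0 - z)                  # logistic map (chaotic at r = 4)
        perturbation = z - 0.5                   # centre the chaotic state
        theta = theta - eta * grad(theta) + eta**alpha * perturbation

    print(theta)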

[00541] Reinforcement Learning-based Data Collection and Energy Replenishment in SDIoT

  • Session Time & Room : 5D (Aug.25, 15:30-17:10) @E711
  • Type : Contributed Talk
  • Abstract : In the software-defined Internet of Things (SDIoT) with wireless rechargeable sensor networks, a novel reinforcement learning-based method is proposed for collecting data and scheduling mobile sinks to recharge the sensor nodes. The suggested technique extends the network lifetime while ensuring the QoS of the SDIoT. Finally, the results show that the suggested approach significantly increases energy efficiency and extends the network's lifetime. (An illustrative code sketch follows this entry.)
  • Classification : 68T05, 68T40, 68T20, 68Q06, Internet of Things; Machine learning; Reinforcement learning
  • Format : Talk at Waseda University
  • Author(s) :
    • Vishnuvarthan Rajagopal (Research Scholar, Department of Electronics and Communication Engineering, Anna University Regional Campus, Coimbatore)
    • Bhanumathi V (Assistant Professor, Department of Electronics and Communication Engineering, Anna University Regional Campus, Coimbatore)
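
A generic illustration of the reinforcement-learning component described above (not the authors' SDIoT model): tabular Q-learning that chooses which sensor node a mobile sink should visit next, rewarding visits to low-energy nodes. The toy environment, reward, and parameters are assumptions.

    # Illustrative tabular Q-learning sketch for scheduling a mobile sink to
    # recharge sensor nodes. The environment, reward, and parameters are toy
    # assumptions, not the authors' SDIoT model.
    import numpy as np

    rng = np.random.default_rng(1)
    n_nodes = 5
    Q = np.zeros((2 ** n_nodes, n_nodes))    # state: bitmask of low-energy nodes
    alpha, gamma, eps = 0.1, 0.9, 0.1

    def step(state, action):
        low = [(state >> i) & 1 for i in range(n_nodes)]
        reward = 1.0 if low[action] else -0.1    # reward recharging a low node
        low[action] = 0                          # visiting a node recharges it
        for i in range(n_nodes):                 # other nodes randomly drain
            if i != action and rng.random() < 0.2:
                low[i] = 1
        next_state = sum(bit << i for i, bit in enumerate(low))
        return next_state, reward

    state = 0
    for t in range(20000):
        action = rng.integers(n_nodes) if rng.random() < eps else int(Q[state].argmax())
        next_state, reward = step(state, action)
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state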

[00535] Reinforcement Learning with Variable Exploration

  • Session Time & Room : 5D (Aug.25, 15:30-17:10) @E711
  • Type : Contributed Talk
  • Abstract : Reinforcement learning is a powerful machine learning technique, but it can be unreliable when multiple agents learn simultaneously. Our work applies Q-learning to the Iterated Prisoner's Dilemma, an ideal setting in which to study AI cooperation. We investigate how different frameworks for variable exploration rates affect performance by escaping local optima. One result finds that shorter learning periods produce more cooperation, potentially indicating incentive alignment. This extends previous studies by carefully considering the ways the exploration rate might vary over time. (An illustrative code sketch follows this entry.)
  • Classification : 68T05, 91A26, 37N40, 91A05
  • Format : Talk at Waseda University
  • Author(s) :
    • Brian Mintz (Dartmouth College)
    • Feng Fu (Dartmouth College)
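
A minimal sketch of the setting described above: two independent Q-learners playing the Iterated Prisoner's Dilemma with an exploration rate that decays over time. The payoff values, decay schedule, and learning parameters are illustrative assumptions, not the authors' exact frameworks.

    # Illustrative sketch: two independent Q-learners in the Iterated Prisoner's
    # Dilemma with a time-varying (decaying) exploration rate. Payoffs, schedule,
    # and hyperparameters are assumptions for illustration only.
    import numpy as np

    rng = np.random.default_rng(0)
    # payoff[my_action][opp_action]; 0 = cooperate, 1 = defect
    payoff = np.array([[3.0, 0.0],
                       [5.0, 1.0]])
    alpha, gamma = 0.1, 0.95
    Q = [np.zeros((2, 2)), np.zeros((2, 2))]   # state = opponent's previous action
    state = [0, 0]

    def epsilon(t, horizon=50000):
        # one possible schedule: linearly decaying exploration rate
        return max(0.01, 1.0 - t / horizon)

    for t in range(50000):
        eps = epsilon(t)
        actions = [rng.integers(2) if rng.random() < eps else int(Q[i][state[i]].argmax())
                   for i in range(2)]
        rewards = [payoff[actions[0], actions[1]], payoff[actions[1], actions[0]]]
        next_state = [actions[1], actions[0]]  # each agent observes the other's move
        for i in range(2):
            Q[i][state[i], actions[i]] += alpha * (
                rewards[i] + gamma * Q[i][next_state[i]].max() - Q[i][state[i], actions[i]])
        state = next_state

    print("agent 0 Q-table:", Q[0], sep="\n")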