Registered Data

[01145] High dimensional recent computational approaches in finance and control

  • Session Time & Room :
    • 01145 (1/2) : 5B (Aug.25, 10:40-12:20) @D505
    • 01145 (2/2) : 5C (Aug.25, 13:20-15:00) @D505
  • Type : Proposal of Minisymposium
  • Abstract : Most high-dimensional problems in quantitative finance face computational difficulties. However, recent advances in the training of neural networks provide an excellent opportunity to reconsider these models. Indeed, the influential papers of E, Han, and Jentzen combine these optimization techniques with Monte Carlo type regression for the off-line construction of optimal feedback actions. This approach has proven to be highly effective in numerous closely related studies, reporting impressive numerical results in problems with a large number of states. All proposed speakers have been contacted and agreed to participate in the session should it be approved. The list of speakers is diverse in many ways, including both senior and junior members of the community, and it represents several different scientific approaches.
  • Organizer(s) : A. Max Reppen, H. Mete Soner
  • Sponsor : This session is sponsored by the SIAM Activity Group on Financial Mathematics and Engineering.
  • Classification : 91G60, 49N35, 65C05
  • Minisymposium Program :
    • 01145 (1/2) : 5B @D505 [Chair: H. Mete Soner]
      • [03950] Learning to Simulate Tail-Risk Scenarios
        • Format : Talk at Waseda University
        • Author(s) :
          • Rama Cont (University of Oxford)
          • Mihai Cucuringu (University of Oxford)
          • Renyuan Xu (University of Southern California)
          • Chao Zhang (University of Oxford)
        • Abstract : The estimation of loss distributions for dynamic portfolios requires the simulation of scenarios representing realistic joint dynamics of their components. Scalability to large or heterogeneous portfolios involving multiple asset classes is particularly challenging, as is the accurate representation of tail risk. We propose a novel data-driven approach for the simulation of realistic multi-asset scenarios with a particular focus on the accurate characterization of tail risk for a given class of static and dynamic portfolios selected by the user. By exploiting the joint elicitability property of Value-at-Risk (VaR) and Expected Shortfall (ES), we design a Generative Adversarial Network (GAN) architecture capable of learning to simulate price scenarios that preserve tail risk features for these benchmark trading strategies, leading to consistent estimators for their VaR and ES. From a theoretical perspective, we show that different choices of score functions lead to different optimization landscapes and different complexities in GAN training. In addition, we prove that the generator in our GAN architecture enjoys a universal approximation property under the criteria of tail risk measures. Moreover, we prove that the bi-level optimization formulation between the generator and the discriminator is equivalent to a max-min game, leading to a more effective and practical formulation for training. From an empirical perspective, we demonstrate the accuracy and scalability of our method via extensive simulation experiments using synthetic and market data. Our results show that, in contrast to other data-driven scenario generators, our proposed scenario simulation method correctly captures tail risk for both static and dynamic portfolios in the input datasets.
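        • Sketch : As a hedged illustration of the joint elicitability of (VaR, ES) mentioned above, the snippet below evaluates one strictly consistent scoring function for the pair, the so-called FZ0 loss of Fissler and Ziegel type. This particular score and the toy check are assumptions for illustration only and need not coincide with the score functions used in the talk.
```python
import numpy as np

def fz0_score(y, var, es, alpha=0.05):
    """FZ0 joint scoring function for (VaR_alpha, ES_alpha), a common strictly
    consistent score from the joint-elicitability literature. Conventions here:
    y are returns, var is the alpha-quantile, es is the tail expectation, and
    es < 0 is required for the log term."""
    y, var, es = map(np.asarray, (y, var, es))
    hit = (y <= var).astype(float)
    return -hit * (var - y) / (alpha * es) + var / es + np.log(-es) - 1.0

# Toy check: for standard normal returns, the average score is smaller at the
# true (VaR, ES) pair than at a misspecified one.
rng = np.random.default_rng(0)
y = rng.standard_normal(200_000)
alpha = 0.05
true_var = -1.6449                                              # alpha-quantile of N(0,1)
true_es = -np.exp(-true_var**2 / 2) / (np.sqrt(2 * np.pi) * alpha)   # approx. -2.06
print(fz0_score(y, true_var, true_es, alpha).mean())
print(fz0_score(y, 1.2 * true_var, 1.2 * true_es, alpha).mean())     # larger on average
```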
      • [04737] Learning mappings on Wasserstein space with mean-field neural networks
        • Format : Talk at Waseda University
        • Author(s) :
          • Huyen Pham (Université Paris Cité)
          • Xavier Warin (EDF)
        • Abstract : We study the machine learning task for models with operators mapping between the Wasserstein space of probability measures and a space of functions. Two classes of neural networks, based on bin density and on cylindrical approximation, are proposed to learn these so-called mean-field functions, and are theoretically supported by universal approximation theorems. We perform numerical experiments for training these two mean-field neural networks, and show their accuracy in the generalization error with various test distributions.
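        • Sketch : A toy illustration of the bin-density idea described above: a probability measure on a bounded interval is represented by its bin masses on a fixed grid and fed to a small feed-forward network that learns a mean-field function (here, the variance functional). The architecture and target are illustrative assumptions, not the authors' implementation.
```python
import torch
import torch.nn as nn

class BinDensityNet(nn.Module):
    """Approximate a mean-field function F(mu) by feeding the histogram
    (bin masses) of mu on a fixed grid into a small MLP. Purely illustrative."""
    def __init__(self, n_bins: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_bins, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, bin_masses: torch.Tensor) -> torch.Tensor:
        return self.net(bin_masses)

def histogram(samples: torch.Tensor, n_bins: int, lo: float, hi: float) -> torch.Tensor:
    """Empirical bin masses of a sample cloud on [lo, hi]."""
    edges = torch.linspace(lo, hi, n_bins + 1)
    counts = torch.stack([((samples >= edges[i]) & (samples < edges[i + 1])).float().sum()
                          for i in range(n_bins)])
    return counts / samples.numel()

# Example target: F(mu) = variance of mu, learned from Gaussian training measures.
n_bins = 50
model = BinDensityNet(n_bins)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(2000):
    sigma = 0.2 + 1.3 * torch.rand(1)              # random training measure N(0, sigma^2)
    samples = sigma * torch.randn(5000)
    x = histogram(samples, n_bins, -5.0, 5.0).unsqueeze(0)
    loss = (model(x) - sigma ** 2).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```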
      • [04743] Neural Optimal Stopping Boundary
        • Format : Talk at Waseda University
        • Author(s) :
          • Anders Max Reppen (Boston University Questrom School of Business)
          • Halil Mete Soner (Princeton University)
          • Valentin Tissot-Daguette (Princeton University)
        • Abstract : A method based on deep artificial neural networks and empirical risk minimization is developed to calculate the boundary separating the stopping and continuation regions in optimal stopping. The algorithm parameterizes the stopping boundary as the graph of a function and introduces relaxed stopping rules based on fuzzy boundaries to facilitate efficient optimization. Several financial instruments, some in high dimensions, are analyzed through this method, demonstrating its effectiveness. The existence of the stopping boundary is also proved under natural structural assumptions.
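        • Sketch : A toy instance of the idea described above, assuming a one-dimensional Bermudan put under Black-Scholes: the stopping boundary is parameterized as the graph of a function of time, and the hard stopping rule is relaxed through a sigmoid so that the Monte Carlo objective is differentiable. All model and training parameters below are illustrative.
```python
import math
import torch
import torch.nn as nn

# Toy Bermudan put: stop when the price falls below a learned time-dependent
# boundary b_theta(t). The hard rule "stop if S_t <= b_theta(t)" is relaxed
# with a sigmoid (a "fuzzy" boundary) so the expected payoff is differentiable.
T, n_steps, r, sigma, S0, K = 1.0, 50, 0.05, 0.2, 100.0, 100.0
dt, eps = T / n_steps, 2.0

boundary = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(boundary.parameters(), lr=1e-2)

for it in range(300):
    n_paths = 4096
    S = torch.full((n_paths,), S0)
    alive = torch.ones(n_paths)                 # probability of not having stopped yet
    value = torch.zeros(n_paths)
    for k in range(n_steps):
        t = torch.full((n_paths, 1), k * dt)
        b = K * torch.sigmoid(boundary(t)).squeeze(-1)      # boundary kept in (0, K)
        p_stop = torch.sigmoid((b - S) / eps)               # relaxed stopping rule
        payoff = torch.clamp(K - S, min=0.0) * math.exp(-r * k * dt)
        value = value + alive * p_stop * payoff
        alive = alive * (1.0 - p_stop)
        S = S * torch.exp((r - 0.5 * sigma ** 2) * dt
                          + sigma * dt ** 0.5 * torch.randn(n_paths))
    value = value + alive * torch.clamp(K - S, min=0.0) * math.exp(-r * T)  # exercise at T
    loss = -value.mean()                                     # maximize expected payoff
    opt.zero_grad(); loss.backward(); opt.step()

print("relaxed Bermudan put value estimate:", -loss.item())
```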
      • [05236] MF-OMO: An optimization framework for mean-field games
        • Format : Talk at Waseda University
        • Author(s) :
          • Xin Guo (UC Berkeley)
        • Abstract : We propose a new mathematical paradigm to analyze discrete-time mean-field games. It removes the contractivity and monotonicity assumptions, as well as the uniqueness of the Nash equilibrium, imposed in existing approaches for mean-field games. We show that finding Nash equilibrium solutions for a general class of discrete-time mean-field games is equivalent to solving an optimization problem with bounded variables and simple convex constraints, called MF-OMO. This equivalence framework enables finding multiple (and possibly all) Nash equilibrium solutions of mean-field games by standard algorithms. For instance, projected gradient descent is shown to be capable of retrieving all possible Nash equilibrium solutions when there are finitely many of them, with proper initializations. Moreover, analyzing mean-field games with linear rewards and mean-field-independent dynamics is reduced to solving a finite number of linear programs, hence solvable in finite time. Based on joint work with Anran Hu (University of Oxford) and Junzi Zhang (Amazon).
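        • Sketch : The abstract reduces equilibrium computation to an optimization problem with bounded variables and simple convex constraints, solvable by projected gradient descent. Below is a generic sketch of projected gradient descent with a Euclidean projection onto the probability simplex; the quadratic objective is a stand-in, not the MF-OMO objective itself.
```python
import numpy as np

def project_simplex(v: np.ndarray) -> np.ndarray:
    """Euclidean projection of v onto the probability simplex
    {x : x >= 0, sum(x) = 1}, via the standard sorting-based algorithm."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = css[rho] / (rho + 1)
    return np.maximum(v - theta, 0.0)

def projected_gradient(grad, x0, project, lr=0.1, n_iter=500):
    """Generic projected gradient descent: x <- Proj(x - lr * grad(x))."""
    x = project(np.asarray(x0, dtype=float))
    for _ in range(n_iter):
        x = project(x - lr * grad(x))
    return x

# Stand-in objective: minimize ||A x - b||^2 over the simplex.
rng = np.random.default_rng(1)
A, b = rng.standard_normal((8, 5)), rng.standard_normal(8)
grad = lambda x: 2.0 * A.T @ (A @ x - b)
x_star = projected_gradient(grad, np.ones(5) / 5, project_simplex)
print(x_star, x_star.sum())
```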
    • 01145 (2/2) : 5C @D505 [Chair: A. Max Reppen]
      • [03993] Statistical Learning with Sublinear Regret of Propagator Models
        • Format : Talk at Waseda University
        • Author(s) :
          • Yufei Zhang (London School of Economics and Political Science)
          • Eyal Neuman (Imperial College London)
        • Abstract : We consider a class of learning problems in which an agent liquidates a risky asset while creating both transient price impact driven by an unknown convolution propagator and linear temporary price impact with an unknown parameter. We characterize the trader’s performance as maximization of a revenue-risk functional, where the trader also exploits available information on a price predicting signal. We present a trading algorithm that alternates between exploration and exploitation phases and achieves sublinear regrets with high probability. For the exploration phase we propose a novel approach for non-parametric estimation of the price impact kernel by observing only the visible price process and derive sharp bounds on the convergence rate, which are characterised by the singularity of the propagator. These kernel estimation methods extend existing methods from the area of Tikhonov regularisation for inverse problems and are of independent interest. The bound on the regret in the exploitation phase is obtained by deriving stability results for the optimizer and value function of the associated class of infinite-dimensional stochastic control problems.
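        • Sketch : A minimal numerical illustration of recovering an unknown convolution (propagator) kernel from observed price distortions by Tikhonov-regularized least squares on a discretized Volterra system. The kernel, trading rates, and regularization level are illustrative assumptions, not the estimator developed in the talk.
```python
import numpy as np

# Discretized transient-impact model: the observed price distortion D is a
# lower-triangular convolution of the trading rates a with an unknown
# propagator kernel G, observed with noise. Recover G by ridge (Tikhonov)
# regularized least squares.
rng = np.random.default_rng(2)
n, dt = 200, 0.01
t = np.arange(1, n + 1) * dt
G_true = t ** (-0.4) * np.exp(-t)            # singular power-law kernel (illustrative)
a = rng.standard_normal(n)                   # trading rates

# Lower-triangular design matrix in the unknown kernel values g_k = G(t_k):
X = np.zeros((n, n))
for i in range(n):
    X[i, : i + 1] = a[i::-1] * dt
D = X @ G_true + 0.01 * rng.standard_normal(n)

lam = 1e-3                                   # Tikhonov regularization parameter
g_hat = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ D)
print("relative L2 error:", np.linalg.norm(g_hat - G_true) / np.linalg.norm(G_true))
```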
      • [04781] Robust Utility Optimization via a GAN Approach
        • Format : Talk at Waseda University
        • Author(s) :
          • Hanna Wutte (ETH Zurich)
          • Florian Krach (ETH Zurich)
          • Josef Teichmann (ETH Zurich)
        • Abstract : We study the robust expected utility maximization problem. In this problem, an agent wants to maximize the expected utility of final wealth $X_T^\pi$ under her trading strategy $\pi$ in an uncertain market environment that chooses the worst case market measure $P$ for the given trading strategy, i.e., $\sup_{\pi} \inf_{P} \mathbb{E}_{P}[U(X^\pi_T)]$. This problem can be understood as a two-player zero-sum game between the agent and the market. We restrict our attention to markets consisting of one risk-free and $d$ risky assets $S$. Risky assets $S$ are given by Itô processes, where the drift $\mu$ and diffusion $\sigma$ are chosen by the market player out of a set of admissible candidate functions. To make this tractable, we consider a penalized version of the robust utility optimization problem, where the market model can choose any such continuous functions, but is penalized for deviating from a reference market model via a penalty functional $F$. We suggest an algorithm to solve this problem using two recurrent neural networks (RNNs) with parameters $\theta$ and $\omega$, one for the agent and one for the market, respectively. These RNNs are trained iteratively by competing in the zero-sum game \begin{equation}\sup_{\theta}\inf_{\omega}\mathbb{E}[U(X^{\pi_\theta,\mu_\omega,\sigma_\omega}_T) + F(\mu_\omega,\sigma_\omega,S)].\end{equation} On a high level, this can be interpreted as a generative adversarial network (GAN) approach, where the generator produces a trading strategy $\pi_\theta$ and the adversarial discriminator tries to find the worst case market model $(\mu_\omega,\sigma_\omega)$. Importantly, the use of RNNs allows both players to learn non-Markovian strategies. The utility function $U$ as well as the penalty function $F$ can be chosen freely. We examine several set-ups to empirically show the quality of our proposed algorithm. First, we consider log-utility in a frictionless market and instantaneous penalization of the market parameters. In this case, an analytic solution is known to exist, which is replicated by our trained model. When introducing friction to the market, or when using other utility functions or path-dependent penalties, analytic solutions no longer exist. Therefore, we construct new evaluation metrics, and we observe that our trained model achieves convincing results. This is joint work with Florian Krach and Josef Teichmann.
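        • Sketch : A stripped-down sketch of the adversarial training loop for the zero-sum game displayed above, assuming one risky asset, log-utility, a quadratic penalty for deviating from a reference model, and small feed-forward networks in place of the RNNs; all names and parameter values are illustrative.
```python
import torch
import torch.nn as nn

# Zero-sum game  sup_theta inf_omega E[ U(X_T^{pi_theta, mu_omega, sigma_omega})
#                                        + F(mu_omega, sigma_omega) ]
# with log-utility, one risky asset, Euler-discretized Ito dynamics, and a
# quadratic penalty for deviating from a reference model (mu_ref, sigma_ref).
T, n_steps, x0 = 1.0, 20, 1.0
dt = T / n_steps
mu_ref, sigma_ref, lam = 0.05, 0.2, 50.0

agent = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))    # pi(t, X)
market = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))   # (mu, sigma)(t, S)
opt_agent = torch.optim.Adam(agent.parameters(), lr=1e-3)
opt_market = torch.optim.Adam(market.parameters(), lr=1e-3)

def play(n_paths=2048):
    """Simulate wealth under the two networks and return the game objective."""
    S = torch.ones(n_paths, 1)
    X = torch.full((n_paths, 1), x0)
    penalty = torch.zeros(n_paths, 1)
    for k in range(n_steps):
        t = torch.full((n_paths, 1), k * dt)
        out = market(torch.cat([t, S], dim=1))
        mu, sigma = out[:, :1], 0.05 + nn.functional.softplus(out[:, 1:])
        pi = agent(torch.cat([t, X], dim=1))               # fraction of wealth in S
        dW = dt ** 0.5 * torch.randn(n_paths, 1)
        penalty = penalty + lam * ((mu - mu_ref) ** 2 + (sigma - sigma_ref) ** 2) * dt
        X = X * (1.0 + pi * (mu * dt + sigma * dW))
        S = S * (1.0 + mu * dt + sigma * dW)
        X = torch.clamp(X, min=1e-3)                        # keep log-utility finite
    return (torch.log(X) + penalty).mean()

for it in range(1000):
    # market player: minimize the objective
    loss_market = play()
    opt_market.zero_grad(); loss_market.backward(); opt_market.step()
    # agent: maximize the objective (ascent = descent on the negative)
    loss_agent = -play()
    opt_agent.zero_grad(); loss_agent.backward(); opt_agent.step()
```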
      • [05102] Deep Learning in Portfolio Selection under Market Frictions
        • Format : Talk at Waseda University
        • Author(s) :
          • Chen Yang (The Chinese University of Hong Kong)
        • Abstract : Incorporating market frictions in portfolio selection problems often leads to high dimensionality even when the number of stocks is low, which makes these problems challenging for traditional grid-based numerical methods. In this talk, we explore the application of deep learning methods to portfolio selection problems with market frictions such as price impact, transaction costs, and capital gains taxes, and discuss the potential challenges.
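        • Sketch : As a minimal illustration of the deep-learning approach to frictions, the snippet below trains a policy network by Monte Carlo to maximize expected CRRA utility of terminal wealth net of proportional transaction costs; the friction specification and network are illustrative assumptions, not the models discussed in the talk.
```python
import torch
import torch.nn as nn

# Toy portfolio choice with proportional transaction costs: a policy network maps
# (time, wealth, current weight) to a new risky weight; trades are charged at rate c.
T, n_steps, mu, sigma, c, gamma = 1.0, 20, 0.08, 0.25, 0.002, 2.0
dt = T / n_steps
policy = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1), nn.Sigmoid())
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def crra(x, gamma=gamma):
    """CRRA utility; reduces to log-utility when gamma = 1."""
    return x.pow(1 - gamma) / (1 - gamma) if gamma != 1 else torch.log(x)

for it in range(2000):
    n_paths = 4096
    X = torch.ones(n_paths, 1)            # wealth
    w = torch.zeros(n_paths, 1)           # fraction of wealth in the risky asset
    for k in range(n_steps):
        t = torch.full((n_paths, 1), k * dt)
        w_new = policy(torch.cat([t, X, w], dim=1))      # target weight in [0, 1]
        X = X * (1.0 - c * (w_new - w).abs())            # proportional trading cost
        ret = mu * dt + sigma * dt ** 0.5 * torch.randn(n_paths, 1)
        X = X * (1.0 + w_new * ret)
        w = (w_new * (1.0 + ret)) / (1.0 + w_new * ret)  # post-return drifted weight
        X = torch.clamp(X, min=1e-3)
    loss = -crra(X).mean()                               # maximize expected utility
    opt.zero_grad(); loss.backward(); opt.step()
```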
      • [05342] Machine Learning Surrogates for Parametric and Adaptive Optimal Execution
        • Format : Talk at Waseda University
        • Author(s) :
          • Michael Ludkovski (U California at Santa Barbara)
          • Tao Chen (U of Michigan)
          • Moritz Voss (U California at Los Angeles)
        • Abstract : We investigate optimal order execution with dynamic parametric uncertainty. Our base model features discrete time, stochastic transient price impact generalizing Obizhaeva and Wang (2013). We first consider learning the optimal strategy across a multi-dimensional range of model configurations, including price impact and resilience parameters, as well as initial stochastic states. We develop a numerical algorithm based on dynamic programming and deep learning, utilizing an actor-critic framework to construct two neural-network (NN) surrogates for the value function and the feedback control. We then apply the lens of adaptive robust stochastic control to consider online statistical learning of model parameters along with a worst-case min-max optimization. Thus, the controller is dynamically learning model parameters based on her observations while explicitly accounting for Bayesian uncertainty of the learned parameter estimates. We propose a modeling framework which allows a time-consistent 3-way marriage between dynamic learning, dynamic robustness and dynamic control. We extend our NN approach to tackle the resulting 8-dimensional adaptive robust optimal order execution problem, and illustrate with comparisons to alternative frameworks, such as adaptive or static robust strategies.
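        • Sketch : A compact sketch of the actor-critic idea of maintaining two neural-network surrogates, one for the value function and one for the feedback control, on a toy discrete-time liquidation problem with temporary impact and inventory penalties; the dynamics and costs below are illustrative stand-ins, not the Obizhaeva-Wang-type model of the talk.
```python
import torch
import torch.nn as nn

# Toy discrete-time liquidation: inventory q, trade a_k shares per step, per-step cost
#   eta * a_k^2  (temporary impact)  +  phi * q_{k+1}^2 * dt  (running inventory penalty),
# terminal penalty rho * q_N^2. Actor = feedback control a(t, q); critic = value V(t, q).
n_steps, eta, phi, rho = 10, 0.1, 0.05, 10.0
dt = 1.0 / n_steps

actor = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
critic = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 1))
opt_actor = torch.optim.Adam(actor.parameters(), lr=1e-3)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-3)

def step_cost(q, a):
    q_next = q - a
    return eta * a ** 2 + phi * q_next ** 2 * dt, q_next

for it in range(5000):
    q = 2.0 * torch.rand(1024, 1)                        # random initial inventories
    k = torch.randint(0, n_steps, (1024, 1)).float()     # random time indices
    t = k * dt
    is_last = (k == n_steps - 1).float()

    # critic regression on the Bellman target (target detached)
    a = actor(torch.cat([t, q], dim=1))
    cost, q_next = step_cost(q, a)
    cont = critic(torch.cat([t + dt, q_next], dim=1))
    target = cost + (1 - is_last) * cont + is_last * rho * q_next ** 2
    v = critic(torch.cat([t, q], dim=1))
    critic_loss = (v - target.detach()).pow(2).mean()
    opt_critic.zero_grad(); critic_loss.backward(); opt_critic.step()

    # actor update: minimize one-step cost plus continuation value from the critic
    a = actor(torch.cat([t, q], dim=1))
    cost, q_next = step_cost(q, a)
    cont = critic(torch.cat([t + dt, q_next], dim=1))
    actor_loss = (cost + (1 - is_last) * cont + is_last * rho * q_next ** 2).mean()
    opt_actor.zero_grad(); actor_loss.backward(); opt_actor.step()
```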