14:30
Computing principal components via optimisation of elementary symmetric polynomials
14:00
Analysis of Magnus expansion methods in the semiclassical regime
Abstract
Magnus-expansion-based methods are an efficient class of integrators for solving Schrödinger equations that feature time-dependent potentials such as lasers. These methods have been found to be highly effective in computational quantum chemistry since the pioneering work of Tal-Ezer and Kosloff in the early 90s. The convergence of the Magnus expansion, however, is usually understood only for ODEs, and traditional analysis suggests much poorer performance of these methods than is observed experimentally. It was not until the work of Hochbruck and Lubich in 2003 that a rigorous analysis justifying the application to PDEs with unbounded operators, such as the Schrödinger equation, was presented. In this talk we will extend this analysis to the semiclassical regime, where the highly oscillatory solution conventionally suggests large errors and a requirement for very small time steps.
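As a rough illustration of the Magnus idea (a minimal sketch, not the analysis of the talk): truncating the Magnus series after its first term and approximating the integral of the Hamiltonian by the midpoint rule gives a second-order integrator whose steps are exactly unitary. The two-level Hamiltonian below is invented for illustration.

```python
import numpy as np

def H(t):
    # Toy time-dependent Hermitian Hamiltonian (invented for illustration)
    return np.array([[1.0, 0.5 * np.cos(t)],
                     [0.5 * np.cos(t), -1.0]])

def expm_herm(A, scale):
    # exp(scale * A) for Hermitian A, via an eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.exp(scale * w)) @ V.conj().T

def magnus_midpoint(u0, T, n):
    # Second-order Magnus integrator for i u'(t) = H(t) u(t): keep only the
    # first Magnus term and approximate its integral by the midpoint rule.
    h = T / n
    u = u0.astype(complex)
    for k in range(n):
        u = expm_herm(H((k + 0.5) * h), -1j * h) @ u
    return u

u = magnus_midpoint(np.array([1.0, 0.0]), T=1.0, n=200)
norm_drift = abs(np.linalg.norm(u) - 1.0)  # each step is unitary
print(norm_drift)
```

Because each step applies the exponential of a skew-Hermitian matrix, the norm of the solution is conserved to machine precision regardless of the step size.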
14:30
New approaches for global optimization methods
Abstract
We present some dimensionality reduction techniques for global optimisation algorithms, in order to increase their scalability. Inspired by ideas in machine learning, and extending the random-projection approach of Zhang et al. (2016), we present new algorithmic approaches for global optimisation with theoretical guarantees of good behaviour and encouraging numerical results.
14:00
Derivative-free optimisation methods for nonlinear least-squares problems
Abstract
Derivative-free optimisation (DFO) algorithms are a category of optimisation methods for situations when one is unable to compute or estimate derivatives of the objective, such as when the objective has noise or is very expensive to evaluate. In this talk I will present a flexible DFO framework for unconstrained nonlinear least-squares problems, and in particular discuss its performance on noisy problems.
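As a generic illustration of the derivative-free idea (plain coordinate search on an invented least-squares objective, not the framework presented in the talk), one can minimise a sum of squared residuals using only function values:

```python
import numpy as np

def residuals(x):
    # Toy nonlinear least-squares residuals with a zero-residual
    # solution at x = (1, 2) (invented for illustration)
    return np.array([x[0] - 1.0, x[1] - 2.0, 0.1 * x[0] * x[1] - 0.2])

def f(x):
    r = residuals(x)
    return r @ r

def coordinate_search(f, x0, step=1.0, tol=1e-8, max_iter=10000):
    # Derivative-free coordinate search: poll +/- step along each axis,
    # accept any improvement, halve the step when no poll point improves.
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (step, -step):
                y = x.copy()
                y[i] += s
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step /= 2.0
            if step < tol:
                break
    return x, fx

x, fx = coordinate_search(f, np.zeros(2))
print(x, fx)
```

No derivatives (or finite-difference estimates) of the residuals are ever formed, which is the defining feature of DFO methods.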
14:30
The 2017 Problem Solving Squad
Abstract
Each year Prof. Trefethen gives the Problem Solving Squad a sequence of problems with no hints, one a week, where the solution of each problem is a single real number to be computed by any method available. We will present this year's three problems, involving (1) an S-shaped bifurcation curve, (2) the shortest path around a random web, and (3) switching a time-varying system to maximize a matrix norm.
The 14 students this year are Simon Vary plus InFoMM cohort 2: Matteo Croci, Davin Lunz, Michael McPhail, Tori Pereira, Lindon Roberts, Caoimhe Rooney, Ian Roper, Thomas Roy, Tino Sulzer, Bogdan Toader, Florian Wechsung, Jess Williams, and Fabian Ying. The presentations will be by (1) Lindon Roberts, (2) Florian Wechsung, and (3) Thomas Roy.
14:00
Sparse Kerdock matrices for compressive sensing
Abstract
Delsarte-Goethals frames are a popular choice for deterministic measurement matrices in compressive sensing. I will show that it is possible to construct extremely sparse matrices which share precisely the same row space as Delsarte-Goethals frames. I will also describe the combinatorial block design underlying the construction and make a connection to Steiner equiangular tight frames.
14:00
Random functions in Chebfun
Abstract
What's the continuous analog of randn? In recent months I've been exploring such questions with Abdul-Lateef Haji-Ali and other members of the Chebfun team, and Chebfun now has commands randnfun, randnfun2, randnfunsphere, and randnfundisk. These are based on finite Fourier series with random coefficients, and interesting questions arise in the "white noise" limit as the length of the series approaches infinity and as random ODEs become stochastic differential equations. This work is at an early stage and we are grateful for input from stochastic experts at Oxford and elsewhere.
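A minimal sketch of the construction (mimicking the flavour of randnfun, not Chebfun's actual code): a periodic random function as a finite Fourier series with independent standard normal coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)

def randnfun_like(m, rng):
    # Periodic random function on [0, 1]: a finite Fourier series with
    # independent N(0, 1) coefficients (a sketch, not Chebfun's randnfun)
    a = rng.standard_normal(m + 1)   # cosine coefficients
    b = rng.standard_normal(m + 1)   # sine coefficients
    def f(x):
        x = np.asarray(x, dtype=float)
        k = np.arange(m + 1)
        phases = 2 * np.pi * np.outer(x, k)          # shape (len(x), m+1)
        vals = np.cos(phases) @ a + np.sin(phases) @ b
        return vals / np.sqrt(m + 1)   # keep the pointwise variance O(1)
    return f

f = randnfun_like(50, rng)
x = np.linspace(0, 1, 1000)
vals = f(x)
print(float(vals.mean()), float(vals.std()))
```

Increasing the truncation parameter m while rescaling the coefficients is one way to approach the "white noise" limit the abstract mentions.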
14:30
Ill-conditioning and numerical stability in radial basis functions (RBFs) using frame theory
Abstract
We analyse the numerical approximation of functions using radial basis functions in the context of frames. Frames generalize the notion of a basis by allowing redundancy, while being restricted by a so-called frame condition. The theory of numerical frame approximations allows the study of the ill-conditioning that is inherent in this redundancy, and suggests discretization techniques that still offer numerical stability to machine precision. We apply the theory to radial basis functions.
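A quick numerical illustration of the ill-conditioning (not the frame-theoretic remedy of the talk): the Gaussian RBF interpolation matrix on equispaced points becomes dramatically worse conditioned as the shape parameter shrinks and the basis functions flatten. The node count and shape parameters below are arbitrary choices.

```python
import numpy as np

# Gaussian RBF interpolation matrix on 20 equispaced points in [0, 1];
# smaller eps means flatter (more redundant) basis functions.
x = np.linspace(0, 1, 20)
conds = []
for eps in (10.0, 5.0, 2.0):
    A = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
    conds.append(np.linalg.cond(A))
print(conds)
```

The condition number grows by many orders of magnitude along this short sequence, which is the numerical phenomenon the frame analysis is designed to tame.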
14:00
Computation of the joint spectral radius by optimization techniques
Abstract
The joint spectral radius (JSR) of a set of matrices characterizes the maximum growth rate that can be achieved by multiplying them in arbitrary order. This concept, which essentially generalizes the notion of the "largest eigenvalue" from one matrix to many, was introduced by Rota and Strang in the early 60s and has since emerged in many areas of application such as stability of switched linear systems, computation of the capacity of codes, convergence of consensus algorithms, trackability of graphs, and many others. The JSR is a very difficult quantity to compute even for a pair of matrices. In this talk, we present optimization-based algorithms (e.g., via semidefinite programming or dynamic programming) that can either compute the JSR exactly in special cases or approximate it with arbitrary prescribed accuracy in the general case.
Based on joint work (in different subsets) with Raphael Jungers, Pablo Parrilo, and Mardavij Roozbehani.
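A brute-force sketch of the two elementary bounds (not the SDP-based algorithms of the talk): for any product length k, spectral radii of length-k products give lower bounds on the JSR, and operator norms give upper bounds. The matrix pair below is a classical example.

```python
import numpy as np
from itertools import product

def jsr_bounds(mats, k):
    # Enumerate all products of length k: rho(P)^(1/k) lower-bounds the
    # JSR, while ||P||_2^(1/k) upper-bounds it.
    lo, hi = 0.0, 0.0
    for word in product(mats, repeat=k):
        P = word[0]
        for M in word[1:]:
            P = P @ M
        lo = max(lo, np.max(np.abs(np.linalg.eigvals(P))) ** (1.0 / k))
        hi = max(hi, np.linalg.norm(P, 2) ** (1.0 / k))
    return lo, hi

# A classic pair: the JSR equals the golden ratio, attained by the
# periodic product AB.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 1.0]])
lo, hi = jsr_bounds([A, B], 4)
print(lo, hi)
```

The exponential cost of this enumeration in k is exactly why the optimization-based methods in the talk are needed.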
14:00
Nonconvex geometry of low-rank optimizations
Abstract
The past few years have seen a surge of interest in nonconvex reformulations of convex optimization problems obtained via nonlinear reparameterizations of the optimization variables. Compared with the convex formulations, the nonconvex ones typically involve many fewer variables, allowing them to scale to scenarios with millions of variables. However, one pays the price of solving nonconvex optimization problems to global optimality, which is generally believed to be impossible. In this talk, I will characterize the nonconvex geometries of several low-rank matrix optimization problems. In particular, I will argue that, under reasonable assumptions, each critical point of the nonconvex problems either corresponds to the global optimum of the original convex problem, or is a strict saddle point where the Hessian matrix has a negative eigenvalue. Such a geometric structure ensures that many local search algorithms can converge to the global optimum with random initializations. Our analysis is based on studying how the convex geometries are transformed under nonlinear parameterizations.
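A toy instance of this phenomenon (a sketch under invented data, not the talk's analysis): plain gradient descent on the factorized objective ||UU^T - M||_F^2, from a small random initialization, converges to the global optimum despite the nonconvexity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Plant a rank-2 PSD matrix and recover it through the nonconvex
# factorization g(U) = ||U U^T - M||_F^2.
n, r = 20, 2
U_true = rng.standard_normal((n, r))
M = U_true @ U_true.T
M /= np.linalg.norm(M, 2)          # normalize for a simple fixed step size

U = 0.1 * rng.standard_normal((n, r))   # small random initialization
eta = 0.05
for _ in range(8000):
    U -= eta * 4 * (U @ U.T - M) @ U    # gradient of g
err = np.linalg.norm(U @ U.T - M) / np.linalg.norm(M)
print(err)
```

The iterate escapes the strict saddle at U = 0 and the relative error drops to (near) zero, consistent with the "no spurious local minima" geometry described above.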
14:30
Improving the rational Remez algorithm via adaptive barycentric representations
14:00
Efficient DC algorithm for sparse optimization
Abstract
In various research fields such as machine learning, compressed sensing and operations research, optimization problems which seek sparsity of solutions via a cardinality constraint or rank constraint are studied. We formulate such problems as DC (Difference of two Convex functions) optimization problems and apply the DC Algorithm (DCA) to them. While a subproblem needs to be solved at each DCA iteration, its closed-form solution can be easily obtained by a soft-thresholding operation. Numerical experiments demonstrate the efficiency of the proposed DCA in comparison with existing methods.
This is joint work with J. Gotoh (Chuo Univ.) and K. Tono (U. Tokyo).
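The closed-form subproblem solution mentioned above is the soft-thresholding operator. As a hedged illustration, the snippet below uses that same operator inside plain ISTA on an l1-regularized least-squares toy problem (not the authors' DCA; data invented for illustration).

```python
import numpy as np

def soft_threshold(x, lam):
    # Closed-form solution of min_z lam*||z||_1 + (1/2)||z - x||^2
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[3, 30, 70]] = [1.0, -2.0, 1.5]
b = A @ x_true

L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
lam = 0.1
x = np.zeros(100)
for _ in range(2000):
    # gradient step on (1/2)||Ax - b||^2, then soft-threshold
    x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
print(np.argmax(np.abs(x)), np.sum(np.abs(x) > 1e-3))
```

The point of the closed form is that each iteration costs only a matrix-vector product plus an elementwise operation, which is what makes such algorithms scale.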
14:00
Radial basis function approximation of solutions to elliptic PDEs
14:00
Perturbation of higher-order singular values
Abstract
Joint work with Wolfgang Hackbusch and Daniel Kressner
14:30
A robust parallel algorithm for combinatorial compressed sensing
14:00
Learning sparse additive models with interactions in high dimensions
14:30
Quadrature rules and rational approximations to the exponential function
14:00
Maximizing the AUC function for classification using black-box optimization methods
14:30
Sync-Rank: Robust ranking, constrained ranking and rank aggregation via eigenvector and SDP synchronization
Abstract
We consider the classic problem of establishing a statistical ranking of a set of n items given a set of inconsistent and incomplete pairwise comparisons between such items. Instantiations of this problem occur in numerous applications in data analysis (e.g., ranking teams in sports data), computer vision, and machine learning. We formulate the above problem of ranking with incomplete noisy information as an instance of the group synchronization problem over the group SO(2) of planar rotations, whose usefulness has been demonstrated in numerous applications in recent years. Its least squares solution can be approximated by either a spectral or a semidefinite programming (SDP) relaxation, followed by a rounding procedure. We perform extensive numerical simulations on both synthetic and real-world data sets (Premier League soccer games, a Halo 2 game tournament and NCAA College Basketball games) showing that our proposed method compares favorably to other algorithms from the recent literature.
We propose a similar synchronization-based algorithm for the rank-aggregation problem, which integrates in a globally consistent ranking pairwise comparisons given by different rating systems on the same set of items. We also discuss the problem of semi-supervised ranking when there is available information on the ground truth rank of a subset of players, and propose an algorithm based on SDP which recovers the ranks of the remaining players. Finally, synchronization-based ranking, combined with a spectral technique for the densest subgraph problem, allows one to extract locally-consistent partial rankings, in other words, to identify the rank of a small subset of players whose pairwise comparisons are less noisy than the rest of the data, which other methods are not able to identify.
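A minimal sketch of the spectral relaxation for SO(2) synchronization (invented toy data; complete, noiseless comparisons, whereas the talk handles noisy and incomplete data): encode pairwise rank offsets as planar rotations, then read the ranking off the top eigenvector.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 30
true_rank = rng.permutation(n)           # ground-truth ranks of n items

# Pairwise offsets mapped to rotations: H is Hermitian with
# H[i, j] = exp(i * pi * (r_i - r_j) / n).
theta = np.pi * (true_rank[:, None] - true_rank[None, :]) / n
H = np.exp(1j * theta)

# Spectral relaxation: the top eigenvector of H estimates the angles.
w, V = np.linalg.eigh(H)
v = V[:, -1]
est_angles = np.angle(v * np.conj(v[0]))   # remove the global rotation
est_rank = np.argsort(np.argsort(est_angles))
print(est_rank)
```

In this clean setting H is rank one, so the spectral estimate recovers the ranking exactly; the interest of the method lies in its robustness when many comparisons are missing or corrupted.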
14:00
Interpolation and quadrature in perturbed points
Abstract
The trigonometric interpolants to a periodic function f in equispaced points converge if f is Dini-continuous, and the associated quadrature formula, the trapezoidal rule, converges if f is continuous. What if the points are perturbed? Amazingly little has been done on this problem, or on its algebraic (i.e. nonperiodic) analogue. I will present new results, joint with Anthony Austin, which show some surprises.
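The phenomenon is easy to observe numerically (a cruder experiment than the talk's: we simply reuse the equal trapezoidal weights at the perturbed points; the integrand is an arbitrary smooth periodic example).

```python
import numpy as np

rng = np.random.default_rng(4)
f = lambda x: np.exp(np.sin(x))

def trap_periodic(nodes):
    # Equal-weight quadrature sum: this is the trapezoidal rule when the
    # nodes are equispaced on the period.
    return 2 * np.pi * np.mean(f(nodes))

n = 20
x_eq = 2 * np.pi * np.arange(n) / n
x_pert = x_eq + 0.2 * (2 * np.pi / n) * rng.uniform(-1, 1, n)

ref = trap_periodic(2 * np.pi * np.arange(1000) / 1000)  # essentially exact
err_eq = abs(trap_periodic(x_eq) - ref)
err_pert = abs(trap_periodic(x_pert) - ref)
print(err_eq, err_pert)
```

With equispaced points the error is at machine-precision level for this analytic integrand, while the perturbed points lose almost all of that accuracy.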
14:30
On the spectral problem for trivariate functions
Abstract
Using a variational approach applied to generalized Rayleigh functionals, we extend the concepts of singular values and singular functions to trivariate functions defined on a rectangular parallelepiped. We also consider eigenvalues and eigenfunctions for trivariate functions whose domain is a cube. For a general finite-rank trivariate function, we describe an algorithm for computing the canonical polyadic (CP) decomposition, provided that the CP factors are linearly independent in two variables. All these notions are computed using Chebfun. The application to finding the best rank-1 approximation of trivariate functions is investigated. We also prove that if the function is analytic and two-way orthogonally decomposable (odeco), then the CP values decay geometrically, and optimal finite-rank approximants converge at the same rate.
14:00
Least-squares spectral methods for operator eigenvalue problems
14:00
Antitriangular factorization of saddle point matrices and the Null Space method
Abstract
Joint work with Jen Pestana.
14:30
Random plane waves and other classes of random functions
Abstract
There are several classes of random functions that appear naturally in mathematical physics, probability, number theory, and other areas of mathematics. I will give a brief overview of some of these random functions and explain what they are and why they are important. Finally, I will explain how I use Chebfun to study these functions.
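One concrete example from this family (a sketch with invented parameters): a 2D "random plane wave", i.e. a superposition of many plane waves with a fixed wavenumber and uniformly random directions and phases.

```python
import numpy as np

rng = np.random.default_rng(5)

def random_plane_wave(k, n_waves, rng):
    # Superposition of n_waves planar waves with wavenumber k, random
    # directions and phases, normalized to unit pointwise variance.
    d = rng.uniform(0, 2 * np.pi, n_waves)   # propagation directions
    p = rng.uniform(0, 2 * np.pi, n_waves)   # phases
    def u(x, y):
        arg = k * (np.multiply.outer(x, np.cos(d)) +
                   np.multiply.outer(y, np.sin(d))) + p
        return np.sqrt(2.0 / n_waves) * np.cos(arg).sum(axis=-1)
    return u

u = random_plane_wave(k=10.0, n_waves=200, rng=rng)
xs = np.linspace(0, 1, 50)
X, Y = np.meshgrid(xs, xs)
vals = u(X, Y)
print(float(vals.var()))
```

Each summand satisfies the Helmholtz equation with the same wavenumber, so the random field does too, which is what makes this model useful as a heuristic for high-energy Laplacian eigenfunctions.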
14:00
Stochastic discrete Hamiltonian variational integrators
Abstract
Stochastic Hamiltonian systems with multiplicative noise are a mathematical model for many physical systems with uncertainty. For example, they can be used to describe synchrotron oscillations of a particle in a storage ring. Just like their deterministic counterparts, stochastic Hamiltonian systems possess several important geometric features; for instance, their phase flows preserve the canonical symplectic form. When simulating these systems numerically, it is therefore advisable that the numerical scheme also preserves such geometric structures. In this talk we propose a variational principle for stochastic Hamiltonian systems and use it to construct stochastic Galerkin variational integrators. We show that such integrators are indeed symplectic, preserve integrals of motion related to Lie group symmetries, demonstrate superior long-time energy behavior compared to nonsymplectic methods, and include stochastic symplectic Runge-Kutta methods as a special case. We also analyze their convergence properties and present the results of several numerical experiments.
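A minimal sketch of a stochastic symplectic method (plain stochastic symplectic Euler on an invented toy oscillator, not the variational Galerkin integrators of the talk): each step composes two shear maps, so the update is symplectic for every realization of the noise.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy system: dq = p dt, dp = -V'(q) (dt + c dW) with V(q) = q^2 / 2,
# i.e. a harmonic oscillator whose restoring force is perturbed by noise.
def simulate(q0, p0, h, n_steps, c, rng):
    q, p = q0, p0
    for _ in range(n_steps):
        dW = np.sqrt(h) * rng.standard_normal()
        p = p - q * (h + c * dW)   # V'(q) = q; first symplectic shear
        q = q + h * p              # second symplectic shear
    return q, p

q, p = simulate(1.0, 0.0, h=1e-3, n_steps=10000, c=0.1, rng=rng)
energy = 0.5 * (q * q + p * p)
print(q, p, energy)
```

Over long times the energy of this scheme fluctuates around its initial value rather than drifting systematically, which is the long-time behaviour the abstract contrasts with nonsymplectic methods.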
14:00
PDE-constrained shape optimization with FEM-tailored discretization of diffeomorphisms
14:30
SNIPE for memory-limited PCA with incomplete data: From failure to success
Abstract
Consider the problem of identifying an unknown subspace S from data with erasures and with limited memory available. To estimate S, suppose we group the measurements into blocks and iteratively update our estimate of S with each new block.
In the first part of this talk, we will discuss why estimating S by computing the "running average" of span of these blocks fails in general. Based on the lessons learned, we then propose SNIPE for memory-limited PCA with incomplete data, useful also for streaming data applications. SNIPE provably converges (linearly) to the true subspace, in the absence of noise and given sufficient measurements, and shows excellent performance in simulations. This is joint work with Laura Balzano and Mike Wakin.
14:30
Solving commutators while preserving structure
Abstract
Nested commutators of differential operators appear frequently in the numerical solution of equations of quantum mechanics. These are expensive to compute with, and significant effort is typically made to avoid such commutators. In the case of Magnus-Lanczos methods, which remain the standard approach for solving Schrödinger equations featuring time-varying potentials, however, it is not possible to avoid the nested commutators appearing in the Magnus expansion.
We show that, when working directly with the undiscretised differential operators, these commutators can be simplified and are fairly benign, cost-wise. The caveat is that this direct approach compromises structure -- we end up with differential operators that are no longer skew-Hermitian under discretisation. This leads to a loss of unitarity and to numerical instability when moderate to large time steps are involved. Instead, we resort to working with symmetrised differential operators whose discretisation naturally results in preservation of structure, conservation of unitarity, and stability.
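The structural point can be seen in a small finite-difference experiment (an illustration, not the operators of the talk): on a periodic grid the naive discretisation of V(x) d/dx is not skew-symmetric, but the symmetrised operator (1/2)(V d/dx + d/dx V) is.

```python
import numpy as np

# Periodic central-difference matrix D (skew-symmetric by construction)
n = 64
h = 2 * np.pi / n
x = h * np.arange(n)
D = np.zeros((n, n))
for i in range(n):
    D[i, (i + 1) % n] = 1.0 / (2 * h)
    D[i, (i - 1) % n] = -1.0 / (2 * h)

V = np.diag(1.0 + 0.3 * np.sin(x))     # variable coefficient V(x)
naive = V @ D                           # discretises V(x) d/dx
sym = 0.5 * (V @ D + D @ V)             # discretises (1/2)(V d/dx + d/dx V)

skew_naive = np.linalg.norm(naive + naive.T)   # 0 iff skew-symmetric
skew_sym = np.linalg.norm(sym + sym.T)
print(skew_naive, skew_sym)
```

Since sym is exactly skew-symmetric, its exponential is orthogonal, so time-stepping with it preserves norms; the naive operator offers no such guarantee.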
14:30
Finding interesting patterns using submodular function optimization
14:00
Matrix iteration for a Helmholtz problem based on Faber polynomials
14:00
Finite element approximation of implicitly constituted incompressible flow models
14:30
Multi-index methods for quadrature
Abstract
Multi-index methods are a generalization of multilevel methods to high-dimensional problems and are based on taking mixed first-order differences along all dimensions. With these methods, we can accurately and efficiently compute a quadrature or construct an interpolation where the integrand requires some form of high-dimensional discretization. Multi-index methods are related to Sparse Grid methods and the Combination Technique, and have been applied to multiple sampling methods, including Monte Carlo, Stochastic Collocation and, more recently, Quasi Monte Carlo.
In this talk, we describe and analyse the Multi-Index Monte Carlo (MIMC) and Multi-Index Stochastic Collocation (MISC) methods for computing statistics of the solution of a PDE with random data. Provided sufficient mixed regularity, MIMC and MISC achieve better complexity than their corresponding multilevel methods. We propose optimization procedures to select the most effective mixed differences to include in these multi-index methods. We also observe that in the optimal case, the convergence rate of MIMC and MISC is only dictated by the convergence of the deterministic solver applied to a one-dimensional spatial problem. We finally show the effectiveness of MIMC and MISC in some computational tests, including PDEs with random coefficients and Stochastic Particle Systems.
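The mixed-difference construction can be sketched for a plain 2D quadrature (the combination-technique flavour of the idea, not MIMC/MISC themselves; the integrand and reference value are invented for illustration):

```python
import numpy as np

f = lambda x, y: np.exp(x * y)

def Q(l1, l2):
    # Tensor trapezoidal rule with 2^l + 1 points per direction
    x = np.linspace(0, 1, 2**l1 + 1)
    y = np.linspace(0, 1, 2**l2 + 1)
    wx = np.full(x.size, 1.0 / 2**l1); wx[[0, -1]] *= 0.5
    wy = np.full(y.size, 1.0 / 2**l2); wy[[0, -1]] *= 0.5
    X, Y = np.meshgrid(x, y, indexing="ij")
    return wx @ f(X, Y) @ wy

def mixed_difference(l1, l2):
    # First-order mixed difference of Q along both directions
    d = Q(l1, l2)
    if l1 > 0: d -= Q(l1 - 1, l2)
    if l2 > 0: d -= Q(l1, l2 - 1)
    if l1 > 0 and l2 > 0: d += Q(l1 - 1, l2 - 1)
    return d

# Telescoping sum over a triangular index set: a sparse-grid quadrature
L = 6
sparse = sum(mixed_difference(l1, l2)
             for l1 in range(L + 1) for l2 in range(L + 1) if l1 + l2 <= L)
exact = 1.3179021515   # int_0^1 int_0^1 exp(x*y) dx dy, to 10 digits
print(abs(sparse - exact))
```

Because the integrand has mixed regularity, the mixed differences decay in both indices simultaneously, so the triangular index set captures nearly all of the accuracy of the full tensor rule at a fraction of the cost.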
14:00
ODE IVPs and BVPs
Abstract
I will discuss some of the relationships between ODE IVPs, usually solved by marching, and ODE BVPs, usually solved by global discretizations.
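As a tiny concrete example of the two viewpoints (my own toy sketch, not the speaker's material): the BVP u'' = -u, u(0) = 0, u(1) = 1 solved both by marching (shooting with RK4) and by a global finite-difference discretization.

```python
import numpy as np

exact = lambda x: np.sin(x) / np.sin(1.0)   # exact solution of the BVP

# --- Marching: RK4 on the IVP u(0) = 0, u'(0) = s; the problem is linear
# in s, so a single scaling replaces the usual root-find.
def march(s, n=1000):
    h = 1.0 / n
    y = np.array([0.0, s])
    rhs = lambda y: np.array([y[1], -y[0]])
    for _ in range(n):
        k1 = rhs(y); k2 = rhs(y + h/2*k1)
        k3 = rhs(y + h/2*k2); k4 = rhs(y + h*k3)
        y = y + h/6 * (k1 + 2*k2 + 2*k3 + k4)
    return y[0]

s = 1.0 / march(1.0)           # scale the initial slope to hit u(1) = 1
u1_shoot = march(s)

# --- Global: second-order finite differences on n interior points
n = 200
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
A = (np.diag(-2 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / h**2 + np.eye(n)
b = np.zeros(n); b[-1] = -1.0 / h**2       # boundary value u(1) = 1
u = np.linalg.solve(A, b)
err_global = np.max(np.abs(u - exact(x)))
print(u1_shoot, err_global)
```

Marching reduces the BVP to IVP solves plus a (here trivial) root-find for the missing initial slope, while the global method couples all unknowns in one linear system; the trade-offs between these two strategies are the subject of the talk.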
14:00
Consistent piecewise polynomial approximation of Sobolev space H^m in R^n
14:30
Computing Stieltjes and log transforms of functions with algebraic endpoint singularities