Tue, 06 Feb 2024

14:30 - 15:00
L6

Computing $H^2$-conforming finite element approximations without having to implement $C^1$-elements

Charlie Parker
(Mathematical Institute (University of Oxford))
Abstract

Fourth-order elliptic problems arise in a variety of applications from thin plates to phase separation to liquid crystals. A conforming Galerkin discretization requires a finite dimensional subspace of $H^2$, which in turn means that conforming finite element subspaces are $C^1$-continuous. In contrast to standard $H^1$-conforming $C^0$ elements, $C^1$ elements, particularly those of high order, are less understood from a theoretical perspective and are not implemented in many existing finite element codes. In this talk, we address the implementation of these elements. In particular, we present algorithms that compute $C^1$ finite element approximations to fourth-order elliptic problems and which only require elements with at most $C^0$-continuity. We also discuss solvers for the resulting subproblems and illustrate the method on a number of representative test problems.

Tue, 06 Feb 2024

14:00 - 14:30
L6

Fast High-Order Finite Element Solvers on Simplices

Pablo Brubeck Martinez
(Mathematical Institute (University of Oxford))
Abstract

We present new high-order finite elements discretizing the $L^2$ de Rham complex on triangular and tetrahedral meshes. The finite elements discretize the same spaces as usual, but with different basis functions. They allow for fast linear solvers based on static condensation and space decomposition methods.

The new elements build upon the definition of degrees of freedom given by Demkowicz et al. (De Rham diagram for $hp$ finite element spaces, Comput. Math. Appl., 39(7-8):29-38, 2000), and consist of integral moments on a symmetric reference simplex with respect to a numerically computed polynomial basis that is orthogonal in both the $L^2$- and $H(\mathrm{d})$-inner products ($\mathrm{d} \in \{\mathrm{grad}, \mathrm{curl}, \mathrm{div}\}$).

On the reference symmetric simplex, the resulting stiffness matrix has a diagonal interior block and does not couple the interior and interface degrees of freedom. Thus, on the reference simplex, the Schur complement resulting from the elimination of the interior degrees of freedom is simply the interface block itself.

This sparsity is not preserved on arbitrary cells mapped from the reference cell. Nevertheless, the interior-interface coupling is weak because it is only induced by the geometric transformation. We devise a preconditioning strategy by neglecting the interior-interface coupling. We precondition the interface Schur complement with the interface block, and simply apply point-Jacobi to precondition the interior block.

The combination of this approach with a space decomposition method on small subdomains constructed around vertices, edges, and faces allows us to efficiently solve the canonical Riesz maps in $H^1$, $H(\mathrm{curl})$, and $H(\mathrm{div})$, at very high order. We empirically demonstrate iteration counts that are robust with respect to the polynomial degree.
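As a rough illustration of the static condensation step described above, the following NumPy sketch (a generic illustration with a random symmetric positive definite stand-in for an element stiffness matrix, not the authors' implementation) eliminates the interior unknowns and solves the resulting Schur complement system for the interface unknowns:

```python
import numpy as np

# Generic sketch of static condensation.  Order the unknowns as
# (interface, interior) and partition the matrix as
#   A = [[A_GG, A_GI],
#        [A_IG, A_II]].
rng = np.random.default_rng(0)
ng, ni = 4, 6                            # interface / interior block sizes
M = rng.standard_normal((ng + ni, ng + ni))
A = M @ M.T + (ng + ni) * np.eye(ng + ni)  # random SPD stand-in matrix

A_GG, A_GI = A[:ng, :ng], A[:ng, ng:]
A_IG, A_II = A[ng:, :ng], A[ng:, ng:]
b = rng.standard_normal(ng + ni)
b_G, b_I = b[:ng], b[ng:]

# Eliminate the interior unknowns: solve the Schur complement system
#   S u_G = b_G - A_GI A_II^{-1} b_I,  with  S = A_GG - A_GI A_II^{-1} A_IG,
# then back-substitute for the interior unknowns.
S = A_GG - A_GI @ np.linalg.solve(A_II, A_IG)
u_G = np.linalg.solve(S, b_G - A_GI @ np.linalg.solve(A_II, b_I))
u_I = np.linalg.solve(A_II, b_I - A_IG @ u_G)

# The condensed solve reproduces the direct solve of the full system.
assert np.allclose(np.concatenate([u_G, u_I]), np.linalg.solve(A, b))
```

In the setting of the abstract, on the reference simplex A_II would additionally be diagonal and the coupling blocks A_GI, A_IG would vanish, so S would reduce to the interface block A_GG itself.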

Fri, 08 Mar 2024
16:00
L1

Maths meets Stats

James Taylor (Mathematical Institute) and Anthony Webster (Department of Statistics)
Abstract

Speaker: James Taylor
Title: D-Modules and p-adic Representations

Abstract: The representation theory of finite groups is a beautiful and well-understood subject. However, when one considers more complicated groups things become more interesting, and to classify their representations is often a much harder problem. In this talk, I will introduce the classical theory, the particular groups I am interested in, and explain how one might hope to understand their representations through the use of D-modules - the algebraic incarnation of differential equations.

 

Speaker: Anthony Webster
Title: An Introduction to Epidemiology and Causal Inference

Abstract: This talk will introduce epidemiology and causal inference from the perspective of a statistician and former theoretical physicist. Despite their studies being underpinned by deep and often complex mathematics, epidemiologists are generally more concerned with seemingly mundane information about the relationships between potential risk factors and disease. Because of this, I will argue that a good epidemiologist with minimal statistical knowledge will often do better than a highly trained statistician. I will also argue that causal assumptions are a necessary part of epidemiology, should be made more explicit, and allow a much wider range of causal inferences to be explored. In the process, I will introduce ideas from epidemiology and causal inference such as Mendelian Randomisation and the "do calculus", methodological approaches that will increasingly underpin data-driven population research.

Fri, 26 Jan 2024
16:00
L1

North meets South

Dr Cedric Pilatte (North Wing) and Dr Boris Shustin (South Wing)
Abstract

Speaker: Cedric Pilatte 
Title: Convolution of integer sets: a galaxy of (mostly) open problems

Abstract: Let $S$ be a set of integers. Define $f_k(n)$ to be the number of representations of $n$ as the sum of $k$ elements from $S$. Behind this simple definition lie fascinating conjectures that are very easy to state but seem unattackable. For example, a famous conjecture of Erdős and Turán predicts that if $f_2$ is bounded then it has infinitely many zeroes. This talk is designed as an accessible overview of these questions.
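For a finite truncation of S, the representation counts are easy to compute directly. The following Python sketch (my own illustration, not from the talk; it counts ordered representations) evaluates f_2 for the squares up to 100:

```python
from itertools import product

# For a finite set S, count the ordered representations of each n <= n_max
# as a sum of k elements of S, i.e. a truncated version of f_k(n).
def representation_counts(S, k, n_max):
    counts = [0] * (n_max + 1)
    for combo in product(S, repeat=k):
        s = sum(combo)
        if s <= n_max:
            counts[s] += 1
    return counts

# Example: S = the squares up to 100, k = 2.
squares = [i * i for i in range(11)]
f2 = representation_counts(squares, 2, 50)
# 25 = 0+25 = 25+0 = 9+16 = 16+9, so f_2(25) = 4 as ordered sums.
assert f2[25] == 4
```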
 
Speaker: Boris Shustin

Title: Manifold-Free Riemannian Optimization

Abstract: Optimization problems constrained to a smooth manifold can be solved via the framework of Riemannian optimization. To that end, a geometrical description of the constraining manifold, e.g., tangent spaces, retractions, and cost function gradients, is required. In this talk, we present a novel approach that enables approximate Riemannian optimization based on a manifold learning technique, in cases where only a noiseless sample set of the cost function and the manifold’s intrinsic dimension are available.
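For context, the classical geometry-aware setting that the manifold-free approach seeks to relax can be sketched as Riemannian gradient descent on the unit sphere (a standard textbook example, not the speaker's method), using tangent-space projection and a normalization retraction:

```python
import numpy as np

# Textbook example: Riemannian gradient descent on the unit sphere for the
# Rayleigh quotient f(x) = x^T A x, whose minimum over the sphere is the
# smallest eigenvalue of A, attained at the corresponding eigenvector.
A = np.diag([1.0, 2.0, 3.0, 4.0, 5.0])  # symmetric matrix with known spectrum
x = np.ones(5) / np.sqrt(5.0)           # starting point on the sphere

for _ in range(500):
    egrad = 2.0 * A @ x                 # Euclidean gradient of x^T A x
    rgrad = egrad - (x @ egrad) * x     # project onto the tangent space at x
    x = x - 0.05 * rgrad                # gradient step
    x = x / np.linalg.norm(x)           # retraction: renormalize onto the sphere

# The iterates converge to the eigenvector of the smallest eigenvalue (1.0).
assert abs(x @ A @ x - 1.0) < 1e-8
```

The two sphere-specific ingredients here, the tangent projection and the renormalization retraction, are exactly the kind of geometric data that the manifold-free setting assumes is unavailable.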

Tue, 07 May 2024

14:00 - 14:30
L3

The Approximation of Singular Functions by Series of Non-integer Powers

Mohan Zhao
(University of Toronto)
Abstract
In this talk, we describe an algorithm for approximating functions of the form $f(x) = \langle \sigma(\mu),x^\mu \rangle$ over the interval $[0,1]$, where $\sigma(\mu)$ is some distribution supported on $[a,b]$, with $0<a<b<\infty$. Given a desired accuracy and the values of $a$ and $b$, our method determines a priori a collection of non-integer powers, so that functions of this form are approximated by expansions in these powers, and a set of collocation points, such that the expansion coefficients can be found by collocating a given function at these points. Our method has a small uniform approximation error which is proportional to the desired accuracy multiplied by some small constants, and the number of singular powers and collocation points grows logarithmically with the desired accuracy. This method has applications to the solution of partial differential equations on domains with corners.
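A drastically simplified version of this idea can be sketched in a few lines (my own toy example, not the algorithm from the talk: the powers and collocation points below are ad hoc choices rather than the a priori constructions the abstract describes). It fits an expansion in fixed non-integer powers by least squares at collocation points:

```python
import numpy as np

# Toy sketch: approximate f(x) = sqrt(x) on [0, 1] by an expansion in fixed
# non-integer powers x^mu_j with mu_j in [a, b] = [0.3, 0.8], solving for the
# coefficients by least squares at geometrically spaced collocation points.
a, b = 0.3, 0.8
mus = np.linspace(a, b, 8)             # ad hoc choice of non-integer powers
x = np.geomspace(1e-6, 1.0, 60)        # ad hoc collocation points
V = x[:, None] ** mus[None, :]         # generalized Vandermonde matrix

target = np.sqrt(x)                    # f(x) = x^0.5, with 0.5 inside [a, b]
coeffs, *_ = np.linalg.lstsq(V, target, rcond=None)

# Check the uniform error of the expansion on a finer grid.
xf = np.geomspace(1e-6, 1.0, 400)
err = np.max(np.abs(xf[:, None] ** mus[None, :] @ coeffs - np.sqrt(xf)))
assert err < 1e-2
```

The real method differs in the essential points: the powers and collocation points are determined a priori from the accuracy and the interval $[a,b]$, with rigorous control of the uniform error.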
Fri, 08 Mar 2024

15:00 - 16:00
L6

Topological Perspectives to Characterizing Generalization in Deep Neural Networks

Tolga Birdal
(Imperial College London)
Further Information

Dr. Tolga Birdal is an Assistant Professor in the Department of Computing at Imperial College London, with prior experience as a Senior Postdoctoral Research Fellow at Stanford University in Prof. Leonidas Guibas's Geometric Computing Group. Tolga defended his master's and Ph.D. theses in the Computer Vision Group of the Chair for Computer Aided Medical Procedures at the Technical University of Munich, led by Prof. Nassir Navab. He was also a Doktorand at Siemens AG under the supervision of Dr. Slobodan Ilic, working on “Geometric Methods for 3D Reconstruction from Large Point Clouds”. His research interests center on geometric machine learning and 3D computer vision, with a theoretical focus on exploring the boundaries of geometric computing, non-Euclidean inference, and the foundations of deep learning. Dr. Birdal has published extensively in leading academic journals and conference proceedings, including NeurIPS, CVPR, ICLR, ICCV, ECCV, T-PAMI, and IJCV. Aside from his academic life, Tolga has co-founded multiple companies, including Befunky, a widely used web-based image editing platform.

Abstract

Training deep learning models involves searching for a good model over the space of possible architectures and their parameters. Discovering models that exhibit robust generalization to unseen data and tasks is of paramount importance for accurate and reliable machine learning. Generalization, a hallmark of model efficacy, is conventionally gauged by a model's performance on data beyond its training set. Yet, the reliance on vast training datasets raises a pivotal question: how can deep learning models transcend the notorious hurdle of 'memorization' to generalize effectively? Is it feasible to assess and guarantee the generalization prowess of deep neural networks in advance of empirical testing, and notably, without any recourse to test data? This inquiry is not merely theoretical; it underpins the practical utility of deep learning across myriad applications. In this talk, I will show that scrutinizing the training dynamics of neural networks through the lens of topology, specifically using the 'persistent-homology dimension', leads to novel bounds on the generalization gap and can help demystify the inner workings of neural networks. Our work bridges deep learning with the abstract realms of topology and learning theory, while relating to information theory through compression.

Thu, 01 Feb 2024
16:00
L3

Some mathematical results on generative diffusion models

Dr Renyuan Xu
(University of Southern California)
Further Information

Join us for refreshments from 3:30 outside L3.

Abstract

Diffusion models, which transform noise into new data instances by reversing a Markov diffusion process, have become a cornerstone in modern generative models. A key component of these models is to learn the score function through score matching. While the practical power of diffusion models has now been widely recognized, the theoretical developments remain far from mature. Notably, it remains unclear whether gradient-based algorithms can learn the score function with a provable accuracy. In this talk, we develop a suite of non-asymptotic theory towards understanding the data generation process of diffusion models and the accuracy of score estimation. Our analysis covers both the optimization and the generalization aspects of the learning procedure, which also builds a novel connection to supervised learning and neural tangent kernels.

This is based on joint work with Yinbin Han and Meisam Razaviyayn (USC).
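The score-matching step at the heart of the abstract can be illustrated in the simplest possible case (a toy example of my own, with 1D Gaussian data and a linear score model in place of a neural network): denoising score matching regresses a noise-based target on corrupted samples, and the fit recovers the score of the noised distribution.

```python
import numpy as np

# Toy illustration of denoising score matching for 1D Gaussian data
# N(m, s^2).  After adding noise of level t, the noised density is
# N(m, s^2 + t^2), whose score is linear:
#   score_t(x) = -(x - m) / (s^2 + t^2).
# Denoising score matching regresses the target -eps / t^2 on the corrupted
# samples; with a linear score model this is ordinary least squares.
rng = np.random.default_rng(0)
m, s, t = 2.0, 1.0, 0.5
x0 = m + s * rng.standard_normal(100_000)   # clean samples
eps = t * rng.standard_normal(100_000)      # perturbation noise
xt = x0 + eps                               # corrupted samples

X = np.stack([xt, np.ones_like(xt)], axis=1)
(a_hat, b_hat), *_ = np.linalg.lstsq(X, -eps / t**2, rcond=None)

# The fit recovers the true linear score of N(m, s^2 + t^2):
# slope -1/(s^2 + t^2) = -0.8 and intercept m/(s^2 + t^2) = 1.6.
var_t = s**2 + t**2
assert abs(a_hat - (-1.0 / var_t)) < 0.05
assert abs(b_hat - m / var_t) < 0.1
```

In the talk's setting the linear model is replaced by a neural network, and the analysis concerns whether gradient-based training provably learns the score to a quantifiable accuracy.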
