The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter
Jaiswal, A., Liu, S., Chen, T., Wang, Z. Advances in Neural Information Processing Systems, volume 36 (2023)
Don't Just Prune by Magnitude! Your Mask Topology is Another Secret Weapon
Hoang, D., Kundu, S., Liu, S., Wang, Z. Advances in Neural Information Processing Systems, volume 36 (2023)
Towards Data-Agnostic Pruning At Initialization: What Makes a Good Sparse Mask?
Pham, H., Ta, T., Liu, S., Xiang, L., Le, D., Wen, H., Tran-Thanh, L. Advances in Neural Information Processing Systems, volume 36 (2023)
More ConvNets in the 2020s: Scaling Up Kernels Beyond 51 × 51 Using Sparsity
Liu, S., Chen, T., Chen, X., Xiao, Q., Wu, B., Kärkkäinen, T., Pechenizkiy, M., Mocanu, D., Wang, Z. 11th International Conference on Learning Representations, ICLR 2023 (2023)
Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers
Chen, T., Zhang, Z., Jaiswal, A., Liu, S., Wang, Z. 11th International Conference on Learning Representations, ICLR 2023 (2023)
Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks
Atashgahi, Z., Zhang, X., Kichler, N., Liu, S., Yin, L., Pechenizkiy, M., Veldhuis, R., Mocanu, D. Transactions on Machine Learning Research (February 2023)
K-theory of fine topological algebras, Chern character, and assembly
Tillmann, U. K-Theory, volume 6, issue 1, 57-86 (1992)
Factorization of the (relative) Chern character through Lie algebra homology
Tillmann, U. K-Theory, volume 6, issue 5, 457-463 (1992)
Relation of the van Est spectral sequence to K-theory and cyclic homology
Tillmann, U. Illinois Journal of Mathematics, volume 37, issue 4, 589-608 (1993)
Fri, 06 Jun 2025
16:00
C3

Sharp mixed moment bounds for zeta times a Dirichlet L-function

Markus Valås Hagen
(NTNU)
Abstract

A famous theorem of Selberg asserts that $\log|\zeta(\tfrac12+it)|$ is approximately normally distributed with mean $0$ and variance $\tfrac12\log\log T$, when we sample $t\in [T,2T]$ uniformly. This extends in a natural way to a plethora of other $L$-functions, one of them being Dirichlet $L$-functions $L(s,\chi)$ with $\chi$ a primitive Dirichlet character. Viewing $\zeta(\tfrac12+it)$ and $L(\tfrac12+it,\chi)$ as normal variables, we expect independence between them, meaning that for fixed $V_1,V_2 \in \mathbb{R}$: $$\frac{1}{T}\,\textrm{meas}_{t \in [T,2T]} \left\{\frac{\log|\zeta(\tfrac12+it)|}{\sqrt{\tfrac12 \log\log T}}\geq V_1 \text{   and   } \frac{\log|L(\tfrac12+it,\chi)|}{\sqrt{\tfrac12 \log\log T}}\geq V_2\right\} \sim \prod_{j=1}^2 \int_{V_j}^\infty e^{-x^2/2} \frac{\textrm{d}x}{\sqrt{2\pi}}.$$
When $V_j\asymp \sqrt{\log\log T}$, i.e. when we consider values of the order of the variance, the asymptotic above breaks down, but the Gaussian behaviour is still believed to hold up to order of magnitude. For such $V_j$ the behaviour of the joint distribution is decided by the moments $$I_{k,\ell}(T)=\int_T^{2T} |\zeta(\tfrac12+it)|^{2k}|L(\tfrac12+it,\chi)|^{2\ell}\, dt.$$ We establish that $I_{k,\ell}(T)\asymp T(\log T)^{k^2+\ell^2}$ for $0<k,\ell \leq 1$. The lower bound holds for all $k,\ell >0$. This allows us to decide the order of the joint distribution when $V_j =\alpha_j\sqrt{\log\log T}$ for $\alpha_j \in (0,\sqrt{2}]$. Other corollaries include sharp moment bounds for Dedekind zeta functions of quadratic number fields, and for Hurwitz zeta functions with rational parameter.
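Selberg's central limit theorem, quoted in the first paragraph of the abstract, can be illustrated numerically at modest height. The sketch below (not part of the talk; the height $T$, sample size, and Euler-Maclaurin truncation `N` are illustrative choices) samples $\log|\zeta(\tfrac12+it)|$ for $t$ uniform in $[T,2T]$ and compares the empirical mean and standard deviation with the Gaussian prediction $0$ and $\sqrt{\tfrac12\log\log T}$; at such small $T$ the agreement is only rough, since convergence in Selberg's theorem is slow.

```python
import math
import random

def zeta_half_line(t, N=4000):
    """Approximate zeta(1/2 + it) by truncated Dirichlet series plus
    first Euler-Maclaurin corrections; accurate when N >> |t|."""
    s = complex(0.5, t)
    total = sum(n ** (-s) for n in range(1, N + 1))
    total += N ** (1 - s) / (s - 1)    # integral tail
    total -= 0.5 * N ** (-s)           # boundary term
    total += s * N ** (-s - 1) / 12.0  # first Bernoulli correction
    return total

random.seed(0)
T = 100.0
samples = [math.log(abs(zeta_half_line(random.uniform(T, 2 * T))))
           for _ in range(200)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
predicted_sd = math.sqrt(0.5 * math.log(math.log(T)))
print(f"sample mean {mean:+.3f}  (Selberg predicts 0)")
print(f"sample sd   {math.sqrt(var):.3f}  (Selberg predicts {predicted_sd:.3f})")
```

The same experiment with a Dirichlet $L$-function in place of $\zeta$, and the product of the two indicator events, would illustrate the independence heuristic behind the displayed measure asymptotic.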
