The effect of the definition of ‘pandemic’ on quantitative assessments of infectious disease outbreak risk
Bonsall, M; Singer, B; Thompson, R. Scientific Reports, volume 11 (28 Jan 2021)

We are very sad to hear the news of the death of Peter Neumann earlier today. Peter was the son of the mathematicians Bernhard Neumann and Hanna Neumann and, after gaining a B.A. from The Queen's College, Oxford in 1963, obtained his D.Phil from Oxford University in 1966.

Follow-up of astrophysical transients in real time with the IceCube Neutrino Observatory
Abbasi, R; Ackermann, M; Adams, J; et al. (IceCube Collaboration). The Astrophysical Journal. http://arxiv.org/abs/2012.04577v1
Practical considerations for measuring the effective reproductive number, Rt
Gostic, K; McGough, L; Baskerville, E; Abbott, S; Joshi, K; Tedijanto, C; Kahn, R; Niehus, R; Hay, J; De Salazar, P; Hellewell, J; Meakin, S; Munday, J; Bosse, N; Sherratt, K; Thompson, R; White, L; Huisman, J; Scire, J; Bonhoeffer, S; Stadler, T; Wallinga, J; Funk, S; Lipsitch, M; Cobey, S. PLoS Computational Biology, volume 16, issue 12 (10 Dec 2020)
A Randomized Algorithm to Reduce the Support of Discrete Measures
Cosentino, F; Oberhauser, H; Abate, A. NeurIPS (01 Jan 2020)
Tue, 23 Feb 2021
14:00
Virtual

Dense for the price of sparse: Initialising deep nets with efficient sparse affine transforms

Ilan Price
(Mathematical Institute)
Abstract

That neural networks may be pruned to high sparsities and retain high accuracy is well established. Recent research efforts focus on pruning immediately after initialization, so that the computational savings afforded by sparsity extend to the training process. In this work, we introduce a new 'DCT plus Sparse' layer architecture, which maintains information propagation and trainability even with as little as 0.01% of the kernel parameters remaining trainable. We show that standard training of networks built with these layers, and pruned at initialization, achieves state-of-the-art accuracy for extreme sparsities on a variety of benchmark network architectures and datasets. Moreover, these results are achieved using only simple heuristics to determine the locations of the trainable parameters in the network, and thus without having to initially store or compute with the full, unpruned network, as is required by competing prune-at-initialization algorithms. Switching from standard sparse layers to DCT plus Sparse layers does not increase the storage footprint of a network and incurs only a small additional computational overhead.
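To make the construction concrete, here is a minimal sketch in PyTorch of one way such a layer could look: the weight matrix is a fixed, non-trainable DCT transform plus a sparse set of trainable entries, so the dense transform preserves information propagation while only the sparse entries are learned. The class name, the uniform-random placement of trainable entries, and the dense materialisation of the sparse part are illustrative assumptions, not the authors' implementation.

# Illustrative sketch only (assumptions noted above), not the authors' code.
import torch
import torch.nn as nn
from scipy.fft import dct  # used to build the fixed DCT matrix

class DCTPlusSparseLinear(nn.Module):
    def __init__(self, n: int, density: float = 1e-4):
        super().__init__()
        # Fixed orthonormal DCT matrix, stored as a non-trainable buffer.
        dct_mat = dct(torch.eye(n).numpy(), norm="ortho", axis=0)
        self.register_buffer("dct", torch.tensor(dct_mat, dtype=torch.float32))
        # Simple heuristic (an assumption here): pick trainable locations
        # uniformly at random, without ever building a dense trainable weight.
        num_train = max(1, int(density * n * n))
        idx = torch.randperm(n * n)[:num_train]
        self.register_buffer("rows", idx // n)
        self.register_buffer("cols", idx % n)
        # Only these entries are learned; zero-initialised, so the layer
        # starts out as a pure DCT.
        self.values = nn.Parameter(torch.zeros(num_train))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight = fixed DCT + sparse trainable correction.
        # (Materialised densely here for clarity; a real implementation
        # would use sparse kernels and a fast transform.)
        sparse = torch.zeros_like(self.dct)
        sparse[self.rows, self.cols] = self.values
        return x @ (self.dct + sparse).T

# Example: a 256-unit layer with roughly 0.01% of its entries trainable.
layer = DCTPlusSparseLinear(256, density=1e-4)
out = layer(torch.randn(8, 256))

In a full implementation the fixed DCT would be applied with a fast O(n log n) transform rather than stored as a dense matrix, which is consistent with the abstract's claim that the storage footprint does not grow.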

--

A link for this talk will be sent to our mailing list a day or two in advance.  If you are not on the list and wish to be sent a link, please contact @email.
