REST: Enhancing Group Robustness in DNNs Through Reweighted Sparse Training
Zhao, J.; Yin, L.; Liu, S.; Fang, M.; Pechenizkiy, M. pp. 313-329 (17 Sep 2023)

Revisiting Pruning at Initialization Through the Lens of Ramanujan Graph
Hoang, D.; Liu, S.; Marculescu, R.; Wang, Z. 11th International Conference on Learning Representations, ICLR 2023 (01 Jan 2023)

Dynamic Sparsity Is Channel-Level Sparsity Learner
Yin, L.; Li, G.; Fang, M.; Shen, L.; Huang, T.; Wang, Z.; Menkovski, V.; Ma, X.; Pechenizkiy, M.; Liu, S. Advances in Neural Information Processing Systems, vol. 36 (01 Jan 2023)

The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter
Jaiswal, A.; Liu, S.; Chen, T.; Wang, Z. Advances in Neural Information Processing Systems, vol. 36 (01 Jan 2023)

Don't Just Prune by Magnitude! Your Mask Topology is Another Secret Weapon
Hoang, D.; Kundu, S.; Liu, S.; Wang, Z. Advances in Neural Information Processing Systems, vol. 36 (01 Jan 2023)

Towards Data-Agnostic Pruning at Initialization: What Makes a Good Sparse Mask?
Pham, H.; Ta, T.; Liu, S.; Xiang, L.; Le, D.; Wen, H.; Tran-Thanh, L. Advances in Neural Information Processing Systems, vol. 36 (01 Jan 2023)

More ConvNets in the 2020s: Scaling Up Kernels Beyond 51 × 51 Using Sparsity
Liu, S.; Chen, T.; Chen, X.; Xiao, Q.; Wu, B.; Kärkkäinen, T.; Pechenizkiy, M.; Mocanu, D.; Wang, Z. 11th International Conference on Learning Representations, ICLR 2023 (01 Jan 2023)

Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers
Chen, T.; Zhang, Z.; Jaiswal, A.; Liu, S.; Wang, Z. 11th International Conference on Learning Representations, ICLR 2023 (01 Jan 2023)

Supervised Feature Selection with Neuron Evolution in Sparse Neural Networks
Atashgahi, Z.; Zhang, X.; Kichler, N.; Liu, S.; Yin, L.; Pechenizkiy, M.; Veldhuis, R.; Mocanu, D. Transactions on Machine Learning Research, vol. 2023-February (01 Feb 2023)