Lottery Pools: Winning More by Interpolating Tickets without Increasing Training or Inference Cost
Yin, L; Liu, S; Fang, M; Huang, T; Menkovski, V; Pechenizkiy, M. Proceedings of the AAAI Conference on Artificial Intelligence, volume 37, issue 9, pp. 10945-10953 (26 Jun 2023)
Are Large Kernels Better Teachers than Transformers for ConvNets?
Huang, T; Yin, L; Zhang, Z; Shen, L; Fang, M; Pechenizkiy, M; Wang, Z; Liu, S. Proceedings of Machine Learning Research, volume 202, pp. 14023-14038 (01 Jan 2023)
Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models
Jaiswal, A; Liu, S; Chen, T; Ding, Y; Wang, Z. Proceedings of Machine Learning Research, volume 202, pp. 14691-14701 (01 Jan 2023)
Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication
Jaiswal, A; Liu, S; Chen, T; Ding, Y; Wang, Z. Proceedings of Machine Learning Research, volume 202, pp. 14679-14690 (01 Jan 2023)
Enhancing Adversarial Training via Reweighting Optimization Trajectory
Huang, T; Liu, S; Chen, T; Fang, M; Shen, L; Menkovski, V; Yin, L; Pei, Y; Pechenizkiy, M. pp. 113-130 (17 Sep 2023)
REST: Enhancing Group Robustness in DNNs Through Reweighted Sparse Training
Zhao, J; Yin, L; Liu, S; Fang, M; Pechenizkiy, M. pp. 313-329 (17 Sep 2023)
Revisiting Pruning at Initialization Through the Lens of Ramanujan Graph
Hoang, D; Liu, S; Marculescu, R; Wang, Z. 11th International Conference on Learning Representations, ICLR 2023 (01 Jan 2023)
Dynamic Sparsity Is Channel-Level Sparsity Learner
Yin, L; Li, G; Fang, M; Shen, L; Huang, T; Wang, Z; Menkovski, V; Ma, X; Pechenizkiy, M; Liu, S. Advances in Neural Information Processing Systems, volume 36 (01 Jan 2023)
The Emergence of Essential Sparsity in Large Pre-trained Models: The Weights that Matter
Jaiswal, A; Liu, S; Chen, T; Wang, Z. Advances in Neural Information Processing Systems, volume 36 (01 Jan 2023)
Don't Just Prune by Magnitude! Your Mask Topology is Another Secret Weapon
Hoang, D; Kundu, S; Liu, S; Wang, Z. Advances in Neural Information Processing Systems, volume 36 (01 Jan 2023)