Don’t Be So Dense: Sparse-to-Sparse GAN Training Without Sacrificing Performance
Liu, S., Tian, Y., Chen, T., Shen, L. International Journal of Computer Vision, volume 131, issue 10, 2635-2648 (01 Oct 2023)
Dynamic Sparse Network for Time Series Classification: Learning What to “See”
Xiao, Q., Wu, B., Zhang, Y., Liu, S., Pechenizkiy, M., Mocanu, E., Mocanu, D. Advances in Neural Information Processing Systems, volume 35 (01 Jan 2022)
Superposing Many Tickets into One: A Performance Booster for Sparse Neural Network Training
Yin, L., Menkovski, V., Fang, M., Huang, T., Pei, Y., Pechenizkiy, M., Mocanu, D., Liu, S. Proceedings of Machine Learning Research, volume 180, 2267-2277 (01 Jan 2022)
You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets
Huang, T., Chen, T., Fang, M., Menkovski, V., Zhao, J., Yin, L., Pei, Y., Mocanu, D., Wang, Z., Pechenizkiy, M., Liu, S. Proceedings of Machine Learning Research, volume 198 (01 Jan 2022)
Many-Task Federated Learning: A New Problem Setting and A Simple Baseline
Cai, R., Chen, X., Liu, S., Srinivasa, J., Lee, M., Kompella, R., Wang, Z. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, volume 2023-June, 5037-5045 (01 Jan 2023)
Data Augmented Flatness-aware Gradient Projection for Continual Learning
Yang, E., Shen, L., Wang, Z., Liu, S., Guo, G., Wang, X. Proceedings of the IEEE International Conference on Computer Vision, 5607-5616 (01 Jan 2023)
Lottery Pools: Winning More by Interpolating Tickets without Increasing Training or Inference Cost
Yin, L., Liu, S., Fang, M., Huang, T., Menkovski, V., Pechenizkiy, M. Proceedings of the 37th AAAI Conference on Artificial Intelligence, AAAI 2023, volume 37, 10945-10953 (27 Jun 2023)
Are Large Kernels Better Teachers than Transformers for ConvNets?
Huang, T., Yin, L., Zhang, Z., Shen, L., Fang, M., Pechenizkiy, M., Wang, Z., Liu, S. Proceedings of Machine Learning Research, volume 202, 14023-14038 (01 Jan 2023)
Instant Soup: Cheap Pruning Ensembles in A Single Pass Can Draw Lottery Tickets from Large Models
Jaiswal, A., Liu, S., Chen, T., Ding, Y., Wang, Z. Proceedings of Machine Learning Research, volume 202, 14691-14701 (01 Jan 2023)
Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication
Jaiswal, A., Liu, S., Chen, T., Ding, Y., Wang, Z. Proceedings of Machine Learning Research, volume 202, 14679-14690 (01 Jan 2023)