Publications

(See the full publication list on Google Scholar.)

Selected Publications

* denotes equal contribution

  1. Shuai Zhang, Hongkang Li, Meng Wang, Miao Liu, Pin-Yu Chen, Songtao Lu, Sijia Liu, Keerthiram Murugesan, and Subhajit Chaudhury. “On the Convergence and Sample Complexity Analysis of Deep Q-Networks with Epsilon-Greedy Exploration.” In Proc. of the Conference on Neural Information Processing Systems (NeurIPS), 2023.

  2. Nowaz Chowdhury, Shuai Zhang, Meng Wang, Sijia Liu, and Pin-Yu Chen. “Patch-level Routing in Mixture-of-Experts is Provably Sample-efficient for Convolutional Neural Networks.” In Proc. of the International Conference on Machine Learning (ICML), 2023. [pdf]

  3. Shuai Zhang, Meng Wang, Pin-Yu Chen, Sijia Liu, Songtao Lu, and Miao Liu. “Joint Edge-Model Sparse Learning is Provably Efficient for Graph Neural Networks.” In Proc. of the International Conference on Learning Representations (ICLR), 2023. [pdf]

  4. Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, and Jinjun Xiong. “How unlabeled data improve generalization in self-training? A one-hidden-layer theoretical analysis.” In Proc. of the International Conference on Learning Representations (ICLR), 2022. [pdf]

  5. Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, and Jinjun Xiong. “Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Sparse Neural Networks.” In Proc. of the Conference on Neural Information Processing Systems (NeurIPS), 2021. [pdf]

  6. Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, and Jinjun Xiong. “Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case.” In Proc. of the International Conference on Machine Learning (ICML), pp. 11268-11277. PMLR, 2020. [pdf]

  7. Shuai Zhang, Meng Wang, Jinjun Xiong, Sijia Liu, and Pin-Yu Chen. “Improved Linear Convergence of Training CNNs With Generalizability Guarantees: A One-Hidden-Layer Case.” IEEE Transactions on Neural Networks and Learning Systems (TNNLS). IEEE, 2020. [pdf]

  8. Shuai Zhang and Meng Wang. “Correction of corrupted columns through fast robust Hankel matrix completion.” IEEE Transactions on Signal Processing (TSP), no. 10, pp. 2580-2594. IEEE, 2019. [pdf]

  9. Shuai Zhang*, Yingshuai Hao*, Meng Wang, and Joe H. Chow. “Multichannel Hankel matrix completion through nonconvex optimization.” IEEE Journal of Selected Topics in Signal Processing (JSTSP), no. 4, pp. 617-632. IEEE, 2018. [pdf]

  10. Shuai Zhang and Meng Wang. “Correction of simultaneous bad measurements by exploiting the low-rank Hankel structure.” In 2018 IEEE International Symposium on Information Theory (ISIT), pp. 646-650. IEEE, 2018.

  11. Hongkang Li, Shuai Zhang, and Meng Wang. “Learning and generalization of one-hidden-layer neural networks, going beyond standard Gaussian data.” In 2022 56th Annual Conference on Information Sciences and Systems (CISS), pp. 1-6. IEEE, 2022.

  12. Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, and Jinjun Xiong. “Guaranteed Convergence of Training Convolutional Neural Networks via Accelerated Gradient Descent.” In 2020 54th Annual Conference on Information Sciences and Systems (CISS), pp. 1-6. IEEE, 2020.

  13. Shuai Zhang*, Yingshuai Hao*, Meng Wang, and Joe H. Chow. “Multi-channel missing data recovery by exploiting the low-rank Hankel structures.” In 2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), pp. 1-5. IEEE, 2017.

Preprint

  1. Hongkang Li, Shuai Zhang, Meng Wang, Yihua Zhang, Pin-Yu Chen, and Sijia Liu. “Does promoting the minority group always improve generalization? A theoretical study of one-hidden-layer neural networks on group imbalance.” Submitted to the IEEE Journal of Selected Topics in Signal Processing (JSTSP).