Research

Selected Publications
(* denotes equal contribution; preprints included.)

Weakly-supervised Learning

Demystifying how self-supervised features improve training from noisy labels. [ICLR 2023].
Hao Cheng*, Zhaowei Zhu*, Xing Sun, and Yang Liu.


Beyond Images: Label Noise Transition Matrix Estimation for Tasks with Lower-Quality Features. [ICML 2022].
Zhaowei Zhu, Jialu Wang, and Yang Liu.


Detecting Corrupted Labels Without Training a Model to Predict. [ICML 2022].
Zhaowei Zhu, Zihao Dong, and Yang Liu.


Learning with noisy labels revisited: a study using real-world human annotations. [ICLR 2022].
Jiaheng Wei*, Zhaowei Zhu*, Hao Cheng, Tongliang Liu, Gang Niu, and Yang Liu.


Clusterability as an alternative to anchor points when learning with noisy labels. [ICML 2021].
Zhaowei Zhu, Yiwen Song, and Yang Liu.


A second-order approach to learning with instance-dependent label noise. [CVPR 2021 (oral)].
Zhaowei Zhu, Tongliang Liu, and Yang Liu.


Learning with instance-dependent label noise: a sample sieve approach. [ICLR 2021].
Hao Cheng*, Zhaowei Zhu*, Xingyu Li, Yifei Gong, Xing Sun, and Yang Liu.


Policy learning using weak supervision. [NeurIPS 2021].
Jingkang Wang*, Hongyi Guo*, Zhaowei Zhu*, and Yang Liu.

Fairness in Machine Learning

Weak proxies are sufficient and preferable for fairness with missing sensitive attributes. [Preprint].
Zhaowei Zhu*, Yuanshun Yao*, Jiankai Sun, Hang Li, and Yang Liu.

The rich get richer: disparate impact of semi-supervised learning. [ICLR 2022].
Zhaowei Zhu*, Tianyi Luo*, and Yang Liu.

Federated Learning

Federated bandit: a gossiping approach. [ACM SIGMETRICS 2021] (acceptance rate = 12.1%).
Zhaowei Zhu*, Jingxuan Zhu*, Ji Liu, and Yang Liu.

Other Publications

See the full list at Google Scholar.