
Ruiqi Zhang

I am a third-year Ph.D. student in the Department of Statistics at the University of California, Berkeley, advised by Prof. Peter L. Bartlett and Prof. Song Mei. Previously, I received my bachelor's degree from the School of Mathematical Sciences (SMS) at Peking University (PKU), majoring in Mathematics and Statistics.

My research mainly focuses on the theory and application of modern Machine Learning (ML), Deep Learning (DL), and Large Language Models (LLMs). More specifically, I currently focus on the theory of transformers and in-context learning, LLM unlearning and alignment, and reinforcement learning.

Email  /  Languages: Chinese, English.  /  Coding: Python, R, Matlab, LaTeX, Prompting GPT.
Reviewer experience: ICML 2022, 2024; NeurIPS 2023, 2024; NeurIPS 2023 R0-FoMo Workshop; NeurIPS 2023 MATH-AI Workshop; ICLR 2024, 2025; AISTATS 2024, 2025; TMLR; DMLR; JMLR.

Publications

  1. Fast Best-of-N Decoding via Speculative Rejection.
    Hanshi Sun*, Momin Haider*, Ruiqi Zhang*, Huitao Yang, Ming Yin, Mengdi Wang, Peter L. Bartlett, Andrea Zanette* (* for core authors).
    NeurIPS 2024 | To appear

  2. Choose Your Anchor Wisely: Effective Unlearning Diffusion Models via Concept Reconditioning.
    Jingyu Zhu*, Ruiqi Zhang*, Licong Lin, Song Mei (* for co-first authors).
    Submitted, 2024

  3. Simplicity Prevails: Rethinking Negative Preference Optimization for LLM Unlearning.
    Chongyu Fan*, Jiancheng Liu*, Licong Lin*, Jinghan Jia, Ruiqi Zhang, Song Mei, Sijia Liu (* for co-first authors).
    Submitted, 2024

  4. Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning.
    Ruiqi Zhang*, Licong Lin*, Yu Bai, Song Mei (* for co-first authors).
    COLM 2024 | Paper

  5. In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization.
    Ruiqi Zhang, Jingfeng Wu, Peter L. Bartlett
    NeurIPS 2024 | Paper

  6. Is Offline Decision Making Possible with Only Few Samples? Reliable Decisions in Data-Starved Bandits via Trust Region Enhancement.
    Ruiqi Zhang, Yuexiang Zhai, Andrea Zanette
    Submitted, 2024 | Paper

  7. AutoPRM: Automating Procedural Supervision for Multi-Step Reasoning via Controllable Question Decomposition.
    Zhaorun Chen, Zhuokai Zhao, Zhihong Zhu, Ruiqi Zhang, Xiang Li, Bhiksha Raj, Huaxiu Yao
    NAACL 2024 | Prior version at ICLR 2024 Workshop on Reliable and Responsible Foundation Models | Paper

  8. Trained Transformers Learn Linear Models In-Context.
    Ruiqi Zhang, Spencer Frei, Peter L. Bartlett
    Journal of Machine Learning Research (JMLR) 2024, 25(49):1–55 | Prior version in NeurIPS 2023 Workshop on Robustness of Zero/Few-Shot Learning in Foundation Models (R0-FoMo) | Paper (arXiv) | Paper (JMLR) | Talk by Spencer Frei

  9. Policy Finetuning in Reinforcement Learning via Design of Experiments using Offline Data.
    Ruiqi Zhang, Andrea Zanette
    NeurIPS 2023 | Paper

  10. Off-Policy Fitted Q-Evaluation with Differentiable Function Approximators: Z-Estimation and Inference Theory.
    Ruiqi Zhang, Xuezhou Zhang, Chengzhuo Ni, Mengdi Wang
    ICML 2022 | RLDM 2022 | Paper | Talk

  11. Optimal Estimation of Off-Policy Policy Gradient via Double Fitted Iteration.
    Chengzhuo Ni, Ruiqi Zhang, Xiang Ji, Xuezhou Zhang, Mengdi Wang
    ICML 2022 | RLDM 2022 | Paper