
Ruiqi Zhang

I am a second-year Ph.D. student in the Department of Statistics at the University of California, Berkeley, advised by Prof. Peter L. Bartlett. Previously, I received my bachelor's degree from the School of Mathematical Sciences (SMS) at Peking University (PKU), majoring in Mathematics and Statistics.

My research focuses on the theory and application of modern Machine Learning (ML), Deep Learning (DL), and Large Language Models (LLMs).

Email  /  Languages: Chinese, English, French  /  Coding: Python, R, MATLAB, LaTeX, Prompting GPT.
Reviewer experience: ICML 2022, 2024; NeurIPS 2023; NeurIPS 2023 R0-FoMo Workshop; NeurIPS 2023 MATH-AI Workshop; ICLR 2024; AISTATS 2024; TMLR.

Publications

  1. Negative Preference Optimization: From Catastrophic Collapse to Effective Unlearning.
    Ruiqi Zhang*, Licong Lin*, Yu Bai, Song Mei (* denotes equal contribution).
    Submitted, 2024 | Paper

  2. In-Context Learning of a Linear Transformer Block: Benefits of the MLP Component and One-Step GD Initialization.
    Ruiqi Zhang, Jingfeng Wu, Peter L. Bartlett
    Submitted, 2024 | Paper

  3. Is Offline Decision Making Possible with Only Few Samples? Reliable Decisions in Data-Starved Bandits via Trust Region Enhancement.
    Ruiqi Zhang, Yuexiang Zhai, Andrea Zanette
    Submitted, 2024 | Paper

  4. AutoPRM: Automating Procedural Supervision for Multi-Step Reasoning via Controllable Question Decomposition.
    Zhaorun Chen, Zhuokai Zhao, Zhihong Zhu, Ruiqi Zhang, Xiang Li, Bhiksha Raj, Huaxiu Yao
    NAACL 2024 | Prior version at ICLR 2024 Workshop on Reliable and Responsible Foundation Models | Paper

  5. Trained Transformers Learn Linear Models In-Context.
    Ruiqi Zhang, Spencer Frei, Peter L. Bartlett
    Journal of Machine Learning Research (JMLR), 2024, 25(49):1-55 | Prior version in the NeurIPS 2023 Workshop on Robustness of Few-shot and Zero-shot Learning in Foundation Models (R0-FoMo) | Paper (arXiv) | Paper (JMLR) | Talk by Spencer Frei

  6. Policy Finetuning in Reinforcement Learning via Design of Experiments using Offline Data.
    Ruiqi Zhang, Andrea Zanette
    NeurIPS 2023 | Paper

  7. Off-Policy Fitted Q-Evaluation with Differentiable Function Approximators: Z-Estimation and Inference Theory.
    Ruiqi Zhang, Xuezhou Zhang, Chengzhuo Ni, Mengdi Wang
    ICML 2022 | RLDM 2022 | Paper | Talk

  8. Optimal Estimation of Off-Policy Policy Gradient via Double Fitted Iteration.
    Chengzhuo Ni, Ruiqi Zhang, Xiang Ji, Xuezhou Zhang, Mengdi Wang
    ICML 2022 | RLDM 2022 | Paper