Ryan Liu
Hi! I am an incoming PhD student in computer science at Princeton, advised by Tom Griffiths and Andrés Monroy-Hernández. My research centers on how large language models can transform the way our society communicates and learns. Previously, I was a Master's student at Carnegie Mellon working with Nihar Shah on central problems in conference peer review.
I am happy to chat about my current research and future opportunities! Please contact me via email at rl5886@princeton.edu.
Publications
- ReviewerGPT? An Exploratory Study on Using Large Language Models for Paper Reviewing
Ryan Liu and Nihar B. Shah
[arXiv]
- LLMs as Workers in Human-Computational Algorithms? Replicating Crowdsourcing Pipelines with LLMs
Tongshuang Wu, Haiyi Zhu, Maya Albayrak, Alexis Axon, Amanda Bertsch, Wenxing Deng, Ziqi Ding, Bill Guo, Sireesh Gururaja, Tzu-Sheng Kuo, Jenny T. Liang, Ryan Liu, Ihita Mandal, Jeremiah Milbauer, Xiaolin Ni, Namrata Padmanabhan, Subhashini Ramkumar, Alexis Sudjianto, Jordan Taylor, Ying-Jui Tseng, Patricia Vaidos, Zhijin Wu, Wei Wu, Chenyang Yang
[arXiv]
- Testing for Reviewer Anchoring in Peer Review: A Randomized Controlled Trial
Ryan Liu, Steven Jecmen, Fei Fang, Vincent Conitzer, and Nihar B. Shah
Under revision.
[arXiv]
- Cite-seeing and Reviewing: A Study on Citation Bias in Peer Review
Ivan Stelmakh, Charvi Rastogi, Ryan Liu, Shuchi Chawla, Federico Echenique, and Nihar B. Shah
Peer Review Congress 2022 (abstract)
[arXiv]
- Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing and Conference Experiment Design
Steven Jecmen, Hanrui Zhang, Ryan Liu, Fei Fang, Vincent Conitzer, and Nihar B. Shah
AAAI HCOMP 2022, Best Paper Honorable Mention
[arXiv]
- Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments
Steven Jecmen, Hanrui Zhang, Ryan Liu, Nihar B. Shah, Vincent Conitzer, and Fei Fang
NeurIPS 2020
[arXiv]
Presentations
- Moderator @ London Machine Learning Meetup
Joon Sung Park | Generative Agents: Interactive Simulacra of Human Behavior [recording]
- Podcast @ Data Skeptic
Automated Peer Review [link]
- Visit @ Allen Institute for AI, Semantic Scholar Team
- Talk @ Carnegie Mellon University Meeting of the Minds 2022
Identifying Human Biases in Peer Review via Real-Subject Experiments
- Poster @ Carnegie Mellon University Meeting of the Minds 2021
Improving Algorithmic Tools for Conference Peer Review Research
- Poster @ Carnegie Mellon University Fall Undergraduate Research Showcase 2020
Creating Robustness within Conference Peer Review
- Poster @ Carnegie Mellon University Meeting of the Minds 2020
Assignment Algorithms to Prevent Quid-Pro-Quo in Conference Peer Review
Experience
- AI/ML SWE Internship @ Meta
- Teaching Assistant @ CMU 15-112 Fundamentals of Programming
- Research Assistant @ CMU School of Computer Science
Academic Honors
- NSF Research Experiences for Undergraduates Grant (CMU)
- Bachelor of Science, CMU School of Computer Science, College & University Honors
- Fifth-Year Master's, CMU School of Computer Science, Thesis: Testing for Reviewer Anchoring in the Conference Rebuttal Process [link]