Ryan Liu
Hi! I am a second-year PhD student in Computer Science at Princeton, advised by Tom Griffiths and Andrés Monroy-Hernández. My research centers on how large language models can transform the way our society communicates and learns information. Previously, I was a Master's student at Carnegie Mellon working with Nihar Shah on central problems in conference peer review.
I am always happy to chat about my current research and future opportunities! Please reach out via email at ryanliu@princeton.edu.
Papers
- Large Language Models Assume People are More Rational than We Really are
Ryan Liu*, Jiayi Geng*, Joshua C. Peterson, Ilia Sucholutsky, and Thomas L. Griffiths
Preprint [arXiv]
- How do Large Language Models Navigate Conflicts between Honesty and Helpfulness?
Ryan Liu*, Theodore R. Sumers*, Ishita Dasgupta, and Thomas L. Griffiths
ICML 2024, Oral [arXiv]
- Improving Interpersonal Communication by Simulating Audiences with Language Models
Ryan Liu, Howard Yen, Raja Marjieh, Thomas L. Griffiths, and Ranjay Krishna
Preprint [arXiv]
- API-Assisted Code Generation for Question Answering on Varied Table Structures
Yihan Cao*, Shuyi Chen*, Ryan Liu*, Zhiruo Wang, and Daniel Fried
EMNLP 2023 [arXiv]
- ReviewerGPT? An Exploratory Study on Using Large Language Models for Paper Reviewing
Ryan Liu and Nihar B. Shah
AAAI SDU Workshop 2024, Oral [arXiv]
- LLMs as Workers in Human-Computational Algorithms? Replicating Crowdsourcing Pipelines with LLMs
Tongshuang Wu, Haiyi Zhu, Maya Albayrak, Alexis Axon, Amanda Bertsch, Wenxing Deng, Ziqi Ding, Bill Guo, Sireesh Gururaja, Tzu-Sheng Kuo, Jenny T. Liang, Ryan Liu, Ihita Mandal, Jeremiah Milbauer, Xiaolin Ni, Namrata Padmanabhan, Subhashini Ramkumar, Alexis Sudjianto, Jordan Taylor, Ying-Jui Tseng, Patricia Vaidos, Zhijin Wu, Wei Wu, and Chenyang Yang
[arXiv]
- Testing for Reviewer Anchoring in Peer Review: A Randomized Controlled Trial
Ryan Liu, Steven Jecmen, Fei Fang, Vincent Conitzer, and Nihar B. Shah
PLoS ONE [arXiv]
- Cite-seeing and Reviewing: A Study on Citation Bias in Peer Review
Ivan Stelmakh, Charvi Rastogi, Ryan Liu, Shuchi Chawla, Federico Echenique, and Nihar B. Shah
PLoS ONE [arXiv]
- Near-Optimal Reviewer Splitting in Two-Phase Paper Reviewing & Conference Experiment Design
Steven Jecmen, Hanrui Zhang, Ryan Liu, Fei Fang, Vincent Conitzer, and Nihar B. Shah
AAAI HCOMP 2022, Best Paper Honorable Mention [arXiv]
- Mitigating Manipulation in Peer Review via Randomized Reviewer Assignments
Steven Jecmen, Hanrui Zhang, Ryan Liu, Nihar B. Shah, Vincent Conitzer, and Fei Fang
NeurIPS 2020 [arXiv]
Presentations
- Moderator @ London Machine Learning Meetup
Joon Sung Park | Generative Agents: Interactive Simulacra of Human Behavior [recording]
- Podcast @ Data Skeptic
Automated Peer Review [link]
- Visit @ Allen Institute for AI, Semantic Scholar Team
- Talk @ Carnegie Mellon University Meeting of the Minds 2022
Identifying Human Biases in Peer Review via Real-Subject Experiments
- Poster @ Carnegie Mellon University Meeting of the Minds 2021
Improving Algorithmic Tools for Conference Peer Review Research
- Poster @ Carnegie Mellon University Fall Undergraduate Research Showcase 2020
Creating Robustness within Conference Peer Review
- Poster @ Carnegie Mellon University Meeting of the Minds 2020
Assignment Algorithms to Prevent Quid-Pro-Quo in Conference Peer Review
Experience
- Assistant in Instruction @ Princeton COS 350: Ethics of Computing
- AI/ML SWE Intern @ Meta
- Teaching Assistant @ CMU 15-112 Fundamentals of Programming
- Research Assistant @ CMU School of Computer Science
Academic Honors
- Reviewer, NeurIPS 2024 Workshop on Behavioral ML
- Reviewer, NeurIPS 2024
- Student Organizer, Decentralized Social Media Workshop @ Princeton
- NSF Research Experience for Undergraduates Grant (CMU)
- Bachelor of Science, CMU School of Computer Science, College & University Honors
- Fifth-Year Master's, CMU School of Computer Science, Thesis: Testing for Reviewer Anchoring in the Conference Rebuttal Process [link]