Siyu Yuan

Ph.D. in Statistics

Fudan University

Biography

Siyu Yuan (员司雨) is a third-year Ph.D. student at Fudan University. She is devoted to endowing machines with human-like cognitive abilities and aligning autonomous generative agents with human cognition. Her research centers on cognitive science with generative agents, including (but not limited to)

  • Cognitive Reasoning, especially exploring the cognitive reasoning abilities of generative agents, including analogical reasoning, concept understanding, role-playing, and belief exploration in language agents. The ultimate goal is to deepen these agents' understanding of themselves and others, enabling them to generate responses that align better with human cognition.
  • Strategic Planning, especially equipping generative agents with human-level planning capabilities, covering constrained planning, tool invocation, and multitask planning.
  • Knowledge Acquisition, especially extracting knowledge with generative agents, including concept acquisition, script generation, idiom construction, pun generation, and analogy making. These efforts aim to construct rich knowledge resources that can be effectively utilized.

(Download my résumé. Last updated 2024-06.)

Interests
  • Cognitive Science
  • Applications of LLMs
  • Writing Novels
Education
  • Ph.D., Statistics, 2021-2026 (estimated)

    Fudan University

  • B.S., Data Science and Big Data Technology, 2017-2021

    Fudan University

News

  • Jun. 2024 How can we automatically extend a specialized agent into a multi-agent system to improve task-solving capability? We propose EvoAgent, a generic method to automatically extend expert agents to multi-agent systems via evolutionary algorithms. EvoAgent generalizes to any LLM-based agent framework and significantly enhances the task-solving capabilities of LLM-based agents!
  • Jun. 2024 How can we enable autonomous language agents to consistently achieve high-level goals without training? Try SelfGoal, a novel automatic approach designed to enhance agents’ capabilities to achieve high-level goals with limited human prior and environmental feedback.
  • May 2024 Congratulations on AnalogyKB, TimeArena, and InCharacter being accepted to the ACL 2024 Main Conference, and one paper being accepted to ACL 2024 Findings! See you in Bangkok, Thailand!
  • Apr. 2024 Explore the First Survey on Role-Playing Agents! Discover insights into RPLA technologies, applications, and the potential for human-AI coexistence.
  • Apr. 2024 Can large language models understand puns? Check out our new pre-print. We leverage three popular pun tasks to systematically evaluate LLMs' capability to understand puns.
  • Apr. 2024 Check out two pre-prints on Role-Playing Agents, which extend InCharacter! CROSS systematically evaluates LLMs' capability on the character profiling task, i.e., summarizing profiles for characters from fictional works. LIFECHOICE investigates whether LLMs can predict characters' decisions when provided with the preceding stories from high-quality novels.
  • Feb. 2024 Congratulations on EasyTool and TaskBench being accepted to ICLR 2024 Workshop on LLM Agents!
  • Feb. 2024 Introducing TimeArena, a time-aware simulated textual environment where language agents complete multiple tasks in the shortest time under realistic temporal and resource constraints! Check out our project page for more details!
  • Feb. 2024 InCharacter is out! A new method to test personality fidelity in Role-Playing Agents using psychological interviews. Play with InCharacter demo!
  • Jan. 2024 Enhance LLM-based agents with EasyTool! Effortlessly convert complex, varied tool documentation into streamlined, unified tool instructions. Significantly improve performance and reduce token consumption!
  • Dec. 2023 Gave a talk at Tencent AI Lab about Coscript. Thanks for the invitation!
  • Dec. 2023 Congratulations on our paper IdiomKB being accepted to AAAI 2024! Our work focuses on creating a multilingual knowledge base for idioms with the help of Large Language Models (LLMs), aiming to improve idiomatic translation in smaller models.
  • Dec. 2023 Join us at EMNLP 2023 in Singapore! Our work SCAR will be in the poster session!
  • Nov. 2023 We released TaskBench, a benchmark for evaluating the task automation capabilities of large language models.
  • Oct. 2023 Check out our Auction Arena! We explore how LLMs navigate the complex and dynamic environment of auctions! We introduce AucArena, a novel simulation environment to evaluate the planning and strategic abilities of LLMs. Play with arena demo and see if you can beat AI!
  • Oct. 2023 Our paper SCAR on analogical reasoning got accepted at EMNLP 2023 (Findings)! See you in Singapore.
  • Sept. 2023 Started a Student Researcher Internship at Microsoft Research Asia, advised by Dr. Kaitao Song!
  • July 2023 Our paper Coscript received an Outstanding Paper Award at ACL 2023 (top 1%)!
  • July 2023 Gave a talk for Peking University Shenzhen Graduate School Shanghai Alumni Association.
  • May 2023 Check out two pre-prints on Analogical Reasoning. AnalogyKB is a million-scale analogy KB derived from existing KGs, to enable machines to achieve analogical reasoning skills. SCAR is a new challenge for evaluating the structure abduction ability of LLMs for scientific analogies, which is essential for human-like analogical reasoning.
  • May 2023 Two papers accepted to ACL 2023! One is Coscript on constraint language planning, and the other is KPCE on concept extraction through the lens of a Structural Causal Model.
  • Jan. 2023 Started a Student Researcher Internship at ByteDance, working with the AI Lab!

Experience

Research Intern
Microsoft Research Lab Asia
September 2023 – Present · Shanghai, China
Mentored by Dr. Kaitao Song and Dr. Kan Ren. Working on autonomous agents with planning and tool use.
Research Intern
ByteDance AI Lab
January 2023 – May 2023 · Shanghai, China
Mentored by Dr. Jiaze Chen and Dr. Changzhi Sun. Working on LLM evaluation and instruction tuning of LLMs.
Research Intern
Brain Technologies Inc
June 2022 – September 2022 · Remote
Mentored by Dr. Charles Jankowski. Working on symbolic knowledge distillation and LLM prompt engineering.
Student Researcher
Knowledge Works Lab at Fudan University
July 2019 – Present · Shanghai, China
Working on knowledge generation and knowledge graphs.

Awards

ACL 2023 Outstanding Paper Award
Outstanding Graduate Student of Shanghai Colleges and Universities
Outstanding Student Pacemaker of Fudan University
China National Scholarship

Recent Publications

(2024). EvoAgent: Towards Automatic Multi-Agent Generation via Evolutionary Algorithms. Preprint.

PDF Code Project

(2024). SelfGoal: Your Language Agents Already Know How to Achieve High-level Goals. Preprint.

PDF Code Project

(2024). ANALOGYKB: Unlocking Analogical Reasoning of Language Models with A Million-scale Knowledge Base. In The 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024).

PDF Code

(2024). InCharacter: Evaluating Personality Fidelity in Role-Playing Agents through Psychological Interviews. In The 62nd Annual Meeting of the Association for Computational Linguistics (ACL 2024).

PDF Code Project