Siyu Yuan (员司雨) is a third-year Ph.D. student at Fudan University in the School of Data Science and a member of the KW FUDAN Lab. She is devoted to endowing machines with human-like cognitive abilities and to aligning autonomous generative agents with human cognition. Her research interests center on cognitive science with generative agents, including (but not limited to)
(Download my resumé.)
Ph.D., Statistics, 2021-2026 (estimated)
B.S., Data Science and Big Data Technology, 2017-2021
We demonstrate that word analogies do not adequately reflect whether the analogical reasoning ability of LMs aligns with human cognition. We therefore propose the analogical structure abduction task, together with a benchmark of scientific analogical reasoning with structure abduction, to evaluate LLMs from a cognitive perspective.
We propose equipping the PLM-based extractor with a knowledge-guided prompt as an intervention, motivated through the lens of a Structural Causal Model (SCM), to alleviate concept bias. The prompt uses the topic of the given entity, drawn from existing knowledge in KGs, to mitigate the spurious co-occurrence correlations between entities and biased concepts.
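A minimal sketch of the idea behind a knowledge-guided prompt: the entity's topic, looked up from a KG, is prepended so the extractor conditions on the topic rather than on spurious entity-concept co-occurrences. The function name, prompt template, and the `kg_topics` lookup are illustrative assumptions, not the paper's actual implementation.

```python
def knowledge_guided_prompt(entity, kg_topics, sentence):
    """Build a knowledge-guided prompt for concept extraction.

    `kg_topics` is a hypothetical entity -> topic lookup standing in
    for existing knowledge in a KG; prepending the topic is intended
    to steer the extractor away from biased concepts (e.g. reading
    'Python' as an animal in a programming context).
    """
    topic = kg_topics.get(entity, "unknown")
    return f"[Topic: {topic}] {sentence} What concept does '{entity}' belong to?"

# Toy usage with a one-entry KG lookup.
kg = {"Python": "programming language"}
prompt = knowledge_guided_prompt("Python", kg, "Python was released in 1991.")
```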
We propose an overgenerate-then-filter approach to improve large language models (LLMs) on this task, and use it to distill a novel constrained language planning dataset, CoScript, which consists of 55,000 scripts.
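The overgenerate-then-filter recipe can be sketched in a few lines: sample many candidate scripts for a constrained planning prompt, then keep only those that pass a constraint check. Here `generate` and `is_valid` are hypothetical stand-ins for an LLM sampler and a constraint verifier; the toy generator below just samples step subsets.

```python
import random

def overgenerate_then_filter(prompt, generate, is_valid, n_candidates=10):
    """Sample n_candidates scripts for `prompt`, keep only valid ones."""
    candidates = [generate(prompt) for _ in range(n_candidates)]
    return [c for c in candidates if is_valid(c)]

# Toy usage: the "generator" samples two of three steps; the filter
# keeps only plans that honor the "decaf" constraint.
random.seed(0)
steps = ["boil water", "add decaf coffee", "pour into cup"]
gen = lambda p: random.sample(steps, k=2)
valid = lambda plan: any("decaf" in s for s in plan)
plans = overgenerate_then_filter("make decaf coffee", gen, valid, n_candidates=5)
```

At dataset scale, the surviving scripts are what gets distilled into a corpus like CoScript.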
We employ curriculum learning (CL) to train a generative entity typing model that overcomes the issues of coarse-grained types and heterogeneous data; the curriculum is self-adjusted via self-paced learning according to the model's comprehension of type granularity and data heterogeneity.
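The self-paced component can be illustrated with the standard easy-examples-first weighting scheme, assuming per-example losses as the difficulty signal: only examples whose current loss falls below a growing threshold contribute, so harder (e.g. finer-grained or more heterogeneous) examples enter training later. This is a generic self-paced learning sketch, not the paper's exact objective.

```python
import numpy as np

def self_paced_weights(losses, lam):
    """Binary self-paced weights: include only examples whose current
    loss is below the threshold lam; raising lam over training admits
    progressively harder examples into the curriculum."""
    return (losses < lam).astype(float)

# Toy curriculum: as lam grows, more examples receive weight 1 and the
# weighted loss (w * losses).mean() covers more of the data.
losses = np.array([0.1, 0.5, 1.2, 2.0])
for lam in [0.3, 1.0, 2.5]:
    w = self_paced_weights(losses, lam)
```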