PhD student at Stevens, advised by Prof. Xueqing Liu. I build LLM-based multi-agent systems for software engineering and security, with a focus on making agent reasoning more reliable through neural-symbolic methods.
My work sits at the intersection of large language models and software engineering / security. I focus on three directions: (1) LLM Agent Design — planning and acting on open-ended software tasks; (2) Multi-Agent Systems — orchestrating specialized agents to produce structured outputs such as process models and code; and (3) Neural-Symbolic Methods — integrating formal representations with neural reasoning to improve agent reliability and verifiability.
Started PhD at Stevens Institute of Technology, Text Mining Lab, advised by Prof. Xueqing Liu.
Paper "MAO" published in IEEE Transactions on Services Computing.
Paper on behavior clustering presented at IEEE WCNC 2024.
LLM-based classification of why bug bounty reports are rejected, surfacing rejection patterns that are hard to spot in manual review.
Multi-agent orchestration framework that automatically generates formal process models from natural-language requirements.
Graph convolutional approach for clustering individual behaviors from heterogeneous IoT sensor streams.
Agents collaboratively generate and refine process graphs on the fly during multi-agent software development.