
Research

Sanchari’s current research, advised by Prof. Chiu C. Tan at Temple University, focuses on the vulnerabilities of large language models (LLMs) across different domains and on methods to mitigate them.

Her research aims to provide a systematic and principled framework for identifying, classifying, and mitigating hallucinations in LLMs. She investigates how hallucinations manifest across tasks and settings such as question answering, document analysis, security applications, and policy-driven systems, with an emphasis on aligning model outputs with user intent and safety guardrails.

Additionally, she explores domain-specific evaluation benchmarks and techniques inspired by formal verification to improve the reliability of LLM-generated responses. By combining structured representations, consistency checking, and multi-source validation mechanisms, her work seeks to build scalable and interpretable solutions for high-stakes domains such as education, accreditation, healthcare, and cybersecurity.
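As a rough illustration of the consistency-checking idea mentioned above, the sketch below samples a model several times on the same prompt and treats low agreement among the samples as a possible hallucination signal. This is a minimal sketch under stated assumptions: the `consistency_score` helper, the `toy_generate` stub, and the 0.6 threshold are hypothetical placeholders for this page, not details of the published work.

```python
# Minimal sketch of sampling-based consistency checking for hallucination
# detection. All names and the flagging threshold are illustrative
# assumptions, not part of Sanchari's published method.
import random
from collections import Counter
from typing import Callable, List


def consistency_score(generate: Callable[[str], str],
                      prompt: str,
                      n_samples: int = 5) -> float:
    """Sample the model n_samples times on the same prompt and return the
    fraction of samples matching the most common normalized answer.
    Low agreement is a rough signal that the response may be unreliable."""
    answers: List[str] = [generate(prompt).strip().lower()
                          for _ in range(n_samples)]
    _, count = Counter(answers).most_common(1)[0]
    return count / n_samples


if __name__ == "__main__":
    # Stub standing in for a real LLM call; replace with an actual client.
    def toy_generate(prompt: str) -> str:
        return random.choice(["Paris", "Paris", "Paris", "Lyon"])

    score = consistency_score(toy_generate, "What is the capital of France?")
    flagged = score < 0.6  # illustrative threshold, would be tuned per domain
    print(f"agreement={score:.2f}, flagged_as_unreliable={flagged}")
```

In practice, such an agreement check would be one signal among several, combined with the multi-source validation and structured-representation checks described above rather than used on its own.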

Her broader research interests lie at the intersection of trustworthy AI, security, and formal reasoning. She is particularly interested in developing scalable evaluation methodologies, multilingual robustness tests, and policy-aware reasoning frameworks that enhance the transparency, accountability, and safety of LLMs in heterogeneous and dynamic application settings.