Colorless green ideas sleep furiously.

— Noam Chomsky

I am currently a Ph.D. candidate in Computer Science at UIUC, advised by Dr. Ismini Lourentzou. I began my doctoral studies at Virginia Tech, where I also worked with Dr. Kurt Luther. Prior to my Ph.D., I earned an M.S. in Computer Science from Georgetown University and a B.A. in Computer Science (with a Linguistics concentration) from the University of Minnesota, Twin Cities.

My research centers on reasoning in 3D and embodied settings, grounded in a broader foundation of natural language processing and multimodal learning. I study how structured representations, whether linguistic, spatial, or geometric, can be integrated to enable machines to reason about the world in a grounded, interpretable, and generative way.

My recent work explores:

  • Collaborative reasoning frameworks that unify language and 3D generation
  • Part-level 3D generation and articulation for modeling structured objects
  • Uncertainty-aware embodied agents that adapt reasoning and planning in dynamic environments

Across these directions, I aim to build language-enabled systems that do not merely predict tokens or pixels, but instead reason over structure, parts, space, and action. My work draws inspiration from theoretical linguistics, geometric modeling, and generative learning to develop agents that can understand, construct, and interact with complex 3D worlds.



Publications

⭐⭐⭐ DreamPartGen: 3D Generation with Part-Level Text Guidance through Collaborative Part-Latent Denoising, under review, 2025
Tianjiao Yu, Muntasir Wahed, Jerry Yuyang Xiong, Xinzhuo Li, Yifan Shen, Ying Shen, Ismini Lourentzou.
⭐⭐⭐ CoRe3D: Collaborative Reasoning as a Foundation for 3D Intelligence, arXiv, 2025
Tianjiao Yu, Xinzhuo Li, Yifan Shen, Yuanzhe Liu, Ismini Lourentzou. Project Page · Paper Link
⭐⭐⭐ Part²GS: Part-Aware Modeling of Articulated Objects Using 3D Gaussian Splatting, arXiv, 2025
Tianjiao Yu, Vedant Shah, Muntasir Wahed, Ying Shen, Kiet A. Nguyen, Ismini Lourentzou. Project Page · Paper Link
⭐⭐⭐ Uncertainty in Action: Confidence Elicitation in Embodied Agents, arXiv, 2024
Tianjiao Yu, Vedant Shah, Muntasir Wahed, Kiet A. Nguyen, Adheesh Sunil Juvekar, Tal August, Ismini Lourentzou. Project Page · Paper Link
MOCHA: Are Code Language Models Robust Against Multi-Turn Malicious Coding Prompts?, arXiv, 2025
Muntasir Wahed, Xiaona Zhou, Kiet A Nguyen, Tianjiao Yu, Nirav Diwan, Gang Wang, Dilek Hakkani-Tür, Ismini Lourentzou. Paper Link
PurpCode: Reasoning for Safer Code Generation, arXiv, 2025
Jiawei Liu, Nirav Diwan, Zhe Wang, Haoyu Zhai, Xiaona Zhou, Kiet A Nguyen, Tianjiao Yu, Muntasir Wahed, Yinlin Deng, Hadjer Benkraouda, Yuxiang Wei, Lingming Zhang, Ismini Lourentzou, Gang Wang. Paper Link
CALICO: Part-Focused Semantic Co-Segmentation with Large Vision-Language Models, CVPR, 2025
Kiet A Nguyen, Adheesh Juvekar, Tianjiao Yu, Muntasir Wahed, Ismini Lourentzou. Project Page · Paper Link
PRIMA: Multi-Image Vision-Language Models for Reasoning Segmentation, arXiv, 2024
Muntasir Wahed, Kiet A Nguyen, Adheesh Sunil Juvekar, Xinzhuo Li, Xiaona Zhou, Vedant Shah, Tianjiao Yu, Pinar Yanardag, Ismini Lourentzou. Project Page · Paper Link
⭐⭐⭐ Sedition Hunters: A Quantitative Study of the Crowdsourced Investigation into the 2021 US Capitol Attack, Proceedings of the ACM Web Conference, 2023
Tianjiao Yu, Sukrit Venkatagiri, Ismini Lourentzou, Kurt Luther. Paper Link
Sedition Hunters: Countering Extremism through Collective Action, CSCW 2021 Workshop on Addressing Challenges and Opportunities in Online Extremism Research: An Interdisciplinary Perspective
Sukrit Venkatagiri, Tianjiao Yu, Vikram Mohanty, Kurt Luther. Paper Link
FARM: Fine-Grained Alignment for Cross-Modal Recipe Retrieval, Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024
Muntasir Wahed, Xiaona Zhou, Tianjiao Yu, Ismini Lourentzou. Paper Link
PLAB-Bot: Contextualized & Knowledge-Grounded Multimodal Taskbot, Alexa Prize TaskBot Challenge 2 Proceedings, 2023
Afrina Tabassum, Muntasir Wahed, Tianjiao Yu, Amarachi B. Mbakwe, Makanjuola Ogunleye, Ismini Lourentzou. Paper Link