Our group works primarily in the fields of Natural Language Processing, Multimodal Learning, and Machine Learning. We are particularly interested in building efficient models and benchmarks that push machines toward human-level intelligence. We are grateful to NSF, DARPA, IARPA, the U.S. Air Force, Amazon (AWS and Alexa AI), Meta AI, Google, Intuit, and The Washington Post for supporting our research!

Ph.D. Students

  • Zhiyang Xu: Multimodal Foundation Models (Fall 2021)
  • Minqian Liu: Continual Learning, Generative AI and Evaluation (Fall 2021)
  • Ying Shen: Multimodal Learning, Parameter-Efficient Tuning (Fall 2021)
  • Jingyuan Qi: Knowledge-Enhanced Learning, Reasoning and Generation with LLMs (Fall 2022)
  • Menglong (Barry) Yao: Multimodal Knowledge Extraction and Verification, Curriculum Learning (Fall 2022)
  • Zihao Lin: Knowledge Editing of LLMs, Retrieval-Augmented LLMs (Fall 2023)
  • Mohammad Beigi: Interpretation of LLMs, Uncertainty Estimation and Calibration for LLMs (Fall 2023)

MS Students

  • Tong Zhou: Dual Learning for Information Extraction, Multimodal Learning

Alumni

  • Sijia Wang (Ph.D., 2020–2024), now a Research Scientist at Amazon AWS AI
  • Zoe Zheng (MS from VT, co-advised with Chris Thomas)
  • Xiaochu Li (MS from VT)
  • Trevor Ashby (MS from VT)
  • Sai Gurrapu (MS from VT, co-advised with Feras Batarseh)
  • Pei Wang (MS from VT, co-advised with Jin-Hee Cho)
  • Zaid Al Nouman (undergrad from VT)
  • Zijian Jin (visiting MS from New York University)
  • Moksh Shukla (visiting undergrad from IIT Kanpur)
  • Sidhant Chandak (visiting undergrad from IIT Kanpur)
  • Barry Menglong Yao (visiting MS from University at Buffalo)
  • Mingchen Li (visiting MS from Georgia State University)
  • Pritika Ramu (visiting undergrad from BITS Pilani)