My primary research interests are in Natural Language Processing, Machine Learning, and Artificial Intelligence. I am interested in building efficient models and benchmarks that push machines toward human-level intelligence, especially for NLP applications such as information extraction, machine reading comprehension, and natural language generation.

Knowledge extraction with limited supervision: discovering structured knowledge bottom-up with zero or few human annotations, and automatically inducing a type schema.

Traditional knowledge extraction approaches tend to follow a top-down manner: learning effective features for each predefined type from human annotations, and then discovering facts specific to those predefined types. This paradigm has at least two limitations that make it difficult to apply directly to new domains or languages: (1) it is not fully automatic, because it keeps humans in the loop, and both the predefined type schema and the human-annotated data are expensive to produce; (2) a predefined schema can only cover a limited number of types and relations. We are exploring various techniques to leverage the limited supervision available in existing resources, including unsupervised learning (ACL'2016, BigData'2017), zero-shot learning (ACL'2018), and advanced neural architectures such as variational autoencoders (VAEs) and pretrained large-scale language models like BERT, to automatically extract knowledge, induce a corpus-customized schema, and extend knowledge extraction to unlimited types.
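As a minimal illustration of the zero-shot idea, a mention can be assigned to a type that was never seen in training by comparing its embedding against embeddings of the type names in a shared space. The vectors below are hand-made toys; real systems learn these representations jointly.

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two dense vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def zero_shot_type(mention_vec, type_vecs):
    # Pick the type whose embedding is closest to the mention embedding.
    # New types can be added at inference time with no retraining.
    return max(type_vecs, key=lambda t: cosine(mention_vec, type_vecs[t]))

# Toy 3-d embeddings (hypothetical; real systems learn these from data).
types = {
    "Person":       [0.9, 0.1, 0.0],
    "Organization": [0.1, 0.9, 0.1],
    "Location":     [0.0, 0.1, 0.9],
}
mention = [0.8, 0.2, 0.1]  # embedding of a mention such as "Marie Curie"
print(zero_shot_type(mention, types))  # -> Person
```

Because types are represented by their embeddings rather than by trained classifier weights, extending the schema only requires adding a new entry to `types`.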

Natural language understanding and reasoning: learning across long spans of text and between the lines so that machines can interpret what is not explicitly stated yet obviously true, and leveraging external factual and commonsense knowledge to reach human-level understanding.

Natural language, especially text about processes, commonly describes state changes in a dynamic world. It is usually easy for humans to determine the state changes of a particular entity or object, whereas machines often cannot infer them, because the causal effects of actions are implicit and require understanding text across a long range. In addition, current AI can identify who did what to whom, when, and where in natural language text, but most machine intelligence lacks the most basic understanding of everyday situations and events — in other words, common sense. We are exploring various benchmarks and algorithms, e.g., procedural text reading comprehension (ProPara) and commonsense reading comprehension (Cosmos QA), to encourage machines to understand implicit information and to incorporate human commonsense into the understanding process.
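The state-tracking problem can be made concrete with a toy tracker that follows where an entity is after each step of a process. The rules here are hand-written placeholders; ProPara-style systems must learn such effects from data, including effects the text never states explicitly.

```python
def track_states(entity, steps):
    # Follow an entity's location through a sequence of
    # (verb, subject, destination) steps.
    # The verb rules are toy assumptions; real models must infer
    # implicit causal effects from context.
    location = "unknown"
    history = []
    for verb, subject, destination in steps:
        if subject == entity:
            if verb in ("move", "create"):
                location = destination
            elif verb == "destroy":
                location = "nonexistent"  # the entity ceases to exist
        history.append(location)
    return history

# "Water evaporates from the ocean, condenses in clouds, falls as rain."
steps = [
    ("move", "water", "atmosphere"),
    ("move", "water", "clouds"),
    ("move", "water", "ground"),
]
print(track_states("water", steps))  # -> ['atmosphere', 'clouds', 'ground']
```

The hard part, which this sketch sidesteps, is producing the `(verb, subject, destination)` tuples from raw text when the text only says "water evaporates" and never mentions the atmosphere at all.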

Automatic Knowledge Network Construction and Population for Any Domain: leveraging advanced machine learning algorithms and existing knowledge and resources to fully automate the construction of high-quality knowledge graphs for any domain.

How to automate the construction of high-quality knowledge graphs has long been a challenge in Natural Language Processing. Prior programs, such as Automatic Content Extraction (ACE) and TAC-KBP, predefined a set of target types, e.g., 33 event types in ACE and 38 in TAC-KBP. However, even for such a limited number of target types, the performance of current machine learning algorithms is still quite low: e.g., the state-of-the-art event extraction performance based on BERT is still about 60%, far from enough to make the resulting knowledge graphs truly usable in downstream applications. Similar situations hold in other areas, such as the biomedical and scientific domains. We continue to explore advanced algorithms toward this ultimate goal.
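To see why event extraction is hard, consider the simplest possible baseline: a fixed trigger-word lexicon. The entries below are hypothetical; ACE/TAC-KBP systems instead classify every token in context (e.g., with BERT), precisely because the same word can trigger different events or none at all.

```python
# Toy trigger lexicon (hypothetical entries; real systems classify
# tokens in context rather than matching a fixed word list).
TRIGGERS = {
    "fired": "End-Position",
    "married": "Marry",
    "attacked": "Attack",
}

def extract_events(sentence):
    # Return (trigger, event_type) pairs found in the sentence.
    events = []
    for raw in sentence.lower().split():
        token = raw.strip(".,!?")
        if token in TRIGGERS:
            events.append((token, TRIGGERS[token]))
    return events

print(extract_events("The company fired its CEO."))
# -> [('fired', 'End-Position')]
```

This baseline would also label "the potter fired the kiln" as an End-Position event — exactly the kind of ambiguity that keeps even contextual models around the performance levels noted above.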

Low-Resource Natural Language Processing: extending natural language processing and understanding to low-resource languages, domains, or other settings by leveraging existing resources.

It is quite common to see great demand to rapidly develop a Natural Language Processing (NLP) system, such as a name tagger, for a new language or domain with very few resources. Traditional supervised learning methods that rely on large-scale manual annotation would be too costly. A promising direction, therefore, is to transfer available resources and annotations from high-resource languages and domains to low-resource ones. We are exploring various ideas, e.g., a multilingual or multimedia common semantic space that enables cross-lingual or cross-media transfer (EMNLP'2018, NAACL'2019), and domain adaptation techniques.
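The common-semantic-space idea can be sketched in a few lines: train (here, just store) labeled examples from a high-resource language, then classify words of a low-resource language whose embeddings have already been projected into the same space. All vectors below are made up; real systems learn the alignment from bilingual signals.

```python
from math import dist  # Euclidean distance, Python 3.8+

def nearest_label(vec, labeled_vecs):
    # 1-nearest-neighbor in the shared space: the classifier never sees
    # the low-resource language, only embeddings aligned into one space.
    return min(labeled_vecs, key=lambda pair: dist(vec, pair[0]))[1]

# Hypothetical English training vectors in a shared semantic space.
english_train = [
    ([0.9, 0.1], "PERSON"),
    ([0.1, 0.9], "LOCATION"),
]
# Hypothetical vector for a Spanish word, projected into the same space.
spanish_vec = [0.85, 0.2]
print(nearest_label(spanish_vec, english_train))  # -> PERSON
```

The annotation cost stays entirely on the high-resource side; the only low-resource requirement is the embedding alignment itself, which is what the cross-lingual transfer work referenced above aims to provide.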