Modern Artificial Intelligence Research
We are dedicated to exploring computational models for semantic representation, extraction, and generation across multiple modalities, spanning text, images, videos, and large-scale knowledge. Through large-model training, self-evolution theories, and data-driven text generation technologies, we aim to strengthen the processing and understanding of multimodal data. Our research group develops and applies advanced natural language processing technologies, promoting applications in text summarization, cross-lingual and cross-modal translation, style transfer, and intelligent question answering. We address semantic understanding and generation challenges across different contexts and modalities, pursuing breakthroughs in information extraction, content generation, and human-computer interaction, with the goal of enabling more intelligent and natural human-computer interaction and providing innovative solutions for comprehensive multimodal data processing. The group actively collaborates with leading academic institutions and industry partners both domestically and internationally, promoting technology transfer while training high-quality research talent and collectively advancing the field of natural language processing.
Yang Gao, Doctoral Supervisor, works primarily on large language model training, automatic text generation technologies, and their application and transfer. She has published more than 60 papers in leading international journals and conferences, including ACL, AAAI, WWW, IJCAI, EMNLP, and TKDE. She has served as an area chair for text generation at conferences such as EMNLP and COLING, as an editorial board member for journals including Web Intelligence and Natural Language Processing Journal, as a program committee member for international conferences such as AAAI, ACL, EMNLP, NAACL, and ICDM, and as a reviewer for journals including TNNLS and Computing Surveys. She led the from-scratch development of the Mingde foundation large language model (MindLLM) and the construction of large models for the domestic technology ecosystem.
PhD in Data Science, Queensland University of Technology