By Basel Khaled
As the world's first graduate-level, research-based artificial intelligence (AI) university, Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) is continuing to increase the breadth and pace of publication of its ground-breaking AI research.
Between January and June 2024, the MBZUAI community, made up of more than 80 world-class faculty, 200-plus researchers, and hundreds of students, published more than 300 papers at top-tier AI venues. This included 39 papers at the prestigious International Conference on Learning Representations (ICLR) 2024, held in May.
This follows the publication of 612 papers at top-tier venues in 2023. Highlights included 30 papers at the International Conference on Computer Vision (ICCV), 34 at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 44 at Empirical Methods in Natural Language Processing (EMNLP), and 53 at the Conference on Neural Information Processing Systems (NeurIPS).
Five years since its inception, MBZUAI is now recognized as one of the world’s top 100 universities across all of computer science, and is ranked in the top 20 globally across AI, computer vision, machine learning, natural language processing (NLP), and robotics (CSRankings).
Five stand-out research papers published by MBZUAI in the past six months are listed below: "Tackling misuse of LLM-generated text," "Improving gene-sequencing analysis to manage diseases," "New algorithm to enhance complex machine learning tasks," "First-of-its-kind large multimodal model (LMM) for detailed visual understanding," and "New method to boost efficiency of AI vision transformers."
Dr. Xiaodan Liang and Professor Xiaojun Chang, both professors in MBZUAI's Computer Vision Department, have teamed up with international collaborators to develop a new technique that makes vision transformers, a core component of most modern models for image and video analysis, more efficient. As set out in their paper, 'MLP can be a good Transformer Learner', the key discovery is that certain layers in the transformer can be replaced with much simpler multilayer perceptron (MLP) layers. This replacement, guided by a measure of randomness known as entropy, preserves model performance while yielding much smaller models. The new method supports more streamlined and efficient AI model training, potentially paving the way for faster and less resource-intensive technologies.
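To give a flavor of the general idea, here is a minimal, hypothetical PyTorch sketch of entropy-guided layer simplification. The helper names (attention_entropy, MLPOnlyBlock, simplify_transformer) and the threshold-based selection rule are illustrative assumptions for this article, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


def attention_entropy(attn_probs: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy of an attention map.

    attn_probs has shape (batch, heads, tokens, tokens) and each row
    sums to 1. Low entropy suggests the layer attends almost
    deterministically and may add little beyond what an MLP provides.
    """
    eps = 1e-9  # numerical guard against log(0)
    row_entropy = -(attn_probs * (attn_probs + eps).log()).sum(dim=-1)
    return row_entropy.mean()


class MLPOnlyBlock(nn.Module):
    """A transformer block with its attention sub-layer removed,
    leaving only the pre-norm MLP path with a residual connection."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.GELU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.mlp(self.norm(x))


def simplify_transformer(blocks: nn.ModuleList,
                         entropies: list[float],
                         threshold: float,
                         dim: int,
                         hidden: int) -> nn.ModuleList:
    """Replace blocks whose measured attention entropy falls below
    `threshold` with MLP-only blocks; the rest keep full attention."""
    return nn.ModuleList(
        MLPOnlyBlock(dim, hidden) if e < threshold else blk
        for blk, e in zip(blocks, entropies)
    )
```

In practice, the per-layer entropies would be measured on a calibration set before deciding which layers to simplify, and the resulting model would typically be fine-tuned to recover any lost accuracy.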