Hisham Cholakkal has received MBZUAI’s inaugural Award for Teaching Excellence, launched by the University’s Center for Teaching and Learning. Cholakkal, an Assistant Professor of Computer Vision who joined MBZUAI in 2020, was recognized for his innovative teaching methods and positive impact on students. The award draws on course evaluations and student feedback to identify impactful, student-centered teaching. Why it matters: This award highlights MBZUAI's commitment to recognizing and promoting excellence in AI education within the region.
MBZUAI NLP master's graduate Hasan Iqbal developed OpenFactCheck, a framework for fact-checking and evaluating the factual accuracy of large language models. The framework consists of three modules: ResponseEvaluator, LLMEvaluator, and CheckerEvaluator. OpenFactCheck was published at EMNLP 2024 and accepted at NAACL 2025 and COLING 2025, with Iqbal playing an active role at COLING in Abu Dhabi. Why it matters: The development of automated fact-checking frameworks is crucial for ensuring the reliability and trustworthiness of information generated by increasingly prevalent LLMs, especially in the Arabic-speaking world.
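For readers curious about how the three modules fit together, here is a minimal sketch of a pipeline in the spirit of OpenFactCheck. The class and method bodies below are illustrative assumptions, not the framework's actual API.

```python
# Illustrative sketch of a three-module fact-checking pipeline in the spirit
# of OpenFactCheck. All names and logic below are hypothetical stand-ins.
from dataclasses import dataclass


@dataclass
class Verdict:
    claim: str
    supported: bool


class ResponseEvaluator:
    """Checks the factuality of a single LLM response, claim by claim."""

    def evaluate(self, response: str) -> list[Verdict]:
        claims = [s.strip() for s in response.split(".") if s.strip()]
        # A real checker would retrieve evidence and verify each claim;
        # here the verification step is stubbed out.
        return [Verdict(claim=c, supported=True) for c in claims]


class LLMEvaluator:
    """Scores an LLM's overall factuality across a set of its answers."""

    def evaluate(self, answers: list[str], checker: ResponseEvaluator) -> float:
        verdicts = [v for a in answers for v in checker.evaluate(a)]
        return sum(v.supported for v in verdicts) / max(len(verdicts), 1)


class CheckerEvaluator:
    """Measures a fact-checker's accuracy against human-labeled verdicts."""

    def evaluate(self, predicted: list[bool], gold: list[bool]) -> float:
        return sum(p == g for p, g in zip(predicted, gold)) / max(len(gold), 1)


if __name__ == "__main__":
    checker = ResponseEvaluator()
    score = LLMEvaluator().evaluate(["Paris is in France. Water is wet."], checker)
    print(f"factuality score: {score:.2f}")
```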
MBZUAI master's student Sayed Hashim is applying machine learning to improve cancer diagnosis and treatment, motivated by personal loss. He and fellow student Muhammad Ali developed algorithms for cancer type classification from multi-omics data, achieving over 96% accuracy. Their work, supervised by MBZUAI faculty, resulted in a published paper on multi-omics data representation learning. Why it matters: This research demonstrates the potential of AI and machine learning to advance cancer research and personalized medicine in the region.
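As a toy illustration of the task setup (not the authors' representation-learning method), the sketch below fuses synthetic feature matrices from several omics modalities and trains a generic classifier; all data, dimensions, and class counts are invented.

```python
# Minimal sketch of cancer-type classification from multi-omics data.
# Synthetic features and a generic classifier stand in for the authors'
# actual approach; this only illustrates the early-fusion task setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
gene_expr = rng.normal(size=(n, 50))    # e.g., gene-expression features
methylation = rng.normal(size=(n, 30))  # e.g., DNA-methylation features
mirna = rng.normal(size=(n, 20))        # e.g., miRNA features

X = np.hstack([gene_expr, methylation, mirna])  # early fusion of modalities
y = rng.integers(0, 5, size=n)                  # five hypothetical cancer types

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# With random features this lands near chance; real multi-omics signal is
# what drives the reported 96%+ accuracy.
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.2f}")
```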
MBZUAI researchers demonstrated a low-latency, multilingual multimodal AI system at GITEX that integrates speech, text, and visual capabilities for more lifelike human-machine conversation. The demo, led by Dr. Hisham Cholakkal, includes a mobile app where users can point their camera at an object and ask questions, receiving spoken answers in multiple languages. They are also integrating the model into a robot dog that can respond to voice commands. Why it matters: This work addresses key challenges in deploying LLMs to real-world applications in the Middle East, such as multilingual support and real-time responsiveness.
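A hypothetical sketch of the point-and-ask loop is below; every function is a stub standing in for a real component (streaming ASR, a vision-language model, low-latency TTS), not MBZUAI's implementation.

```python
# Hypothetical sketch of a point-and-ask loop: capture a frame, transcribe a
# spoken question, query a multimodal model, speak the answer. All helpers
# are stubs for illustration only.

def capture_frame() -> bytes:
    return b"<camera frame>"  # stand-in for a live camera capture


def transcribe(audio: bytes, lang: str) -> str:
    return "What is this object?"  # stand-in for streaming speech recognition


def ask_multimodal_llm(frame: bytes, question: str, lang: str) -> str:
    return "That looks like a coffee cup."  # stand-in for the vision-language model


def speak(text: str, lang: str) -> None:
    print(f"[TTS/{lang}] {text}")  # stand-in for low-latency speech synthesis


def point_and_ask(audio: bytes, lang: str = "ar") -> None:
    frame = capture_frame()
    question = transcribe(audio, lang)
    answer = ask_multimodal_llm(frame, question, lang)
    speak(answer, lang)


point_and_ask(b"<microphone audio>")
```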
KAUST alumnus Hassan Al-Ismail (M.S. '14) leads a team at Saudi Aramco applying vibrational wave modeling to 2D data. He returned to Saudi Arabia to work for Saudi Aramco after receiving his bachelor's degree and was later sponsored by the company to study at KAUST. Al-Ismail emphasized the value of his time at KAUST for both academic and personal growth. Why it matters: This highlights KAUST's role in developing talent for key industries in Saudi Arabia, particularly in areas relevant to energy and resource management.
MBZUAI will present two assistive AI prototypes at GITEX 2025: smart glasses with a camera and eye tracker that identify objects and medication, and a brain-computer interface (BCI) device integrated with robotics to control a robotic dog's movements. The smart glasses use a multimodal large language model (LLM) to help visually impaired individuals, while the BCI aims to restore hands-free communication for people with mobility limitations. Hisham Cholakkal leads the research team, which received a Meta Regional Research Grant 2025 for its work on multimodal LLMs for smart wearables. Why it matters: The research demonstrates the potential of AI to improve the quality of life for vulnerable populations and addresses the challenge of providing cost-effective care for aging societies.
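One small, plausible piece of such a pipeline, selecting the detected object nearest the wearer's gaze before querying the multimodal LLM, might look like the sketch below; the detections and gaze coordinates are hypothetical stand-ins, not the team's code.

```python
# Hedged sketch: pick the detected object whose bounding-box center is
# closest to the wearer's gaze point, then hand it to a multimodal LLM.
import math


def nearest_to_gaze(detections: list[dict], gaze: tuple[float, float]) -> dict:
    """Return the detection whose box center is closest to the gaze point."""

    def dist(det: dict) -> float:
        x1, y1, x2, y2 = det["box"]
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        return math.hypot(cx - gaze[0], cy - gaze[1])

    return min(detections, key=dist)


detections = [
    {"label": "pill bottle", "box": (100, 120, 180, 260)},
    {"label": "water glass", "box": (400, 150, 470, 300)},
]
target = nearest_to_gaze(detections, gaze=(150, 200))
print(f"Ask the LLM about: {target['label']}")  # e.g., "Which medication is this?"
```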
KAUST alumnus Dr. Hesham Omran won the UNESCO-Al Fozan International Prize for achievements in STEM. Omran was recognized for his Analog Designer’s Toolbox (ADT) and his Mastering Microelectronics YouTube channel, which has over 1.2 million views. Omran aims to boost microelectronics innovation in the Arab world. Why it matters: The award highlights the impact of KAUST graduates on STEM fields in the region and recognizes contributions to education and innovation in microelectronics.
This article discusses growing concerns about the interpretability of large deep learning models. It highlights a talk by Danish Pruthi, an Assistant Professor at the Indian Institute of Science (IISc), Bangalore, who presented a framework for quantifying the value of explanations and argued for more holistic model evaluation. Pruthi's talk also touched on how geographically representative the outputs of text-to-image models are and how well conversational LLMs challenge false assumptions. Why it matters: Addressing interpretability and evaluation is crucial for building trustworthy and reliable AI systems, particularly in sensitive applications within the Middle East and globally.
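To make "quantifying the value of explanations" concrete, here is a toy, simplified sketch of a simulatability-style test: an explanation counts as useful if it helps a student model predict the teacher's outputs better. This is illustrative only and simplifies real setups (which, for example, typically withhold explanations at test time).

```python
# Toy simulatability-style test: does an explanation help a simple "student"
# model predict the teacher's decisions? Entirely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, d = 400, 20
X = rng.normal(size=(n, d))

# Teacher's decisions depend on a feature interaction a linear student
# cannot capture on its own.
teacher = (X[:, 0] * X[:, 1] > 0).astype(int)

# Hypothetical explanation: a saliency-style summary of what the teacher used.
explanation = (X[:, 0] * X[:, 1]).reshape(-1, 1)


def student_accuracy(features: np.ndarray) -> float:
    X_tr, X_te, y_tr, y_te = train_test_split(features, teacher, random_state=0)
    return LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)


# Utility of the explanation = the student's accuracy gain when given it.
gain = student_accuracy(np.hstack([X, explanation])) - student_accuracy(X)
print(f"explanation utility (student accuracy gain): {gain:.3f}")
```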