Tom M. Mitchell of Carnegie Mellon University discussed using machine learning to study how the brain processes natural language, drawing on fMRI and MEG recordings of brain activity while people read text. The research explores neural encodings of word meaning, the flow of information during word comprehension, and how word meanings combine in sentences and stories. He also touched on how our understanding of the brain aligns with current AI approaches to NLP. Why it matters: This interdisciplinary research could bridge the gap between neuroscience and AI, potentially leading to more human-like NLP models.
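As a rough illustration of the encoding-model idea behind this line of work, the sketch below fits a ridge regression from word feature vectors to per-voxel fMRI responses and uses it to predict the activity pattern for an unseen word. The data, dimensions, and regularization strength are hypothetical stand-ins, not Mitchell's actual features or recordings.

```python
# Minimal encoding-model sketch (hypothetical data, not Mitchell's pipeline):
# predict per-voxel fMRI activity from word feature vectors via ridge regression.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_features, n_voxels = 60, 25, 500

X = rng.standard_normal((n_words, n_features))  # semantic features per word (stand-in)
Y = rng.standard_normal((n_words, n_voxels))    # fMRI response, word x voxel (stand-in)

# Ridge regression: W = (X^T X + lam*I)^-1 X^T Y, one weight vector per voxel.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

# Encode an unseen word's features into a predicted brain activity pattern.
x_new = rng.standard_normal(n_features)
predicted_activity = x_new @ W                  # shape: (n_voxels,)
print(predicted_activity.shape)
```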
MBZUAI researchers are developing spiking neural networks (SNNs) to emulate the energy efficiency of the human brain. Traditional deep learning models such as those powering ChatGPT consume significant energy, estimated at 3.96 watt-hours per query. SNNs aim to mimic biological neurons more closely to cut energy consumption, since the human brain runs on a small fraction of the energy these models require. Why it matters: This research could lead to more sustainable and energy-efficient AI technologies, addressing a major challenge in deploying large-scale AI systems.
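To make the contrast concrete, here is a minimal leaky integrate-and-fire (LIF) neuron, the basic building block of most SNNs: computation happens only at sparse spike events rather than in dense matrix multiplies, which is the source of the hoped-for energy savings. All parameters below are illustrative, not taken from the MBZUAI work.

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch; illustrative parameters.
import numpy as np

def lif_neuron(input_current, dt=1.0, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; return membrane trace and binary spike train."""
    v = v_rest
    voltages, spikes = [], []
    for i_t in input_current:
        # Leaky integration: membrane potential decays toward rest, accumulates input.
        v += dt / tau * (v_rest - v) + i_t
        if v >= v_thresh:       # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset         # reset after spiking
        else:
            spikes.append(0)
        voltages.append(v)
    return np.array(voltages), np.array(spikes)

rng = np.random.default_rng(0)
_, spike_train = lif_neuron(rng.uniform(0.0, 0.12, size=200))
print("spikes emitted:", spike_train.sum())
```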
Fatima Ali AlNuaimi from the Autonomous Robotics Research Center (ARRC) had two research papers on brain-computer interface (BCI) technology published at the IEEE International Conference on Bioinformatics and Biomedicine (BIBM) 2022. The papers are titled “Real-time Control of UGV Robot in Gazebo Simulator using P300-based Brain-Computer Interface” and “Secure Password Using EEG-based BrainPrint System: Unlock Smartphone Password Using Brain-Computer Interface Technology”. AlNuaimi is recognized as a young Emirati scientist advancing BCI knowledge in the UAE. Why it matters: This highlights growing BCI research capabilities in the UAE and the contributions of Emirati researchers to this emerging field.
Olivier Oullier, Visiting Professor at MBZUAI, works on brain-computer interfaces and founded Inclusive Brains to develop a Neural Foundation Model built on neurophysiological and behavioral signals. The model integrates data from brainwaves, eye-tracking, and other modalities so that machines can build a representation of the world closer to human cognition. Why it matters: Such advancements can transform human-computer interaction, with particular implications for people of determination in the region.
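One plausible reading of such multimodal integration is late fusion: encode each signal stream separately, then concatenate into a shared representation. The sketch below illustrates that pattern with toy encoders; the actual Neural Foundation Model architecture is not described in this summary, so the encoders, dimensions, and fusion scheme here are all assumptions.

```python
# Late-fusion sketch (an assumed pattern, not the Neural Foundation Model itself):
# encode each modality separately, then concatenate into a shared representation.
import numpy as np

rng = np.random.default_rng(0)

def encode(signal, weight):
    """Toy per-modality encoder: linear projection plus tanh nonlinearity."""
    return np.tanh(signal @ weight)

eeg = rng.standard_normal((1, 64))   # hypothetical 64-channel EEG features
gaze = rng.standard_normal((1, 4))   # hypothetical eye-tracking features (x, y, pupil, blink)

w_eeg = rng.standard_normal((64, 32))
w_gaze = rng.standard_normal((4, 32))

# Late fusion: the concatenated vector is what downstream heads would consume.
fused = np.concatenate([encode(eeg, w_eeg), encode(gaze, w_gaze)], axis=1)
print(fused.shape)  # (1, 64)
```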
A talk introduced a computational framework for learning a compact, structured representation of real-world datasets that is both discriminative and generative. The framework learns a closed-loop transcription between the distribution of a high-dimensional multi-class dataset and an arrangement of multiple independent subspaces, known as a linear discriminative representation (LDR). The optimality of the closed-loop transcription can be characterized in closed form by an information-theoretic measure known as rate reduction. Why it matters: The framework unifies the concepts and benefits of auto-encoders and GANs and generalizes them to learning representations of multi-class visual data that are both discriminative and generative.
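The rate reduction measure has a simple closed form: the coding rate of the whole representation minus the sample-weighted coding rates of its class partitions (expand globally, compress within each class). The sketch below computes it on random stand-in data following the published formula; the features Z, the labels, and the distortion parameter ε are all hypothetical.

```python
# Rate reduction sketch: Delta R = R(Z) - sum_j (n_j / n) * R(Z_j).
# Random stand-in data; eps is the coding precision (distortion) parameter.
import numpy as np

def coding_rate(Z, eps=0.5):
    """R(Z) = 1/2 * logdet(I + d/(n*eps^2) * Z @ Z.T), for Z of shape (d, n)."""
    d, n = Z.shape
    _, logdet = np.linalg.slogdet(np.eye(d) + (d / (n * eps**2)) * Z @ Z.T)
    return 0.5 * logdet

def rate_reduction(Z, labels, eps=0.5):
    """Global coding rate minus the sample-weighted per-class coding rates."""
    _, n = Z.shape
    compressed = sum(
        (np.sum(labels == c) / n) * coding_rate(Z[:, labels == c], eps)
        for c in np.unique(labels)
    )
    return coding_rate(Z, eps) - compressed

rng = np.random.default_rng(0)
Z = rng.standard_normal((16, 100))       # 16-dim features for 100 samples
labels = rng.integers(0, 4, size=100)    # 4 hypothetical classes
print(rate_reduction(Z, labels))
```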
Giulio Tononi, director of the Wisconsin Institute for Sleep and Consciousness, lectured at KAUST's 2019 Winter Enrichment Program on the topic of consciousness. He argued that consciousness is not simply a response to the environment, citing examples such as dreaming and brain activity in vegetative states. Tononi proposed five axioms for understanding consciousness: intrinsic existence, composition, information, integration, and exclusion. Why it matters: The lecture highlights KAUST's engagement with fundamental questions in neuroscience and cognitive science, showcasing the university's interdisciplinary approach to research.
Vicky Kalogeiton of École Polytechnique discussed the importance of multimodality for story-level recognition and generation using video, audio, text, masks, and clinical data. She presented work on multimodal video understanding built on FunnyNet-W and the Short Film Dataset, and showed examples of visual generation from text and other modalities (ET, CAD, DynamicGuidance). Why it matters: Multimodal AI research is growing globally, and this talk highlights the potential of combining different data types for richer understanding and generation, with implications for a range of applications, including those relevant to the Middle East.
A Caltech researcher presented at MBZUAI on memory representation and retrieval, contrasting AI and neuroscience approaches. Current AI retrieval systems such as retrieval-augmented generation (RAG) rely on fine-tuning and embedding similarity, whereas the presenter argued for exploring retrieval via combinatorial object identity or spatial proximity. The research explores circuit-level retrieval with domain fine-tuned LLMs and distributed memory for image retrieval based on semantic similarity. Why it matters: The work suggests that structured databases and retrieval-focused training can let smaller models outperform larger general-purpose models, offering efficiency gains for AI development in the region.
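For contrast with the structured retrieval the talk advocates, the sketch below shows the embedding-similarity lookup that RAG-style systems typically perform: rank passages by cosine similarity to a query embedding and return the top k. The corpus and query embeddings are random stand-ins for real encoder outputs.

```python
# Embedding-similarity retrieval sketch (RAG-style baseline); random stand-in
# embeddings take the place of real encoder outputs over a passage corpus.
import numpy as np

rng = np.random.default_rng(0)
doc_embeddings = rng.standard_normal((1000, 128))  # hypothetical corpus of 1000 passages
query = rng.standard_normal(128)

def top_k_by_cosine(query, docs, k=3):
    """Return indices of the k passages most cosine-similar to the query."""
    docs_n = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    query_n = query / np.linalg.norm(query)
    scores = docs_n @ query_n
    return np.argsort(scores)[::-1][:k]

print(top_k_by_cosine(query, doc_embeddings))
```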