GCC AI Research

Results for "connectionism"

Using Machine Learning to Study How Brains Process Natural Language

MBZUAI ·

Tom M. Mitchell of Carnegie Mellon University discussed using machine learning to study how the brain processes natural language, recording brain activity with fMRI and MEG while subjects read text. The research explores neural encodings of word meaning, the flow of information during word comprehension, and how the meanings of words combine in sentences and stories. He also touched on how our understanding of the brain aligns with current AI approaches to NLP. Why it matters: This interdisciplinary research could bridge the gap between neuroscience and AI, potentially leading to more human-like NLP models.
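The encoding-model idea behind this line of work can be sketched with a toy linear model: predict each brain voxel's activity as a linear function of a word's semantic features, then predict activity for a held-out word. The data below are random stand-ins, not real recordings, and the dimensions are arbitrary.

```python
import numpy as np

# Simulate a noiseless linear encoding model: brain activity = features @ weights.
rng = np.random.default_rng(0)
n_words, n_features, n_voxels = 50, 10, 20
word_features = rng.normal(size=(n_words, n_features))  # semantic features per word
true_weights = rng.normal(size=(n_features, n_voxels))  # unknown encoding model
voxels = word_features @ true_weights                   # simulated voxel responses

# Fit the encoding model by least squares from (word features, voxel activity) pairs.
weights, *_ = np.linalg.lstsq(word_features, voxels, rcond=None)

# Predict the activity pattern for a previously unseen word.
new_word = rng.normal(size=n_features)
predicted_activity = new_word @ weights
```

Because the simulated responses are noiseless and there are more words than features, least squares recovers the encoding weights exactly; with real fMRI data, regularized variants of the same fit are typical.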

Memory representation and retrieval in neuroscience and AI

MBZUAI ·

A Caltech researcher presented at MBZUAI on memory representation and retrieval, contrasting AI and neuroscience approaches. Current AI retrieval systems such as RAG retrieve documents by embedding similarity, often combined with fine-tuning, while the presenter argued for exploring retrieval via combinatorial object identity or spatial proximity. The research explores circuit-level retrieval via domain fine-tuned LLMs and distributed memory for image retrieval using semantic similarity. Why it matters: The work suggests that structured databases and retrieval-focused training can allow smaller models to outperform larger general-purpose ones, offering efficiency gains for AI development in the region.
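The embedding-similarity retrieval that RAG systems rely on can be sketched in a few lines. The documents and vectors below are made up for illustration; a real system would obtain embeddings from a trained text encoder.

```python
import numpy as np

# Toy corpus with hand-made 3-dimensional "embeddings" (illustrative only).
docs = [
    "spiking neural networks",
    "retrieval-augmented generation",
    "perovskite photoreceptors",
]
doc_vecs = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.9, 0.1],
    [0.0, 0.2, 0.9],
])

def retrieve(query_vec, doc_vecs, k=1):
    """Return indices of the k documents most similar to the query (cosine similarity)."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                       # cosine similarity of each document to the query
    return np.argsort(scores)[::-1][:k]  # indices sorted by descending similarity

# A hypothetical query embedding close to the RAG document.
query = np.array([0.2, 0.95, 0.05])
top = retrieve(query, doc_vecs, k=1)
print(docs[top[0]])  # → retrieval-augmented generation
```

The alternatives the talk argued for (retrieval by object identity or spatial proximity) would replace the cosine-similarity ranking step with a lookup keyed on discrete or spatial structure rather than a dense vector comparison.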

A Rising Star In The East Lights A Path To Responsible Artificial Intelligence: The Mohammed Bin Zayed University Of AI

MBZUAI ·

An article in Forbes highlights the Mohammed bin Zayed University of Artificial Intelligence (MBZUAI) as the first university devoted exclusively to AI advancement. MBZUAI President Eric Xing champions a 'connectionism' approach, designing computational models inspired by the interconnected neural networks underlying human cognition. The article argues that AI's ability to process and analyze data at high speed unlocks new realms of knowledge, acting as a universal translator between humans and the digital world. Why it matters: MBZUAI is positioned as a key institution driving AI innovation and responsible AI practices in the Middle East.

Emulating the energy efficiency of the brain

MBZUAI ·

MBZUAI researchers are developing spiking neural networks (SNNs) to emulate the energy efficiency of the human brain. Traditional deep learning models like those powering ChatGPT consume significant energy, with a single query estimated to consume 3.96 watt-hours. SNNs aim to mimic biological neurons more closely to reduce energy consumption, as the human brain runs on a small fraction of the energy these models require. Why it matters: This research could lead to more sustainable and energy-efficient AI technologies, addressing a major challenge in deploying large-scale AI systems.
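The basic unit of many SNNs, the leaky integrate-and-fire (LIF) neuron, can be sketched as follows; the parameters are illustrative and not taken from the MBZUAI work. Because downstream computation happens only when a spike occurs, sparse spike trains are what give SNNs their potential energy advantage over dense matrix multiplications.

```python
def lif_simulate(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Simulate one leaky integrate-and-fire neuron; return its binary spike train."""
    v = 0.0
    spikes = []
    for i in input_current:
        v = leak * v + i      # leaky integration: potential decays, then input is added
        if v >= v_thresh:     # fire when the membrane potential crosses threshold
            spikes.append(1)
            v = v_reset       # reset the potential after a spike
        else:
            spikes.append(0)
    return spikes

# Sub-threshold inputs accumulate until the neuron fires, then it resets.
print(lif_simulate([0.3, 0.5, 0.6, 0.0, 0.9, 0.8]))  # → [0, 0, 1, 0, 0, 1]
```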

Building applications inspired by the human eye

KAUST ·

KAUST researchers in the Sensors Lab are developing neuromorphic circuits for vision sensors, drawing inspiration from the human eye. They created flexible photoreceptors using hybrid perovskite materials, with capacitance tunable by light stimulation, mimicking the human retina. The team collaborates with experts in image characterization and brain pattern recognition to connect the 'eye' to the 'brain' for object identification. Why it matters: This biomimetic approach promises advancements in AI, machine learning, and smart city development within the region.

Research Talks: Bridging neuroscience and AI

MBZUAI ·

Caltech graduate student Surya Narayanan Hari presented his research on replicating human-like memory in machines at MBZUAI. He discussed how the thalamus, which filters sensory and motor signals in the brain, inspires the development of routed monolithic models in AI. Hari explained that memory retrieval in the human brain occurs at the object, embedding, and circuit levels. Why it matters: This talk highlights the potential of neuroscience-inspired AI architectures for improving memory and information processing in AI systems, which could accelerate the development of more efficient and context-aware AI models in the region.

A Glass Bead Game of *-ology: Contemporary Computational Approaches to Linguistic Morphology, Typology and Social Psychology

MBZUAI ·

Ekaterina Vylomova from the University of Melbourne gave a talk on using NLP models to advance research in linguistic morphology, typology, and social psychology. The talk covered using models to study morphology, phonetic changes in words over time, and diachronic changes in language semantics. Vylomova presented the UniMorph project, a cross-lingual annotation schema and database with morphological paradigms for over 150 languages. Why it matters: This research demonstrates the potential of NLP to contribute to a deeper understanding of language evolution and structure, with applications in linguistic research and the study of social and cultural changes.
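UniMorph distributes its paradigms as tab-separated triples of lemma, inflected form, and morphological feature bundle; a minimal parser for that format is sketched below. The sample entries follow UniMorph's general style but are constructed here for illustration.

```python
# Three illustrative UniMorph-style entries: lemma, inflected form, feature bundle.
sample = "walk\twalked\tV;PST\nwalk\twalking\tV;V.PTCP;PRS\nmouse\tmice\tN;PL"

def parse_unimorph(text):
    """Group inflected forms by lemma, keyed by their feature bundles."""
    paradigms = {}
    for line in text.splitlines():
        lemma, form, feats = line.split("\t")
        paradigms.setdefault(lemma, {})[feats] = form
    return paradigms

paradigms = parse_unimorph(sample)
print(paradigms["walk"]["V;PST"])  # → walked
```

Grouping by lemma like this reconstructs each word's paradigm, which is the unit of analysis in cross-lingual morphology studies of the kind the talk described.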

Art as a window into sight

KAUST ·

Margaret Livingstone, a neurobiology professor at Harvard Medical School, lectured at KAUST's Winter Enrichment Program 2018 on how art can reveal insights into the human brain. She discussed how artists have long understood the independent roles of color and luminance in visual perception. Livingstone highlighted examples from Picasso, Monet, and Warhol to illustrate how artists manipulate visual cues. Why it matters: This interdisciplinary approach can potentially lead to new understandings of how the brain processes visual information and inform advances in both neuroscience and art.