GCC AI Research

Results for "neuromorphic"

Building applications inspired by the human eye

KAUST ·

KAUST researchers in the Sensors Lab are developing neuromorphic circuits for vision sensors, drawing inspiration from the human eye. They created flexible photoreceptors using hybrid perovskite materials, with capacitance tunable by light stimulation, mimicking the human retina. The team collaborates with experts in image characterization and brain pattern recognition to connect the 'eye' to the 'brain' for object identification. Why it matters: This biomimetic approach promises advancements in AI, machine learning, and smart city development within the region.

Emulating the energy efficiency of the brain

MBZUAI ·

MBZUAI researchers are developing spiking neural networks (SNNs) to emulate the energy efficiency of the human brain. Traditional deep learning models, such as those powering ChatGPT, consume significant energy: a single query is estimated to use 3.96 watt-hours. SNNs aim to mimic biological neurons more closely to cut energy consumption, since the human brain runs on a small fraction of the energy these models require. Why it matters: This research could lead to more sustainable and energy-efficient AI technologies, addressing a major challenge in deploying large-scale AI systems.
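To make the contrast concrete, the basic building block of most SNNs is the leaky integrate-and-fire (LIF) neuron, which only does work when its membrane potential crosses a threshold. A minimal sketch is below; the parameters and input trace are illustrative, not from the MBZUAI work:

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9, v_reset=0.0):
    """Simulate a leaky integrate-and-fire neuron over an input-current trace.

    Returns a binary spike train: the neuron fires only when its membrane
    potential crosses the threshold, so downstream computation is sparse
    and event-driven rather than dense -- the source of SNNs' efficiency.
    """
    v = 0.0
    spikes = []
    for i_t in input_current:
        v = leak * v + i_t      # leaky integration of the input current
        if v >= threshold:      # threshold crossing -> emit a spike
            spikes.append(1)
            v = v_reset         # reset the membrane potential
        else:
            spikes.append(0)
    return spikes

current = [0.3, 0.4, 0.5, 0.1, 0.9, 0.2, 0.0, 0.8]
print(lif_neuron(current))  # → [0, 0, 1, 0, 0, 1, 0, 0]
```

Note that only two of eight time steps produce spikes; in hardware, the silent steps cost almost nothing, which is the efficiency argument in a nutshell.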

Research Talks: Bridging neuroscience and AI

MBZUAI ·

Caltech graduate student Surya Narayanan Hari presented his research at MBZUAI on replicating human-like memory in machines. He discussed how the thalamus, which filters sensory and motor signals in the brain, inspires the development of routed monolithic models in AI. Hari explained that in the human brain, memory retrieval occurs at the object, embedding, and circuit levels. Why it matters: This talk highlights the potential of neuroscience-inspired AI architectures for improving memory and information processing in AI systems, which could accelerate the development of more efficient and context-aware AI models in the region.

Memory representation and retrieval in neuroscience and AI

MBZUAI ·

A Caltech researcher presented at MBZUAI on memory representation and retrieval, contrasting approaches in AI and neuroscience. Current AI systems typically store knowledge through fine-tuning and retrieve it via embedding similarity, as in retrieval-augmented generation (RAG), while the presenter argued for also exploring retrieval via combinatorial object identity or spatial proximity. The research explores circuit-level retrieval via domain-fine-tuned LLMs and distributed memory for image retrieval using semantic similarity. Why it matters: The work suggests structured databases and retrieval-focused training can allow smaller models to outperform larger general-purpose models, offering efficiency gains for AI development in the region.
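The embedding-similarity retrieval step that RAG systems rely on can be sketched in a few lines. The toy encoder below is a deterministic bag-of-words hash embedding standing in for a real learned encoder such as a sentence transformer; the corpus and function names are invented for illustration:

```python
import zlib
import numpy as np

def embed(text, dim=32):
    """Toy deterministic bag-of-words embedding (a stand-in for a real
    learned text encoder; for illustration only)."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        # Seed a generator per word so each word maps to a fixed vector.
        rng = np.random.default_rng(zlib.crc32(word.encode()))
        vec += rng.standard_normal(dim)
    n = np.linalg.norm(vec)
    return vec / n if n else vec

def retrieve(query, corpus, k=1):
    """Rank documents by cosine similarity to the query embedding --
    the core retrieval step behind RAG pipelines."""
    q = embed(query)
    scores = [(float(embed(doc) @ q), doc) for doc in corpus]
    return [doc for _, doc in sorted(scores, reverse=True)[:k]]

corpus = [
    "perovskite retina photoreceptor capacitance",
    "spiking neural network energy efficiency",
    "thalamus routed monolithic models",
]
print(retrieve("energy efficient spiking network", corpus))
```

The presenter's alternative proposals (retrieval by combinatorial object identity or spatial proximity) would replace the cosine-similarity ranking above with a structured lookup rather than a nearest-neighbor search.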

Uncertainty Modeling of Emerging Device-based Computing-in-Memory Neural Accelerators with Application to Neural Architecture Search

arXiv ·

This paper analyzes the impact of device uncertainties on deep neural networks (DNNs) deployed on emerging-device-based computing-in-memory (CiM) accelerators. The authors propose UAE, an uncertainty-aware neural architecture search (NAS) scheme, to identify DNN models that remain robust to these uncertainties. The goal is to mitigate the accuracy drop that occurs when trained models are deployed on real-world hardware.
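The core robustness signal such a search needs can be estimated by Monte-Carlo perturbation of stored weights. The sketch below measures how a toy linear classifier degrades under Gaussian device variation; the model, noise model, and values are assumptions for illustration, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear classifier standing in for a DNN layer mapped onto a CiM
# crossbar (weights and data are illustrative, not from the paper).
W = rng.standard_normal((2, 8))
X = rng.standard_normal((500, 8))
y = (X @ W.T).argmax(axis=1)   # labels = the ideal, noise-free output

def accuracy_under_device_noise(W, X, y, sigma, trials=100):
    """Monte-Carlo estimate of accuracy when each stored weight is
    perturbed by Gaussian device variation with relative std `sigma`."""
    acc = 0.0
    for _ in range(trials):
        W_noisy = W + sigma * np.abs(W) * rng.standard_normal(W.shape)
        acc += ((X @ W_noisy.T).argmax(axis=1) == y).mean()
    return acc / trials

for sigma in (0.0, 0.1, 0.5):
    print(f"sigma={sigma}: acc={accuracy_under_device_noise(W, X, y, sigma):.3f}")
```

An uncertainty-aware NAS scheme can fold an estimate like this into its search objective, preferring architectures whose accuracy decays slowly as sigma grows.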

Perovskites used to make efficient artificial retina

KAUST ·

KAUST researchers have developed an artificial electronic retina mimicking the behavior of rod retina cells, utilizing a hybrid perovskite material (MAPbBr3) embedded in the polymer P(VDF-TrFE-CFE). The photoreceptor array, made of metal-insulator-metal capacitors, detects light intensity through changes in electrical capacitance. Connected to a CMOS sensing circuit and a spiking neural network, the 4x4 array achieved around 70 percent accuracy in recognizing handwritten numbers. Why it matters: This research paves the way for energy-efficient neuromorphic vision sensors and advanced computer vision applications, potentially revolutionizing camera technology.
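One common way to bridge an intensity-sensing array and a spiking network is rate coding, where brighter pixels spike more often. The sketch below converts a 4x4 intensity frame into Bernoulli spike trains; this is a generic front-end illustration, and the KAUST team's actual readout and encoding scheme may differ:

```python
import numpy as np

rng = np.random.default_rng(42)

def rate_code(frame, n_steps=20, max_rate=0.9):
    """Convert a normalized intensity frame into Bernoulli spike trains:
    each pixel spikes per time step with probability proportional to its
    brightness, yielding an event stream an SNN can consume."""
    p = np.clip(frame, 0.0, 1.0) * max_rate  # per-step spike probability
    return (rng.random((n_steps,) + frame.shape) < p).astype(np.uint8)

frame = np.zeros((4, 4))
frame[1:3, 1:3] = 1.0          # a bright 2x2 patch on a dark background
spikes = rate_code(frame)
print(spikes.shape)            # (20, 4, 4): time steps x pixel grid
print(spikes.sum(axis=0))      # bright pixels accumulate many spikes
```

A downstream spiking classifier then only processes the events, which is what makes such sensor-to-SNN pipelines attractive for low-power vision.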

Reliability Exploration of Neural Network Accelerator

MBZUAI ·

This article discusses the reliability of deep neural networks (DNNs) and their hardware platforms, particularly with respect to soft errors caused by cosmic rays. It notes that although DNNs tolerate many bit flips, certain errors can still cause miscalculations in AI accelerators. The talk, led by Prof. Masanori Hashimoto of Kyoto University, will cover identifying vulnerabilities in neural networks and exploring the reliability of AI accelerators for edge computing. Why it matters: As DNNs are deployed in safety-critical applications in the region, ensuring the reliability of AI hardware is crucial for safe and trustworthy operation.
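Why some bit flips are harmless and others catastrophic comes down to which bit of a stored weight is hit. Single-bit fault injection is a standard primitive in this line of reliability work; the helper below is a generic sketch, not Prof. Hashimoto's tooling:

```python
import struct

def flip_bit(x, bit):
    """Flip one bit in the IEEE-754 float32 representation of x,
    emulating a cosmic-ray-induced soft error in weight memory."""
    (bits,) = struct.unpack("<I", struct.pack("<f", x))       # float -> raw bits
    (flipped,) = struct.unpack("<f", struct.pack("<I", bits ^ (1 << bit)))
    return flipped

w = 0.5
print(flip_bit(w, 0))    # low mantissa bit: negligible change
print(flip_bit(w, 30))   # high exponent bit: value explodes to ~1.7e38
```

A flip in the low mantissa bits barely perturbs the weight, while a flip in a high exponent bit can blow a value up by dozens of orders of magnitude, which is exactly the asymmetry that vulnerability analyses of AI accelerators try to map.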

Brute-force computing is the next “winter of AI”

MBZUAI ·

Prof. Mérouane Debbah of the Technology Innovation Institute (TII) warns that current AI development relies on unsustainable, energy-intensive "brute-force computing." He argues that the field needs more energy-efficient algorithms instead of simply scaling up GPUs. Debbah suggests neuromorphic computing as a potential solution, drawing inspiration from the human brain's energy efficiency. Why it matters: This critique highlights a crucial sustainability challenge for AI in the GCC and globally, as the region invests heavily in compute-intensive AI models.