GCC AI Research


Ph.D. student Valerio Mazzone wins best paper award

KAUST ·

KAUST Ph.D. student Valerio Mazzone won the best paper award at the 9th International Conference on Metamaterials, Photonic Crystals and Plasmonics (META). Mazzone's paper demonstrated the design of a new type of all-optical neural network using dielectric nano-lasers with invisible emission. The research showed that the system can produce ultrafast optical pulses with controllable period and duration on an optical chip. Why it matters: The award recognizes KAUST's contribution to innovative research in nanophotonics and optical computing, which could lead to more efficient and compact laser technology.

Building applications inspired by the human eye

KAUST ·

KAUST researchers in the Sensors Lab are developing neuromorphic circuits for vision sensors, drawing inspiration from the human eye. They created flexible photoreceptors using hybrid perovskite materials, with capacitance tunable by light stimulation, mimicking the human retina. The team collaborates with experts in image characterization and brain pattern recognition to connect the 'eye' to the 'brain' for object identification. Why it matters: This biomimetic approach promises advancements in AI, machine learning, and smart city development within the region.

Emulating the energy efficiency of the brain

MBZUAI ·

MBZUAI researchers are developing spiking neural networks (SNNs) to emulate the energy efficiency of the human brain. Traditional deep learning models, like those powering ChatGPT, consume significant energy: a single query uses an estimated 3.96 watt-hours. SNNs aim to mimic biological neurons more closely to cut energy consumption, since the human brain runs on a tiny fraction of the energy these models require. Why it matters: This research could lead to more sustainable and energy-efficient AI technologies, addressing a major challenge in deploying large-scale AI systems.
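The spiking idea above can be sketched with a single leaky integrate-and-fire (LIF) neuron, the textbook building block of SNNs. The constants below are illustrative, not taken from the MBZUAI work:

```python
# Minimal sketch of a leaky integrate-and-fire (LIF) neuron, the basic
# unit of a spiking neural network. All constants are illustrative.

def lif_neuron(input_current, v_thresh=1.0, v_reset=0.0, leak=0.9):
    """Return the spike train produced by a stream of input currents.

    The membrane potential decays by `leak` each step, accumulates
    input, and emits a spike (1) whenever it crosses `v_thresh`.
    Computation happens only when spikes occur -- the intuition
    behind SNNs' energy advantage over dense matrix multiplies.
    """
    v = v_reset
    spikes = []
    for i in input_current:
        v = leak * v + i        # leaky integration of input
        if v >= v_thresh:       # threshold crossing -> spike
            spikes.append(1)
            v = v_reset         # reset membrane after firing
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.3, 0.4, 0.5, 0.0, 0.9]))  # -> [0, 0, 1, 0, 0]
```

Only the third input pushes the accumulated potential past threshold, so the neuron stays silent (and cheap) the rest of the time.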

Green Learning — New Generation Machine Learning and Applications

MBZUAI ·

A recent talk at MBZUAI discussed "Green Learning" and Operational Neural Networks (ONNs) as efficient alternatives to CNNs. ONNs replace the fixed multiply-and-sum of a convolutional neuron with learnable "nodal" and "pool" operators, and "generative neurons" further expand the learning capacity of individual neurons. Moncef Gabbouj of Tampere University presented Self-Organized ONNs (Self-ONNs) and their signal processing applications. Why it matters: Exploring more efficient AI models is crucial for sustainable development of AI in the region, as it addresses computational resource constraints and promotes broader accessibility.
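A minimal sketch of how an operational neuron generalizes a standard one, using sine and max as example nodal and pool operators (the operators here are my choice for illustration, not from the talk):

```python
import math

# A standard neuron fixes nodal = multiplication and pool = summation;
# an operational (ONN) neuron makes both operators selectable/learnable,
# expanding what a single neuron can express.

def classic_neuron(x, w):
    # nodal: wi * xi, pool: sum
    return sum(wi * xi for wi, xi in zip(w, x))

def operational_neuron(x, w,
                       nodal=lambda wi, xi: math.sin(wi * xi),
                       pool=max):
    # example operators: sine as nodal, max as pool (illustrative choices)
    return pool(nodal(wi, xi) for wi, xi in zip(w, x))

print(classic_neuron([1, 2], [3, 4]))       # 1*3 + 2*4 = 11
print(operational_neuron([1, 2], [3, 4]))   # max(sin(3), sin(8))
```

Swapping operators changes the neuron's nonlinearity without adding layers, which is the capacity-expansion idea behind ONNs.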

Perovskites used to make efficient artificial retina

KAUST ·

KAUST researchers have developed an artificial electronic retina that mimics the behavior of retinal rod cells, using a hybrid perovskite material (MAPbBr3) embedded in PVDF-TrFE-CFE. The photoreceptor array, built from metal-insulator-metal capacitors, detects light intensity through changes in electrical capacitance. Connected to a CMOS sensing circuit and a spiking neural network, the 4x4 array achieved around 70 percent accuracy in recognizing handwritten numbers. Why it matters: This research paves the way for energy-efficient neuromorphic vision sensors and advanced computer vision applications, potentially revolutionizing camera technology.
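As an illustration of the capacitance-based sensing described above, here is a toy sketch (my own simplification, with a hypothetical dark-baseline and scale) mapping a 4x4 frame of capacitance readings to firing rates for a downstream spiking classifier:

```python
# Illustrative simplification: a 4x4 frame of photoreceptor capacitance
# readings is converted to spike rates before being passed to a spiking
# classifier. Baseline (c_dark) and scale are hypothetical values.

def frame_to_rates(cap_frame, c_dark=1.0, scale=10.0):
    """Map capacitance shifts to firing rates (spikes/s).

    Brighter light -> larger capacitance shift from the dark
    baseline -> higher firing rate, loosely mimicking how rod
    photoreceptors encode intensity.
    """
    return [[max(0.0, (c - c_dark) * scale) for c in row]
            for row in cap_frame]

frame = [[1.0, 1.2, 1.2, 1.0],
         [1.2, 1.5, 1.5, 1.2],
         [1.2, 1.5, 1.5, 1.2],
         [1.0, 1.2, 1.2, 1.0]]
rates = frame_to_rates(frame)   # brightest center pixels fire fastest
```

The flattened 16-value rate vector is what a small spiking network would then classify, analogous to the digit-recognition experiment.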

Learned Optics — Improving Computational Imaging Systems through Deep Learning and Optimization

MBZUAI ·

KAUST Professor Wolfgang Heidrich is researching computational imaging systems that jointly design optics and image reconstruction algorithms. He focuses on hardware-software co-design for imaging systems with applications in HDR, compact cameras, and hyperspectral imaging. Heidrich's work on HDR displays was the basis for Brightside Technologies, acquired by Dolby in 2007. Why it matters: This research aims to advance imaging technology through AI-driven design, potentially impacting various fields from consumer electronics to scientific research within the region and globally.
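The co-design idea can be illustrated with a toy end-to-end optimization: one optical "blur" parameter and one digital "gain" are fitted jointly so the whole pipeline reproduces a target signal. This is a deliberately simplified stand-in for the differentiable lens/sensor models and deep reconstruction networks used in real learned-optics work:

```python
# Toy sketch of hardware-software co-design: optimize an optics
# parameter (blur) and a reconstruction parameter (gain) jointly,
# rather than designing the two stages separately. Entirely
# illustrative; not the actual models used by Heidrich's group.

def forward(x, blur, gain):
    mean = sum(x) / len(x)
    # optics stage: mix each sample toward the mean, controlled by blur
    optical = [(1 - blur) * xi + blur * mean for xi in x]
    # digital stage: a single learned gain stands in for reconstruction
    return [gain * oi for oi in optical]

def loss(params, x, target):
    blur, gain = params
    y = forward(x, blur, gain)
    return sum((yi - ti) ** 2 for yi, ti in zip(y, target))

def co_design(x, target, steps=400, lr=0.02, eps=1e-5):
    """Numerical gradient descent over both stages at once."""
    params = [0.5, 0.5]
    for _ in range(steps):
        base = loss(params, x, target)
        grads = []
        for i in range(2):
            p = list(params)
            p[i] += eps
            grads.append((loss(p, x, target) - base) / eps)
        params = [pi - lr * gi for pi, gi in zip(params, grads)]
    return params

x = [1.0, 2.0, 3.0]
blur, gain = co_design(x, x)   # ideal system: blur -> 0, gain -> 1
```

The point of the sketch is that the gradient flows through the optical model and the digital model together, so the optimizer can trade off imperfections between hardware and software.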

Memory representation and retrieval in neuroscience and AI

MBZUAI ·

A Caltech researcher presented at MBZUAI on memory representation and retrieval, contrasting AI and neuroscience approaches. Current AI systems such as RAG retrieve via embedding similarity, or absorb knowledge through fine-tuning, while the presenter argued for exploring retrieval via combinatorial object identity or spatial proximity. The research explores circuit-level retrieval via domain fine-tuned LLMs and distributed memory for image retrieval using semantic similarity. Why it matters: The work suggests structured databases and retrieval-focused training can allow smaller models to outperform larger general-purpose models, offering efficiency gains for AI development in the region.
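The embedding-similarity retrieval that RAG-style systems rely on can be sketched in a few lines; the vectors below are toy stand-ins for a real embedding model's output:

```python
import math

# Embedding-similarity retrieval: query and documents live in the same
# vector space, and retrieval returns the nearest documents by cosine
# similarity. Vectors here are toy 2-D stand-ins for real embeddings.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, doc_vecs, k=2):
    """Return indices of the top-k most similar documents."""
    ranked = sorted(range(len(doc_vecs)),
                    key=lambda i: cosine(query_vec, doc_vecs[i]),
                    reverse=True)
    return ranked[:k]

docs = [[1.0, 0.0], [0.7, 0.7], [0.0, 1.0]]
print(retrieve([0.9, 0.1], docs, k=2))   # -> [0, 1]
```

The alternatives raised in the talk (retrieval by object identity or spatial proximity) would replace the `cosine` ranking with a structured lookup rather than a nearest-neighbor search.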

Research Talks: Bridging neuroscience and AI

MBZUAI ·

Caltech graduate student Surya Narayanan Hari presented his research on replicating human-like memory in machines at MBZUAI. He discussed how the thalamus, which filters sensory and motor signals in the brain, inspires the development of routed monolithic models in AI. Hari explained that memory retrieval occurs on object, embedding, and circuit levels in the human brain. Why it matters: This talk highlights the potential of neuroscience-inspired AI architectures for improving memory and information processing in AI systems, which could accelerate the development of more efficient and context-aware AI models in the region.
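The thalamus-as-filter idea can be sketched as a gate that inspects each incoming signal and forwards it to a specialist module, the way the thalamus routes sensory and motor traffic. The gating rule and expert modules below are hypothetical illustrations, not Hari's architecture:

```python
# Hedged sketch of routed processing: a gate filters each incoming
# signal and dispatches it to a specialist expert. Gate and experts
# are hypothetical stand-ins for learned routing.

def route(signal, experts, gate):
    """Send `signal` to whichever expert the gate selects."""
    name = gate(signal)
    return name, experts[name](signal)

experts = {
    "vision": lambda s: f"processed image: {s['data']}",
    "motor":  lambda s: f"planned movement: {s['data']}",
}

# crude modality-based gate, standing in for a learned router
gate = lambda s: "vision" if s["modality"] == "visual" else "motor"

print(route({"modality": "visual", "data": "edge map"}, experts, gate))
```

In a learned system the gate itself would be trained, so routing decisions, like thalamic filtering, adapt to which signals each downstream module handles best.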