KAUST researchers in the Sensors Lab are developing neuromorphic circuits for vision sensors, drawing inspiration from the human eye. They created flexible photoreceptors from hybrid perovskite materials whose capacitance is tuned by light stimulation, mimicking the human retina. The team collaborates with experts in image characterization and brain pattern recognition to connect the 'eye' to the 'brain' for object identification. Why it matters: This biomimetic approach promises advancements in AI, machine learning, and smart city development within the region.
KAUST Ph.D. student Valerio Mazzone won the best paper award at the 9th International Conference on Metamaterials, Photonic Crystals and Plasmonics (META). Mazzone's paper demonstrated the design of a new type of all-optical neural network built from dielectric nano-lasers with invisible emission. The research showed that the system can produce ultrafast optical pulses with controllable period and duration on an optical chip. Why it matters: This award recognizes KAUST's contribution to innovative research in nanophotonics and optical computing, potentially leading to more efficient and compact laser technology.
KAUST researchers have developed an artificial electronic retina mimicking the behavior of retinal rod cells, utilizing a hybrid perovskite material (MAPbBr3) embedded in PVDF-TrFE-CFE. The photoreceptor array, made of metal-insulator-metal capacitors, detects light intensity through changes in electrical capacitance. Connected to a CMOS sensing circuit and a spiking neural network, the 4x4 array achieved around 70 percent accuracy in recognizing handwritten numbers. Why it matters: This research paves the way for energy-efficient neuromorphic vision sensors and advanced computer vision applications, potentially revolutionizing camera technology.
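The sensing pipeline described above can be sketched in a few lines: light intensity changes a pixel's capacitance, and the capacitance change is rate-encoded into spike trains for a downstream spiking network. This is an illustrative toy model, not the authors' implementation; the `capacitance_response` and `rate_encode` functions and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def capacitance_response(light, c_dark=1.0, sensitivity=0.5):
    """Toy model (assumed): capacitance rises with light intensity."""
    return c_dark + sensitivity * light

def rate_encode(capacitance, c_dark=1.0, t_steps=100):
    """Emit Poisson-like spike trains whose rate tracks the capacitance change."""
    p_spike = np.clip(capacitance - c_dark, 0.0, 1.0)  # spike probability per step
    return rng.random((t_steps,) + capacitance.shape) < p_spike

light = rng.random((4, 4))        # normalized light intensities on a 4x4 array
cap = capacitance_response(light)
spikes = rate_encode(cap)         # (100, 4, 4) boolean spike trains
rates = spikes.mean(axis=0)       # brighter pixels spike more often on average
```

A real system would feed these spike trains into a trained spiking classifier; here the point is only the intensity-to-capacitance-to-spikes encoding chain.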
MBZUAI researchers are developing spiking neural networks (SNNs) to emulate the energy efficiency of the human brain. Traditional deep learning models like those powering ChatGPT consume significant energy, with a single query consuming an estimated 3.96 watt-hours. SNNs aim to mimic biological neurons more closely to reduce energy consumption, as the human brain uses only a fraction of the energy these models require. Why it matters: This research could lead to more sustainable and energy-efficient AI technologies, addressing a major challenge in deploying large-scale AI systems.
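The biological realism that makes SNNs efficient comes from event-driven neurons that fire sparse spikes instead of computing dense activations. A minimal sketch of the standard leaky integrate-and-fire (LIF) neuron, the basic unit of many SNNs, is below; the parameter values are illustrative and not taken from the MBZUAI work.

```python
import numpy as np

def lif_neuron(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate one leaky integrate-and-fire neuron.

    Returns the membrane-potential trace and the list of spike times.
    """
    v = 0.0
    spikes, trace = [], []
    for t, i_t in enumerate(input_current):
        v += dt * (-v / tau + i_t)   # leaky integration of input current
        if v >= v_thresh:            # threshold crossed: emit a spike, reset
            spikes.append(t)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes

current = np.full(100, 0.1)          # constant input drive
trace, spike_times = lif_neuron(current)
```

Because computation happens only at spikes, hardware implementing such neurons can stay idle most of the time, which is the source of the energy savings the item describes.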
A recent talk at MBZUAI discussed "Green Learning" and Operational Neural Networks (ONNs) as efficient alternatives to CNNs. ONNs use "nodal" and "pool" operators and "generative neurons" to expand neuron learning capacity. Moncef Gabbouj from Tampere University presented Self-Organized ONNs (Self-ONNs) and their signal processing applications. Why it matters: Exploring more efficient AI models is crucial for sustainable development of AI in the region, as it addresses computational resource constraints and promotes broader accessibility.
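The "generative neuron" idea can be illustrated concretely: instead of one linear weight per input, each connection applies a learnable truncated Maclaurin-style polynomial, letting the neuron learn its own operator. The sketch below is a hedged toy version; the function name, shapes, and the Q = 3 expansion order are assumptions for demonstration, not the Self-ONN implementation.

```python
import numpy as np

def generative_neuron(x, weights):
    """Toy generative neuron.

    x: input vector of shape (n,)
    weights: (Q, n) polynomial coefficients, one set per power of x
    """
    Q = weights.shape[0]
    # Stack x, x**2, ..., x**Q and combine with the learned coefficients.
    powers = np.stack([x ** (q + 1) for q in range(Q)])  # (Q, n)
    return float(np.sum(weights * powers))               # scalar pre-activation

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
w = rng.standard_normal((3, 8)) * 0.1   # Q = 3 polynomial terms per input
y = generative_neuron(x, w)
```

With Q = 1 this reduces to an ordinary linear neuron, which is why ONNs are described as a generalization of CNNs.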
KAUST Professor Wolfgang Heidrich is researching computational imaging systems that jointly design optics and image reconstruction algorithms. He focuses on hardware-software co-design for imaging systems with applications in HDR, compact cameras, and hyperspectral imaging. Heidrich's work on HDR displays was the basis for Brightside Technologies, acquired by Dolby in 2007. Why it matters: This research aims to advance imaging technology through AI-driven design, potentially impacting various fields from consumer electronics to scientific research within the region and globally.
A new neural network architecture called Orchid was introduced that uses adaptive convolutions to achieve quasilinear computational complexity O(N log N) for sequence modeling. Orchid adapts its convolution kernel dynamically based on the input sequence. Evaluations across language modeling and image classification show that Orchid outperforms attention-based architectures like BERT and Vision Transformers, often with smaller model sizes. Why it matters: Orchid extends the feasible sequence length beyond the practical limits of dense attention layers, representing progress toward more efficient and scalable deep learning models.
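The O(N log N) claim rests on a standard trick: a length-N convolution applied in the frequency domain via the FFT costs O(N log N), versus O(N^2) for dense attention. The sketch below shows only that FFT route; Orchid additionally conditions its kernel on the input, which this toy version does not model.

```python
import numpy as np

def fft_long_conv(x, k):
    """Circular convolution of sequence x with kernel k via FFT: O(N log N)."""
    n = len(x)
    return np.fft.irfft(np.fft.rfft(x) * np.fft.rfft(k, n), n=n)

x = np.arange(8, dtype=float)
k = np.zeros(8)
k[1] = 1.0                      # shift-by-one kernel as a sanity check
y = fft_long_conv(x, k)         # equals x circularly shifted by one position
```

For long sequences the same three FFT calls replace an N-by-N interaction matrix, which is what makes very long contexts feasible.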
This article previews a talk on the reliability of Deep Neural Networks (DNNs) and their hardware platforms, especially regarding soft errors caused by cosmic rays. It highlights that while DNNs are generally robust to individual bit flips, some errors can still cause miscalculations in AI accelerators. The talk, led by Prof. Masanori Hashimoto from Kyoto University, will cover identifying vulnerabilities in neural networks and exploring the reliability of AI accelerators for edge computing. Why it matters: As DNNs are deployed in safety-critical applications in the region, ensuring the reliability of AI hardware is crucial for safe and trustworthy operation.
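Why a single cosmic-ray bit flip can matter so much depends on which bit it hits. The fault-injection sketch below, a common technique in this kind of reliability study, flips one bit of a float32 weight; the helper name and chosen bit positions are illustrative assumptions.

```python
import numpy as np

def flip_bit(value, bit):
    """Flip one bit (0-31) of a float32 value and return the perturbed value."""
    as_int = np.float32(value).view(np.uint32)     # reinterpret bits as uint32
    flipped = as_int ^ np.uint32(1 << bit)         # XOR toggles the chosen bit
    return flipped.view(np.float32)                # reinterpret back as float32

w = np.float32(0.5)
low = flip_bit(w, 0)     # mantissa LSB: a tiny, usually harmless perturbation
high = flip_bit(w, 30)   # high exponent bit: a catastrophic magnitude change
```

A flip in a low mantissa bit barely changes the weight, while a flip in a high exponent bit can blow it up by many orders of magnitude, which is why some weights are far more vulnerable than others.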