GCC AI Research

Results for "echo state network"

Modeling High-Resolution Spatio-Temporal Wind with Deep Echo State Networks and Stochastic Partial Differential Equations

arXiv

Researchers propose a spatio-temporal model for high-resolution wind forecasting in Saudi Arabia using echo state networks and stochastic partial differential equations. The model reduces the spatial dimension via an energy-distance criterion, captures temporal dynamics with a sparse recurrent neural network, and reconstructs the full-resolution field using a non-stationary stochastic partial differential equation approach. The model delivers more accurate forecasts of wind speed and energy, potentially saving up to one million dollars annually compared to existing models.
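The echo state network at the core of this model is simple to sketch: a fixed, sparse, random reservoir supplies the temporal dynamics, and only a linear readout is trained. Below is a minimal NumPy illustration on a toy one-step-ahead prediction task; all sizes and hyperparameters are arbitrary stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy echo state network: reservoir weights are random and fixed;
# only the linear readout is trained (here by ridge regression).
n_in, n_res, washout = 1, 200, 100
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W[rng.random((n_res, n_res)) > 0.1] = 0.0        # ~10% sparse connectivity
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u, leak=0.3):
    """Collect leaky-integrated reservoir states for inputs u of shape (T, n_in)."""
    x, states = np.zeros(n_res), []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in @ u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy task: one-step-ahead prediction of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 2000))[:, None]
X = run_reservoir(u[:-1])[washout:]              # drop transient states
y = u[washout + 1:, 0]

# Ridge-regression readout, the only trained component.
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
rmse = np.sqrt(np.mean((X @ W_out - y) ** 2))
print(f"one-step train RMSE: {rmse:.4f}")
```

Because only `W_out` is fit, training reduces to a single ridge regression, which is part of what makes echo state networks attractive for high-dimensional spatio-temporal data.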

CESAR: A Convolutional Echo State AutoencodeR for High-Resolution Wind Forecasting

arXiv

Researchers introduce CESAR, a convolutional echo state autoencoder for high-resolution wind forecasting. The model extracts spatial features using a deep convolutional autoencoder and models their dynamics with an echo state network. Tested on high-resolution simulations in Riyadh, Saudi Arabia, CESAR improved wind speed and power forecasting by up to 17% compared to other methods. Why it matters: Accurate wind forecasting is critical for efficient wind farm planning and management in Saudi Arabia and the broader region.

Uncertainty Modeling of Emerging Device-based Computing-in-Memory Neural Accelerators with Application to Neural Architecture Search

arXiv

This paper analyzes the impact of device uncertainties on deep neural networks (DNNs) deployed on emerging device-based computing-in-memory (CiM) accelerators. The authors propose UAE, an uncertainty-aware neural architecture search scheme, to identify DNN models that remain robust to these uncertainties. The goal is to mitigate the accuracy drop that occurs when trained models are deployed on real-world hardware.
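The flavor of uncertainty analysis described can be sketched generically: inject multiplicative noise into trained weights (mimicking device variation) and measure the resulting accuracy spread. The toy linear classifier and noise level below are invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def accuracy(W, X, y):
    """Accuracy of a linear classifier with score matrix X @ W."""
    return float(np.mean(np.argmax(X @ W, axis=1) == y))

# Toy data: class 1 iff x0 + x1 > 0, with weights that solve it exactly.
X = rng.normal(size=(500, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
W = np.zeros((8, 2))
W[[0, 1], 1] = 1.0   # class-1 score:  x0 + x1
W[[0, 1], 0] = -1.0  # class-0 score: -x0 - x1

# Simulate device uncertainty as multiplicative weight noise and
# measure how accuracy degrades across independent noise draws.
clean = accuracy(W, X, y)
noisy = [accuracy(W * rng.normal(1.0, 0.3, W.shape), X, y) for _ in range(100)]
print(f"clean acc: {clean:.3f}, mean noisy acc: {np.mean(noisy):.3f}")
```

An uncertainty-aware search would favor architectures whose noisy-accuracy distribution stays close to the clean accuracy.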

Emulating the energy efficiency of the brain

MBZUAI

MBZUAI researchers are developing spiking neural networks (SNNs) to emulate the energy efficiency of the human brain. Traditional deep learning models, like those powering ChatGPT, consume significant energy, with a single query estimated to use 3.96 watt-hours. SNNs aim to mimic biological neurons more closely to reduce energy consumption, as the human brain runs on a small fraction of the energy these models require. Why it matters: This research could lead to more sustainable and energy-efficient AI technologies, addressing a major challenge in deploying large-scale AI systems.
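A spiking neuron can be sketched in a few lines. The leaky integrate-and-fire model below is the standard textbook unit, not MBZUAI's specific architecture, and the parameters are illustrative. Its output is a sparse train of binary spikes rather than dense floating-point activations, which is where the energy savings of SNN hardware come from.

```python
import numpy as np

def lif(current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: the membrane potential integrates
    input current, leaks toward rest, and emits a binary spike (then
    resets) whenever it crosses the threshold."""
    v, spikes, trace = v_reset, [], []
    for i in current:
        v += dt / tau * (-(v - v_reset) + i)  # leaky integration step
        if v >= v_thresh:
            spikes.append(1)                  # fire ...
            v = v_reset                       # ... and reset
        else:
            spikes.append(0)
        trace.append(v)
    return np.array(spikes), np.array(trace)

# Constant supra-threshold drive produces a regular, sparse spike train.
spikes, _ = lif(np.full(200, 1.5))
print("spike count over 200 steps:", spikes.sum())
```

Even under constant drive, the neuron communicates with only a handful of spikes per hundred time steps, illustrating the event-driven sparsity SNNs exploit.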

Memory representation and retrieval in neuroscience and AI

MBZUAI

A Caltech researcher presented at MBZUAI on memory representation and retrieval, contrasting approaches in AI and neuroscience. Current AI retrieval systems such as RAG rely on fine-tuning and embedding similarity, whereas the presenter argued for exploring retrieval via combinatorial object identity or spatial proximity. The research explores circuit-level retrieval with domain-fine-tuned LLMs and distributed memory for image retrieval based on semantic similarity. Why it matters: The work suggests that structured databases and retrieval-focused training can allow smaller models to outperform larger general-purpose models, offering efficiency gains for AI development in the region.
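The embedding-similarity retrieval that the talk contrasts with structured approaches reduces to a nearest-neighbor search in vector space. A minimal sketch, with random vectors standing in for a real text or image encoder:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random stand-ins for encoder outputs: 1000 documents, 64-dim embeddings.
doc_embeddings = rng.normal(size=(1000, 64))

def retrieve(query_vec, k=5):
    """Return indices of the top-k documents by cosine similarity."""
    docs = doc_embeddings / np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
    q = query_vec / np.linalg.norm(query_vec)
    scores = docs @ q                 # cosine similarity to every document
    return np.argsort(scores)[::-1][:k]

top = retrieve(rng.normal(size=64))
print("retrieved document indices:", top)
```

Retrieval by object identity or spatial proximity, as advocated in the talk, would replace this dense similarity score with discrete or geometric keys.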

Nonlinear Traffic Prediction as a Matrix Completion Problem with Ensemble Learning

arXiv

The paper introduces a novel method for short-term, high-resolution traffic prediction, modeling it as a matrix completion problem solved via block-coordinate descent. An ensemble learning approach is used to capture periodic patterns and reduce training error. The method is validated using both simulated and real-world traffic data from Abu Dhabi, demonstrating superior performance compared to other algorithms.
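The general shape of matrix completion by block-coordinate descent can be illustrated with alternating least squares over the two factor blocks: fix V, solve a small least-squares problem for each row of U on the observed entries, then swap. This is a generic low-rank sketch, not the paper's specific formulation or data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic rank-2 "traffic" matrix with ~50% of entries observed.
r, n, m = 2, 30, 40
M_true = rng.normal(size=(n, r)) @ rng.normal(size=(r, m))
mask = rng.random((n, m)) < 0.5

U = rng.normal(size=(n, r))
V = rng.normal(size=(m, r))
lam = 1e-3                                  # small ridge term for stability
for _ in range(30):
    for i in range(n):                      # block 1: rows of U, V fixed
        obs = mask[i]
        A = V[obs].T @ V[obs] + lam * np.eye(r)
        U[i] = np.linalg.solve(A, V[obs].T @ M_true[i, obs])
    for j in range(m):                      # block 2: rows of V, U fixed
        obs = mask[:, j]
        A = U[obs].T @ U[obs] + lam * np.eye(r)
        V[j] = np.linalg.solve(A, U[obs].T @ M_true[obs, j])

err = np.linalg.norm(U @ V.T - M_true) / np.linalg.norm(M_true)
print(f"relative reconstruction error: {err:.4f}")
```

Each block update is a closed-form least-squares solve, so the descent is cheap per iteration; the ensemble layer described in the paper would sit on top of completions like this one.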

Continuously Streaming Artificial Intelligence

MBZUAI

MBZUAI hosted a talk by Visiting Associate Professor Adrian Bors on continuously streaming AI and the challenge of catastrophic forgetting. The talk covered approaches to continual learning like expanding mixtures of models and generative replay mechanisms. Results were presented on image classification and generation tasks. Why it matters: Continual learning is crucial for AI systems to adapt to new environments and real-world data without forgetting previous knowledge.