GCC AI Research

Results for "neural network search"

MedNNS: Supernet-based Medical Task-Adaptive Neural Network Search

arXiv ·

The paper introduces MedNNS, a neural network search framework designed for medical imaging that addresses both architecture selection and weight initialization. MedNNS uses a supernetwork-based approach to build a meta-space that embeds datasets and models according to how well each model performs on each dataset, expanding the model zoo size by 51x. The framework combines a rank loss and a Fréchet Inception Distance (FID) loss to capture inter-model and inter-dataset relationships, improving alignment in the meta-space and outperforming ImageNet pre-trained DL models and SOTA NAS methods.
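
For context, the two losses named above can be sketched as follows; the function names, the margin value, and the use of Gaussian feature statistics are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
import torch
from scipy.linalg import sqrtm

def pairwise_rank_loss(score_better, score_worse, margin=0.1):
    """Margin-based rank loss: push the meta-space score of the model that
    performs better on a dataset above the score of the weaker one."""
    return torch.relu(margin - (score_better - score_worse)).mean()

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians fitted to two datasets' features:
    ||mu1 - mu2||^2 + Tr(S1 + S2 - 2 (S1 S2)^(1/2))."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```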

Uncertainty Modeling of Emerging Device-based Computing-in-Memory Neural Accelerators with Application to Neural Architecture Search

arXiv ·

This paper analyzes the impact of device uncertainties on deep neural networks (DNNs) in emerging device-based Computing-in-memory (CiM) systems. The authors propose UAE, an uncertainty-aware Neural Architecture Search scheme, to identify DNN models robust to these uncertainties. The goal is to mitigate accuracy drops when deploying trained models on real-world platforms.
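
A minimal sketch of the idea behind uncertainty-aware evaluation is shown below, assuming multiplicative Gaussian noise on weights as a stand-in for CiM device variation; the function name, noise model, and scoring are hypothetical, not UAE's actual procedure.

```python
import copy
import torch

def noisy_accuracy(model, loader, sigma=0.05, trials=10, device="cpu"):
    """Estimate a candidate architecture's accuracy under device variation
    by repeatedly perturbing its weights and re-evaluating."""
    accs = []
    for _ in range(trials):
        noisy = copy.deepcopy(model).to(device).eval()
        with torch.no_grad():
            for p in noisy.parameters():
                p.mul_(1.0 + sigma * torch.randn_like(p))  # inject device noise
            correct = total = 0
            for x, y in loader:
                pred = noisy(x.to(device)).argmax(dim=1)
                correct += (pred == y.to(device)).sum().item()
                total += y.numel()
        accs.append(correct / total)
    # A NAS controller can then rank candidates by the mean (or mean minus std)
    # of this noisy accuracy instead of clean accuracy.
    return sum(accs) / len(accs)
```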

Low-Complexity NN Technology: Model and Precision Search, Acceleration Circuit, and Applications

MBZUAI ·

Researchers at National Taiwan University are developing low-complexity neural network technologies that use quantization to reduce model size while maintaining accuracy. Their work includes binary-weighted CNNs and transformers, along with a neural architecture search scheme (TPC-NAS) applied to image recognition, object detection, and NLP tasks. They have also built a PE-based CNN/transformer hardware accelerator on a Xilinx FPGA SoC, supported by a PyTorch-based software framework. Why it matters: This research provides practical methods for deploying efficient deep learning models on resource-constrained hardware, potentially enabling broader adoption of AI in embedded systems and edge devices.
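
As a concrete illustration of binary-weight quantization, the sketch below binarizes weights with a straight-through estimator; it is a generic textbook formulation, not the TPC-NAS or accelerator design described in the talk.

```python
import torch

class BinarizeWeight(torch.autograd.Function):
    """Forward: quantize weights to {-alpha, +alpha}.
    Backward: pass gradients straight through, clipped where |w| > 1."""

    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        alpha = w.abs().mean()          # per-tensor scale reduces quantization error
        return alpha * torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        return grad_out * (w.abs() <= 1).float()

# Usage: inside a layer's forward pass, replace `weight` with
# BinarizeWeight.apply(weight) so training keeps full-precision latent weights.
```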

Memory representation and retrieval in neuroscience and AI

MBZUAI ·

A Caltech researcher presented at MBZUAI on memory representation and retrieval, contrasting AI and neuroscience approaches. Current AI retrieval systems such as RAG rely on fine-tuning and embedding similarity, whereas the presenter argued for exploring retrieval based on combinatorial object identity or spatial proximity. The research explores circuit-level retrieval via domain-fine-tuned LLMs and distributed memory for image retrieval based on semantic similarity. Why it matters: The work suggests structured databases and retrieval-focused training can allow smaller models to outperform larger general-purpose models, offering efficiency gains for AI development in the region.
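
The embedding-similarity baseline the talk contrasts against can be sketched in a few lines; the embed() function here is a random stand-in for a real sentence-embedding model and is purely illustrative.

```python
import numpy as np

def embed(texts):
    """Stand-in embedding function; a real system would call a trained
    sentence-embedding model here."""
    rng = np.random.default_rng(0)
    return rng.normal(size=(len(texts), 384))

def retrieve(query, corpus, k=3):
    """Rank documents by cosine similarity to the query embedding."""
    doc_vecs = embed(corpus)
    q = embed([query])[0]
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    top = np.argsort(-sims)[:k]
    return [(corpus[i], float(sims[i])) for i in top]
```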

Reliability Exploration of Neural Network Accelerator

MBZUAI ·

This article discusses the reliability of Deep Neural Networks (DNNs) and their hardware platforms, especially regarding soft errors caused by cosmic rays. It highlights that while DNNs are relatively robust against individual bit flips, such errors can still cause miscalculations in AI accelerators. The talk, led by Prof. Masanori Hashimoto from Kyoto University, will cover identifying vulnerabilities in neural networks and reliability exploration of AI accelerators for edge computing. Why it matters: As DNNs are deployed in safety-critical applications in the region, ensuring the reliability of AI hardware is crucial for safe and trustworthy operation.
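
The soft-error model behind such studies can be illustrated by flipping a single bit in a float32 weight; the helper below is a hypothetical example, not taken from the talk, and real fault-injection campaigns target specific accelerator memories.

```python
import random
import struct

def flip_bit(value, bit=None):
    """Flip one bit in the IEEE-754 float32 encoding of a weight."""
    raw = struct.unpack("<I", struct.pack("<f", value))[0]
    if bit is None:
        bit = random.randrange(32)          # random soft-error position
    raw ^= 1 << bit
    return struct.unpack("<f", struct.pack("<I", raw))[0]

# A flip in a low mantissa bit barely changes the weight, while a flip in the
# exponent can change it by orders of magnitude and corrupt predictions:
print(flip_bit(0.5, bit=0))    # ~0.50000006
print(flip_bit(0.5, bit=30))   # ~1.7e38
```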

Using Machine Learning to Study How Brains Process Natural Language

MBZUAI ·

Tom M. Mitchell from Carnegie Mellon University discussed using machine learning to study how the brain processes natural language, using fMRI and MEG to record brain activity while reading text. The research explores neural encodings of word meaning, information flow during word comprehension, and how meanings of words combine in sentences and stories. He also touched on how understanding of the brain aligns with current AI approaches to NLP. Why it matters: This interdisciplinary research could bridge the gap between neuroscience and AI, potentially leading to more human-like NLP models.

An AI trained to spot hidden objects can see through camouflage - New Scientist

Inception ·

Researchers at the University of Maryland have developed an AI system that can identify objects hidden by camouflage. The AI uses a convolutional neural network trained on synthetic data to detect partially occluded objects. The system outperformed existing object detection methods in tests on real-world images. Why it matters: The work demonstrates potential applications of AI in defense, security, and search and rescue operations in the Middle East and elsewhere.

Video search gets closer to how humans look for clips

MBZUAI ·

A new paper at ICCV 2025, co-authored by MBZUAI Ph.D. student Dmitry Demidov, introduces Dense-WebVid-CoVR, a 1.6-million-sample benchmark for composed video retrieval (CoVR). The benchmark features longer, context-rich descriptions and modification texts, generated using Gemini Pro and GPT-4o with manual verification. The paper also presents a unified fusion approach that jointly reasons across video and text inputs, improving performance on fine-grained edit details. Why it matters: This work advances video search capabilities by enabling more human-like queries, which is crucial for creative and analytic workflows that require nuanced video retrieval.
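
A minimal sketch of what jointly reasoning across video and text can look like is given below, using a single cross-attention block; the module name, dimensions, and pooling are assumptions and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

class VideoTextFusion(nn.Module):
    """Toy fusion module: modification-text tokens attend over video tokens,
    producing one query embedding for retrieval."""

    def __init__(self, dim=512, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)

    def forward(self, video_tokens, text_tokens):
        # Text queries select the visual evidence relevant to the requested edit.
        fused, _ = self.attn(text_tokens, video_tokens, video_tokens)
        return self.proj(fused.mean(dim=1))    # pool to a single query vector

# fusion = VideoTextFusion()
# query = fusion(video_tokens, text_tokens)   # (B, Tv, 512), (B, Tt, 512) -> (B, 512)
```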