GCC AI Research

Results for "parameter sharing"

Parameter-Efficient Fine-Tuning for NLP Models

MBZUAI ·

The article discusses parameter-efficient fine-tuning methods for large NLP models, highlighting their importance due to the increasing size and computational demands of state-of-the-art language models. It provides an overview of these methods, presenting them in a unified view to emphasize their similarities and differences. Indraneil, a PhD candidate at TU Darmstadt's UKP Lab, is researching parameter-efficient fine-tuning, sparsity, and conditional computation methods to improve LLM performance in multilingual, multi-task settings. Why it matters: Efficient fine-tuning techniques are crucial for democratizing access to and accelerating the deployment of large language models in the region and beyond.
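The core idea behind most of these methods is to freeze the pretrained weights and train only a small number of added parameters. A minimal numpy sketch of a LoRA-style low-rank adapter illustrates the trainable-parameter savings; the dimensions and initialization here are hypothetical, not taken from the article.

```python
import numpy as np

# Hypothetical dimensions for a single frozen linear layer.
d_out, d_in, r = 768, 768, 8  # r is the low-rank adapter rank

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-initialized

def forward(x):
    # Adapted layer: frozen path plus low-rank update (B @ A) @ x.
    return W @ x + B @ (A @ x)

full_params = W.size
adapter_params = A.size + B.size
print(f"trainable fraction: {adapter_params / full_params:.4f}")
```

With rank 8 on a 768x768 layer, only about 2% of the layer's parameters are trained, which is the efficiency argument the article makes at the scale of full LLMs.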

Frontiers of federation at the AI Quorum

MBZUAI ·

MBZUAI hosted the Second Workshop on Collaborative Learning as part of the AI Quorum in Abu Dhabi, focusing on collaborative and federated learning for sustainable development. Researchers discussed applications in medicine, biology, ecological conservation, and humanitarian aid. Eric Xing highlighted the potential of large biology models, similar to LLMs, to revolutionize biological data analysis. Why it matters: This workshop underscores the UAE's commitment to advancing AI research in crucial sectors like healthcare and sustainability through collaborative learning approaches.

Building Planetary-Scale Collaborative Intelligence

MBZUAI ·

Sai Praneeth Karimireddy from UC Berkeley presented a talk on building planetary-scale collaborative intelligence, highlighting the challenges of using distributed data in machine learning due to data silos and ethical-legal restrictions. He proposed collaborative systems like federated learning as a solution to bring together distributed data while respecting privacy. The talk addressed the need for efficiency, reliability, and management of divergent goals in these systems, suggesting the use of tools from optimization, statistics, and economics. Why it matters: Collaborative AI systems can unlock valuable distributed data in the region, especially in sensitive sectors like healthcare, while ensuring privacy and addressing ethical concerns.
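The federated setup the talk describes can be sketched in a few lines: clients compute updates on private data, and a server aggregates only the model weights. This is a generic federated-averaging sketch on least-squares regression, not the specific systems discussed in the talk; all data and dimensions are synthetic.

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    # One gradient step of least-squares regression on a client's private data.
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg_round(w, clients):
    # Each client updates locally; only model weights (never raw data) are shared.
    updates = [local_step(w.copy(), X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Server averages updates weighted by client dataset size.
    return sum(s * u for s, u in zip(sizes / sizes.sum(), updates))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])
clients = []
for n in (50, 80, 120):  # three clients holding different data volumes
    X = rng.standard_normal((n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.standard_normal(n)))

w = np.zeros(2)
for _ in range(100):
    w = fedavg_round(w, clients)
print(np.round(w, 2))
```

The weighted average lets clients with more data contribute proportionally, one simple way of managing the "divergent goals" the talk raises; real systems layer privacy mechanisms and robustness on top of this skeleton.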

Multi-agent Time-based Decision-making for the Search and Action Problem

arXiv ·

This paper introduces a decentralized multi-agent decision-making framework for search and action problems under time constraints, treating time as a budgeted resource where actions have costs and rewards. The approach uses probabilistic reasoning to optimize decisions, maximizing reward within the given time. Evaluated in a simulated search, pick, and place scenario inspired by the Mohamed Bin Zayed International Robotics Challenge (MBZIRC), the algorithm outperformed benchmark strategies. Why it matters: The framework's validation in a Gazebo environment signals potential for real-world robotic applications, particularly in time-sensitive and cooperative tasks within the robotics domain in the UAE.
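Treating time as a budgeted resource means each candidate action carries a time cost and an expected reward, and the planner selects actions to maximize expected reward within the budget. A greedy reward-per-cost sketch captures the flavor; the action names, costs, and probabilities below are hypothetical, and the paper's actual method uses richer probabilistic reasoning.

```python
def plan_under_budget(actions, budget):
    # Greedy selection by expected reward per unit time -- a simple stand-in
    # for the paper's probabilistic, time-budgeted decision-making.
    chosen, remaining = [], budget
    for name, cost, p_success, reward in sorted(
            actions, key=lambda a: a[2] * a[3] / a[1], reverse=True):
        if cost <= remaining:
            chosen.append(name)
            remaining -= cost
    return chosen, budget - remaining

# Hypothetical search-and-action tasks: (name, time cost, success prob, reward)
actions = [
    ("search_zone_A", 30, 0.9, 10),
    ("pick_object_1", 20, 0.7, 15),
    ("place_object_1", 25, 0.8, 20),
    ("search_zone_B", 40, 0.5, 10),
]
plan, used = plan_under_budget(actions, budget=80)
print(plan, used)
```

Even this greedy baseline drops the low-yield `search_zone_B` once the budget tightens, which is the kind of trade-off the framework formalizes.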

Graph neural network approach for decentralized multi-robot coordination

MBZUAI ·

Qingbiao Li from the Oxford Robotics Institute is researching decentralized multi-robot coordination using Graph Neural Networks (GNNs). The approach builds an information-sharing mechanism within a decentralized multi-robot system through GNNs and imitation learning. It also uses machine-learning-assisted visual navigation with panoramic cameras to guide robots through unseen environments. Why it matters: This research could improve the effectiveness of automated mobile robot systems in urban rail transit and warehousing logistics in the GCC region, where smart city initiatives are growing.
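In a GNN-based coordination scheme, the information-sharing step is a round of message passing over the robots' communication graph: each robot mixes its own observation with an aggregate of its neighbors'. A minimal mean-aggregation sketch, with hypothetical dimensions and a made-up communication graph, illustrates the mechanism.

```python
import numpy as np

def gnn_layer(features, adjacency, W_self, W_neigh):
    # One round of decentralized message passing: each robot combines its own
    # state with the mean of its neighbors' states (the information-sharing step).
    deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
    neigh_mean = adjacency @ features / deg
    return np.tanh(features @ W_self + neigh_mean @ W_neigh)

rng = np.random.default_rng(2)
n_robots, d = 4, 3
features = rng.standard_normal((n_robots, d))   # local observations per robot
adjacency = np.array([[0, 1, 0, 0],             # communication graph: robot i hears j
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
W_self = rng.standard_normal((d, d)) * 0.5
W_neigh = rng.standard_normal((d, d)) * 0.5

h = gnn_layer(features, adjacency, W_self, W_neigh)
print(h.shape)  # each robot's embedding now encodes one-hop neighbor information
```

Stacking such layers widens each robot's effective receptive field one hop at a time while keeping all computation local, which is what makes the policy deployable without a central controller; the imitation-learning part of the work trains the weights to mimic a centralized expert.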

Cross-disciplinary collaboration results in groundbreaking earthquake research

KAUST ·

KAUST researchers from statistics and earth science collaborated to improve earthquake source modeling. They developed a statistical ranking tool to classify 2D fields, applicable to other geoscience fields such as temperature or precipitation. The tool helps compare different 2D fields describing the earthquake source process and quantify inter-event variability. Why it matters: This cross-disciplinary approach enhances the reliability of earthquake rupture models, contributing to better hazard assessment and risk management in seismically active regions.
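Comparing and ranking 2D fields can be sketched with a simple distance-based ordering against a reference field. This uses plain RMSE as a stand-in metric and synthetic slip fields on a hypothetical fault grid; the KAUST tool's actual statistical ranking is more sophisticated than this.

```python
import numpy as np

def rank_fields(reference, candidates):
    # Rank candidate 2D fields by RMSE against a reference field -- a simple
    # stand-in for the statistical ranking described in the article.
    scores = {name: float(np.sqrt(np.mean((f - reference) ** 2)))
              for name, f in candidates.items()}
    return sorted(scores.items(), key=lambda kv: kv[1])

# Hypothetical slip fields on a 20x20 fault grid.
rng = np.random.default_rng(3)
reference = rng.standard_normal((20, 20))
candidates = {
    "model_A": reference + 0.1 * rng.standard_normal((20, 20)),  # close match
    "model_B": reference + 0.5 * rng.standard_normal((20, 20)),  # noisier match
    "model_C": rng.standard_normal((20, 20)),                    # unrelated field
}
for name, rmse in rank_fields(reference, candidates):
    print(name, round(rmse, 3))
```

Any scalar field defined on a grid (slip, temperature, precipitation) can be ranked this way, which is why the tool generalizes beyond earthquake sources.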