GCC AI Research


Results for "ArBanking77"

AraFinNLP 2024: The First Arabic Financial NLP Shared Task

arXiv ·

The AraFinNLP 2024 shared task introduced two subtasks in Arabic financial NLP: multi-dialect intent detection and cross-dialect translation with intent preservation. It used the updated ArBanking77 dataset of 39k parallel queries in MSA and four dialects, labeled with 77 banking-related intents. Of the 45 teams that registered, 11 participated in intent detection (top F1 score: 0.8773) and only one attempted translation (BLEU score: 1.667). Why it matters: This initiative addresses the need for specialized Arabic NLP tools in the growing Arab financial sector, promoting advancements in areas like banking chatbots and machine translation.
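Intent detection over a fixed label set like ArBanking77's 77 intents is typically scored with macro-averaged F1, the metric behind the 0.8773 figure above. A minimal sketch of that computation follows; the intent labels and predictions are invented for illustration and are not from the shared task.

```python
def macro_f1(gold, pred):
    """Macro-averaged F1: compute per-label F1, then take the unweighted mean."""
    labels = set(gold) | set(pred)
    f1_scores = []
    for label in labels:
        tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
        fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
        fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1_scores.append(2 * precision * recall / (precision + recall)
                         if precision + recall else 0.0)
    return sum(f1_scores) / len(f1_scores)

# Hypothetical banking-intent predictions (labels invented for the sketch).
gold = ["card_arrival", "card_arrival", "exchange_rate", "check_balance"]
pred = ["card_arrival", "check_balance", "exchange_rate", "check_balance"]
print(round(macro_f1(gold, pred), 4))  # → 0.7778
```

Macro averaging weights all 77 intents equally, so rare intents count as much as frequent ones; micro-averaged F1 would instead favor systems that do well on the most common intents.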

ARRC Appoints Globally-Renowned Experts to Board of Advisors

TII ·

The Autonomous Robotics Research Center (ARRC) at Abu Dhabi’s Technology Innovation Institute (TII) has appointed a board of advisors composed of globally recognized experts in robotics and autonomous systems. The advisors include professors from Georgia Tech, ETH Zurich, University of Bologna, Vrije Universiteit Amsterdam, NYU, and Czech Technical University. The board will guide ARRC's research into robotics technologies aimed at building hybrid biological and artificial systems. Why it matters: This signals the UAE's continued investment in attracting top international expertise to advance its AI and robotics research capabilities.

ARRC's Groundbreaking Advancements in Underwater Communication Technology

TII ·

The Autonomous Robotics Research Center (ARRC) is developing underwater communication systems, including a multimode modem prototype, and has filed three patents. One key technology is the Universal Underwater Software Defined Modem (UniSDM), which supports sound, magnetic induction, light, and radio waves. ARRC also developed a network management framework for automatic network slicing (ANS) of communication resources. Why it matters: These advancements are crucial for improving underwater exploration, industrial maintenance, and marine monitoring in the region, enabling more efficient and reliable communication for underwater robots.

DaringFed: A Dynamic Bayesian Persuasion Pricing for Online Federated Learning under Two-sided Incomplete Information

arXiv ·

This paper introduces DaringFed, a novel dynamic Bayesian persuasion pricing mechanism for online federated learning (OFL) that addresses the challenge of two-sided incomplete information (TII) regarding resources. It formulates the interaction between the server and clients as a dynamic signaling and pricing allocation problem within a Bayesian persuasion game, demonstrating the existence of a unique Bayesian persuasion Nash equilibrium. Evaluations on real and synthetic datasets demonstrate that DaringFed optimizes accuracy and convergence speed and improves the server's utility.

Merchants in innovation

KAUST ·

KAUST hosted the KAUST Research Conference: Advances in Well Construction with Focus on Near-Wellbore Physics and Chemistry from November 7 to 9. The conference was co-chaired by Eric van Oort, a professor at UT Austin, and Tadeusz Patzek, director of the University’s Upstream Petroleum Engineering Research Center. Attendees included professors from the University of Queensland and UT Austin, and directors from GenesisRTS and Labyrinth Consulting Services, Inc. Why it matters: The conference facilitates international collaboration on advancements in petroleum engineering and well construction technologies, which are strategically important for Saudi Arabia.

ArabicNumBench: Evaluating Arabic Number Reading in Large Language Models

arXiv ·

The paper introduces ArabicNumBench, a benchmark for evaluating LLMs on Arabic number reading using both Eastern and Western Arabic numerals. It evaluates 71 models from 10 providers on 210 number reading tasks, using zero-shot, zero-shot CoT, few-shot, and few-shot CoT prompting strategies. The results show substantial performance variation, with few-shot CoT prompting achieving 2.8x higher accuracy than zero-shot approaches. Why it matters: The benchmark establishes baselines for Arabic number comprehension and provides guidance for model selection in production Arabic NLP systems.
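The four prompting strategies compared in the benchmark differ only in how the prompt string is assembled around the query. The sketch below illustrates that difference for a number-reading task; the exemplars, wording, and Arabic readings are illustrative guesses, not the benchmark's actual prompts.

```python
# Four prompting strategies for a number-reading query, assembled as plain
# strings. The exemplar questions/answers here are hypothetical.

QUESTION = "Read this number aloud in Arabic: ٣٤٥"  # Eastern Arabic numerals

# Hypothetical few-shot exemplars covering both numeral systems.
FEW_SHOT_EXEMPLARS = [
    ("Read this number aloud in Arabic: ١٢", "اثنا عشر"),
    ("Read this number aloud in Arabic: 907", "تسعمائة وسبعة"),
]

def zero_shot(q):
    # The bare question, no demonstrations.
    return q

def zero_shot_cot(q):
    # Append a chain-of-thought trigger phrase.
    return q + "\nLet's think step by step."

def few_shot(q):
    # Prepend worked Q/A demonstrations, answers only.
    demos = "\n".join(f"Q: {eq}\nA: {ea}" for eq, ea in FEW_SHOT_EXEMPLARS)
    return f"{demos}\nQ: {q}\nA:"

def few_shot_cot(q):
    # Demonstrations whose answers also spell out the reasoning.
    demos = "\n".join(
        f"Q: {eq}\nA: Let's think step by step. Reading the digits left to "
        f"right, the answer is {ea}."
        for eq, ea in FEW_SHOT_EXEMPLARS
    )
    return f"{demos}\nQ: {q}\nA: Let's think step by step."

for build in (zero_shot, zero_shot_cot, few_shot, few_shot_cot):
    print(f"--- {build.__name__} ---")
    print(build(QUESTION))
```

Each resulting string would be sent to a model as-is; per the results above, the few-shot CoT variant, which both demonstrates the task and elicits intermediate reasoning, was the strongest of the four.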