GCC AI Research

Results for "symbolic AI"

Neural Models with Symbolic Representations for Perceptuo-Reasoning Tasks

MBZUAI

Mausam, head of the Yardi School of AI at IIT Delhi and an affiliate professor at the University of Washington, will discuss neuro-symbolic AI. The talk will cover recent research threads with applications in NLP, probabilistic decision-making, and constraint satisfaction. Mausam's research spans neuro-symbolic machine learning, computer vision for radiology, NLP for robotics, multilingual NLP, and intelligent information systems. Why it matters: Neuro-symbolic AI is gaining importance because it combines the strengths of neural and symbolic approaches, potentially leading to more robust and explainable AI systems.

Empowering Large Language Models with Reliable Reasoning

MBZUAI

Liangming Pan from UCSB presented research on building reliable generative AI agents by integrating symbolic representations with LLMs. The neuro-symbolic strategy combines the flexibility of language models with precise knowledge representation and verifiable reasoning. The work covers Logic-LM, ProgramFC, and learning from automated feedback, aiming to address LLM limitations in complex reasoning tasks. Why it matters: Improving the reliability of LLMs is crucial for high-stakes applications in finance, medicine, and law within the region and globally.
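The core idea behind systems like Logic-LM is a division of labor: the language model translates a natural-language problem into a formal representation, and a deterministic symbolic solver does the actual reasoning. A minimal sketch of that pipeline, with the LLM step stubbed out by a hard-coded example (all names here are hypothetical, not the paper's API):

```python
from itertools import product

# Hypothetical stand-in for the LLM step: in Logic-LM the model
# formalizes a natural-language problem; here we hard-code one
# example instead of calling a real model.
def llm_formalize(problem: str):
    # "If it rains, the street is wet. It rains." |- "The street is wet."
    variables = ["rain", "wet"]
    premises = [
        lambda v: (not v["rain"]) or v["wet"],  # rain -> wet
        lambda v: v["rain"],                    # rain
    ]
    conclusion = lambda v: v["wet"]
    return variables, premises, conclusion

def entails(variables, premises, conclusion) -> bool:
    """Brute-force truth-table check: the conclusion must hold in
    every assignment that satisfies all premises (fine for a
    handful of propositional variables)."""
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

variables, premises, conclusion = llm_formalize(
    "If it rains, the street is wet. It rains. Is the street wet?")
print(entails(variables, premises, conclusion))  # True
```

Because the solver is deterministic, the final answer is verifiable even when the language model's formalization step is not; errors can then be localized to the translation stage.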

A Panoramic Survey of Natural Language Processing in the Arab World

arXiv

This survey paper reviews the landscape of Natural Language Processing (NLP) research and applications in the Arab world. It discusses the unique challenges posed by the Arabic language, such as its morphological complexity and dialectal diversity. The paper also presents a historical overview of Arabic NLP and surveys various research areas, including machine translation, sentiment analysis, and speech recognition. Why it matters: The survey provides a comprehensive resource for researchers and practitioners interested in the current state and future directions of Arabic NLP, a field critical for enabling AI technologies to serve Arabic-speaking communities.

Improving through argument: a symbolic approach to fake-news detection

MBZUAI

MBZUAI researchers developed a symbolic adversarial learning framework (SALF) for fake news detection using LLM-powered agents. SALF employs a generator and a detector in a debate-like setup, judged by another LLM, to improve the agents' ability to create and identify fake news. Testing showed that the SALF generator degraded the performance of existing fake news detectors by 53.4% on Chinese and 34.2% on English datasets. Why it matters: This research offers a novel approach to combating the evolving threat of LLM-generated disinformation, a critical issue for maintaining reliable information ecosystems in the region and globally.
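The debate structure described above — a generator and a detector refining each other under an LLM judge — can be sketched in a few lines. This is an illustrative skeleton only: all three roles would be LLM-powered in SALF, whereas here simple placeholder functions stand in so the loop itself is runnable.

```python
# Placeholder agents for the SALF-style adversarial debate.
# The role logic (generate -> detect -> judge -> feed back) mirrors
# the description; the agents themselves are toy stubs.

def generator(topic, feedback):
    # Refines its output based on the judge's last feedback.
    style = "subtle" if "too obvious" in feedback else "plain"
    return f"[{style}] fabricated claim about {topic}"

def detector(article):
    # Returns a fake-probability; a real detector would be an LLM call.
    return 0.9 if "plain" in article else 0.4

def judge(score, threshold=0.5):
    # Declares a round winner and phrases feedback for the loser.
    if score >= threshold:
        return "detector", "too obvious, rewrite more subtly"
    return "generator", "missed it, look for stylistic tells"

def debate(topic, rounds=3):
    feedback, history = "", []
    for _ in range(rounds):
        article = generator(topic, feedback)
        winner, feedback = judge(detector(article))
        history.append(winner)
    return history

print(debate("a news event"))  # ['detector', 'generator', 'detector']
```

The point of the loop is that each side's failures become the other side's training signal, which is how the generator ends up producing fakes that degrade existing detectors.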

‘Rising Stars’ in AI research explore reasoning, trust, and real-world impact

KAUST

KAUST hosted the fifth Rising Stars in AI Symposium, convening 25 early-career AI researchers selected from more than 430 applicants. Discussions centered on reasoning in AI models, AI's role in addressing global challenges, embodied systems, and the necessity of trustworthy AI. Participants, including Dr. Sahar Abdelnabi from the ELLIS Institute Tübingen, emphasized the symposium's value for collaboration and for identifying future AI research directions. Why it matters: The event highlights KAUST's commitment to fostering emerging AI talent and addressing critical issues in the field, with a focus on AI's real-world impact and ethical considerations.

Sir Michael Brady on why healthcare AI must move from detection to articulation

MBZUAI

Sir Michael Brady, professor at Oxford and MBZUAI, argues that AI in healthcare must move beyond pattern recognition to causal understanding. He states that clinicians require AI models to articulate their reasoning behind diagnoses and therapy recommendations, not just provide statistical scores. He believes AI's immediate impact will be in personalized medicine, tailoring treatments to the individual rather than relying on epidemiological averages. Why it matters: This perspective highlights the critical need for explainable AI in sensitive domains like healthcare, paving the way for more trustworthy and clinically relevant AI applications in the region.

Advancing computer vision with common sense

MBZUAI

MBZUAI researchers are working to improve computer vision models by incorporating common sense knowledge. They aim to address issues like the generation of unrealistic human features, such as hands with incorrect numbers of fingers. By integrating common-sense knowledge, like the fact that humans typically have five fingers per hand, they seek to make deep learning models more reliable. Why it matters: This research could improve the accuracy and trustworthiness of AI-generated content, making it more suitable for real-world applications.
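One simple way to inject a common-sense prior like "hands have five fingers" is a post-hoc plausibility check on a model's output. The sketch below is an illustrative assumption, not MBZUAI's method: it takes detected fingertip keypoints per hand and flags anatomically implausible counts.

```python
# Post-hoc common-sense check on a generative vision model's output:
# count fingertip keypoints per detected hand and flag any hand that
# violates the prior "five fingers per hand". The keypoint format
# (a list of (x, y) fingertip points per hand) is an assumption.

EXPECTED_FINGERS = 5

def plausible_hands(hands):
    """Return (ok, violating_indices) for a list of hands, where each
    hand is a list of fingertip (x, y) coordinates."""
    violations = [i for i, tips in enumerate(hands)
                  if len(tips) != EXPECTED_FINGERS]
    return len(violations) == 0, violations

good = [[(0, 0), (1, 0), (2, 0), (3, 0), (4, 0)]]
bad = good + [[(0, 1), (1, 1), (2, 1), (3, 1), (4, 1), (5, 1)]]

print(plausible_hands(good))  # (True, [])
print(plausible_hands(bad))   # (False, [1])
```

In practice such a check could gate regeneration or contribute a penalty during training, turning a piece of world knowledge into a constraint the model's outputs must satisfy.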