GCC AI Research

Results for "symbolic learning"

Empowering Large Language Models with Reliable Reasoning

MBZUAI

Liangming Pan from UCSB presented research on building reliable generative AI agents by integrating symbolic representations with LLMs. The neuro-symbolic strategy combines the flexibility of language models with precise knowledge representation and verifiable reasoning. The work covers Logic-LM, ProgramFC, and learning from automated feedback, aiming to address LLM limitations in complex reasoning tasks. Why it matters: Improving the reliability of LLMs is crucial for high-stakes applications in finance, medicine, and law within the region and globally.
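The neuro-symbolic division of labour described above can be sketched in a few lines. In this toy example (not Logic-LM itself), the facts and rules stand in for what an LLM would produce when translating a natural-language problem into symbolic form; a deterministic solver, here naive forward chaining, then does the reasoning, so every derived conclusion is verifiable:

```python
# Toy sketch of the neuro-symbolic pattern: the LLM translates language
# into facts and Horn-clause rules (hard-coded below as hypothetical
# stand-ins for LLM output); a symbolic solver derives conclusions.

def forward_chain(facts, rules):
    """Apply (premises, conclusion) rules until no new fact is derived."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return known

# Hypothetical output of the LLM translation step:
facts = {"mammal(whale)", "aquatic(whale)"}
rules = [
    (("mammal(whale)",), "warm_blooded(whale)"),
    (("warm_blooded(whale)", "aquatic(whale)"), "marine_mammal(whale)"),
]

derived = forward_chain(facts, rules)
```

Because the solver is deterministic, the conclusion "marine_mammal(whale)" is provable from the symbolic rules alone rather than generated by the model, which is what makes the reasoning step checkable.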

Neural Models with Symbolic Representations for Perceptuo-Reasoning Tasks

MBZUAI

Mausam, head of the Yardi School of AI at IIT Delhi and affiliate professor at the University of Washington, will discuss Neuro-Symbolic AI. The talk will cover recent research threads with applications in NLP, probabilistic decision-making, and constraint satisfaction. Mausam's research spans neuro-symbolic machine learning, computer vision for radiology, NLP for robotics, multilingual NLP, and intelligent information systems. Why it matters: Neuro-Symbolic AI is gaining importance because it combines the strengths of neural and symbolic approaches, potentially leading to more robust and explainable AI systems.

Beyond self-driving simulations: teaching machines to learn

KAUST

KAUST researchers in the Image and Video Understanding Lab are applying machine learning to computer vision for automated navigation, including self-driving cars and UAVs. They tested their algorithms on KAUST roads, aiming to replicate the brain's efficiency in tasks like activity and object recognition. The team is also exploring the possibility of creative algorithms that can transfer skills without direct training. Why it matters: This research contributes to the advancement of autonomous systems and explores the fundamental questions of replicating human intelligence in machines within the GCC region.

Using child’s play for machine learning

MBZUAI

MBZUAI Professor Salman Khan is researching continuous, lifelong learning systems for computer vision, aiming to mimic human learning processes like curiosity and discovery. His work focuses on learning from limited data and adversarial robustness of deep neural networks. Khan, along with MBZUAI professors Fahad Khan and Rao Anwer, and partners from other universities, presented research at CVPR 2022. Why it matters: This research has the potential to significantly improve the ability of AI systems to understand and adapt to the real world, enabling more intelligent autonomous systems.

Machine learning 101

MBZUAI

Machine learning (ML) algorithms use data to make decisions or predictions, improving over time as more data is provided. ML is a subset of AI focused on models that learn from data, in contrast with rule-based systems, whose behavior is fixed by hand-written rules. ML excels where rules cannot be exhaustively specified, such as interpreting medical scans, but rule-based systems and ML often complement each other. Why it matters: This overview clarifies the role of machine learning within the broader field of AI, highlighting its data-driven approach and its advantages over traditional rule-based systems in complex decision-making scenarios.
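The rule-based-versus-learned contrast can be made concrete with a toy example. This is illustrative only: the "training" below, fitting a single threshold as a midpoint between two labelled groups, is a deliberate simplification of real ML, not any method from the article:

```python
# Illustrative contrast: a fixed, human-authored rule vs. a decision
# boundary learned from labelled data.

def rule_based_flag(temp_c):
    # Hand-written rule: flag temperatures above 38 degrees C as fevers.
    return temp_c > 38.0

def fit_threshold(samples):
    # Toy "training": place the threshold midway between the mean of
    # the positive and negative examples.
    pos = [t for t, label in samples if label]
    neg = [t for t, label in samples if not label]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

# Labelled data: (temperature, is_fever)
data = [(36.5, False), (37.0, False), (38.6, True), (39.2, True)]
threshold = fit_threshold(data)

def learned_flag(temp_c):
    # The learned rule adapts if the data changes; the hand-written
    # rule above does not.
    return temp_c > threshold
```

The point of the sketch is the complementarity the article notes: when the right rule is known, encoding it directly is simpler; when it is not, letting the data set the boundary is the only practical option.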

Developing an AI system that thinks like a scientist

KAUST

KAUST researchers developed a new algorithm for detecting cause and effect in large datasets. The algorithm aims to find underlying models that generate data, helping uncover cause-and-effect dynamics. It could aid researchers across fields like cell biology and genetics by answering questions that typical machine learning cannot. Why it matters: This advancement could equip current machine learning methods with abilities to better deal with abstraction, inference, and concepts such as cause and effect.

Learning structured representations for accelerating scientific discovery and simulation

MBZUAI

Tailin Wu from Stanford presented research at MBZUAI on using machine learning to accelerate scientific discovery and simulation. The work covers learning theories from dynamical systems with improved accuracy and interpretability, and introduces LAMP, a deep learning model that adaptively optimizes spatial resolution in simulations. Why it matters: Efficient AI-driven scientific simulation has broad implications for research in physics, biomedicine, materials science, and engineering across the region.

Green Learning — New Generation Machine Learning and Applications

MBZUAI

A recent talk at MBZUAI discussed "Green Learning" and Operational Neural Networks (ONNs) as efficient alternatives to CNNs. ONNs use "nodal" and "pool" operators and "generative neurons" to expand neuron learning capacity. Moncef Gabbouj from Tampere University presented Self-Organized ONNs (Self-ONNs) and their signal processing applications. Why it matters: Exploring more efficient AI models is crucial for sustainable development of AI in the region, as it addresses computational resource constraints and promotes broader accessibility.
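The nodal/pool idea can be sketched as a generalization of the classic perceptron: the fixed multiply ("nodal") and sum ("pool") become configurable operators. The specific operator pairs below are illustrative assumptions, not the exact formulation from the talk:

```python
import math

# Sketch of an operational neuron: a perceptron whose element-wise
# "nodal" operator and aggregating "pool" operator are parameters
# rather than being fixed to multiply-and-sum.

def operational_neuron(xs, ws, nodal, pool, activation=math.tanh):
    return activation(pool(nodal(x, w) for x, w in zip(xs, ws)))

# The classic neuron is the special case nodal=multiply, pool=sum:
mlp_out = operational_neuron([0.5, -1.0], [2.0, 1.0],
                             nodal=lambda x, w: x * w,
                             pool=sum)

# An alternative (hypothetical) operator pair: sinusoidal nodal
# operator with max pooling.
onn_out = operational_neuron([0.5, -1.0], [2.0, 1.0],
                             nodal=lambda x, w: math.sin(w * x),
                             pool=max)
```

Expanding the operator set is what grows each neuron's learning capacity; Self-ONNs go further by letting the nodal transformation itself be learned instead of chosen from a fixed library.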