GCC AI Research


Results for "algorithmic reasoning"

CoVR-R: Reason-Aware Composed Video Retrieval

arXiv ·

A new approach to composed video retrieval (CoVR) is presented, which leverages large multimodal models to infer causal and temporal consequences implied by an edit. The method aligns reasoned queries to candidate videos without task-specific finetuning. A new benchmark, CoVR-Reason, is introduced to evaluate reasoning in CoVR.
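At its core, composed retrieval scores candidate videos against a query that has been expanded by a reasoning model. The sketch below illustrates only that generic scoring loop, with toy 3-d vectors standing in for real multimodal embeddings; the function names and embeddings are hypothetical, not the paper's implementation.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def rank_candidates(query_embedding, candidates):
    """Rank candidate videos by similarity to the reasoned query.

    `candidates` maps video id -> embedding; in a real system both
    sides would live in a shared multimodal embedding space.
    """
    scored = sorted(candidates.items(),
                    key=lambda kv: cosine(query_embedding, kv[1]),
                    reverse=True)
    return [vid for vid, _ in scored]

# Toy embedding of a reasoned query, e.g. the inferred consequence
# "the dog, now running outdoors" (hypothetical example).
reasoned_query = [0.9, 0.1, 0.0]
candidates = {
    "vid_a": [0.8, 0.2, 0.1],
    "vid_b": [0.0, 1.0, 0.0],
    "vid_c": [-0.5, 0.1, 0.9],
}
print(rank_candidates(reasoned_query, candidates))  # vid_a ranked first
```

The point of the "reason-aware" step is that the query embedded here is not the raw edit text but the model's inference of its causal and temporal consequences.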

An algorithm for success

KAUST ·

The article mentions several KAUST faculty and staff, including Matteo Parsani (Assistant Professor of Applied Mathematics), Teofilo Abrajano (Director of Sponsored Research), and David Keyes (Director of the Extreme Computing Research Center). It also references a talk by NASA Senior Scientist Mark Carpenter at the SIAM CSE 2017 conference. The article includes a photograph of King Abdullah bin Abdulaziz Al Saud. Why it matters: This appears to be general information about KAUST faculty and activities, but lacks specific details on research or AI developments.

Understanding Machine Learning on Graphs: From Node Classification to Algorithmic Reasoning

MBZUAI ·

Kimon Fountoulakis from the University of Waterloo presented a talk on machine learning on graphs, covering node classification and algorithmic reasoning. The talk discussed the limitations and strengths of graph neural networks (GNNs). It also covered novel optimal architectures for node classification and the ability of looped GNNs to execute classical algorithms. Why it matters: Understanding GNN capabilities is crucial for advancing AI applications in areas like recommendation systems and drug discovery that rely on relational data.
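The "looped GNNs execute classical algorithms" claim can be made concrete with a minimal sketch: applying one min-aggregation message-passing layer repeatedly reproduces Bellman-Ford shortest-path distances. This is a toy illustration of the general idea, not code from the talk.

```python
import math

def looped_min_aggregation(adj, source, steps):
    """One GNN-style layer applied `steps` times.

    Each node updates to the min over (neighbor state + edge weight),
    which is exactly a Bellman-Ford relaxation; looping the layer
    computes shortest-path distances from `source`.

    adj: {node: [(neighbor, weight), ...]} (directed edges)
    """
    state = {v: (0.0 if v == source else math.inf) for v in adj}
    for _ in range(steps):          # "looping" the same layer
        new_state = {}
        for v in adj:
            incoming = [state[u] + w for u, nbrs in adj.items()
                        for (x, w) in nbrs if x == v]
            new_state[v] = min([state[v]] + incoming)
        state = new_state
    return state

graph = {
    "a": [("b", 1.0), ("c", 4.0)],
    "b": [("c", 2.0)],
    "c": [],
}
print(looped_min_aggregation(graph, "a", steps=3))
# {'a': 0.0, 'b': 1.0, 'c': 3.0} — c reached via b, not the direct edge
```

A learned GNN that can represent this min-plus aggregation, iterated enough times, can in principle emulate the algorithm on unseen graphs — the kind of capability the talk examined.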

Developing an AI system that thinks like a scientist

KAUST ·

KAUST researchers developed a new algorithm for detecting cause and effect in large datasets. The algorithm aims to find underlying models that generate data, helping uncover cause-and-effect dynamics. It could aid researchers across fields like cell biology and genetics by answering questions that typical machine learning cannot. Why it matters: This advancement could equip current machine learning methods with abilities to better deal with abstraction, inference, and concepts such as cause and effect.
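The summary describes searching for the model that generated the data. As a generic illustration of that idea (explicitly not the KAUST algorithm), one can fit a candidate model in each causal direction and prefer the direction that explains the data better — here, a quadratic least-squares fit on toy data generated by Y = X².

```python
def fit_quadratic(ts, ys):
    """Least-squares fit y ≈ a + b*t + c*t^2 via the normal equations."""
    S = [sum(t**k for t in ts) for k in range(5)]
    A = [[S[i + j] for j in range(3)] for i in range(3)]
    r = [sum(y * t**i for t, y in zip(ts, ys)) for i in range(3)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for col in range(3):
        piv = max(range(col, 3), key=lambda i: abs(A[i][col]))
        A[col], A[piv] = A[piv], A[col]
        r[col], r[piv] = r[piv], r[col]
        for i in range(col + 1, 3):
            f = A[i][col] / A[col][col]
            for j in range(col, 3):
                A[i][j] -= f * A[col][j]
            r[i] -= f * r[col]
    x = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        x[i] = (r[i] - sum(A[i][j] * x[j] for j in range(i + 1, 3))) / A[i][i]
    return x

def residual_error(ts, ys):
    """Sum of squared residuals of the best quadratic fit ts -> ys."""
    a, b, c = fit_quadratic(ts, ys)
    return sum((y - (a + b * t + c * t * t)) ** 2 for t, y in zip(ts, ys))

# Toy data generated by the mechanism X -> Y with Y = X^2.
xs = [i / 10 for i in range(-10, 11)]
ys = [x * x for x in xs]

forward = residual_error(xs, ys)   # model Y = f(X): fits (near-)perfectly
backward = residual_error(ys, xs)  # model X = g(Y): cannot resolve +/- pairs
print(forward < backward)  # True: the X -> Y direction is preferred
```

The asymmetry — one direction admits a simple generative model, the other does not — is the intuition behind many model-based cause-effect detectors; the paper's actual method operates on large datasets with a different machinery.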

Shorter but not Worse: Frugal Reasoning via Easy Samples as Length Regularizers in Math RLVR

arXiv ·

A new method is proposed to reduce the verbosity of LLMs in step-by-step reasoning by retaining moderately easy problems during Reinforcement Learning with Verifiable Rewards (RLVR) training. This approach acts as an implicit length regularizer, preventing the model from excessively increasing output length on harder problems. Experiments using Qwen3-4B-Thinking-2507 show the model matches baseline accuracy with solutions nearly half as long.
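The data-curation idea can be sketched as a pass-rate filter over the training pool: instead of keeping only hard problems, moderately easy ones are retained alongside them. The band and field names below are illustrative assumptions, not the paper's settings.

```python
def build_training_pool(problems, keep_band=(0.4, 0.95)):
    """Filter an RLVR training pool by empirical pass rate.

    A common recipe keeps only hard problems (low pass rate); the
    summarized approach instead *retains* moderately easy ones, whose
    short correct solutions act as an implicit length regularizer.
    The keep_band is a hypothetical choice for illustration.
    """
    lo, hi = keep_band
    hard = [p for p in problems if p["pass_rate"] < lo]
    moderately_easy = [p for p in problems if lo <= p["pass_rate"] <= hi]
    # Mix: hard problems drive capability; easy ones anchor brevity.
    return hard + moderately_easy

pool = build_training_pool([
    {"id": 1, "pass_rate": 0.10},  # hard -> retained
    {"id": 2, "pass_rate": 0.60},  # moderately easy -> retained
    {"id": 3, "pass_rate": 1.00},  # trivial -> dropped
])
print([p["id"] for p in pool])  # [1, 2]
```

Because the retained easy problems are solved with short chains of thought, length-sensitive RLVR updates on them counteract the drift toward ever-longer outputs on the hard problems.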

Fact-Checking Complex Claims with Program-Guided Reasoning

arXiv ·

This paper introduces ProgramFC, a fact-checking model that decomposes complex claims into simpler sub-tasks using a library of functions. The model uses LLMs to generate reasoning programs and executes them by delegating sub-tasks, enhancing explainability and data efficiency. Experiments on fact-checking datasets demonstrate ProgramFC's superior performance compared to baseline methods, with publicly available code and data.
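The program-guided idea can be sketched as a tiny executor: an LLM would emit a "reasoning program" — a sequence of typed steps — whose sub-tasks are delegated to specialized handlers (a QA module, a verifier). Here those handlers are stubbed with a lookup table, and the step names and placeholder syntax are hypothetical stand-ins for the paper's function library.

```python
# Stub sub-task handlers; in ProgramFC these are delegated to
# specialized modules (hypothetical names and behavior).
FACTS = {"Where was Marie Curie born?": "Warsaw"}

def question(q):
    """QA sub-task: answer a simple factual question."""
    return FACTS.get(q, "unknown")

def verify(claim, evidence):
    """Verification sub-task: check evidence against the claim."""
    return evidence.lower() in claim.lower()

def run_program(program, claim):
    """Execute a reasoning program: a list of (step_type, argument) pairs.

    Intermediate answers are substituted into later steps via {ANS_i}
    placeholders; the final Verify step yields the label.
    """
    answers = []
    verdict = None
    for step_type, arg in program:
        for i, ans in enumerate(answers):
            arg = arg.replace(f"{{ANS_{i}}}", ans)
        if step_type == "Question":
            answers.append(question(arg))
        elif step_type == "Verify":
            verdict = verify(claim, arg)
    return verdict

claim = "Marie Curie was born in Warsaw."
program = [
    ("Question", "Where was Marie Curie born?"),
    ("Verify", "{ANS_0}"),
]
print(run_program(program, claim))  # True
```

Because each step is an explicit, inspectable call, the trace of the program doubles as an explanation of the verdict — the explainability benefit the paper emphasizes.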