GCC AI Research

Results for "in-context learning"

Retrieval Augmentation as a Shortcut to the Training Data

MBZUAI

This article discusses retrieval augmentation in text generation, where information retrieved from an external source is used to condition predictions. It references recent work on retrieval-augmented image captioning, showing that model size can be greatly reduced when the training data is available through retrieval. The author plans to continue this work, focusing on the intersection of retrieval augmentation and in-context learning, and on controllable image captioning for language-learning materials. Why it matters: This research direction has the potential to improve transfer learning in vision-language models, which could be especially relevant for downstream applications in Arabic NLP and multimodal tasks.
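The summary above describes the general recipe: retrieve related examples from an external datastore and condition the generator on them. A minimal sketch, assuming toy embeddings and a cosine-similarity nearest-neighbour retriever (all names here are illustrative, not taken from the cited work):

```python
import numpy as np

def retrieve(query_vec, datastore_vecs, datastore_texts, k=2):
    """Return the k datastore texts whose embeddings are most
    cosine-similar to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = datastore_vecs / np.linalg.norm(datastore_vecs, axis=1, keepdims=True)
    top = np.argsort(-(d @ q))[:k]
    return [datastore_texts[i] for i in top]

def build_prompt(query_text, retrieved_captions):
    """Condition generation on the retrieved captions plus the query."""
    context = "\n".join(f"Similar caption: {c}" for c in retrieved_captions)
    return f"{context}\nDescribe the new image: {query_text}"
```

Because the retrieved examples carry much of the needed knowledge at inference time, the conditioned generator itself can stay small, which matches the model-size reduction the article mentions.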

Smoothing the way for in-context robot learning

MBZUAI

MBZUAI researchers have developed a new action tokenization method called LipVQ-VAE to improve in-context robot learning. LipVQ-VAE combines VQ-VAE with a Lipschitz constraint to generate smoother robotic motions, addressing limitations of traditional methods. The technique was tested on simulated and real robots, showing improved performance in imitation learning. Why it matters: This research advances robot learning by enabling more fluid and successful robot actions through improved action representation, drawing inspiration from NLP techniques.
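The article names the two ingredients LipVQ-VAE combines but not their implementation. As an illustrative sketch only (not the authors' method): vector quantization maps each latent vector to its nearest codebook entry, and a Lipschitz bound on a linear layer can be enforced by capping the layer's spectral norm, which is one standard way such constraints are applied:

```python
import numpy as np

def lipschitz_constrain(W, max_norm=1.0):
    """Rescale a weight matrix so its spectral norm (largest singular
    value) is at most max_norm, bounding the layer's Lipschitz constant."""
    s = np.linalg.norm(W, 2)  # ord=2 gives the largest singular value
    return W if s <= max_norm else W * (max_norm / s)

def vector_quantize(z, codebook):
    """Map each latent vector in z to the index of its nearest
    codebook entry (the discretization step of a VQ-VAE)."""
    dists = np.linalg.norm(z[:, None, :] - codebook[None, :, :], axis=-1)
    return np.argmin(dists, axis=1)
```

Intuitively, a Lipschitz-bounded encoder cannot map nearby contexts to wildly different tokens, which is consistent with the smoother motions the article reports.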

Modeling Text as a Living Object

MBZUAI

The InterText project, funded by the European Research Council, aims to advance NLP by developing a framework for modeling fine-grained relationships between texts. This approach enables tracing the origin and evolution of texts and ideas. Iryna Gurevych from the Technical University of Darmstadt presented the intertextual approach to NLP, covering data modeling, representation learning, and practical applications. Why it matters: This research could enable a new generation of AI applications for text work and critical reading, with potential applications in collaborative knowledge construction and document revision assistance.

Reasoning with interactive guidance

MBZUAI

Niket Tandon from the Allen Institute for AI presented a talk at MBZUAI on enabling large language models to focus on human needs and to learn continuously from interactions. He proposed a memory architecture, inspired by the theory of recursive reminding, that guides models in avoiding past errors. The talk addressed who to ask, what to ask, when to ask, and how to apply the guidance obtained. Why it matters: The research explores how to align LLMs with human feedback, a key challenge for practical and ethical AI deployment.

Teaching language models about Arab culture through cross-cultural transfer

MBZUAI

MBZUAI researchers presented a method for cross-cultural transfer learning to improve language models' understanding of diverse Arab cultures. They used in-context learning and demonstration-based reinforcement (DITTO) to transfer cultural knowledge between countries. Experiments showed up to 34% improvement in performance on cultural understanding benchmarks using only a few demonstrations. Why it matters: This research addresses the gap in cultural understanding of Arabic language models, especially for smaller Arab countries, and provides a novel transfer learning approach.
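In-context transfer of this kind typically works by prepending a handful of demonstrations from a culturally related source country to the query about the target country. A minimal sketch of such a few-shot prompt builder (the Q/A format is an assumption for illustration; the paper's DITTO procedure is not detailed in the article):

```python
def build_fewshot_prompt(demonstrations, question):
    """Assemble a few-shot prompt from (question, answer) demonstration
    pairs, ending with the new question for the model to complete."""
    blocks = [f"Q: {q}\nA: {a}" for q, a in demonstrations]
    blocks.append(f"Q: {question}\nA:")
    return "\n\n".join(blocks)
```

The point of the technique is that only the demonstrations change: no weights are updated, so a few examples from one country can steer the model's answers about another.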

Learning to act in noisy contexts using deep proxy learning

MBZUAI

Researchers are exploring methods for evaluating the outcomes of actions from off-policy observations in which the context is noisy or anonymized. They employ proxy causal learning, using two noisy views of the context to recover the average causal effect of an action without explicitly modeling the hidden context. The implementation uses learned neural-network representations for both the action and the context, and it outperforms an autoencoder-based alternative. Why it matters: This research addresses a key challenge in applying AI in real-world scenarios where data privacy or bandwidth limitations necessitate working with noisy or anonymized data.
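The article does not spell out the estimator, but the classical linear special case of proxy causal learning is proximal two-stage least squares: predict one noisy view of the context from the action and the other view, then regress the outcome on the action and that prediction. A simulated sketch under linear-Gaussian assumptions (the cited work uses learned neural representations rather than this linear stand-in):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
true_effect = 2.0

# Hidden context U is never observed; W and Z are two noisy views of it.
U = rng.normal(size=n)
W = U + 0.5 * rng.normal(size=n)   # outcome-side proxy
Z = U + 0.5 * rng.normal(size=n)   # treatment-side proxy
A = 0.8 * U + 0.5 * rng.normal(size=n)                    # action, confounded by U
Y = true_effect * A + 1.5 * U + 0.5 * rng.normal(size=n)  # outcome

def ols(X, y):
    """Least-squares coefficients for y ~ X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)

# Naive regression of Y on A is biased upward: U drives both A and Y.
naive = ols(np.column_stack([A, ones]), Y)[0]

# Proximal two-stage least squares:
# stage 1 predicts W from (A, Z); stage 2 regresses Y on (A, predicted W).
X1 = np.column_stack([A, Z, ones])
W_hat = X1 @ ols(X1, W)
proximal = ols(np.column_stack([A, W_hat, ones]), Y)[0]
```

Here the stage-1 prediction stands in for the hidden context: it absorbs the confounding that U induces, so the stage-2 coefficient on the action recovers the causal effect, while the naive estimate does not.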

AI-Assisted Knowledge Navigation

MBZUAI

Akhil Arora from EPFL presented a framework for AI-assisted knowledge navigation, focusing on understanding and enhancing human navigation on Wikipedia. The framework includes methods for modeling navigation patterns, identifying knowledge gaps, and assessing their causal impact. He also discussed applications beyond Wikipedia, such as multimodal knowledge navigation assistants and multilingual knowledge gap mitigation. Why it matters: This research has the potential to improve information systems by making online knowledge more accessible and navigable, especially for platforms like Wikipedia that serve as critical resources for global knowledge sharing.