MBZUAI researchers are working to improve computer vision models by incorporating commonsense knowledge. They aim to address issues such as the generation of unrealistic human features, for example hands with the wrong number of fingers. By integrating commonsense knowledge, such as the fact that humans typically have five fingers per hand, they seek to make deep learning models more reliable. Why it matters: This research could improve the accuracy and trustworthiness of AI-generated content, making it more suitable for real-world applications.
A new dataset called ArabCulture is introduced to address the lack of culturally relevant commonsense reasoning resources in Arabic AI. The dataset covers 13 countries across the Gulf, Levant, North Africa, and the Nile Valley, spanning 12 daily life domains with 54 fine-grained subtopics. It was built from scratch by native speakers writing and validating culturally relevant questions. Why it matters: The dataset highlights the need for more culturally aware models and benchmarks tailored to the Arabic-speaking world, moving beyond machine-translated resources.
MBZUAI researchers have created ArabCulture, a new benchmark dataset for measuring cultural commonsense reasoning in Arabic language models. The dataset was built by native Arabic speakers from 13 countries and is the largest of its kind. Testing 31 language models, the researchers found that many systems struggle to understand cultural concepts across the Arab world. Why it matters: The new benchmark addresses a gap in AI, enabling the development of culturally aware AI systems tailored to the nuances of the Arabic-speaking world.
MBZUAI President Professor Eric Xing argues against exaggerated claims of AI existential threats, contrasting them with real dangers like climate change and nuclear warfare. He critiques the "doomer outcry" fueled by sensationalism rather than rational analysis, emphasizing the importance of evidence-based discussion. Xing suggests that overregulation risks stifling the startup and open-source community, which are vital for transparent and responsible AI development. Why it matters: The piece advocates for a balanced perspective on AI's risks and benefits, promoting informed discussion over alarmist narratives in the region's rapidly developing AI landscape.
A new paper from MBZUAI researchers explores using ChatGPT to combat the spread of fake news. The researchers, including Preslav Nakov and Liangming Pan, demonstrate that ChatGPT can be used to fact-check published information. Their paper, "Fact-Checking Complex Claims with Program-Guided Reasoning," was accepted at ACL 2023. Why it matters: This research highlights the potential of large language models to address the growing challenge of misinformation, with implications for maintaining information integrity in the digital age.
Mingyu Ding from UC Berkeley presented research on endowing robots with human-like commonsense and physical reasoning capabilities. The talk covered multimodal commonsense reasoning integrating vision, world models, and language-based task planners. It also discussed physical reasoning approaches for robots to infer dynamics and physical properties of objects. Why it matters: Enhancing robots with these capabilities can improve their ability to generalize across everyday tasks, leading to greater social benefits and impact.
MBZUAI Professor Fahad Khan is working on a unified theory of machine visual intelligence. His goal is to enable AI systems to better understand and function in complex, chaotic visual environments. The aim is to improve real-world applications like smart cities, personalized healthcare, and autonomous vehicles. Why it matters: This research could significantly advance AI's ability to perceive and interact with the real world, especially in challenging environments common in the developing world.
This paper presents a benchmark study of contrastive learning (CL) methods applied to Arabic social meaning tasks like sentiment analysis and dialect identification. The study compares state-of-the-art supervised CL techniques against vanilla fine-tuning across a range of tasks. Results indicate that CL methods outperform vanilla fine-tuning in most cases and demonstrate data efficiency. Why it matters: This work highlights the potential of contrastive learning for improving performance in Arabic NLP, especially in low-resource scenarios.
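The summary names the technique (supervised contrastive learning) but not its objective. As a rough illustration only, not the paper's implementation, a supervised contrastive loss of the kind typically benchmarked against vanilla fine-tuning can be sketched in NumPy (the function name, temperature value, and toy setup below are my own assumptions):

```python
import numpy as np

def supcon_loss(embeddings, labels, temperature=0.1):
    """Minimal supervised contrastive loss sketch (hypothetical, not the paper's code).

    embeddings: (N, D) array of representations (L2-normalized internally).
    labels:     (N,) integer class labels; same-label pairs are positives.
    """
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature              # pairwise cosine similarities / temperature
    np.fill_diagonal(sim, -np.inf)           # exclude self-similarity from contrast
    # log-softmax over each row: log p(j | anchor i)
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    labels = np.asarray(labels)
    pos_mask = (labels[:, None] == labels[None, :]).astype(float)
    np.fill_diagonal(pos_mask, 0.0)          # an anchor is not its own positive
    pos_counts = pos_mask.sum(axis=1)
    valid = pos_counts > 0                   # only anchors with at least one positive
    contrib = np.where(pos_mask > 0, log_prob, 0.0)
    per_anchor = -contrib[valid].sum(axis=1) / pos_counts[valid]
    return per_anchor.mean()

# Toy check: the loss is lower when same-label points cluster together.
emb = np.array([[1.0, 0.0], [0.99, 0.1], [0.0, 1.0], [0.1, 0.99]])
aligned = supcon_loss(emb, [0, 0, 1, 1])       # positives are the nearby pairs
mismatched = supcon_loss(emb, [0, 1, 0, 1])    # positives are the distant pairs
```

In supervised fine-tuning with such a loss, each anchor is pulled toward all same-class examples in the batch and pushed away from the rest, which is one intuition for the data efficiency the study reports.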