GCC AI Research

Results for "Shimon Ullman"

Brady and Ullman combine for top-level AI education

MBZUAI ·

MBZUAI board member Sir Michael Brady and the Weizmann Institute's Shimon Ullman lectured in the fourth module of MBZUAI's Executive Program on December 11. Brady led a session on 'Artificial Intelligence (AI) and Image Analysis,' while Ullman presented on 'AI and Human Intelligence: Lessons From Computer Vision.' The 12-week Executive Program aims to educate high-level decision makers in AI, supporting the UAE's AI leadership mission. Why it matters: Showcases MBZUAI's commitment to attracting top global talent and fostering AI education for leadership in the UAE.

Personalized medicine based on deep human phenotyping

MBZUAI ·

Eran Segal from the Weizmann Institute of Science presented The Human Phenotype Project, a large-scale prospective cohort with over 10,000 participants. The project aims to identify novel molecular markers and develop prediction models for disease onset using deep profiling. The profiling includes medical history, lifestyle, blood tests, and molecular profiling of the transcriptome, genetics, microbiome, metabolome, and immune system. Why it matters: Such projects demonstrate the growing focus on personalized medicine in the region, utilizing advanced AI and machine learning techniques for disease prevention and treatment.

From Individual to Society: Social Simulation Driven by LLM-based Agent

MBZUAI ·

Fudan University's Zhongyu Wei presented research on social simulation driven by LLMs, covering both individual-level simulation and large-scale social movement simulation. Wei directs the Data Intelligence and Social Computing Lab (Fudan DISC) and has published extensively on multimodal large models and social computing. His work includes the Volcano multimodal model, DISC-MedLLM, and ElectionSim. Why it matters: Using LLMs for social simulation could provide new tools for understanding and potentially predicting social dynamics in the Arab world.

A Benchmark and Agentic Framework for Omni-Modal Reasoning and Tool Use in Long Videos

arXiv ·

A new benchmark, LongShOTBench, is introduced for evaluating multimodal reasoning and tool use in long videos, featuring open-ended questions and diagnostic rubrics. The benchmark addresses the limitations of existing datasets by combining temporal length with multimodal richness, using human-validated samples. LongShOTAgent, an agentic system for analyzing long videos, is also presented; evaluations on both the benchmark and the agent expose the challenges that long-video reasoning still poses for state-of-the-art MLLMs.

Unlocking the Potential of Large Models for Vision Related Tasks

MBZUAI ·

Yanwei Fu from Fudan University will present research on multimodal models, robotic grasping, and fMRI neural decoding. Topics include few-shot learning, object-centered self-supervised learning, image manipulation, and visual-language alignment. The research also covers Transformer compression and the application of large models, combined with multi-view stereo (MVS) 3D modeling, to robotic arm grasping. Why it matters: While the talk is not directly about Middle East AI, the topics covered are core to advancing AI research and applications in the region.