Dr. Jindong Wang from Microsoft Research Asia gave a talk at MBZUAI about the limitations of large foundation models, including their difficulty adapting to real-world unpredictability and their security concerns. He also discussed the need for interdisciplinary collaboration to evaluate the benefits and risks of these models. Dr. Wang shared his research and insights on how to harness the power of large foundation models while addressing their constraints and fostering responsible AI integration. Why it matters: This highlights MBZUAI's role in hosting discussions about responsible AI development and the challenges of deploying foundation models.
MBZUAI has launched the Institute of Foundation Models to advance generative AI research and development. The institute will focus on developing large-scale AI models adaptable to a wide range of applications, building on MBZUAI's existing work with models like Jais, Llama 2, and Vicuna. A particular emphasis will be multi-modal foundation models for areas such as healthcare, finance, and environmental engineering. Why it matters: This initiative further solidifies the UAE's position as a leader in AI, particularly in the development and application of foundation models for diverse industries.
IBM Fellow Dr. Tanveer Syeda-Mahmood gave a talk on the evolution of foundation models, covering multimodal fusion in healthcare and neuro-inspired AI for computer vision. She also discussed image-driven fact-checking of generative AI textual reports as a path toward more responsible models. Dr. Syeda-Mahmood leads IBM's work in Multimodal Bioinspired AI and WatsonX features, and previously led the Medical Sieve Radiology Grand Challenge. Why it matters: The talk highlights the ongoing development and application of AI foundation models in critical areas like healthcare and responsible AI development, showing IBM's continued investment in these areas.
TII and LightOn have partnered to build the NOOR Platform for exascale computing, aimed at developing foundation models. The collaboration will leverage LightOn's expertise in large language models, with the first output being the largest Arabic language model to date. The platform will provide high-quality data pipelines and facilitate extreme-scale distributed training and serving. Why it matters: This partnership aims to establish Abu Dhabi as a center of AI excellence and boost the UAE's ambitions in high-tech innovation and NLP research.
MBZUAI researchers Darya Taratynova and Shahad Hardan developed Forget-MI, a method for making clinical AI models "unlearn" specific patient data without retraining the entire model. Forget-MI addresses the challenge of removing patient data from AI models trained on multimodal records (like chest X-rays and their accompanying reports), a requirement driven by regulations like GDPR and HIPAA. The method unlearns both unimodal (image or text) and joint (image-text) associations while retaining overall accuracy, using a late-fusion multimodal classifier. Why it matters: This research provides a practical solution to a critical privacy concern in healthcare AI, enabling compliance with data protection regulations and fostering trust in AI-driven medical applications.
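The Forget-MI paper's own code is not reproduced here, but a minimal sketch can illustrate what "late fusion" means in a multimodal classifier: each modality is encoded independently, and the embeddings meet only at the final classification head. All function names, weights, and dimensions below are hypothetical, not taken from Forget-MI.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(features, weights):
    # Stand-in modality encoder: a single linear layer + tanh (hypothetical).
    return np.tanh(features @ weights)

def late_fusion_predict(img, txt, w_img, w_txt, w_head):
    # Late fusion: image and text inputs are encoded independently,
    # and their embeddings are combined only at the final head.
    fused = np.concatenate([encode(img, w_img), encode(txt, w_txt)])
    logits = fused @ w_head
    return int(np.argmax(logits))

# Hypothetical dimensions: 4-d image features, 5-d text features,
# 3-d embedding per modality, 2 output classes.
w_img = rng.normal(size=(4, 3))
w_txt = rng.normal(size=(5, 3))
w_head = rng.normal(size=(6, 2))
pred = late_fusion_predict(np.ones(4), np.ones(5), w_img, w_txt, w_head)
```

Because fusion happens only at the last step, the per-modality encoders remain separable components, which is what makes targeting unimodal associations (image-only or text-only) distinct from joint image-text associations conceivable in such an architecture.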
MBZUAI has launched the Institute of Foundation Models (IFM) with a new Silicon Valley Lab in Sunnyvale, CA, joining existing facilities in Paris and Abu Dhabi. The launch event showcased PAN, a world model for simulating diverse realities from multimodal inputs. The IFM lab is also advancing the K2-65B and Jais AI systems. Why it matters: This expansion enhances MBZUAI's global presence and connects it with a critical AI ecosystem, supporting the UAE's economic diversification through advanced AI technologies.
MBZUAI researchers developed a method to adapt Meta's Segment Anything Model (SAM) for medical image segmentation, addressing the gap between its strong performance on natural images and its weaker results on medical scans. Their approach improves SAM's accuracy without requiring extensive retraining or large medical image datasets. The research, led by Chao Qin, was nominated for the Best Paper Award at the MICCAI conference in Marrakesh. Why it matters: This offers a more efficient and effective way to leverage foundation models in specialized medical imaging applications, potentially improving diagnostic accuracy and reducing the need for large-scale, domain-specific training data.
The Technology Innovation Institute (TII) in Abu Dhabi has launched the Falcon Foundation, a non-profit dedicated to advancing open-source generative AI models. TII is committing $300 million to fund open-source AI projects, beginning with its Falcon AI models. The foundation aims to foster collaboration among stakeholders, developers, academia, and industry to promote transparent governance and knowledge exchange in AI. Why it matters: This initiative signals the UAE's commitment to leading in AI development through open-source innovation and collaboration, potentially accelerating AI adoption and customization across various sectors.