GCC AI Research


Results for "Foundation Models"

Understanding and improving foundation models: from environmental risk to social responsibility

MBZUAI ·

Dr. Jindong Wang of Microsoft Research Asia gave a talk at MBZUAI on the limitations of large foundation models, including their difficulty adapting to real-world unpredictability and their security risks. He also discussed the need for interdisciplinary collaboration to evaluate the benefits and risks of these models, and shared his research and insights on harnessing the power of large foundation models while addressing their constraints and fostering responsible AI integration. Why it matters: This highlights MBZUAI's role in hosting discussions about responsible AI development and the challenges of deploying foundation models.

Innovation accelerated: MBZUAI announces new center of Foundation Models to spearhead research and application of generative AI

MBZUAI ·

MBZUAI has launched the Institute of Foundation Models to advance generative AI research and development. The institute will focus on developing large-scale, multi-modal AI models adaptable to applications in areas like healthcare, finance, and environmental engineering, building on MBZUAI's existing work with models like Jais, Llama 2, and Vicuna. Why it matters: This initiative further solidifies the UAE's position as a leader in AI, particularly in the development and application of foundation models for diverse industries.

Evolution of Foundational Models: From Deep Learning in Healthcare to Neuro-inspired AI

MBZUAI ·

IBM Fellow Dr. Tanveer Syeda-Mahmood gave a talk on the evolution of foundational models, covering multimodal fusion in healthcare and neuro-inspired AI for computer vision. She also discussed image-driven fact-checking of AI-generated textual reports as a step toward more responsible models. Dr. Syeda-Mahmood leads IBM's work in Multimodal Bioinspired AI and WatsonX features, and previously led the Medical Sieve Radiology Grand Challenge. Why it matters: The talk highlights the ongoing development and application of AI foundational models in critical areas like healthcare and responsible AI development, showing IBM's continued investment in these areas.

TII, LightOn Partner to Build NOOR Platform for Exascale Computing for Foundation Models

TII ·

TII and LightOn have partnered to build the NOOR Platform for exascale computing, aimed at developing foundation models. The collaboration will leverage LightOn's expertise in large language models, with the first output being the largest Arabic language model to date. The platform will provide high-quality data pipelines and facilitate extreme-scale distributed training and serving. Why it matters: This partnership aims to establish Abu Dhabi as a center of AI excellence and boost the UAE's ambitions in high-tech innovation and NLP research.

MBZUAI Launches Institute of Foundation Models and Establishes Silicon Valley AI Lab

MBZUAI ·

MBZUAI has launched the Institute of Foundation Models (IFM) with a new Silicon Valley Lab in Sunnyvale, CA, joining existing facilities in Paris and Abu Dhabi. The launch event showcased PAN, a world model for simulating diverse realities with multimodal inputs. The IFM lab is also advancing K2-65B and JAIS AI systems. Why it matters: This expansion enhances MBZUAI's global presence and connects it with a critical AI ecosystem, supporting the UAE's economic diversification through advanced AI technologies.

A new playbook for patient privacy in the age of foundation models

MBZUAI ·

MBZUAI researchers Darya Taratynova and Shahad Hardan developed Forget-MI, a method for making clinical AI models "unlearn" specific patient data without retraining the entire model. Forget-MI addresses the challenge of removing patient data from AI models trained on multimodal records (such as chest X-rays and their accompanying reports), a removal that regulations like GDPR and HIPAA can require. The method unlearns both unimodal (image-only or text-only) and joint (image-text) associations while retaining overall accuracy, using a late-fusion multimodal classifier. Why it matters: This research provides a practical solution to a critical privacy concern in healthcare AI, enabling compliance with data protection regulations and fostering trust in AI-driven medical applications.
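The summary mentions a late-fusion multimodal classifier: image and text inputs are encoded by separate pathways, and their embeddings are combined only at the classification head. Below is a minimal numpy sketch of that general pattern; all names, dimensions, and the random "encoders" are illustrative assumptions, not the Forget-MI implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for learned encoders; in a real system these
# would be trained networks (e.g. an image CNN and a text transformer).
W_img = rng.normal(size=(16, 8))   # image features -> 8-d embedding
W_txt = rng.normal(size=(12, 8))   # text features  -> 8-d embedding
W_cls = rng.normal(size=(16, 2))   # classifier over the fused 16-d embedding


def encode_image(x):
    return np.tanh(x @ W_img)


def encode_text(t):
    return np.tanh(t @ W_txt)


def late_fusion_predict(x, t):
    # Late fusion: each modality is encoded independently, then the
    # embeddings are concatenated just before the classification head.
    # An unlearning method like Forget-MI could target the per-modality
    # pathways (unimodal) and the fused pathway (joint) separately.
    fused = np.concatenate([encode_image(x), encode_text(t)], axis=-1)
    logits = fused @ W_cls
    return logits.argmax(axis=-1)


# Toy batch of 4 paired image/text feature vectors.
x = rng.normal(size=(4, 16))
t = rng.normal(size=(4, 12))
print(late_fusion_predict(x, t))  # one class label per pair
```

The design point the blurb relies on is that late fusion keeps the two modalities separable until the final head, which is what makes it meaningful to speak of unlearning unimodal versus joint (image-text) associations.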

Unifying Vision Representation

MBZUAI ·

This seminar explores vision systems through self-supervised representation learning, addressing challenges and solutions in mainstream vision self-supervised learning methods. It discusses developing versatile representations across modalities, tasks, and architectures to propel the evolution of vision foundation models. Tong Zhang of EPFL, with a background from Beihang University, New York University, and the Australian National University, leads the talk. Why it matters: Advancing vision foundation models is crucial for expanding AI applications, especially in the Middle East, where computer vision can address challenges in areas like urban planning, agriculture, and environmental monitoring.

UAE’s Technology Innovation Institute Launches ‘Falcon Foundation’ to Champion Open-sourcing of Generative AI Models

TII ·

The Technology Innovation Institute (TII) in Abu Dhabi has launched the Falcon Foundation, a non-profit dedicated to advancing open-source generative AI models. TII is committing $300 million to fund open-source AI projects, beginning with its Falcon AI models. The foundation aims to foster collaboration among stakeholders, developers, academia, and industry to promote transparent governance and knowledge exchange in AI. Why it matters: This initiative signals the UAE's commitment to leading in AI development through open-source innovation and collaboration, potentially accelerating AI adoption and customization across various sectors.