MBZUAI's Executive Program held a module on AI ethics, safety, and societal impacts, led by Professors Tom Mitchell and Justine Cassell. The session covered machine learning bias, privacy, AI's impact on jobs and education, and the ethical use of AI. The first cohort comprises forty-two participants drawn from ministerial leadership and top industry executives. Why it matters: This highlights MBZUAI and the UAE's commitment to ethical AI development as part of building a knowledge-based economy.
MBZUAI Professor Fakhri Karray delivered a talk on advances in operational AI, highlighting its potential to grow global GDP by 15% by 2025. He discussed AI's impact on IoT, self-driving machines, virtual assistants, and other fields. Karray outlined milestones in AI, achievements in operational AI, future directions, and challenges for safe and beneficial AI. Why it matters: The presentation underscores MBZUAI's role in shaping the discourse around AI's transformative potential and ethical considerations in the region.
A panel discussion was hosted at MBZUAI in collaboration with the Manara Center for Coexistence and Dialogue. The discussion centered on the potential of multimodal machine intelligence for human-centered applications, particularly in health and wellbeing. USC Professor Shrikanth Narayanan spoke on creating trustworthy and inclusive AI that considers protected variables. Why it matters: This signals MBZUAI's interest in exploring ethical AI development and its applications for societal good, potentially driving research and policy initiatives in the region.
MBZUAI Professor Monojit Choudhury co-authored a study on LLMs and their capacity for moral reasoning, which was presented at the 18th Conference of the European Chapter of the Association for Computational Linguistics (EACL) in Malta. The study included contributions from Aditi Khandelwal, Utkarsh Agarwal, and Kumar Tanmay from Microsoft. The research explores AI alignment, ensuring AI systems align with human values, moral principles, and ethical considerations. Why it matters: The study provides insight into LLMs' capabilities regarding complex ethical issues, which is important for guiding the development of AI in a way that is consistent with human values.
This paper introduces the AI Pentad model, comprising humans/organizations, algorithms, data, computing, and energy, as a framework for AI regulation. It also presents the CHARME²D model, which links the AI Pentad to regulatory enablers such as registration, monitoring, and enforcement. The paper assesses AI regulatory efforts in the EU, China, UAE, UK, and US using the CHARME²D model, highlighting strengths and weaknesses.
MBZUAI is hosting the AI for the Global South (AI4GS) workshop in collaboration with the Indian Institute of Technology Delhi Abu Dhabi. The workshop aims to address the underrepresentation of the Global South in AI development and ensure AI benefits everyone. It brings together researchers from diverse disciplines and geographies, including representatives from NGOs, technology companies like Microsoft, Google, Cohere, and G42, and startups. Why it matters: The initiative promotes inclusive AI development, ensuring that AI tools and research consider the needs and contexts of underrepresented regions.
MBZUAI President Professor Eric Xing discussed AI's potential to augment human capabilities and the responsibility of AI researchers in shaping future leaders. Xing's background includes professorships at Carnegie Mellon University, leadership at Petuum Inc., and directorship of the Center for Machine Learning and Health. He also held visiting positions at Stanford University and Facebook Inc. Why it matters: The emphasis on responsible AI development and education aligns with the UAE's broader strategy to become a leader in ethical and human-centric AI.
Dr. Xinwei Sun from Microsoft Research Asia presented research on trustworthy AI, focusing on statistical learning with theoretical guarantees. The work covers methods for sparse recovery with false-discovery rate analysis and causal inference tools for robustness and explainability. It establishes consistency and identifiability results theoretically, with applications demonstrated in medical imaging analysis. Why it matters: The research contributes to addressing key limitations of current AI models regarding explainability, reproducibility, robustness, and fairness, which are crucial for real-world applications in sensitive fields like healthcare.
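For readers unfamiliar with false-discovery rate (FDR) analysis mentioned above, the classic Benjamini-Hochberg step-up procedure is a standard way to control the expected proportion of false positives among selected variables. This is a general illustrative sketch of FDR control, not a reconstruction of Dr. Sun's specific method:

```python
import numpy as np

def benjamini_hochberg(p_values, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: returns a boolean mask of
    discoveries, controlling the expected false-discovery rate at alpha."""
    p = np.asarray(p_values)
    n = len(p)
    order = np.argsort(p)                      # ranks p-values ascending
    ranked = p[order]
    thresholds = (np.arange(1, n + 1) / n) * alpha
    below = ranked <= thresholds               # which ranks meet p_(k) <= (k/n)*alpha
    reject = np.zeros(n, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest rank meeting the criterion
        reject[order[: k + 1]] = True          # reject all hypotheses up to rank k
    return reject

# Example: a mix of strong signals and likely nulls
pvals = [0.001, 0.008, 0.039, 0.041, 0.27, 0.6, 0.74, 0.9]
mask = benjamini_hochberg(pvals, alpha=0.05)   # selects the first two p-values
```

In a sparse-recovery setting, each p-value would correspond to a candidate feature, and the mask indicates which features are declared nonzero with FDR guarantees.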