This commentary discusses the EU AI Act and its potential impact on AI regulation globally. It highlights the importance of balancing innovation with safety and security, particularly in sensitive sectors like healthcare. The author, Prof. Mérouane Debbah of TII, welcomes the EU's emphasis on transparency and the role of open-source models. Why it matters: The EU AI Act is likely to influence AI policy in the Middle East, prompting a need for regional alignment and consideration of its implications for research and development.
This paper introduces the AI Pentad model, comprising humans/organizations, algorithms, data, computing, and energy, as a framework for AI regulation. It also presents the CHARME²D model, which links the AI Pentad to regulatory enablers such as registration, monitoring, and enforcement. Applying CHARME²D, the paper assesses AI regulatory efforts in the EU, China, UAE, UK, and US, highlighting the strengths and weaknesses of each.
MBZUAI Professor Fakhri Karray delivered a talk on advances in operational AI, highlighting AI's potential to boost global GDP by 15% by 2025. He discussed AI's impact on IoT, self-driving machines, virtual assistants, and other fields, and outlined milestones in AI, achievements in operational AI, future directions, and challenges for safe and beneficial AI. Why it matters: The presentation underscores MBZUAI's role in shaping the discourse around AI's transformative potential and ethical considerations in the region.
MBZUAI's Executive Program held a module on AI ethics, safety, and societal impacts, led by Professors Tom Mitchell and Justine Cassell. The session covered machine learning bias, privacy, AI's impact on jobs and education, and the ethical use of AI. Forty-two participants from ministerial leadership and top industry executives are part of the first cohort. Why it matters: This highlights MBZUAI and the UAE's commitment to ethical AI development as part of building a knowledge-based economy.
Dr. Youcheng Sun from the University of Manchester presented on ensuring the trustworthiness of AI systems through formal verification, software testing, and explainable AI. He discussed applying these techniques to challenges such as copyright protection for AI models. Dr. Sun's research has been funded by organizations including Google, the Ethereum Foundation, and the UK's Defence Science and Technology Laboratory. Why it matters: As AI adoption grows in the GCC, ensuring the safety, dependability, and trustworthiness of these systems is crucial for public trust and responsible innovation.