GCC AI Research


Results for "ethical AI"

AI impacts must be ethical

MBZUAI ·

MBZUAI's Executive Program delivered a module on AI ethics, safety, and societal impact, led by Professors Tom Mitchell and Justine Cassell. The session covered bias in machine learning, privacy, AI's impact on jobs and education, and the ethical use of AI. The first cohort comprises forty-two participants drawn from ministerial leadership and senior industry executives. Why it matters: This highlights MBZUAI and the UAE's commitment to ethical AI development as part of building a knowledge-based economy.

Multimodal machine intelligence and its human-centered possibilities

MBZUAI ·

A panel discussion was hosted at MBZUAI in collaboration with the Manara Center for Coexistence and Dialogue. The discussion centered on the potential of multimodal machine intelligence for human-centered applications, particularly in health and wellbeing. USC Professor Shrikanth Narayanan spoke on creating trustworthy and inclusive AI that considers protected variables. Why it matters: This signals MBZUAI's interest in exploring ethical AI development and its applications for societal good, potentially driving research and policy initiatives in the region.

Making Machine Learning Safe for the World - New Lines Institute

The National ·

The New Lines Institute published a report analyzing the risks associated with advanced AI systems. It examines potential harms like disinformation, bias, and autonomous weapons. Why it matters: The report highlights the need for proactive safety measures and ethical guidelines in AI development to mitigate negative impacts in the Middle East and globally.

Balancing the future of AI: MBZUAI hosts AI for the Global South workshop

MBZUAI ·

MBZUAI is hosting the AI for the Global South (AI4GS) workshop in collaboration with the Indian Institute of Technology Delhi Abu Dhabi. The workshop aims to address the underrepresentation of the Global South in AI development and ensure AI benefits everyone. It brings together researchers from diverse disciplines and geographies, including representatives from NGOs, technology companies like Microsoft, Google, Cohere, and G42, and startups. Why it matters: The initiative promotes inclusive AI development, ensuring that AI tools and research consider the needs and contexts of underrepresented regions.

Machines and morality: judging right and wrong with large language models

MBZUAI ·

MBZUAI Professor Monojit Choudhury co-authored a study on LLMs and their capacity for moral reasoning, presented at the 18th Conference of the European Chapter of the Association for Computational Linguistics (EACL) in Malta. The study included contributions from Aditi Khandelwal, Utkarsh Agarwal, and Kumar Tanmay of Microsoft. The research explores AI alignment: ensuring AI systems act in accordance with human values, moral principles, and ethical considerations. Why it matters: The study provides insight into LLMs' capabilities on complex ethical questions, which is important for guiding the development of AI in a way that is consistent with human values.

Towards Trustworthy AI: From High-dimensional Statistics to Causality

MBZUAI ·

Dr. Xinwei Sun from Microsoft Research Asia presented research on trustworthy AI, focusing on statistical learning with theoretical guarantees. The work covers methods for sparse recovery with false-discovery rate analysis and causal inference tools for robustness and explainability. The talk addressed consistency and identifiability from a theoretical standpoint and demonstrated applications in medical imaging analysis. Why it matters: The research addresses key limitations of current AI models regarding explainability, reproducibility, robustness, and fairness, which are crucial for real-world applications in sensitive fields like healthcare.
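For readers unfamiliar with false-discovery rate (FDR) analysis in sparse recovery, the idea is to select variables while bounding the expected fraction of false selections. As a generic illustration only (the talk's specific method is not described here), the classical Benjamini-Hochberg procedure controls the FDR over a list of p-values:

```python
# Generic illustration of FDR control via the Benjamini-Hochberg procedure.
# This is a standard textbook method, not the specific technique from the talk.

def benjamini_hochberg(p_values, alpha=0.05):
    """Return sorted indices of hypotheses rejected at FDR level alpha."""
    m = len(p_values)
    # Rank p-values while remembering their original positions.
    order = sorted(range(m), key=lambda i: p_values[i])
    k = 0  # largest rank whose p-value clears the BH threshold
    for rank, idx in enumerate(order, start=1):
        if p_values[idx] <= alpha * rank / m:
            k = rank
    # Reject every hypothesis ranked at or below the largest passing rank.
    return sorted(order[:k])

# Small p-values correspond to likely-true discoveries (e.g. truly
# nonzero coefficients in a sparse regression).
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals, alpha=0.05))  # → [0, 1]
```

Only the two smallest p-values survive here: the step-up threshold `alpha * rank / m` grows with rank, but the third-smallest p-value (0.039) already exceeds its threshold of 0.01875.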

Climate conscious computing

MBZUAI ·

MBZUAI's Qirong Ho and colleagues are developing an Artificial Intelligence Operating System (AIOS) for decarbonization, aiming to reduce energy waste in AI development. The AIOS focuses on improving communication efficiency between machines during AI model training, as inefficient communication leads to prolonged tasks and increased energy consumption. This system addresses the high computing power demands of large language models like ChatGPT and LLaMA-2. Why it matters: By optimizing energy usage in AI development, the AIOS could significantly reduce the carbon footprint of AI technologies in the region and globally.
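The energy argument rests on a familiar point about distributed training: each message between machines pays a fixed latency cost on top of its bandwidth cost, so many small transfers waste far more time (and thus energy) than a few large ones. The sketch below uses the standard latency-plus-bandwidth (alpha-beta) cost model to make that concrete; the function name and all numbers are illustrative assumptions, not details of the AIOS itself.

```python
# Hypothetical back-of-the-envelope model of communication cost in
# distributed training, using the alpha-beta (latency + bandwidth) model.
# All names and parameter values are illustrative assumptions, not AIOS
# internals.

def transfer_time(num_messages, bytes_per_message,
                  latency_s=1e-4, bandwidth_bps=1e10):
    """Seconds spent communicating: per-message latency plus payload time."""
    payload_s = num_messages * bytes_per_message / bandwidth_bps
    return num_messages * latency_s + payload_s

# Moving the same 1 GB of gradients as many small messages vs. a few
# large ones: the payload time is identical, but the latency overhead
# dominates in the chatty case.
total_bytes = 10**9
chatty = transfer_time(100_000, total_bytes // 100_000)  # many small sends
batched = transfer_time(10, total_bytes // 10)           # few large sends
print(f"chatty:  {chatty:.3f} s")   # latency-dominated
print(f"batched: {batched:.3f} s")  # bandwidth-dominated
```

In this toy setup the chatty schedule spends about 10 seconds on latency alone versus 0.1 seconds of unavoidable payload time, which is the kind of inefficiency that batching and better scheduling of inter-machine communication can remove.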

AI-Enabled Technologies for People with Disabilities: Some Key Research and Privacy/Security Challenges

MBZUAI ·

The article discusses the potential of AI-enabled assistive technologies to empower People with Disabilities (PWD), citing that over one billion people live with some form of disability globally. It highlights examples like communication tools, assistive robots, and smart visual aids, and emphasizes the need to address security and privacy concerns. The author, Ishfaq Ahmad of the University of Texas at Arlington, notes that as the global population grows, more than two billion people will need assistive products by 2030. Why it matters: The piece advocates for using AI to tackle critical human rights issues and to improve the lives of a significant portion of the global population as disability rates rise.