GCC AI Research

Results for "ethics"

AI impacts must be ethical

MBZUAI ·

MBZUAI's Executive Program held a module on AI ethics, safety, and societal impacts, led by Professors Tom Mitchell and Justine Cassell. The session covered machine learning bias, privacy, AI's impact on jobs and education, and the ethical use of AI. The first cohort comprises forty-two participants drawn from ministerial leadership and top industry executives. Why it matters: This highlights MBZUAI and the UAE's commitment to ethical AI development as part of building a knowledge-based economy.

Machines and morality: judging right and wrong with large language models

MBZUAI ·

MBZUAI Professor Monojit Choudhury co-authored a study on LLMs and their capacity for moral reasoning, presented at the 18th Conference of the European Chapter of the Association for Computational Linguistics (EACL) in Malta. The study included contributions from Aditi Khandelwal, Utkarsh Agarwal, and Kumar Tanmay of Microsoft. The research explores AI alignment: ensuring that AI systems act in accordance with human values, moral principles, and ethical considerations. Why it matters: The study provides insight into LLMs' capabilities on complex ethical issues, which is important for guiding the development of AI in a way that is consistent with human values.

LLM Post-Training: A Deep Dive into Reasoning Large Language Models

arXiv ·

A new survey paper provides a deep dive into post-training methodologies for Large Language Models (LLMs), analyzing their role in refining LLMs beyond pretraining. It addresses key challenges such as catastrophic forgetting, reward hacking, and inference-time trade-offs, and highlights emerging directions in model alignment, scalable adaptation, and inference-time reasoning. The paper also provides a public repository to continually track developments in this fast-evolving field.
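One of the pitfalls the survey names, reward hacking, can be illustrated with a minimal sketch: a policy optimized against a proxy reward (here, response length, a made-up stand-in for illustration) scores well on the proxy even as true answer quality degrades.

```python
# Toy illustration of reward hacking, a post-training pitfall discussed
# in the survey above. The proxy reward (response length) is a made-up
# stand-in, not any system's actual reward function.

def proxy_reward(response: str) -> int:
    """Proxy metric: longer answers look more 'helpful' to a naive scorer."""
    return len(response)

responses = {
    "concise, correct": "Use a dict.",
    "padded, evasive": "There are many ways one might consider this. " * 5,
}

# Optimizing the proxy selects the padded answer, not the better one.
hacked = max(responses.values(), key=proxy_reward)
print(hacked == responses["padded, evasive"])  # True
```

Mitigations covered in the post-training literature (e.g. reward-model ensembles or KL penalties against a reference policy) aim to keep the optimized behavior close to what the proxy was meant to measure.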

AceGPT, Localizing Large Language Models in Arabic

arXiv ·

Researchers introduce AceGPT, a localized large language model (LLM) tailored to Arabic, addressing cultural sensitivity and local values that are not well represented in mainstream models. AceGPT combines further pre-training on Arabic texts, supervised fine-tuning on native Arabic instructions paired with GPT-4 responses, and reinforcement learning from AI feedback using a reward model attuned to local culture. Evaluations demonstrate that AceGPT achieves state-of-the-art performance among open Arabic LLMs across several benchmarks. Why it matters: This work advances culturally aware AI development for Arabic-speaking communities, providing a valuable resource and benchmark for future research.
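The final stage of the recipe above, reinforcement learning with a culture-attuned reward model, can be sketched in miniature. The reward function below is a hypothetical stand-in (a crude Arabic-script heuristic, not AceGPT's actual reward model), and best-of-n selection stands in for the policy-gradient update used in real RLAIF.

```python
# Toy sketch of reward-guided response selection, loosely in the spirit
# of the AceGPT pipeline described above. toy_reward is a hypothetical
# proxy, NOT AceGPT's reward model.

def toy_reward(response: str) -> float:
    """Hypothetical reward: fraction of characters in the Arabic block,
    a crude proxy for linguistic/cultural alignment."""
    arabic = sum(1 for ch in response if "\u0600" <= ch <= "\u06FF")
    return arabic / max(len(response), 1)

def best_of_n(candidates: list[str]) -> str:
    """Pick the highest-scoring candidate; in real RLAIF these scores
    would instead drive a policy update (e.g. PPO) on the model."""
    return max(candidates, key=toy_reward)

candidates = [
    "Hello! How can I help?",
    "مرحبا! كيف يمكنني مساعدتك؟",  # Arabic greeting scores higher
]
print(best_of_n(candidates))
```

In practice the reward model is itself a trained network judging full responses, but the selection-by-score loop is the same shape.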

Project NATHR-G1 - The Story behind the Innovative Solution for a more Humane World

TII ·

A team led by the Technology Innovation Institute (TII) in Abu Dhabi has developed NATHR-G1, a ground-penetrating radar for detecting landmines and unexploded ordnance. The project, involving researchers from Colombia, Germany, Sweden, and Switzerland, builds on earlier work using radar to detect buried objects. NATHR-G1 incorporates machine learning for advanced signal processing and object identification. Why it matters: This humanitarian application of AI and robotics based in the UAE could significantly reduce casualties from landmines and other explosive remnants of war.

University community mourns

MBZUAI ·

MBZUAI mourns the passing of UAE President Sheikh Khalifa bin Zayed Al Nahyan. The university offers condolences to the Royal family, the UAE government, and the people of the UAE. The Ministry of Presidential Affairs declared 40 days of official mourning. Why it matters: This event marks a significant moment of transition and reflection for the UAE and its institutions.

Fact checking with ChatGPT

MBZUAI ·

A new paper from MBZUAI researchers explores using ChatGPT to combat the spread of fake news. The researchers, including Preslav Nakov and Liangming Pan, demonstrate that ChatGPT can be used to fact-check published information. Their paper, "Fact-Checking Complex Claims with Program-Guided Reasoning," was accepted at ACL 2023. Why it matters: This research highlights the potential of large language models to address the growing challenge of misinformation, with implications for maintaining information integrity in the digital age.
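The program-guided approach named in the paper's title can be sketched as follows: a complex claim is decomposed into simple sub-claims, each verified independently, and the verdicts combined. In this toy version the decomposition and the knowledge store are hand-written stand-ins; in the paper, an LLM generates the reasoning program and evidence retrieval answers the sub-questions.

```python
# Minimal sketch of program-guided fact-checking, in the spirit of
# "Fact-Checking Complex Claims with Program-Guided Reasoning".
# KNOWLEDGE and the decomposition below are illustrative stand-ins.

KNOWLEDGE = {
    "Paris is the capital of France": True,
    "The Eiffel Tower is in Paris": True,
}

def verify(sub_claim: str) -> bool:
    """Check one simple sub-claim; unknown claims count as unsupported."""
    return KNOWLEDGE.get(sub_claim, False)

def check_claim(sub_claims: list[str]) -> bool:
    """A conjunctive claim holds only if every sub-claim is supported."""
    return all(verify(s) for s in sub_claims)

# "The Eiffel Tower is in the capital of France" decomposes into:
program = ["Paris is the capital of France", "The Eiffel Tower is in Paris"]
print(check_claim(program))  # True
```

Decomposing first makes each verification step small and checkable, which is what lets an LLM-based checker handle claims too complex to verify in one shot.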