GCC AI Research


Results for "moral reasoning"

Machines and morality: judging right and wrong with large language models

MBZUAI ·

MBZUAI Professor Monojit Choudhury co-authored a study on LLMs and their capacity for moral reasoning, presented at the 18th Conference of the European Chapter of the Association for Computational Linguistics (EACL) in Malta. The study included contributions from Aditi Khandelwal, Utkarsh Agarwal, and Kumar Tanmay of Microsoft. The research explores AI alignment: ensuring that AI systems align with human values, moral principles, and ethical considerations. Why it matters: The study offers insight into how LLMs handle complex ethical issues, which is important for guiding AI development in a way that reflects human values.

SocialMaze: A Benchmark for Evaluating Social Reasoning in Large Language Models

arXiv ·

MBZUAI researchers introduce SocialMaze, a new benchmark for evaluating social reasoning in large language models (LLMs). SocialMaze comprises six diverse tasks spanning social reasoning games, daily-life interactions, and digital community platforms, emphasizing deep reasoning, dynamic interaction, and information uncertainty. Experiments show that LLMs vary widely in their handling of dynamic interactions, that performance degrades under uncertainty, and that fine-tuning on curated reasoning examples can improve results.

Could AI outthink the greatest (human) philosophers?

MBZUAI ·

An AI model from the University of New South Wales (UNSW) won the AI Eurovision Song Contest in 2020. Following this, UNSW researchers posed philosophical questions to an AI language model and found that respondents preferred some of the machine-generated answers over those of philosophers such as the Dalai Lama. This raises the question of whether AI can outthink human philosophers, a topic explored through projects like Philosopher AI and attempts to emulate the human brain with neural networks. Why it matters: Exploring AI's capacity for philosophical thought could reshape our understanding of intelligence and consciousness, with implications for AI ethics and for human-machine collaboration in intellectual fields, in the Middle East and beyond.

Empowering Large Language Models with Reliable Reasoning

MBZUAI ·

Liangming Pan of UC Santa Barbara presented research on building reliable generative AI agents by integrating symbolic representations with LLMs. This neuro-symbolic strategy combines the flexibility of language models with precise knowledge representation and verifiable reasoning. The work covers Logic-LM, ProgramFC, and learning from automated feedback, and aims to address LLM limitations in complex reasoning tasks. Why it matters: Improving the reliability of LLMs is crucial for high-stakes applications in finance, medicine, and law, both in the region and globally.

Naval Chaplaincy School discusses Artificial Intelligence - 106rqw.ang.af.mil

Bahrain AI ·

The Naval Chaplaincy School held discussions on Artificial Intelligence. While specific details of the conversation are unavailable, such discussions typically explore the ethical, operational, and human impact of AI in specialized military and spiritual contexts. The engagement reflects an institutional effort to address emerging technological challenges. Why it matters: This highlights a global trend of organizations grappling with AI's implications, though, absent further detail, its specific relevance to Middle East AI developments is unclear.