GCC AI Research

Results for "Attitudes"
On attitudes toward artificial intelligence: an individual differences perspective

MBZUAI ·

Christian Montag of Ulm University gave a talk on assessing attitudes toward AI, covering the IMPACT framework (the Interaction of Modality, Person, Area, Country/Culture, and Transparency). He discussed how factors such as age, gender, personality, and culture relate to attitudes toward AI, and how those attitudes connect to trust in automation and in specific AI models such as ChatGPT and Ernie Bot. Montag's research sits at the intersection of psychology, neuroscience, behavioral economics, and computer science, focusing on how AI affects the human mind. Why it matters: Understanding public perception of AI is crucial for responsible development and deployment, especially in the Arab world, where cultural and demographic factors can significantly shape attitudes.

Developing and Validating the Arabic Version of the Attitudes Toward Large Language Models Scale

arXiv ·

This paper presents the development and validation of an Arabic version of the Attitudes Toward Large Language Models scales (AT-GLLM and AT-PLLM), adapted from the original English versions. The scales were translated and then tested on a sample of 249 Arabic-speaking adults. The translated scales demonstrated strong psychometric properties, including a two-factor structure, measurement invariance across genders, and good reliability and validity. Why it matters: This provides a culturally relevant tool for assessing attitudes toward LLMs in the Arab world, which is crucial for localized research and policy-making in the rapidly growing field of Arabic AI.
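To make the reliability claim concrete: internal-consistency reliability of a scale like this is commonly summarized with Cronbach's alpha. The sketch below is purely illustrative and is not the paper's code; it uses synthetic responses (249 respondents, matching the sample size, and five hypothetical items loading on one latent attitude factor) to show how alpha is computed from an item-score matrix.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    n_items = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scale scores
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic data: a shared latent attitude plus item-level noise,
# so the items are internally consistent by construction.
rng = np.random.default_rng(0)
latent = rng.normal(size=(249, 1))                # latent attitude per respondent
items = latent + 0.5 * rng.normal(size=(249, 5))  # five hypothetical items
alpha = cronbach_alpha(items)
print(f"Cronbach's alpha: {alpha:.2f}")
```

With these simulation settings alpha comes out well above the conventional 0.70 threshold for "good reliability"; real validation studies report alpha alongside factor-analytic evidence such as the two-factor structure noted above.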

Machines and morality: judging right and wrong with large-language models

MBZUAI ·

MBZUAI Professor Monojit Choudhury co-authored a study on LLMs and their capacity for moral reasoning, presented at the 18th Conference of the European Chapter of the Association for Computational Linguistics (EACL) in Malta. The study included contributions from Aditi Khandelwal, Utkarsh Agarwal, and Kumar Tanmay of Microsoft. The research explores AI alignment: ensuring AI systems reflect human values, moral principles, and ethical considerations. Why it matters: The study offers insight into how LLMs handle complex ethical questions, which is important for guiding AI development in a way that is consistent with human values.