This paper introduces rational counterfactuals: given a desired consequent, the method identifies the antecedent that maximizes its attainment, supporting rational decision-making. The theory is applied to identify values of variables that contribute to peace, such as Allies, Contingency, Distance, Major Power, Capability, Democracy, and Economic Interdependency. Why it matters: The research provides a framework for analyzing and promoting conditions conducive to peace using counterfactual reasoning.
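The core idea (choosing the antecedent that best attains a desired consequent) can be sketched as a search over candidate variable settings under a predictive model. The scorer and weights below are hypothetical stand-ins for illustration; the paper's fitted model is not given in this summary.

```python
import itertools
import numpy as np

VARS = ["Allies", "Contingency", "Distance", "MajorPower",
        "Capability", "Democracy", "EconInterdep"]

# Hypothetical stand-in weights (NOT the paper's fitted model):
# higher score means a higher predicted probability of peace.
W = np.array([0.8, -0.3, 0.5, -0.2, -0.4, 1.0, 0.9])

def predict_peace(x):
    """Logistic score of a variable setting; a placeholder for any fitted classifier."""
    return 1.0 / (1.0 + np.exp(-(W @ x)))

def rational_counterfactual(feasible_values, predict):
    """Exhaustively search antecedents (variable settings) and return the
    one that maximizes the desired consequent (predicted peace)."""
    best = max(itertools.product(*feasible_values),
               key=lambda v: predict(np.array(v, dtype=float)))
    return dict(zip(VARS, best))

# Binary settings for each of the seven variables: 2**7 = 128 candidates.
antecedent = rational_counterfactual([(0, 1)] * len(VARS), predict_peace)
```

With a monotone scorer like this one, the search simply switches on every positively weighted variable; with a real fitted model the same exhaustive (or heuristic) search applies unchanged.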
MBZUAI researchers are studying how AI can be used to combat disinformation and improve news verification during elections, as AI amplifies the volume and speed of fake news. Dilshod Azizov is using machine learning to spot patterns in news coverage that can improve verification, while Preslav Nakov's FRAPPE system identifies persuasive techniques and framing in news articles. FRAPPE uses machine learning and NLP to analyze how news is presented and reported, aiming to help users understand its underlying context. Why it matters: This research highlights AI's potential to both harm and support democratic processes, underscoring the need for tools that analyze and verify information amid increasing AI-generated disinformation.
A panel discussion hosted by MBZUAI in collaboration with the Manara Center for Coexistence and Dialogue addressed misinformation and its threat to elections. The discussion covered the reasons behind the rise of misinformation, citizen perspectives, and the role of social media influencers. Two cases, the 2024 Indian general elections and the US presidential elections scheduled for November 2024, were used to describe the contours of misinformation. Why it matters: Understanding the dynamics of misinformation, especially through social media influencers, is crucial for safeguarding democratic processes in the region and globally.
A new framework for constructing confidence sets for causal orderings within structural equation models (SEMs) is presented. It leverages a residual bootstrap procedure to test the goodness-of-fit of causal orderings, quantifying uncertainty in causal discovery. The method is computationally efficient and suitable for medium-sized problems while maintaining theoretical guarantees as the number of variables increases. Why it matters: This offers a new dimension of uncertainty quantification that enhances the robustness and reliability of causal inference in complex systems, though the work has no stated connection to the Middle East.
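The procedure described (fit a candidate causal ordering by sequential regressions, then use a residual bootstrap to test its goodness of fit) can be illustrated with a minimal sketch for a linear SEM. The dependence statistic below is an ad-hoc choice for illustration, not the paper's test statistic.

```python
import numpy as np

def fit_ordering(X, order):
    """OLS-regress each variable on its predecessors in `order`.
    Returns the residual matrix R and the fitted coefficient vectors."""
    n = X.shape[0]
    R = np.zeros_like(X)
    betas = []
    for k, j in enumerate(order):
        A = np.column_stack([np.ones(n)] + [X[:, i] for i in order[:k]])
        beta, *_ = np.linalg.lstsq(A, X[:, j], rcond=None)
        R[:, j] = X[:, j] - A @ beta
        betas.append(beta)
    return R, betas

def gof_stat(X, order):
    """Ad-hoc misfit statistic: largest |corr| between a squared residual
    and a squared predecessor. A correct linear ordering with independent
    errors should leave this near zero."""
    R, _ = fit_ordering(X, order)
    s = 0.0
    for k, j in enumerate(order):
        for i in order[:k]:
            s = max(s, abs(np.corrcoef(R[:, j] ** 2, X[:, i] ** 2)[0, 1]))
    return s

def ordering_pvalue(X, order, n_boot=200, seed=0):
    """Residual-bootstrap p-value for H0: `order` is a valid causal ordering."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    R, betas = fit_ordering(X, order)
    t_obs = gof_stat(X, order)
    hits = 0
    for _ in range(n_boot):
        Xb = np.zeros_like(X)
        for k, j in enumerate(order):
            # resample each variable's residuals independently (H0: independent errors)
            resampled = R[rng.integers(0, n, n), j]
            A = np.column_stack([np.ones(n)] + [Xb[:, i] for i in order[:k]])
            Xb[:, j] = A @ betas[k] + resampled
        hits += gof_stat(Xb, order) >= t_obs
    return (1 + hits) / (1 + n_boot)

# Demo: X0 -> X1 with non-Gaussian noise, so the reversed ordering misfits.
rng = np.random.default_rng(1)
x0 = rng.normal(size=2000)
x1 = 1.5 * x0 + rng.uniform(-1.0, 1.0, size=2000)
X = np.column_stack([x0, x1])
p_true = ordering_pvalue(X, [0, 1])
p_reversed = ordering_pvalue(X, [1, 0])
```

A confidence set in the summary's sense would then collect all orderings whose test is not rejected at the chosen level, rather than committing to a single ordering.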
KAUST alumnus Grant Hill-Cawthorne, who earned a Ph.D. in pathogen genomics in 2013, is now the director of research at the UK Parliament. Hill-Cawthorne's work at KAUST involved establishing the Pathogen Genomics Laboratory with a focus on mass gatherings like Hajj and their influence on disease spread. He also worked with the Saudi Ministry of Health to study respiratory samples from Hajj pilgrims. Why it matters: This highlights KAUST's role in training researchers who go on to influence global health policy and research, particularly in areas relevant to Saudi Arabia's unique context.
Michael Hickner, an Associate Professor from Penn State University, visited KAUST as part of the CRDF-KAUST-OSR Visiting Scholar Fellowship Program. Hickner specializes in Materials Science and Engineering, Chemistry, and Chemical Engineering. The visit was documented with photos by Meres J. Weche. Why it matters: Such programs foster international collaboration and knowledge exchange in science and engineering between KAUST and other leading institutions.
This study compares AI uptake in the UAE and Kuwait, analyzing how constitutional, collective-choice, and operational rules shape AI implementation and its impact on citizen centricity and public value creation. It finds that the UAE's concentrated authority and pro-innovation environment enable scaling AI initiatives, while Kuwait's dispersed governance and cautious approach limit progress despite similar resources. The research highlights the importance of vertical rule coherence over wealth in determining AI's public-value yield. Why it matters: Governance design, rather than resources alone, appears to determine whether national AI investments translate into citizen-centric public value.