MBZUAI researchers presented a new causal discovery method at NeurIPS that identifies relationships among both deterministic and non-deterministic variables. The method builds directed graphs that visualize these relationships, combining probabilistic and deterministic principles. The lead author, Longkang Li, aims to apply causal discovery to healthcare and biology for a better understanding of diseases. Why it matters: This research advances the field of causal inference, potentially improving applications in areas like healthcare where understanding complex relationships is critical.
KAUST Professor Raul Tempone, an expert in Uncertainty Quantification (UQ), has been appointed as an Alexander von Humboldt Professor at RWTH Aachen University in Germany. This professorship will enable him to further his research on mathematics for uncertainty quantification with new collaborators. Tempone believes the KAUST Strategic Initiative for Uncertainty Quantification (SRI-UQ) contributed to this award. Why it matters: This appointment enhances KAUST's visibility and facilitates cross-fertilization between European and KAUST research groups, benefiting both institutions and attracting talent.
A new framework for constructing confidence sets for causal orderings within structural equation models (SEMs) is presented. It leverages a residual bootstrap procedure to test the goodness-of-fit of candidate causal orderings, quantifying uncertainty in causal discovery. The method is computationally efficient and suitable for medium-sized problems while maintaining theoretical guarantees as the number of variables increases. Why it matters: This offers a new dimension of uncertainty quantification that enhances the robustness and reliability of causal inference in complex systems, though the work has no stated connection to the Middle East.
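The general idea of a residual bootstrap test for a causal ordering can be illustrated with a minimal sketch. This is not the paper's procedure, just a toy version under strong assumptions: a linear SEM with centered data and skewed (non-Gaussian) noise, and an invented goodness-of-fit statistic. Since OLS residuals are uncorrelated with predecessors by construction, the statistic correlates predecessors with *squared* residuals, which a wrong ordering leaves dependent; the p-value comes from refitting on data regenerated with resampled residuals. All function names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear_sem(X, order):
    """For a candidate causal ordering, regress each variable on its
    predecessors (data assumed centered). Returns coefficients and residuals."""
    coefs, resid = {}, np.zeros_like(X)
    for k, j in enumerate(order):
        preds = order[:k]
        if preds:
            b, *_ = np.linalg.lstsq(X[:, preds], X[:, j], rcond=None)
            coefs[j] = b
            resid[:, j] = X[:, j] - X[:, preds] @ b
        else:
            coefs[j] = np.zeros(0)
            resid[:, j] = X[:, j]  # root variable: residual is the variable itself
    return coefs, resid

def dependence_stat(X, resid, order):
    """Toy goodness-of-fit statistic: largest |corr| between a predecessor
    and a downstream squared residual. Squaring exposes the higher-order
    dependence a wrong ordering leaves behind under non-Gaussian noise."""
    stats = [0.0]
    for k, j in enumerate(order):
        for p in order[:k]:
            stats.append(abs(np.corrcoef(X[:, p], resid[:, j] ** 2)[0, 1]))
    return max(stats)

def bootstrap_pvalue(X, order, B=100):
    """Residual-bootstrap p-value for how well 'order' fits the data."""
    coefs, resid = fit_linear_sem(X, order)
    t_obs = dependence_stat(X, resid, order)
    n = X.shape[0]
    exceed = 0
    for _ in range(B):
        Xb = np.zeros_like(X)
        for k, j in enumerate(order):  # regenerate data under the fitted SEM
            e = resid[rng.integers(0, n, n), j]  # resample residuals
            preds = order[:k]
            Xb[:, j] = (Xb[:, preds] @ coefs[j] if preds else 0.0) + e
        _, rb = fit_linear_sem(Xb, order)
        exceed += dependence_stat(Xb, rb, order) >= t_obs
    return (1 + exceed) / (1 + B)

# Toy chain X0 -> X1 -> X2 with skewed (centered exponential) noise.
n = 500
e = rng.exponential(1.0, (n, 3)) - 1.0
X = np.zeros((n, 3))
X[:, 0] = e[:, 0]
X[:, 1] = 1.5 * X[:, 0] + e[:, 1]
X[:, 2] = 1.5 * X[:, 1] + e[:, 2]
X -= X.mean(axis=0)

p_good = bootstrap_pvalue(X, [0, 1, 2])  # compatible ordering: large p-value
p_bad = bootstrap_pvalue(X, [2, 1, 0])   # reversed ordering: small p-value
print(p_good, p_bad)
```

A confidence set in this spirit would collect every ordering whose p-value exceeds the chosen level, e.g. all orderings with p ≥ 0.10 for a 90% set.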
MBZUAI Professor Kun Zhang's research focuses on causality in AI systems, aiming to understand underlying processes beyond data correlation. He emphasizes the importance of causality and graphical representations to model why systems produce observations and account for uncertainty. Zhang served as a program chair at the 38th Conference on Uncertainty in Artificial Intelligence (UAI) in Eindhoven. Why it matters: This highlights the growing importance of causality and uncertainty in AI research, crucial for responsible AI deployment and decision-making in the region.
Patrick van der Smagt, Director of AI Research at Volkswagen Group, discussed the use of generative machine learning models for predicting and controlling complex stochastic systems in robotics. The talk highlighted examples in robotics and beyond and addressed the challenges of achieving quality and trust in AI systems. He also mentioned his involvement in a European industry initiative on trust in AI and his membership in the AI Council of the State of Bavaria. Why it matters: Control in robotics and trust in AI are key issues for the further development of autonomous systems, especially in industrial applications within the GCC region.
This paper introduces rational counterfactuals, a method for identifying counterfactuals that maximize the attainment of a desired consequent. The approach seeks the antecedent that leads to a specific outcome, supporting rational decision-making. The theory is applied to identify the values of variables such as Allies, Contingency, Distance, Major Power, Capability, Democracy, and Economic Interdependency that contribute to peace. Why it matters: The research provides a framework for analyzing and promoting conditions conducive to peace using counterfactual reasoning.
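The core search can be sketched in a few lines: given some model of the consequent, enumerate the antecedents reachable by changing a set of mutable variables and keep the one that maximizes the desired outcome. This is a toy illustration only; the linear `peace_score` model and its weights are invented stand-ins, not the paper's model, and the variable names merely echo those listed above.

```python
from itertools import product

# Hypothetical linear score standing in for a model of peace given an
# antecedent; the weights are invented purely for illustration.
WEIGHTS = {"allies": 0.8, "contingency": -0.3, "distance": 0.4,
           "major_power": -0.5, "capability": -0.2,
           "democracy": 0.9, "interdependency": 0.7}

def peace_score(antecedent):
    """Score an antecedent (dict of 0/1 variable settings) for peace."""
    return sum(WEIGHTS[k] * v for k, v in antecedent.items())

def rational_counterfactual(factual, mutable):
    """Enumerate antecedents reachable by toggling the mutable variables
    and return the one maximizing the desired consequent (peace score)."""
    best = dict(factual)
    for values in product([0, 1], repeat=len(mutable)):
        candidate = dict(factual)
        candidate.update(dict(zip(mutable, values)))
        if peace_score(candidate) > peace_score(best):
            best = candidate
    return best

factual = {k: 0 for k in WEIGHTS}  # a hypothetical status quo
best = rational_counterfactual(factual, ["allies", "democracy", "interdependency"])
print(best)  # the toggles with positive weight are all switched on
```

Under this toy score, the rational counterfactual sets all three mutable variables to 1, since each carries a positive weight; a realistic application would replace the score with a fitted model and restrict the search to actionable variables.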
A new paper from MBZUAI researchers explores using ChatGPT to combat the spread of fake news. The researchers, including Preslav Nakov and Liangming Pan, demonstrate that ChatGPT can be used to fact-check published information. Their paper, "Fact-Checking Complex Claims with Program-Guided Reasoning," was accepted at ACL 2023. Why it matters: This research highlights the potential of large language models to address the growing challenge of misinformation, with implications for maintaining information integrity in the digital age.
Saber Salehkaleybar from EPFL presented a talk on causal discovery, focusing on learning causal relationships from observational data and through interventions. He discussed an approximation algorithm for experiment design under budget constraints, with applications in gene-regulatory networks. The talk also covered improvements that reduce the computational complexity of experiment design algorithms. Why it matters: Efficient experiment design makes causal discovery practical under real-world resource constraints, supporting more intelligent decision-making in fields such as genomics.