GCC AI Research

Results for "rule-based systems"

Web-Based Expert System for Civil Service Regulations: RCSES

arXiv ·

The paper introduces a web-based expert system called RCSES for civil service regulations in Saudi Arabia. The system covers 17 regulations, using XML for knowledge representation and ASP.NET for rule-based inference. RCSES was validated by domain experts and technical users, and compared favorably to other web-based expert systems.

Machine learning 101

MBZUAI ·

Machine learning (ML) algorithms use data to make decisions or predictions, improving over time as more data is provided. ML is a subset of AI focused on models that learn from data, in contrast with rule-based systems. ML is superior in scenarios where rules cannot be exhaustively enumerated, such as interpreting medical scans, but rule-based systems and ML often complement each other. Why it matters: This overview clarifies the role of machine learning within the broader field of AI, highlighting its data-driven approach and its advantages over traditional rule-based systems in complex decision-making scenarios.
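The contrast the overview draws can be made concrete with a toy example (not from the article): a hand-written expert rule versus a parameter learned from data for the same one-dimensional decision. The fever threshold and the sample data below are invented for illustration.

```python
# Illustrative contrast: a fixed expert rule vs. a "learned" decision
# boundary. All numbers here are made up for the example.

def rule_based(temp_c):
    """Fixed expert rule: flag a fever above 38 C."""
    return temp_c > 38.0

def learn_threshold(samples):
    """Learn a boundary as the midpoint between the two class means.

    samples: list of (temp_c, is_fever) pairs.
    """
    pos = [t for t, y in samples if y]
    neg = [t for t, y in samples if not y]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

data = [(36.5, False), (37.0, False), (38.6, True), (39.2, True)]
threshold = learn_threshold(data)  # boundary derived from data, not hand-set

def ml_based(temp_c):
    return temp_c > threshold

print(rule_based(38.9), ml_based(38.9))
```

The rule encodes fixed expert knowledge; the learned version moves its boundary whenever the data changes, which is the trade-off the overview describes.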

Developing an AI system that thinks like a scientist

KAUST ·

KAUST researchers developed a new algorithm for detecting cause and effect in large datasets. The algorithm aims to find underlying models that generate data, helping uncover cause-and-effect dynamics. It could aid researchers across fields like cell biology and genetics by answering questions that typical machine learning cannot. Why it matters: This advancement could equip current machine learning methods with abilities to better deal with abstraction, inference, and concepts such as cause and effect.

Rational Counterfactuals

arXiv ·

This paper introduces rational counterfactuals, a method for identifying counterfactuals that maximize the attainment of a desired consequent. The approach aims to identify the antecedent that leads to a specific outcome for rational decision-making. The theory is applied to identify variable values that contribute to peace, such as Allies, Contingency, Distance, Major Power, Capability, Democracy, and Economic Interdependency. Why it matters: The research provides a framework for analyzing and promoting conditions conducive to peace using counterfactual reasoning.
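The core idea, selecting the antecedent whose predicted consequent comes closest to a desired outcome, can be sketched as a brute-force search. This is a hedged illustration, not the paper's method: the toy model, its coefficients, and the two binary factors (loosely named after the paper's Democracy and Economic Interdependency variables) are all assumptions.

```python
# Hedged sketch: enumerate candidate antecedents and pick the one whose
# modeled consequent is closest to the desired value.
from itertools import product

def rational_counterfactual(model, candidates, desired):
    """Return the antecedent minimizing |model(x) - desired|."""
    return min(candidates, key=lambda x: abs(model(x) - desired))

def toy_model(x):
    # Invented stand-in for a learned outcome model: probability of
    # "peace" from two binary factors (democracy, interdependency).
    democracy, interdependency = x
    return 0.2 + 0.4 * democracy + 0.3 * interdependency

candidates = list(product([0, 1], repeat=2))  # all binary antecedents
best = rational_counterfactual(toy_model, candidates, desired=1.0)
print(best)
```

With these invented coefficients the search selects the antecedent with both factors present, mirroring the paper's use of counterfactual search to identify conditions conducive to peace.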

Automated Decision Making for Safety Critical Applications

MBZUAI ·

Mykel Kochenderfer from Stanford University gave a talk on building robust decision-making systems for autonomous systems, highlighting the challenges of balancing safety and efficiency in uncertain environments. The talk addressed computational tractability and establishing trust in these systems. Kochenderfer outlined methodologies and research applications for building safer systems, drawing from his work on air traffic control, unmanned aircraft, and automated driving. Why it matters: The development of safe and reliable autonomous systems is crucial for various applications in the region, and insights from experts like Kochenderfer can guide research and development efforts at institutions like MBZUAI.

ILION: Deterministic Pre-Execution Safety Gates for Agentic AI Systems

arXiv ·

The paper introduces ILION, a deterministic execution gate designed to ensure the safety of autonomous AI agents by classifying proposed actions as either BLOCK or ALLOW. ILION uses a five-component cascade architecture that operates without statistical training, API dependencies, or labeled data. Evaluation against existing text-safety infrastructures demonstrates ILION's superior performance in preventing unauthorized actions, achieving an F1 score of 0.8515 with sub-millisecond latency.
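The gate's deterministic BLOCK/ALLOW behavior can be sketched as an ordered cascade of checks that needs no trained model or external API. The specific checks and field names below are hypothetical, not ILION's actual components.

```python
# Hedged sketch of a deterministic pre-execution gate: run an ordered
# cascade of checks and return BLOCK on the first failure, else ALLOW.
# The two checks here are invented examples, not ILION's components.

BLOCK, ALLOW = "BLOCK", "ALLOW"

def no_destructive_command(action):
    return "rm -rf" not in action.get("command", "")

def within_allowed_paths(action):
    return action.get("path", "/workspace").startswith("/workspace")

CASCADE = [no_destructive_command, within_allowed_paths]  # ordered checks

def gate(action):
    """Classify a proposed agent action with no statistical training."""
    for check in CASCADE:
        if not check(action):
            return BLOCK
    return ALLOW

print(gate({"command": "ls", "path": "/workspace/src"}))
print(gate({"command": "rm -rf /", "path": "/workspace"}))
```

Because every check is a pure predicate on the proposed action, the cascade is deterministic and fast, which is consistent with the sub-millisecond latency the evaluation reports.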