G42 has launched Digital Embassies and Greenshield, a sovereign operating model that lets nations deploy AI securely while retaining legal control over data, systems, and policies, regardless of where the infrastructure sits. Digital Embassies establish government-to-government legal constructs that define jurisdiction, while Greenshield, implemented by Core42, translates sovereign policy into execution, applying consistent sovereign controls across environments to govern identity and access, data handling, security, compliance, auditability, and continuity. Why it matters: This framework could accelerate AI adoption across the region by helping governments overcome infrastructure readiness gaps without compromising data sovereignty.
G42 and Cerebras, in partnership with MBZUAI and C-DAC, will deploy an 8-exaflop AI supercomputer in India. The system will operate under India's governance frameworks, with all data remaining within national jurisdiction to meet sovereign security and compliance requirements. The supercomputer will be accessible to Indian researchers, startups, and government entities under the India AI Mission.
This paper proposes a framework for understanding AI sovereignty as a balance between autonomy and interdependence, accounting for global data flows, supply chains, and standards. It introduces a planner's model with policy heuristics for equalizing marginal returns across sovereignty pillars and for setting the degree of openness to foreign inputs. The model is applied to India and the Middle East (Saudi Arabia and the UAE), finding that managed interdependence, rather than isolation, is key to AI sovereignty.
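The equal-marginal-returns heuristic can be illustrated with a small sketch. This is not the paper's model: the pillar names, the concave return function `r_i(x) = a_i * log(1 + x)`, and the budget figures below are all illustrative assumptions. The idea is simply that a planner with a fixed budget should keep allocating the next increment to whichever pillar currently yields the highest marginal return; at the optimum, marginal returns are (approximately) equalized.

```python
import math

def allocate(weights, budget, step=0.01):
    """Greedy allocation under assumed concave returns a_i * log(1 + x).

    Each increment goes to the pillar with the highest current marginal
    return a_i / (1 + x_i), which drives marginal returns toward equality.
    """
    alloc = {p: 0.0 for p in weights}
    for _ in range(int(budget / step)):
        best = max(weights, key=lambda p: weights[p] / (1 + alloc[p]))
        alloc[best] += step
    return alloc

# Hypothetical pillars and return weights, purely for illustration.
pillars = {"compute": 4.0, "data": 2.0, "talent": 2.0, "standards": 1.0}
result = allocate(pillars, budget=10.0)
marginals = {p: pillars[p] / (1 + result[p]) for p in pillars}
```

After the loop, `marginals` are nearly equal across pillars, which is the planner's first-order condition the paper's heuristic refers to.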
MBZUAI researchers have developed 'Byzantine antidote' (Bant), a novel defense mechanism against Byzantine attacks in federated learning. Bant uses trust scores and a trial function to dynamically filter and neutralize corrupted updates, even when a majority of nodes are compromised. The research was presented at the 40th Annual AAAI Conference on Artificial Intelligence.
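The summary names Bant's two ingredients, trust scores and a trial function, but not their exact formulation, so the following is a generic sketch of that style of defense rather than Bant itself. Here the trial function is assumed to be a held-out loss: each client's update is tentatively applied, trust is an exponential moving average of whether the update improved the trial loss, and aggregation is a trust-weighted mean that drops clients whose trust falls below a threshold. The constants `beta` and `tau` are illustrative.

```python
def aggregate(global_model, updates, trust, trial_loss, beta=0.5, tau=0.2):
    """One round of trust-filtered federated aggregation (illustrative).

    updates: {client_id: delta}, where delta is a list of floats.
    trust:   {client_id: score in [0, 1]}, updated in place per round.
    trial_loss: callable scoring a model (lower is better).
    """
    base = trial_loss(global_model)
    for cid, delta in updates.items():
        # Trial function: does tentatively applying the update help?
        tentative = [w + d for w, d in zip(global_model, delta)]
        improved = 1.0 if trial_loss(tentative) <= base else 0.0
        trust[cid] = beta * trust.get(cid, 1.0) + (1 - beta) * improved

    # Keep only clients above the trust threshold; weight by trust.
    kept = {cid: d for cid, d in updates.items() if trust[cid] >= tau}
    if not kept:
        return list(global_model), trust
    total = sum(trust[cid] for cid in kept)
    agg = [0.0] * len(global_model)
    for cid, delta in kept.items():
        w = trust[cid] / total
        for i, d in enumerate(delta):
            agg[i] += w * d
    return [w + a for w, a in zip(global_model, agg)], trust
```

Repeated rounds drive a consistently harmful client's trust toward zero, so it is eventually excluded from aggregation even if it was initially kept; how Bant sustains this when a majority of nodes are compromised is the paper's contribution and is not captured by this sketch.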
MBZUAI has released Jais and Jais-chat, two new open generative large language models (LLMs) focused on Arabic. The 13-billion-parameter models are based on the GPT-3 architecture and pretrained on Arabic, English, and code. Evaluations show state-of-the-art performance on Arabic knowledge and reasoning benchmarks, with competitive English performance.