GCC AI Research


Results for "corruption"

SSRC’s Dr. Abdelrahman AlMahmoud to Participate in WGISTA Webinar

TII ·

Dr. Abdelrahman AlMahmoud from TII's Secure Systems Research Center (SSRC) will participate in a WGISTA webinar on adopting a digital mindset in auditing and fighting corruption. The webinar, organized by the International Organization of Supreme Audit Institutions (INTOSAI), will discuss the impact of emerging technologies on public sector auditing. Dr. AlMahmoud will share insights on how AI and Big Data can enable auditors to process data at a new scale. Why it matters: This highlights the UAE's growing role in applying advanced technologies like AI and big data to improve governance and accountability in the public sector.

For better or worse: How AI can impact elections

MBZUAI ·

MBZUAI researchers are studying how AI can be used to combat disinformation and improve news verification during elections, as AI amplifies the volume and speed of fake news. Dilshod Azizov is using machine learning to spot patterns in news coverage that can improve verification, while Preslav Nakov's FRAPPE system identifies persuasive techniques and framing in news articles. FRAPPE uses machine learning and NLP to analyze how news is presented and reported, aiming to help users understand its underlying context. Why it matters: This research highlights the potential of AI to both negatively and positively impact democratic processes, emphasizing the need for tools to analyze and verify information in the face of increasing AI-generated disinformation.

Detecting Propaganda Techniques in Code-Switched Social Media Text

arXiv ·

This paper introduces a new task: detecting propaganda techniques in code-switched text. The authors created and released a corpus of 1,030 English-Roman Urdu code-switched texts annotated with 20 propaganda techniques. Experiments show the importance of directly modeling multilinguality and using the right fine-tuning strategy for this task.
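Since each text in the corpus is annotated with a subset of 20 propaganda techniques, the task is naturally framed as multi-label classification. A minimal sketch of the label encoding step, using hypothetical technique names and an invented example text (not drawn from the released corpus):

```python
# Hedged sketch: multi-label encoding for propaganda-technique annotations.
# The technique names and sample text below are hypothetical illustrations,
# not taken from the paper's actual 20-technique inventory or corpus.

TECHNIQUES = ["loaded_language", "name_calling", "doubt", "flag_waving"]

def encode_labels(annotations, techniques=TECHNIQUES):
    """Map a list of technique names to a binary indicator vector."""
    return [1 if t in annotations else 0 for t in techniques]

# A made-up English-Roman Urdu code-switched example with invented labels.
sample = {
    "text": "Yeh policy bilkul disaster hai, sab jante hain!",
    "labels": ["loaded_language", "doubt"],
}

vector = encode_labels(sample["labels"])
print(vector)  # [1, 0, 1, 0]
```

Vectors of this form would then serve as targets when fine-tuning a multilingual encoder with a per-technique sigmoid output, which is one common setup for such tasks.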

Social Media Influencers, Misinformation, and the Threat to Elections

MBZUAI ·

A panel discussion hosted by MBZUAI in collaboration with the Manara Center for Coexistence and Dialogue addressed misinformation and its threat to elections. The talk covered the reasons behind the rise of misinformation, citizen perspectives, and the role of social media influencers. Two cases, the Indian general elections of 2024 and the upcoming US presidential elections in November 2024, were used to describe the contours of misinformation. Why it matters: Understanding the dynamics of misinformation, especially through social media influencers, is crucial for safeguarding democratic processes in the region and globally.

Towards Trustworthy AI-Generated Text

MBZUAI ·

Xiuying Chen from KAUST presented her work on improving the trustworthiness of AI-generated text, focusing on accuracy and robustness. Her research analyzes causes of hallucination in language models, such as weak semantic understanding and neglect of input knowledge, and proposes solutions. She also demonstrated the vulnerability of language models to input noise and proposed augmentation techniques to enhance robustness. Why it matters: Improving the reliability of AI-generated text is crucial for its deployment in sensitive domains like healthcare and scientific discovery, where accuracy is paramount.

Data Laundering: Artificially Boosting Benchmark Results through Knowledge Distillation

arXiv ·

Researchers at MBZUAI have demonstrated a method called "Data Laundering" to artificially boost language model benchmark scores using knowledge distillation. The technique covertly transfers benchmark-specific knowledge, leading to inflated accuracy without genuine improvements in reasoning. The study highlights a vulnerability in current AI evaluation practices and calls for more robust benchmarks.
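The mechanism rests on standard knowledge distillation: a student is trained to match a teacher's softened output distribution, so whatever the teacher absorbed (including benchmark-specific knowledge) transfers covertly. A minimal sketch of the distillation loss itself; the temperature value and toy logits are illustrative assumptions, not the paper's setup:

```python
import math

def softmax(logits, temperature=1.0):
    """Softened probability distribution over logits."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened outputs.

    If the teacher was (even indirectly) fit to benchmark test items,
    minimizing this loss copies that benchmark-specific knowledge into
    the student without any genuine gain in reasoning ability.
    """
    teacher_probs = softmax(teacher_logits, temperature)
    student_probs = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher_probs, student_probs))

# The loss is smallest when the student reproduces the teacher exactly.
matched = distillation_loss([2.0, 1.0, 0.0], [2.0, 1.0, 0.0])
mismatched = distillation_loss([0.0, 1.0, 2.0], [2.0, 1.0, 0.0])
print(matched < mismatched)  # True
```

This is why the practice is hard to detect from scores alone: the student never sees benchmark data directly, yet inherits it through the soft labels.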

LLMEffiChecker: Understanding and Testing Efficiency Degradation of Large Language Models

arXiv ·

The paper introduces LLMEffiChecker, a tool to test the computational efficiency robustness of LLMs by identifying vulnerabilities that can significantly degrade performance. LLMEffiChecker uses both white-box (gradient-guided perturbation) and black-box (causal inference-based perturbation) methods to delay the generation of the end-of-sequence token. Experiments on nine public LLMs demonstrate that LLMEffiChecker can substantially increase response latency and energy consumption with minimal input perturbations.
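The black-box idea can be sketched without access to gradients: search over small input perturbations and keep whichever one makes the model generate longest (a proxy for latency and energy). The stub "model" below is a deterministic stand-in invented for illustration; the real tool queries an actual LLM and its search strategy is causal-inference-based rather than this naive greedy loop:

```python
# Hedged sketch of a black-box efficiency attack, loosely inspired by the
# paper's goal of delaying the end-of-sequence token. `toy_generate` is a
# hypothetical stub: its rule that rare characters lengthen output is an
# assumption for demonstration only.

def toy_generate(text, max_tokens=50):
    """Stub model: returns a deterministic generated-token count."""
    rare = sum(1 for ch in text if not ch.isascii())
    return min(max_tokens, 5 + 3 * rare)

def black_box_search(text, candidates="éñ東"):
    """Greedy single-character substitution maximizing output length."""
    best_text, best_len = text, toy_generate(text)
    for i in range(len(text)):
        for ch in candidates:
            perturbed = text[:i] + ch + text[i + 1:]
            length = toy_generate(perturbed)
            if length > best_len:
                best_text, best_len = perturbed, length
    return best_text, best_len

base = "translate this sentence"
adv, n = black_box_search(base)
print(toy_generate(base), n)  # 5 8
```

Even this toy version shows the attack surface: a one-character change to the input can multiply the generation cost, which is exactly the failure mode the tool is built to surface.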

The search for an antidote to Byzantine attacks

MBZUAI ·

MBZUAI researchers have developed a new method called "Byzantine antidote" (Bant) to defend federated learning systems against Byzantine attacks, where malicious nodes intentionally disrupt the training process. Bant uses trust scores and a trial function to dynamically filter out corrupted updates, even when most nodes are compromised. The system can identify poorly labeled data while still training models effectively, addressing both unconscious mistakes and deliberate sabotage. Why it matters: This research enhances the reliability and security of federated learning in sensitive sectors like healthcare and finance, enabling safer collaborative AI development.
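The core idea of trust-score filtering can be sketched in a few lines: score each client update against a trial update computed on a small trusted batch, drop updates that score at or below a threshold, and average the rest weighted by score. This is only loosely inspired by Bant; the actual method's trust scores and trial function differ, and the cosine-similarity scoring, threshold, and toy updates below are assumptions for illustration:

```python
# Hedged sketch of trust-score-based robust aggregation for federated
# learning. Not the Bant algorithm itself -- a simplified illustration.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return sum(a * a for a in u) ** 0.5

def trust_score(update, trial_update):
    """Cosine similarity to the trial update, clipped at zero."""
    denom = norm(update) * norm(trial_update) or 1.0
    return max(0.0, dot(update, trial_update) / denom)

def aggregate(updates, trial_update, threshold=0.0):
    """Drop low-trust updates, then average the rest weighted by score."""
    scores = [trust_score(u, trial_update) for u in updates]
    kept = [(u, s) for u, s in zip(updates, scores) if s > threshold]
    total = sum(s for _, s in kept) or 1.0
    dim = len(trial_update)
    return [sum(s * u[i] for u, s in kept) / total for i in range(dim)]

honest = [[1.0, 1.0], [0.9, 1.1]]
byzantine = [[-5.0, -5.0]]   # sign-flipped update from a malicious node
trial = [1.0, 1.0]           # update from a small trusted/trial batch
print(aggregate(honest + byzantine, trial))
```

The sign-flipped Byzantine update gets a trust score of zero and is filtered out, so the aggregate stays close to the honest direction regardless of how large the malicious update is.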