GCC AI Research


Results for "dark web"

Hackers and the Internet of Things

KAUST ·

Cybersecurity specialist James Lyne spoke at KAUST's 2018 Winter Enrichment Program (WEP) about cybersecurity threats and techniques. Lyne demonstrated hacking and phishing attacks, emphasizing how hackers can exploit personal information by bypassing basic security measures. He highlighted the increasing sophistication of cybercriminals and the existence of illicit marketplaces on the dark web where hacking applications are sold. Why it matters: Raising awareness of cybersecurity threats is crucial for protecting individuals and organizations in Saudi Arabia and the broader region as digital infrastructure expands.

Evaluating Web Search Engines Results for Personalization and User Tracking

arXiv ·

This paper presents six experiments evaluating personalization and user tracking in web search engine results. The experiments compare search results across VPN location (UAE versus other countries), logged-in status, network type, search engine, browser, and trained Google accounts. The study measures the total number of hits, the first hit, and the correlation between hit lists to identify patterns of personalization. Why it matters: The findings shed light on the extent of filter bubble effects and potential biases in search results for users in the UAE and globally.
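As an illustrative sketch of the kind of comparison such experiments involve — the function names and data below are assumptions for illustration, not the paper's code — two ranked result lists for the same query can be compared on shared hits, first-hit agreement, and rank correlation:

```python
# Hypothetical sketch (not the paper's code): comparing two search-result
# lists for personalization signals analogous to the study's "total hits",
# "first hit", and correlation-between-hits measures.

def compare_results(results_a, results_b):
    """Compare two ranked lists of result URLs from the same query."""
    overlap = set(results_a) & set(results_b)          # hits common to both lists
    same_first_hit = results_a[0] == results_b[0]      # top-result agreement

    # Rank agreement over shared hits: Spearman-style correlation on ranks.
    shared = sorted(overlap)
    ranks_a = [results_a.index(u) for u in shared]
    ranks_b = [results_b.index(u) for u in shared]
    n = len(shared)
    if n > 1:
        d2 = sum((ra - rb) ** 2 for ra, rb in zip(ranks_a, ranks_b))
        rank_corr = 1 - (6 * d2) / (n * (n ** 2 - 1))
    else:
        rank_corr = None  # undefined for fewer than two shared hits
    return {"total_shared_hits": len(overlap),
            "same_first_hit": same_first_hit,
            "rank_correlation": rank_corr}

a = ["u1", "u2", "u3", "u4"]   # e.g. results seen via a UAE VPN
b = ["u1", "u3", "u2", "u5"]   # e.g. results seen from another location
print(compare_results(a, b))
```

A high rank correlation with identical first hits suggests little personalization between the two conditions; diverging first hits or low overlap would indicate the filter-bubble effects the study probes.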

Towards Real-world Fact-Checking with Large Language Models

MBZUAI ·

Iryna Gurevych from TU Darmstadt presented research on using large language models for real-world fact-checking, focusing on dismantling misleading narratives from misinterpreted scientific publications and detecting misinformation via visual content. The research aims to explain why a false claim was believed, why it is false, and why the alternative is correct. Why it matters: Addressing misinformation, especially when supported by seemingly credible sources, is critical for public health, conflict resolution, and maintaining trust in institutions in the Middle East and globally.

Culturally Aware GenAI Risks for Youth: Perspectives from Youth, Parents, and Teachers in a Non-Western Context

arXiv ·

A study investigated the culturally aware risks of generative AI for youth aged 7-17 in Saudi Arabia, focusing on privacy and safety challenges. Researchers analyzed 736 Reddit posts and 1,262 X (Twitter) posts, and interviewed 31 Saudi participants, including youth, parents, and teachers. Findings highlighted context-dependent risks, particularly the disclosure of personal and family information that conflicts with culturally rooted expectations of modesty, privacy, and honor. The study proposes design implications for inclusive, context-sensitive parental controls that align with local cultural norms and values. Why it matters: This research is crucial for developing AI tools and policies that are culturally appropriate and safeguard youth in non-Western contexts like the Middle East.