GCC AI Research


Results for "filter bubble"

Evaluating Web Search Engines Results for Personalization and User Tracking

arXiv

This paper presents six experiments evaluating personalization and user tracking in web search engine results. The experiments involve comparing search results based on VPN location (including UAE vs others), logged-in status, network type, search engine, browser, and trained Google accounts. The study measures total hits, first hit, and correlation between hits to identify patterns of personalization. Why it matters: The findings shed light on the extent of filter bubble effects and potential biases in search results for users in the UAE and globally.
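To make the study's metrics concrete, here is a minimal sketch (not the paper's actual code) of how two result lists, say from a UAE VPN exit versus another location, could be compared: shared-URL overlap for "total hits", a check on the first hit, and a Spearman-style rank correlation over the URLs both lists share. The URLs and function names are illustrative assumptions.

```python
def jaccard_overlap(results_a, results_b):
    """Fraction of URLs shared between two result sets ("total hits" overlap)."""
    a, b = set(results_a), set(results_b)
    return len(a & b) / len(a | b) if a | b else 1.0

def rank_correlation(results_a, results_b):
    """Spearman rank correlation computed over URLs present in both lists."""
    common = set(results_a) & set(results_b)
    order_a = [u for u in results_a if u in common]  # ranks within the shared URLs
    order_b = [u for u in results_b if u in common]
    n = len(common)
    if n < 2:
        return None  # correlation undefined for fewer than two shared hits
    rank_b = {u: i for i, u in enumerate(order_b)}
    d2 = sum((i - rank_b[u]) ** 2 for i, u in enumerate(order_a))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical result lists from two VPN locations.
vpn_uae = ["a.com", "b.com", "c.com", "d.com"]
vpn_us = ["a.com", "c.com", "b.com", "e.com"]
print(jaccard_overlap(vpn_uae, vpn_us))   # 0.6: three shared URLs out of five
print(vpn_uae[0] == vpn_us[0])            # True: same first hit
print(rank_correlation(vpn_uae, vpn_us))  # 0.5: shared hits reordered
```

High overlap with a low rank correlation would suggest the engine returns similar pages but orders them differently per user, one signature of personalization.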

All that Glitters ain’t Gold: Examining Machine Learning as Socio-technical Infrastructure.

MBZUAI

Zeerak Talat, an independent scholar, gave a talk at MBZUAI on automated content moderation and the impacts of machine learning on society. Talat's research examines how machine learning interacts with and affects societies through content moderation technologies, drawing on NLP, privacy-preserving machine learning, science and technology studies, decolonial studies, and media studies. The talk highlighted research areas that can offer productive directions for the meeting between machine learning and society. Why it matters: The talk contributes to the discussion of ethical AI development and deployment in the region, particularly regarding content moderation and its societal impacts.

Social Media Influencers, Misinformation, and the Threat to Elections

MBZUAI

A panel discussion hosted by MBZUAI in collaboration with the Manara Center for Coexistence and Dialogue addressed misinformation and its threat to elections. The panel covered the reasons behind the rise of misinformation, citizen perspectives, and the role of social media influencers. Two cases, the Indian general elections of 2024 and the upcoming US presidential elections in November 2024, were used to describe the contours of misinformation. Why it matters: Understanding the dynamics of misinformation, especially as spread through social media influencers, is crucial for safeguarding democratic processes in the region and globally.

On a mission to end fake news

MBZUAI

MBZUAI Professor Preslav Nakov is researching methods to combat fake news and online disinformation through NLP techniques. His work focuses on detecting harmful memes and identifying the stance of individuals regarding disinformation. Four of Nakov’s recent papers on these topics were presented at NAACL 2022. Why it matters: This research aims to mitigate the impact of weaponized news and online manipulation, contributing to a more trustworthy information environment in the region and globally.

FanarGuard: A Culturally-Aware Moderation Filter for Arabic Language Models

arXiv

The paper introduces FanarGuard, a bilingual moderation filter for Arabic and English language models that considers both safety and cultural alignment. A dataset of 468K prompt-response pairs was created and scored by LLM judges on harmlessness and cultural awareness to train the filter. The first benchmark targeting Arabic cultural contexts was developed to evaluate cultural alignment. Why it matters: FanarGuard advances context-sensitive AI safeguards by integrating cultural awareness into content moderation, addressing a critical gap in current alignment techniques.

Decoding the news: a new application to identify persuasion techniques in the media

MBZUAI

MBZUAI Professor Preslav Nakov has developed FRAPPE, an interactive website that analyzes news articles to identify persuasion techniques. FRAPPE helps users understand framing, persuasion, and propaganda at an aggregate level, across different news outlets and countries. Presented at EACL, FRAPPE uses 23 specific techniques categorized into six broader buckets, such as 'attack on reputation' and 'manipulative wording'. Why it matters: The tool addresses the increasing difficulty in discerning factual information from disinformation, providing a means to identify biases in news media from different countries.
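As a rough illustration of the kind of aggregation FRAPPE performs, the sketch below rolls fine-grained technique annotations up into the coarser buckets. The technique-to-bucket mapping here is hypothetical, not FRAPPE's actual taxonomy; only the two bucket names quoted in the summary are taken from the article.

```python
from collections import Counter

# Hypothetical mapping from fine-grained techniques to coarse buckets.
# FRAPPE's real taxonomy groups 23 techniques into six buckets; only the
# two bucket names below appear in the article.
TECHNIQUE_BUCKET = {
    "name_calling": "attack on reputation",
    "guilt_by_association": "attack on reputation",
    "loaded_language": "manipulative wording",
    "exaggeration": "manipulative wording",
}

def bucket_counts(annotations):
    """Aggregate per-span technique labels for one article into bucket counts."""
    return Counter(TECHNIQUE_BUCKET[t] for t in annotations if t in TECHNIQUE_BUCKET)

# Hypothetical annotations for a single article.
article = ["loaded_language", "name_calling", "loaded_language"]
print(bucket_counts(article))
```

Summing such bucket counts over many articles from one outlet or country gives the aggregate-level view the summary describes.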

Scalable Community Detection in Massive Networks Using Aggregated Relational Data

MBZUAI

A new mini-batch strategy using aggregated relational data is proposed to fit the mixed membership stochastic blockmodel (MMSB) to large networks. The method uses nodal information and stochastic gradients of bipartite graphs for scalable inference. The approach was applied to a citation network with over two million nodes and 25 million edges, capturing explainable structure. Why it matters: This research enables more efficient community detection in massive networks, which is crucial for analyzing complex relationships across many domains, though the work itself has no specific connection to the Middle East.