GCC AI Research

Results for "misleading content"

UAE warns public about misleading AI-generated videos - Gulf News

Gulf News ·

The UAE government has issued a warning to the public about the dangers of misleading AI-generated videos, particularly those used to spread rumors and false information. Authorities emphasized the importance of verifying the credibility of video content before sharing it on social media, and the warning highlights potential legal consequences for individuals who create or disseminate such content. Why it matters: This proactive stance reflects growing concern in the UAE about the misuse of AI-driven technologies and the country's commitment to combating disinformation.

Truth-O-Meter: Making neural content meaningful and truthful

MBZUAI ·

A new content-improvement system has been developed to address randomness and factual errors in text generated by deep learning models such as GPT-3. The system uses text mining to identify correct sentences and applies syntactic/semantic generalization to substitute problematic elements, substantially improving the factual correctness and meaningfulness of raw generated content. Why it matters: Improving the quality of automatically generated content is crucial for ensuring reliability and trustworthiness across AI applications.

Towards Real-world Fact-Checking with Large Language Models

MBZUAI ·

Iryna Gurevych from TU Darmstadt presented research on using large language models for real-world fact-checking, focusing on dismantling misleading narratives from misinterpreted scientific publications and detecting misinformation via visual content. The research aims to explain why a false claim was believed, why it is false, and why the alternative is correct. Why it matters: Addressing misinformation, especially when supported by seemingly credible sources, is critical for public health, conflict resolution, and maintaining trust in institutions in the Middle East and globally.

Detect – Verify – Communicate: Combating Misinformation with More Realistic NLP

MBZUAI ·

Iryna Gurevych from TU Darmstadt discussed challenges in using NLP for misinformation detection, highlighting the gap between current fact-checking research and real-world scenarios. Her team is working on detecting emerging misinformation topics and has constructed two fact-checking corpora that use larger evidence documents. The team is also collaborating with cognitive scientists to detect and respond to vaccine hesitancy through effective communication strategies. Why it matters: Addressing misinformation is crucial in the Middle East, especially regarding public health and socio-political issues, making advances in NLP-based fact-checking highly relevant.

Combatting the spread of scientific falsehoods with NLP

MBZUAI ·

Researchers from MBZUAI and other institutions presented a study at ACL 2024 on combating misinformation by identifying misrepresented scientific research. They compiled a dataset called MISSCI, comprising real-world examples of misinformation gathered from the HealthFeedback fact-checking website, and annotators classified the reasoning errors into nine classes. Why it matters: This work addresses a critical need to combat the spread of scientific falsehoods online, especially given the limits of manual fact-checking.

Social Media Influencers, Misinformation, and the threat to elections

MBZUAI ·

A panel discussion hosted by MBZUAI in collaboration with the Manara Center for Coexistence and Dialogue addressed misinformation and its threat to elections. The talk covered the reasons behind the rise of misinformation, citizen perspectives, and the role of social media influencers. Two cases, the Indian general elections of 2024 and the upcoming US presidential elections in November 2024, were used to describe the contours of misinformation. Why it matters: Understanding the dynamics of misinformation, especially through social media influencers, is crucial for safeguarding democratic processes in the region and globally.

Hunting for Spammers: Detecting Evolved Spammers on Twitter

arXiv ·

A study analyzes spam content on trending hashtags on Saudi Twitter, finding that approximately 75% of the generated content is spam. The paper assesses the performance of earlier spam detection systems on a newly gathered dataset and proposes an updated manual classification algorithm to improve accuracy. Adapted features are then used to build a new data-driven detection system that responds to spammers' evolving techniques. Why it matters: The high prevalence of spam in Arabic content on Twitter necessitates adaptive detection techniques to maintain the quality and trustworthiness of online information in the region.

An algorithm for success

KAUST ·

The article mentions several KAUST faculty and staff, including Matteo Parsani (Assistant Professor of Applied Mathematics), Teofilo Abrajano (Director of Sponsored Research), and David Keyes (Director of the Extreme Computing Research Center). It also references a talk by NASA Senior Scientist Mark Carpenter at the SIAM CSE 2017 conference and includes a photograph of King Abdullah bin Abdulaziz Al Saud. Why it matters: This appears to be general information about KAUST faculty and activities, with few specifics on research or AI developments.