GCC AI Research


Results for "text mining"

The evolving of Data Science and the Saudi Arabia case. How much have we changed in 13 years?

arXiv ·

This study analyzes the evolution of data science vocabulary using 16,018 abstracts containing "data science" published over 13 years. It tracks the introduction of new vocabulary and its integration into the scientific literature using techniques such as exploratory data analysis (EDA), latent semantic analysis (LSA), latent Dirichlet allocation (LDA), and N-grams. The research compares the overall body of scientific publications with those specific to Saudi Arabia, identifying representative articles based on vocabulary usage. Why it matters: The work provides insights into the development of data science terminology and its adoption within the Saudi Arabian research landscape.
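The N-gram side of the vocabulary tracking described above can be sketched in a few lines. This is a minimal illustration, not the study's pipeline: the abstracts and years below are invented placeholders, and real work would tokenize and normalize far more carefully.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

# Hypothetical mini-corpus of abstracts keyed by year (illustrative only).
abstracts = {
    2010: ["data science is an emerging field"],
    2023: ["data science uses deep learning and large language models"],
}

# Count bigrams per year to see which phrases enter the vocabulary.
bigrams_by_year = {
    year: Counter(bg for text in texts for bg in ngrams(text.split(), 2))
    for year, texts in abstracts.items()
}

# Bigrams present in the later year but absent earlier: candidate new vocabulary.
new_in_2023 = set(bigrams_by_year[2023]) - set(bigrams_by_year[2010])
```

Comparing year-over-year N-gram sets in this way surfaces phrases like "deep learning" as newly adopted terminology, while stable phrases like "data science" appear in every period.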

Truth-O-Meter: Making neural content meaningful and truthful

MBZUAI ·

A new content improvement system has been developed to address issues of randomness and incorrectness in text generated by deep learning models like GPT-3. The system uses text mining to identify correct sentences and employs syntactic/semantic generalization to substitute problematic elements. The system can substantially improve the factual correctness and meaningfulness of raw content. Why it matters: Improving the quality of automatically generated content is crucial for ensuring reliability and trustworthiness across various AI applications.

Modeling Text as a Living Object

MBZUAI ·

The InterText project, funded by the European Research Council, aims to advance NLP by developing a framework for modeling fine-grained relationships between texts. This approach enables tracing the origin and evolution of texts and ideas. Iryna Gurevych from the Technical University of Darmstadt presented the intertextual approach to NLP, covering data modeling, representation learning, and practical applications. Why it matters: This research could enable a new generation of AI applications for text work and critical reading, with potential applications in collaborative knowledge construction and document revision assistance.

Scalable Community Detection in Massive Networks Using Aggregated Relational Data

MBZUAI ·

A new mini-batch strategy using aggregated relational data is proposed to fit the mixed membership stochastic blockmodel (MMSB) to large networks. The method uses nodal information and stochastic gradients of bipartite graphs for scalable inference. The approach was applied to a citation network with over two million nodes and 25 million edges, capturing explainable structure. Why it matters: This research enables more efficient community detection in massive networks, which is crucial for analyzing complex relationships in various domains, though this article has no clear connection to the Middle East.
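The core mini-batch idea above, estimating a whole-graph quantity from a small random sample of node pairs rather than touching every pair, can be illustrated with a toy density estimator. This is only a sketch of the sampling principle; the paper's method samples structured bipartite subgraphs and computes stochastic gradients for the MMSB, not a simple density.

```python
import random

def minibatch_density(edges, nodes, batch_pairs, rng):
    """Estimate graph density from a random mini-batch of node pairs.
    Illustrates the mini-batch principle behind scalable network inference:
    an unbiased estimate from a sample replaces a full O(n^2) pass."""
    edge_set = set(edges)
    hits = 0
    for _ in range(batch_pairs):
        u, v = rng.sample(nodes, 2)  # sample one unordered node pair
        if (u, v) in edge_set or (v, u) in edge_set:
            hits += 1
    return hits / batch_pairs

# Complete graph on 4 nodes: every sampled pair is an edge, so density is 1.
k4_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
est = minibatch_density(k4_edges, [0, 1, 2, 3], 50, random.Random(0))
```

The same sampling trick underlies stochastic-gradient fitting: each mini-batch yields a noisy but unbiased gradient, so the model can be updated without ever materializing the full pairwise interaction matrix.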

Proceedings of Symposium on Data Mining Applications 2014

arXiv ·

The Symposium on Data Mining and Applications (SDMA 2014) was organized by MEGDAM to foster collaboration among data mining and machine learning researchers in Saudi Arabia, the GCC countries, and the wider Middle East. The symposium covered areas such as statistics, computational intelligence, pattern recognition, databases, big-data mining, and visualization. Acceptance was based on the originality, significance, and quality of contributions.

ParlaMint 4.0: Parliamentary Debates going Comparable

MBZUAI ·

ParlaMint is a CLARIN ERIC flagship project focused on harmonizing multilingual corpora of parliamentary sessions. The newest version, published in October 2023, covers 26 European parliaments with linguistic annotations and machine translations to English. Maciej Ogrodniczuk, Head of Linguistic Engineering Group at the Institute of Computer Science, Polish Academy of Sciences, presented the project. Why it matters: While focused on European parliaments, the ParlaMint project provides a valuable model and infrastructure for creating comparable Arabic parliamentary corpora, which could enhance Arabic NLP research and political analysis in the Middle East.

Cross-Document Topic-Aligned Chunking for Retrieval-Augmented Generation

arXiv ·

This paper introduces Cross-Document Topic-Aligned (CDTA) chunking to address knowledge fragmentation in Retrieval-Augmented Generation (RAG) systems. CDTA identifies topics across documents, maps segments to topics, and synthesizes them into unified chunks. Experiments on HotpotQA and UAE legal texts show that CDTA improves faithfulness and citation accuracy compared to existing chunking methods, especially for complex queries requiring multi-hop reasoning.
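The cross-document grouping step described above can be sketched as follows. This is a deliberately simplified stand-in: the keyword lookup below replaces CDTA's actual topic-identification stage, and the documents and topics are invented examples, not the paper's HotpotQA or UAE legal-text data.

```python
from collections import defaultdict

def assign_topic(segment, topic_keywords):
    """Assign a segment to the first topic whose keyword it mentions.
    (A toy stand-in for CDTA's topic-identification step.)"""
    for topic, words in topic_keywords.items():
        if any(w in segment.lower() for w in words):
            return topic
    return "misc"

def topic_aligned_chunks(docs, topic_keywords):
    """Merge segments from different documents that share a topic into one
    unified chunk, mimicking the cross-document alignment idea."""
    chunks = defaultdict(list)
    for doc_id, segments in docs.items():
        for seg in segments:
            chunks[assign_topic(seg, topic_keywords)].append((doc_id, seg))
    return dict(chunks)

# Illustrative inputs (not from the paper's benchmarks).
docs = {
    "law_a": ["Visa rules require a valid passport.", "Fees are paid online."],
    "law_b": ["Passport holders may renew a visa early."],
}
topics = {"visas": ["visa", "passport"], "fees": ["fee"]}
chunks = topic_aligned_chunks(docs, topics)
```

The payoff for retrieval is that a single "visas" chunk now contains related segments from both documents, so a retriever answering a multi-hop question sees them together instead of fetching fragments document by document.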