GCC AI Research


Results for "LibrAI"

AI Startup Spotlight Series: LibrAI

MBZUAI ·

LibrAI is an AI startup founded in December 2023, after the release of ChatGPT, by Xudong Han, a University of Melbourne PhD graduate. The company focuses on advancing AI safety and responsible AI practices, building on Han's earlier work creating FairLib, an open-source toolkit for fairness in deep neural networks. LibrAI's seven-person team aims to create practical solutions that keep AI both responsible and revolutionary. Why it matters: The establishment of a startup focused on AI safety highlights growing awareness of ethical considerations in AI development within the region.

Highlighting LLM safety: How the Libra-Leaderboard is making AI more responsible

MBZUAI ·

MBZUAI-based startup LibrAI has launched the Libra-Leaderboard, an evaluation framework for LLMs that assesses both capability and safety. The leaderboard evaluates 26 mainstream LLMs across 57 datasets, assigning scores on dimensions such as bias, misinformation, and oversensitivity. LibrAI also launched the Interactive Safety Arena, which engages and educates the public on AI safety through adversarial prompt testing. Why it matters: The Libra-Leaderboard provides a benchmark for responsible AI development, emphasizing the importance of aligning AI capabilities with safety considerations in the rapidly evolving LLM landscape.
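The exact aggregation the Libra-Leaderboard uses is not described here, but the idea of ranking models jointly on capability and safety can be sketched with a balance-rewarding average. This is a hypothetical illustration; the model names and scores below are made up.

```python
# Hypothetical sketch of combining capability and safety into one leaderboard
# score; the actual Libra-Leaderboard aggregation may differ.

def balanced_score(capability: float, safety: float) -> float:
    """Harmonic mean of the two dimensions: rewards balance, so a model that
    is strong on one axis but weak on the other ranks below a model that is
    moderately good at both."""
    if capability + safety == 0:
        return 0.0
    return 2 * capability * safety / (capability + safety)

# Invented scores in [0, 1] for two fictional models.
models = {
    "model_a": (0.90, 0.40),  # highly capable but unsafe
    "model_b": (0.70, 0.72),  # balanced
}

ranking = sorted(models, key=lambda m: balanced_score(*models[m]), reverse=True)
print(ranking)  # model_b outranks model_a despite lower raw capability
```

A plain arithmetic mean would let capability mask poor safety; the harmonic mean surfaces exactly the trade-off a joint capability-safety leaderboard is meant to expose.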

AI Literacy in UAE Libraries: Assessing Competencies, Training Needs, and Ethical Considerations for the Digital Age

arXiv ·

A survey of 92 library and information science (LIS) professionals in the UAE reveals strong cognitive AI competencies but gaps in behavioral and normative competencies related to AI biases and ethics. The study identifies a disconnect between the perceived importance of AI skills and the effectiveness of current training programs. It recommends that library training programs address AI ethics and biases.

September startup spotlight: Five success stories from MBZUAI researchers, students, and alumni

MBZUAI ·

MBZUAI's Incubation and Entrepreneurship Center (MIEC), launched in November 2023, is fostering AI-driven startups, including LibrAI, Audiomatic, and Limb. LibrAI is an AI safety monitoring platform founded by MBZUAI postdoctoral researcher Xudong Han. Audiomatic, created by MBZUAI students Muhammad Taimoor Haseeb and Ahmad Hammoudeh, is an AI-powered audio integration platform. Why it matters: These startups demonstrate MBZUAI's role in translating AI research into practical solutions, contributing to the UAE's innovation ecosystem and addressing real-world challenges.

Algorithms and Software for Text Classification

MBZUAI ·

The article discusses the challenges of applying text classification effectively in practice, despite the availability of tools like LibMultiLabel. It highlights the importance of guiding users through practical decisions, such as choosing evaluation criteria and data-handling strategies, when applying machine learning methods. The piece also mentions a panel discussion hosted by MBZUAI in collaboration with the Manara Center for Coexistence and Dialogue. Why it matters: This signals ongoing efforts within the UAE AI ecosystem to address practical challenges and promote responsible AI usage in NLP applications.
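One concrete instance of the evaluation-criteria question: on imbalanced label sets, micro- and macro-averaged F1 can tell very different stories about the same classifier. A toy sketch with invented per-label counts:

```python
# Toy illustration of why the choice of evaluation criterion matters in
# (multi-label) text classification. Labels and confusion counts are made up.

def f1(tp: int, fp: int, fn: int) -> float:
    """Standard F1 from true-positive, false-positive, false-negative counts."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

# Per-label counts: one frequent label handled well, one rare label handled badly.
counts = {
    "sports":   {"tp": 90, "fp": 5, "fn": 5},  # common label, high accuracy
    "obituary": {"tp": 1,  "fp": 4, "fn": 9},  # rare label, poor accuracy
}

# Macro: average the per-label F1 scores (every label weighs equally).
macro_f1 = sum(f1(**c) for c in counts.values()) / len(counts)

# Micro: pool the counts first (frequent labels dominate).
micro_f1 = f1(
    sum(c["tp"] for c in counts.values()),
    sum(c["fp"] for c in counts.values()),
    sum(c["fn"] for c in counts.values()),
)

print(round(micro_f1, 3), round(macro_f1, 3))  # 0.888 0.54
```

Micro-F1 makes the classifier look strong because the frequent label dominates the pooled counts, while macro-F1 exposes the failure on the rare label; which criterion is "right" depends on the application.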

Generative Artificial Intelligence in RNA Biology

MBZUAI ·

Researchers at the Rosalind Franklin Institute are using generative AI, including GANs, to augment limited biological datasets, specifically mirtron data from mirtronDB. The synthetic data mimics real-world samples, enabling more comprehensive training of machine learning models and improving mirtron identification tools. The team also plans to apply large language models (LLMs) to predict unknown patterns in sequence- and structure-biology problems. Why it matters: This research explores AI techniques to tackle data scarcity in biological research, potentially accelerating discoveries in noncoding RNA and transposable elements.
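A full GAN is beyond a short sketch, but the core augmentation idea, generating synthetic samples that statistically mimic a small real dataset, can be illustrated with a toy positional-frequency sampler. All sequences below are invented; the actual work trains GANs on mirtronDB.

```python
import random

# Toy illustration of augmenting a tiny sequence dataset with synthetic
# samples. A GAN learns the distribution adversarially; here we simply sample
# new sequences from per-position nucleotide frequencies of the real set.

real_seqs = ["ACGUAC", "ACGUGC", "ACGAAC"]  # hypothetical mirtron-like reads

def positional_frequencies(seqs):
    """Per-position nucleotide frequencies across the real sequences."""
    freqs = []
    for i in range(len(seqs[0])):
        column = [s[i] for s in seqs]
        freqs.append({nt: column.count(nt) / len(seqs)
                      for nt in sorted(set(column))})
    return freqs

def synthesize(freqs, rng):
    """Draw one synthetic sequence position-by-position."""
    return "".join(
        rng.choices(list(f.keys()), weights=list(f.values()))[0] for f in freqs
    )

rng = random.Random(0)  # fixed seed for reproducibility
freqs = positional_frequencies(real_seqs)
augmented = real_seqs + [synthesize(freqs, rng) for _ in range(5)]
print(len(augmented))  # 8 sequences: 3 real + 5 synthetic
```

The synthetic sequences share the positional statistics of the real ones, which is the property that lets downstream classifiers train on a larger effective dataset; a GAN captures far richer dependencies than this position-independent model.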

Empowering Large Language Models with Reliable Reasoning

MBZUAI ·

Liangming Pan of UCSB presented research on building reliable generative AI agents by integrating symbolic representations with LLMs. The neuro-symbolic strategy combines the flexibility of language models with the precise knowledge representation and verifiable reasoning of symbolic systems. The work covers Logic-LM, ProgramFC, and learning from automated feedback, aiming to address LLM limitations in complex reasoning tasks. Why it matters: Improving the reliability of LLMs is crucial for high-stakes applications in finance, medicine, and law, both within the region and globally.
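In a Logic-LM-style pipeline, the LLM translates a natural-language problem into a symbolic form that a deterministic solver then checks. The translation step is stubbed out below; this sketch only shows the verifiable-reasoning half, on a hypothetical two-variable example.

```python
from itertools import product

# Minimal sketch of the symbolic-solver half of a neuro-symbolic pipeline.
# An LLM would produce the formulas from natural language; here they are
# hand-written for a toy puzzle: "If it rains, the ground is wet. It rains.
# Is the ground wet?"

premises = [
    lambda v: (not v["rain"]) or v["wet"],  # rain -> wet
    lambda v: v["rain"],                    # it is raining
]
conclusion = lambda v: v["wet"]             # query: the ground is wet

def entails(premises, conclusion, variables):
    """Brute-force propositional entailment: the conclusion must hold in
    every truth assignment that satisfies all premises."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # found a countermodel
    return True

print(entails(premises, conclusion, ["rain", "wet"]))  # True
```

Unlike free-form LLM generation, the solver's answer is verifiable: either every model of the premises satisfies the conclusion, or a concrete countermodel exists, which is the reliability property the neuro-symbolic approach targets.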