GCC AI Research

AI Safety Research

MBZUAI · Notable

Summary

Adel Bibi, a KAUST alumnus and researcher at the University of Oxford, presented his research on AI safety, covering the robustness, alignment, and fairness of LLMs. The talk addressed robustness challenges in AI systems, open problems in alignment, and fairness across languages in commonly used tokenizers. Bibi's work includes instruction prefix tuning and its theoretical limitations for achieving alignment. Why it matters: This research from a leading researcher highlights the importance of addressing safety concerns in LLMs, particularly regarding alignment and fairness in the Arabic language.

Keywords

AI safety · LLM · KAUST · Alignment · Fairness
