KAUST researchers have developed a saliva-powered microbial fuel cell (MFC) that generates electricity using electrogenic bacteria, which consume waste and release electrons. The micro-MFC pairs a graphene anode with an air cathode, achieving a high volumetric current density (1,190 A m⁻³) and producing 40 times more power than an equivalent cell with a carbon cloth anode. Why it matters: This technology offers a novel way to power lab-on-chip or portable diagnostic devices, particularly in remote or dangerous areas, and may offer alternatives to energy-intensive water purification technologies.
This study investigates the correlation between Google Trends data for COVID-19 symptoms and the actual number of COVID-19 cases in Saudi Arabia between March and October 2020. The researchers found that searches for "cough" and "sore throat" were most frequent, while "loss of smell", "loss of taste", and "diarrhea" showed the highest correlation with confirmed cases. The study concludes that Google searches can serve as a supplementary surveillance tool for monitoring the spread of COVID-19 in Saudi Arabia. Why it matters: The research demonstrates the potential of using readily available digital data to augment traditional surveillance methods for public health monitoring in the region.
The Qatar Computing Research Institute (QCRI) has released SpokenNativQA, a multilingual spoken question-answering dataset for evaluating LLMs in conversational settings. The dataset contains 33,000 naturally spoken questions and answers across multiple languages, including low-resource and dialect-rich languages. It aims to address the limitations of text-based QA datasets by incorporating speech variability, accents, and linguistic diversity. Why it matters: This benchmark enables more robust evaluation of LLMs in speech-based interactions, particularly for Arabic dialects and other low-resource languages.
LAraBench introduces a benchmark for Arabic NLP and speech processing, evaluating LLMs including GPT-3.5-turbo, GPT-4, BLOOMZ, Jais-13b-chat, Whisper, and USM. The benchmark covers 33 tasks across 61 datasets, using zero-shot and few-shot learning techniques. Results show that task-specific state-of-the-art (SOTA) models generally outperform LLMs in zero-shot settings, though larger LLMs narrow the gap with few-shot learning. Why it matters: This benchmark helps assess and improve the performance of LLMs on Arabic language tasks, highlighting areas where specialized models still excel.
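The distinction between zero-shot and few-shot evaluation comes down to whether labeled examples are prepended to the prompt. A minimal sketch of that prompt construction, using an illustrative sentiment task and wording that is assumed rather than taken from LAraBench's actual templates:

```python
# Sketch: zero-shot vs. few-shot prompt construction for LLM evaluation.
# The task, examples, and template wording are illustrative assumptions,
# not LAraBench's actual prompts.

def build_prompt(instruction, test_input, shots=()):
    """Prepend labeled examples (few-shot) or none (zero-shot) to the query."""
    parts = [instruction]
    for text, label in shots:  # an empty tuple gives a zero-shot prompt
        parts.append(f"Text: {text}\nLabel: {label}")
    parts.append(f"Text: {test_input}\nLabel:")
    return "\n\n".join(parts)

instruction = "Classify the sentiment of the Arabic text as positive or negative."
zero_shot = build_prompt(instruction, "الخدمة ممتازة")  # "the service is excellent"
few_shot = build_prompt(
    instruction,
    "الخدمة ممتازة",
    shots=[
        ("المنتج سيء جدا", "negative"),  # "the product is very bad"
        ("تجربة رائعة", "positive"),     # "a wonderful experience"
    ],
)
print(few_shot)
```

The model's completion after the final "Label:" is then parsed and compared against the gold answer; few-shot prompts simply give the model worked examples of the expected input-output format.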
KAUST and King Faisal Specialist Hospital and Research Centre (KFSH&RC) are collaborating to develop bioelectronic sensors for rapid pathogen detection. These sensors aim to deliver inexpensive, accurate results that could replace conventional lab tests. A COVID-19 saliva test developed by KAUST researchers showed sensitivity comparable to PCR tests with a 15-minute turnaround. Why it matters: This partnership accelerates the development of novel diagnostic tools, which could improve healthcare accessibility in remote areas and low-income countries within the region.
The paper introduces AraTrust, a new benchmark for evaluating the trustworthiness of LLMs when prompted in Arabic. The benchmark contains 522 multiple-choice questions covering dimensions like truthfulness, ethics, safety, and fairness. Experiments using AraTrust showed that GPT-4 performed the best, while open-source models like AceGPT 7B and Jais 13B had lower scores. Why it matters: This benchmark addresses a critical gap in evaluating LLMs for Arabic, which is essential for ensuring the safe and ethical deployment of AI in the Arab world.
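Scoring a multiple-choice benchmark like AraTrust reduces to accuracy over the question set. A minimal sketch, where the items and model answers are hypothetical placeholders rather than actual AraTrust data:

```python
# Sketch: accuracy scoring for a multiple-choice benchmark such as AraTrust.
# Items and model answers below are hypothetical, not from the real dataset.

def score_mcq(items, model_answers):
    """Return the fraction of questions where the model picked the gold choice."""
    correct = sum(
        1 for item, pred in zip(items, model_answers)
        if pred == item["gold"]
    )
    return correct / len(items)

# Hypothetical items: each has answer choices and a gold (correct) label
items = [
    {"question": "...", "choices": ["A", "B", "C", "D"], "gold": "B"},
    {"question": "...", "choices": ["A", "B", "C", "D"], "gold": "A"},
    {"question": "...", "choices": ["A", "B", "C", "D"], "gold": "D"},
]
model_answers = ["B", "C", "D"]  # e.g. letters parsed from an LLM's responses

print(f"Accuracy: {score_mcq(items, model_answers):.2f}")  # 2 of 3 correct
```

In practice the harder step is reliably extracting the chosen letter from free-form LLM output before comparison; per-dimension scores (truthfulness, ethics, safety, fairness) follow by filtering the items before scoring.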