The paper introduces Sparse-Quantized Representation (SpQR), a new compression format and quantization technique for large language models (LLMs). SpQR identifies outlier weights and stores them in higher precision while compressing the remaining weights to 3-4 bits. The method achieves less than 1% relative increase in perplexity for LLaMA and Falcon LLMs and enables a 33B-parameter LLM to run on a single 24GB consumer GPU. Why it matters: This enables near-lossless compression of LLMs, making powerful models accessible on resource-constrained devices and accelerating inference without significant accuracy degradation.
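The core idea (isolate a few outlier weights in full precision, quantize the rest to a few bits) can be sketched as follows. This is a toy illustration, not SpQR's actual algorithm, which uses group-wise bilevel quantization and a sparse storage format; the function name and parameters are invented for this example.

```python
import numpy as np

def spqr_style_quantize(w, outlier_frac=0.01, bits=3):
    """Toy sketch of outlier-aware quantization: keep the few
    largest-magnitude weights in full precision (stored sparsely),
    round everything else to a small uniform integer grid."""
    w = np.asarray(w, dtype=np.float64)
    k = max(1, int(outlier_frac * w.size))
    outlier_idx = np.argsort(np.abs(w))[-k:]       # largest-magnitude weights
    outliers = w[outlier_idx]                      # kept in high precision
    base = w.copy()
    base[outlier_idx] = 0.0                        # quantize the rest only
    scale = max(np.abs(base).max(), 1e-12) / (2 ** (bits - 1) - 1)
    q = np.round(base / scale).astype(np.int8)     # low-bit integer codes
    deq = q.astype(np.float64) * scale             # dequantize
    deq[outlier_idx] = outliers                    # splice outliers back in
    return deq, q, scale, outlier_idx, outliers
```

Because the outliers bypass quantization entirely, the large reconstruction errors they would otherwise cause disappear, at the cost of storing a small sparse set of full-precision values.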
QRC has developed Qibo, a Python library for classical simulation of quantum algorithms in double precision. Qibo supports hardware acceleration on GPUs as well as multi-threaded CPU execution, and incorporates a multi-GPU distributed approach for circuit simulation. Why it matters: This framework allows researchers and developers in the region to explore and prototype quantum algorithms using existing classical computing infrastructure, fostering innovation in quantum computing research and applications.
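To show what such a simulator computes, here is a minimal double-precision statevector simulation of a Bell-state circuit in plain NumPy. This is a sketch of the underlying technique, not Qibo's actual API or backend implementation.

```python
import numpy as np

# Gates as complex128 matrices: double precision, as in a
# high-accuracy classical statevector simulator.
H = np.array([[1, 1], [1, -1]], dtype=np.complex128) / np.sqrt(2)
I2 = np.eye(2, dtype=np.complex128)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=np.complex128)

state = np.zeros(4, dtype=np.complex128)
state[0] = 1.0                   # start in |00>
state = np.kron(H, I2) @ state   # Hadamard on qubit 0
state = CNOT @ state             # entangle: (|00> + |11>) / sqrt(2)
```

Statevector memory grows as 2^n complex amplitudes, which is why multi-GPU distribution matters for simulating larger circuits.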
The journal Communications Physics is running a focus collection on space quantum communications. The collection covers supporting technologies, new quantum protocols, inter-satellite QKD, satellite constellations, and quantum-inspired technologies and protocols for space-based communication. Contributions are welcome from October 20, 2020 to April 30, 2021, and accepted papers are published on a rolling basis. Why it matters: Space-based quantum communication is a critical area for developing secure, global quantum networks, and this collection could highlight relevant research for the GCC region as it invests in advanced technologies.
The Qatar Computing Research Institute (QCRI) has released SpokenNativQA, a multilingual spoken question-answering dataset for evaluating LLMs in conversational settings. The dataset contains 33,000 naturally spoken questions and answers across multiple languages, including low-resource and dialect-rich languages. It aims to address the limitations of text-based QA datasets by incorporating speech variability, accents, and linguistic diversity. Why it matters: This benchmark enables more robust evaluation of LLMs in speech-based interactions, particularly for Arabic dialects and other low-resource languages.
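Evaluating LLM answers against such a QA dataset typically involves normalizing both prediction and reference before comparison. The sketch below shows a common normalized exact-match metric; it is a generic illustration, and SpokenNativQA's actual evaluation protocol may differ.

```python
import re
import string

def normalize(text):
    """Common QA-style answer normalization: lowercase, drop
    punctuation and English articles, collapse whitespace."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return re.sub(r"\s+", " ", text).strip()

def exact_match(prediction, reference):
    """1/0 score: do the normalized answers agree exactly?"""
    return normalize(prediction) == normalize(reference)
```

For spoken input, predictions would first pass through speech recognition, so metrics robust to minor transcription noise (e.g. token-level F1) are often reported alongside exact match.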
MBZUAI researchers have developed SVRPBench, a new open benchmark for testing vehicle routing algorithms under real-world conditions. SVRPBench simulates unpredictable urban delivery scenarios including rush-hour traffic, accidents, and customer delivery time preferences. The benchmark uses realistic city models with clustered customer locations, unlike existing deterministic benchmarks. Why it matters: This benchmark offers a more practical evaluation for vehicle routing algorithms, potentially leading to significant cost savings and improved efficiency in logistics within the region and beyond.
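The stochastic conditions described above (rush-hour traffic, rare accidents) can be modeled by sampling travel times rather than fixing them. The sketch below is a hypothetical generator for illustration only; it is not SVRPBench's actual simulation model, and all parameters are invented.

```python
import random

def sample_travel_time(base_minutes, depart_hour, rng=random):
    """Illustrative stochastic travel-time draw: a rush-hour
    multiplier, mild random variation, and a rare accident delay.
    All constants here are hypothetical."""
    rush_hours = (7, 8, 16, 17)                  # assumed peak windows
    multiplier = 1.6 if depart_hour in rush_hours else 1.0
    t = base_minutes * multiplier * rng.uniform(0.9, 1.2)
    if rng.random() < 0.05:                      # rare accident event
        t += rng.uniform(10, 30)                 # large extra delay
    return t
```

Deterministic benchmarks would return `base_minutes` unchanged; sampling like this forces routing algorithms to plan for variability rather than a single fixed travel time.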