Researchers from MBZUAI have released MobiLlama, a fully transparent, open-source 0.5-billion-parameter Small Language Model (SLM). MobiLlama is designed for resource-constrained devices, emphasizing strong performance with reduced resource demands. The full training data pipeline, code, model weights, and checkpoints are available on GitHub.
Qirong Ho, co-founder and CTO of Petuum Inc., will contribute to the "ML Systems for Many" initiative. Petuum is known for creating standardized building blocks for assembling AI systems. Ho holds a Ph.D. from Carnegie Mellon University and is part of the CASL open-source consortium. Why it matters: Showcases ongoing efforts to democratize AI development and deployment, making it more accessible and sustainable, although the specific initiative is not further detailed.
This paper introduces SimulMask, a new paradigm for fine-tuning large language models (LLMs) for simultaneous translation. SimulMask utilizes a novel attention masking approach that models simultaneous translation during fine-tuning by masking attention for a desired decision policy. Applied to a Falcon LLM on the IWSLT 2017 dataset, SimulMask achieves improved translation quality compared to state-of-the-art prompting optimization strategies across five language pairs while reducing computational cost. Why it matters: The proposed method offers a more efficient way to adapt LLMs for real-time translation, potentially enhancing multilingual communication tools and services.
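The core idea of masking attention to model a read/write decision policy can be illustrated with a generic wait-k policy, where the model emits target token i only after reading the first k + i source tokens. Note this is a simplified illustration of policy-driven attention masking, not the exact mask construction from the SimulMask paper; the function name and the pure-Python mask representation are illustrative choices.

```python
def waitk_source_mask(src_len, tgt_len, k):
    """Build a boolean attention mask for a wait-k decision policy.

    Entry [i][j] is True if target step i may attend to source token j,
    i.e. if token j was among the k + i source tokens read before
    emitting target token i. During fine-tuning, masking attention this
    way makes the model see only the source prefix it would have at
    inference time, simulating simultaneous translation.
    """
    mask = []
    for i in range(tgt_len):
        visible = min(k + i, src_len)  # source tokens read so far
        mask.append([j < visible for j in range(src_len)])
    return mask


# With k=2, the first target token sees 2 source tokens, the next sees 3, etc.
mask = waitk_source_mask(src_len=5, tgt_len=3, k=2)
```

Because the policy is baked into the mask, the same fine-tuning pass covers every decision point, which is where the computational savings over prompt-based adaptation come from.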
The Secure Systems Research Center (SSRC) has partnered with the University of New South Wales (UNSW Sydney) to research enhancements and scaling of the seL4 microkernel on edge devices. The collaboration aims to extend the seL4 microkernel to support dynamic virtualization, combining a minimal trusted computing base with strong isolation. This will address challenges arising from heterogeneous hardware, software, and environmental factors in edge computing. Why it matters: This partnership aims to improve the security of edge devices in critical sectors, addressing vulnerabilities in cyber-physical and autonomous systems.
The researchers introduce KAU-CSSL, the first continuous Saudi Sign Language (SSL) dataset focused on complete sentences. They propose a transformer-based model using ResNet-18 for spatial feature extraction and a Transformer Encoder with a Bidirectional LSTM to capture temporal dependencies. The model achieved 99.02% accuracy in signer-dependent mode and 77.71% in signer-independent mode, advancing communication tools for the SSL community.
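The described pipeline (per-frame spatial features, a Transformer Encoder for context, a bidirectional LSTM over time, then classification) can be sketched in PyTorch. This is a minimal illustration under stated assumptions: a linear stub stands in for the ResNet-18 backbone, and the 512-dimensional feature size, layer counts, and class name are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn


class SignSentenceModel(nn.Module):
    """Sketch of a continuous sign-language recognizer: spatial features
    per frame, Transformer Encoder across frames, then a BiLSTM and a
    sentence-level classification head."""

    def __init__(self, frame_dim=3 * 32 * 32, feat_dim=512, num_classes=80):
        super().__init__()
        # Stand-in for ResNet-18 spatial feature extraction (assumed 512-d).
        self.spatial = nn.Linear(frame_dim, feat_dim)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=8, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        # Bidirectional LSTM: hidden size halved so outputs stay feat_dim wide.
        self.bilstm = nn.LSTM(
            feat_dim, feat_dim // 2, bidirectional=True, batch_first=True
        )
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, frames):
        # frames: (batch, time, frame_dim) of flattened video frames
        x = self.spatial(frames)          # (batch, time, feat_dim)
        x = self.encoder(x)               # contextualize across the sequence
        x, _ = self.bilstm(x)             # temporal modeling in both directions
        return self.head(x.mean(dim=1))   # pooled sentence-level logits
```

A real implementation would swap the linear stub for ResNet-18 applied frame-by-frame and tune the depth and hidden sizes, but the data flow matches the architecture the blurb describes.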
KAUST Professor Boon Ooi, Nobel laureate Shuji Nakamura from UCSB, and KACST researchers are collaborating on laser-based solid-state lighting (SSL) through a 2014 tripartite agreement. Their research focuses on SSL, which has the potential to be even more energy-efficient than existing LED lighting by using semiconductor lasers. Nakamura, who won the Nobel Prize in Physics in 2014 for developing blue LEDs, spoke at KAUST about the potential of SSL to improve energy efficiency further. Why it matters: This collaboration aims to advance energy-efficient lighting technologies, leveraging Nobel-winning expertise to develop solutions that could significantly reduce global energy consumption.
The Secure Systems Research Center (SSRC) has obtained membership in the seL4 Foundation. This membership allows SSRC to participate in and contribute to the open-source development of seL4, a formally verified microkernel OS. SSRC aims to research, contribute to, and advance next-generation high-end edge device environments using seL4's capabilities. Why it matters: This move enhances the UAE's capabilities in developing secure and resilient edge computing solutions, fostering innovation in critical sectors like secure communications and drone technology.