GCC AI Research


Results for "vision transformers"

Tomato Maturity Recognition with Convolutional Transformers

arXiv

This paper introduces a convolutional transformer model for classifying tomato maturity, along with KUTomaData, a new UAE-sourced dataset for training segmentation and classification models. The model combines CNNs and transformers and was evaluated on the new dataset and two public benchmarks, achieving state-of-the-art performance and outperforming existing methods by significant margins in mAP across all three datasets.

Unifying Vision Representation

MBZUAI

This seminar explores vision systems through self-supervised representation learning, addressing challenges and solutions in mainstream self-supervised vision methods. It discusses developing versatile representations across modalities, tasks, and architectures to propel the evolution of vision foundation models. Tong Zhang of EPFL, with a background at Beihang University, New York University, and the Australian National University, will lead the talk. Why it matters: Advancing vision foundation models is crucial for expanding AI applications, especially in the Middle East, where computer vision can address challenges in areas like urban planning, agriculture, and environmental monitoring.

Early and Accurate Detection of Tomato Leaf Diseases Using TomFormer

arXiv

Researchers introduce TomFormer, a transformer-based model for accurate and early detection of tomato leaf diseases, intended for deployment on the Hello Stretch robot for real-time diagnosis. TomFormer combines a vision transformer with a CNN and achieves state-of-the-art results on the KUTomaDATA, PlantDoc, and PlantVillage datasets. KUTomaDATA was collected from a greenhouse in Abu Dhabi, UAE.

The Prism Hypothesis: Harmonizing Semantic and Pixel Representations via Unified Autoencoding

arXiv

The paper introduces the Prism Hypothesis, which posits a correspondence between an encoder's feature spectrum and its functional role, with semantic encoders capturing low-frequency components and pixel encoders retaining high-frequency information. Based on this, the authors propose Unified Autoencoding (UAE), a model that harmonizes semantic structure and pixel details using a frequency-band modulator. Experiments on ImageNet and MS-COCO demonstrate that UAE effectively unifies semantic abstraction and pixel-level fidelity, achieving state-of-the-art performance.
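The core intuition behind the hypothesis, that an image separates into low-frequency content (semantic structure) and high-frequency content (pixel detail), can be illustrated with a simple FFT band split. This is a minimal sketch of the general idea, not the paper's frequency-band modulator; the cutoff radius and function names are assumptions.

```python
import numpy as np

def split_bands(img: np.ndarray, radius: int):
    """Split an image into low- and high-frequency components via FFT masking."""
    f = np.fft.fftshift(np.fft.fft2(img))            # move DC to the center
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    low = np.fft.ifft2(np.fft.ifftshift(f * mask)).real     # coarse structure
    high = np.fft.ifft2(np.fft.ifftshift(f * ~mask)).real   # fine detail
    return low, high

rng = np.random.default_rng(0)
img = rng.random((32, 32))
low, high = split_bands(img, radius=4)
print(np.allclose(low + high, img))  # True: the two bands partition the image
```

Because the two masks partition the spectrum, the bands sum back to the original image exactly, which is the sense in which semantic and pixel representations can be "harmonized" in a single model.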

Making computer vision more efficient with state-space models

MBZUAI

MBZUAI researchers developed GroupMamba, a new set of state-space models (SSMs) for computer vision that addresses limitations in existing SSMs related to computational efficiency and optimization challenges. GroupMamba introduces a new layer called modulated group mamba, improving efficiency and stability. In benchmark tests, GroupMamba performed as well as similar SSM systems, but more efficiently, offering a backbone for tasks like image classification, object detection, and segmentation. Why it matters: This research aims to bridge the gap between vision transformers and CNNs by improving SSMs, potentially leading to more efficient and powerful computer vision models.
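For readers unfamiliar with state-space models, the recurrence an SSM layer computes over a sequence can be sketched in a few lines. This is a generic linear SSM scan, not GroupMamba's modulated group mamba layer; the matrices and sizes are illustrative assumptions. Its cost grows linearly with sequence length, which is the efficiency advantage over quadratic self-attention.

```python
import numpy as np

def ssm_scan(x, A, B, C):
    """Minimal discrete state-space recurrence: h_t = A h_{t-1} + B x_t, y_t = C h_t."""
    h = np.zeros(A.shape[0])
    ys = []
    for x_t in x:                      # one step per input element: linear-time scan
        h = A @ h + B * x_t            # state update
        ys.append(C @ h)               # readout
    return np.array(ys)

state = 4
A = np.eye(state) * 0.9                # stable, decaying state transition (assumed values)
B = np.ones(state)
C = np.ones(state) / state
y = ssm_scan(np.array([1.0, 0.0, 0.0]), A, B, C)
print(np.round(y, 3))  # impulse response decays geometrically: [1.0, 0.9, 0.81]
```

The state carries context forward, so earlier inputs influence later outputs without attending over all pairs of positions.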

Continuous Saudi Sign Language Recognition: A Vision Transformer Approach

arXiv

The researchers introduce KAU-CSSL, the first continuous Saudi Sign Language (SSL) dataset focusing on complete sentences. They propose a transformer-based model using ResNet-18 for spatial feature extraction and a Transformer Encoder with Bidirectional LSTM for temporal dependencies. The model achieved 99.02% accuracy in signer-dependent mode and 77.71% in signer-independent mode, advancing communication tools for the SSL community.
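The described pipeline, per-frame spatial features followed by a Transformer encoder and a bidirectional LSTM over the temporal axis, can be sketched as below. This is not the authors' code: a tiny CNN stands in for ResNet-18 to keep the sketch self-contained, and all layer sizes, the frame count, and the sentence-level classification head are assumptions.

```python
import torch
import torch.nn as nn

class SignSentenceClassifier(nn.Module):
    def __init__(self, num_classes: int, d_model: int = 64):
        super().__init__()
        # Stand-in spatial backbone (ResNet-18 in the paper)
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, d_model),
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.lstm = nn.LSTM(d_model, d_model // 2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        b, t = frames.shape[:2]                      # frames: (batch, time, 3, H, W)
        feats = self.cnn(frames.flatten(0, 1))       # per-frame spatial features
        feats = feats.view(b, t, -1)                 # restore the temporal axis
        feats = self.encoder(feats)                  # temporal self-attention
        feats, _ = self.lstm(feats)                  # bidirectional temporal context
        return self.head(feats.mean(dim=1))          # sentence-level logits

model = SignSentenceClassifier(num_classes=10)
logits = model(torch.randn(2, 8, 3, 64, 64))
print(tuple(logits.shape))  # (2, 10)
```

Combining self-attention with a bidirectional LSTM lets the model capture both long-range and local temporal structure in a sign sequence before the final classification.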