GCC AI Research

Results for "pixel grounding"

New multimodal model brings pixel-level precision to satellite imagery

MBZUAI

MBZUAI researchers have developed GeoPixel, a new multimodal model for pixel grounding in remote sensing images. GeoPixel associates individual pixels with object categories, enabling detailed image analysis by linking language to objects at the pixel level. The model was trained on a newly constructed dataset and evaluated on an accompanying benchmark, where it outperformed existing systems in precision. Why it matters: This advancement enhances the utility of remote sensing data for critical applications like environmental management and disaster response by providing more granular and accurate image interpretation.
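To make "pixel grounding" concrete: the task asks a model to return a segmentation mask for a phrase, and precision is typically scored as mask overlap against annotations. Below is a minimal NumPy sketch of that scoring step; the `model.ground(...)` call in the comment is a hypothetical interface, since the article does not describe GeoPixel's actual API.

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union between two binary masks, the usual
    way pixel-grounding precision is scored against annotations."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 1.0

# Hypothetical call, for shape only -- the article does not describe
# GeoPixel's interface:
#   pred_mask = model.ground(image, "the container ship near the pier")
pred_mask = np.zeros((64, 64), dtype=bool)
pred_mask[10:30, 10:30] = True
gt_mask = np.zeros((64, 64), dtype=bool)
gt_mask[12:32, 12:32] = True
print(f"mask IoU = {mask_iou(pred_mask, gt_mask):.3f}")  # ~0.681
```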

PG-Video-LLaVA: Pixel Grounding Large Video-Language Models

arXiv

MBZUAI researchers introduce PG-Video-LLaVA, a large multimodal model with pixel-level grounding capabilities for videos that also integrates audio cues for richer understanding. The model uses an off-the-shelf tracker and a grounding module to localize objects in videos based on user prompts. PG-Video-LLaVA is evaluated on video question-answering and grounding benchmarks, with the open-source Vicuna model replacing the proprietary GPT-3.5 in the evaluation pipeline so that results remain reproducible.
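The two-stage pattern the summary describes, a grounding module that localizes a phrase per frame plus a tracker that links detections over time, can be sketched roughly as follows. Everything here is generic scaffolding: `detect` stands in for whatever grounding module is plugged in, and the greedy IoU linking is a stand-in for a real off-the-shelf tracker.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Tuple

Box = Tuple[float, float, float, float]  # x1, y1, x2, y2

def box_iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

@dataclass
class Track:
    """One grounded object: the phrase plus its box in each frame."""
    phrase: str
    boxes: List[Tuple[int, Box]] = field(default_factory=list)

def ground_in_video(frames, phrase: str,
                    detect: Callable[[object, str], List[Box]]) -> Track:
    """Detect the phrase in every frame, then link detections into a
    single track by greedy IoU matching against the previous box.
    A real tracker would also use motion and appearance cues."""
    track, prev = Track(phrase), None
    for i, frame in enumerate(frames):
        candidates = detect(frame, phrase)
        if not candidates:
            continue  # phrase not visible in this frame
        best = (max(candidates, key=lambda b: box_iou(b, prev))
                if prev else candidates[0])
        track.boxes.append((i, best))
        prev = best
    return track

# Dummy per-frame detector (one box drifting right) so the sketch runs:
dummy = lambda frame, phrase: [(10.0 + frame, 10.0, 30.0 + frame, 30.0)]
print(ground_in_video(range(5), "the red car", dummy).boxes)
```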

Advancing computer vision with common sense

MBZUAI

MBZUAI researchers are working to improve computer vision models by incorporating common-sense knowledge. They aim to address failure modes such as the generation of unrealistic human features, for example hands with the wrong number of fingers. By integrating common-sense knowledge, like the fact that humans typically have five fingers per hand, they seek to make deep learning models more reliable. Why it matters: This research could improve the accuracy and trustworthiness of AI-generated content, making it more suitable for real-world applications.
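As a toy illustration only (the article does not say how the researchers actually encode such knowledge), a rule like "a human hand has five fingers" can at minimum be expressed as a post-hoc consistency check over a model's structured output:

```python
# Toy sketch of a common-sense rule as a validator. The hand-keypoint
# format below is invented for illustration; the article does not
# describe the researchers' actual mechanism.

EXPECTED_FINGERS = {"thumb", "index", "middle", "ring", "pinky"}

def check_hand(fingertips: dict) -> list:
    """Flag violations of the rule 'a human hand has five fingers'.
    `fingertips` maps finger names to (x, y) coordinates, as a
    hypothetical keypoint detector might emit."""
    found = set(fingertips)
    problems = []
    if missing := EXPECTED_FINGERS - found:
        problems.append(f"missing fingers: {sorted(missing)}")
    if extra := found - EXPECTED_FINGERS:
        problems.append(f"unexpected fingers: {sorted(extra)}")
    return problems

detected = {"thumb": (5, 40), "index": (18, 8), "middle": (30, 2),
            "ring": (42, 8), "pinky": (54, 20), "extra_1": (66, 30)}
print(check_hand(detected))  # ["unexpected fingers: ['extra_1']"]
```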

The Prism Hypothesis: Harmonizing Semantic and Pixel Representations via Unified Autoencoding

arXiv

The paper introduces the Prism Hypothesis, which posits a correspondence between an encoder's feature spectrum and its functional role, with semantic encoders capturing low-frequency components and pixel encoders retaining high-frequency information. Based on this, the authors propose Unified Autoencoding (UAE), a model that harmonizes semantic structure and pixel details using a frequency-band modulator. Experiments on ImageNet and MS-COCO demonstrate that UAE effectively unifies semantic abstraction and pixel-level fidelity, achieving state-of-the-art performance.
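The hypothesis has a direct signal-processing reading. A minimal NumPy sketch of a hard low/high frequency split (a generic FFT decomposition, not the paper's learned frequency-band modulator) shows the two bands the hypothesis assigns to semantic and pixel encoders:

```python
import numpy as np

def band_split(img: np.ndarray, cutoff: float = 0.1):
    """Split an image into low- and high-frequency components with a
    hard radial mask in the 2-D Fourier domain. This is a generic
    illustration of the frequency bands the Prism Hypothesis refers
    to; the paper's learned modulator is more sophisticated."""
    h, w = img.shape
    f = np.fft.fftshift(np.fft.fft2(img))
    # Normalized distance of each frequency bin from the DC component.
    yy, xx = np.mgrid[:h, :w]
    dist = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    low_mask = dist <= cutoff
    low = np.fft.ifft2(np.fft.ifftshift(f * low_mask)).real
    high = np.fft.ifft2(np.fft.ifftshift(f * ~low_mask)).real
    return low, high

img = np.random.default_rng(0).random((64, 64))
low, high = band_split(img)
print(np.allclose(low + high, img))  # True: the split is lossless
```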

Create and edit images like a smart artist

MBZUAI

Researchers from Carnegie Mellon University and MBZUAI have developed a new method called ConceptAligner for precise image editing using AI. The system decomposes text embeddings into independent building blocks called atomic concepts, allowing users to make targeted tweaks without generating entirely new images. Their approach ensures that each latent factor maps to a specific user-controllable dial, enabling accurate concept-level modifications. Why it matters: This research addresses a major limitation in AI image generation, enhancing its usefulness in industries where precise control is crucial, such as advertising and medicine, and improving the reliability of AI-driven creative tools.
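The "atomic concepts as dials" idea can be illustrated with basic linear algebra: if a text embedding is approximately a weighted sum of independent concept directions, then editing means changing one weight and recomposing. The sketch below uses fabricated orthonormal directions purely for illustration; it is not ConceptAligner's actual decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical orthonormal "atomic concept" directions in an
# 8-dimensional embedding space (say: color, age, lighting).
d = 8
concepts, _ = np.linalg.qr(rng.normal(size=(d, 3)))  # 3 unit columns

weights = np.array([0.7, -0.2, 1.1])   # the user-controllable "dials"
embedding = concepts @ weights          # compose an embedding

# Turn one dial: change only the first coefficient, leave the rest.
new_weights = weights.copy()
new_weights[0] = -0.5
edited = concepts @ new_weights

# The edit moves the embedding only along the chosen concept
# direction, leaving the other concepts untouched.
delta = edited - embedding
print(np.allclose(delta, (new_weights[0] - weights[0]) * concepts[:, 0]))  # True
```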