GCC AI Research


Results for "spatiotemporal graph"

Modeling Complex Object Changes in Satellite Image Time-Series: Approach based on CSP and Spatiotemporal Graph

arXiv ·

This paper introduces a novel approach for monitoring and analyzing the evolution of complex geographic objects in satellite image time-series. The method uses a spatiotemporal graph and constraint satisfaction problems (CSP) to model and analyze object changes. Experiments on real-world satellite images from Saudi Arabian cities demonstrate the effectiveness of the proposed approach.
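Modeling object changes as a constraint satisfaction problem can be sketched in a few lines. The toy below is an assumption about the general idea, not the paper's method: each temporal edge of a spatiotemporal graph links two observations of the same object, a change label is assigned to every edge, and constraints prune inconsistent labelings. The label set, the area-ratio thresholds, and the `consistent`/`solve` helpers are all hypothetical.

```python
from itertools import product

# Toy CSP sketch (assumptions throughout, not the paper's code): each
# temporal edge carries the object's area at time t and t+1; a change
# label must be consistent with the observed area ratio.
LABELS = ("continuation", "growth", "shrinkage")

# (area at t, area at t+1) for three consecutive temporal edges
edges = [(100, 102), (102, 150), (150, 80)]

def consistent(edge, label):
    """Unary constraint: the label must agree with the observed area change."""
    a0, a1 = edge
    ratio = a1 / a0
    if label == "continuation":
        return 0.95 <= ratio <= 1.05
    if label == "growth":
        return ratio > 1.05
    return ratio < 0.95  # shrinkage

def solve(edges):
    """Brute-force CSP: enumerate all labelings, keep the consistent ones."""
    return [
        assignment
        for assignment in product(LABELS, repeat=len(edges))
        if all(consistent(e, lab) for e, lab in zip(edges, assignment))
    ]

print(solve(edges))  # [('continuation', 'growth', 'shrinkage')]
```

A real system would add binary constraints between edges (e.g. a split must follow a growth phase) and use a backtracking solver rather than exhaustive enumeration.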

Temporally Evolving Generalised Networks

MBZUAI ·

Emilio Porcu from Khalifa University presented on temporally evolving generalized networks, graphs whose topology changes over time. The presentation addressed the challenges of constructing semi-metrics and isometric embeddings for such networks. The research combines kernel specification with network-based metrics and is illustrated on a traffic accident dataset. Why it matters: This work advances the application of kernel methods to dynamic graph structures, relevant for modeling evolving relationships in various domains.

Making sense of space and time in video

MBZUAI ·

MBZUAI researchers presented a new approach to video analysis at ICCV in Paris, led by Syed Talal Wasim. The approach builds on still image processing techniques like focal modulation to analyze spatial and temporal information in video separately. It aims to improve temporal aggregation while avoiding the computational complexity of transformers. Why it matters: This research advances video understanding in computer vision by offering a more efficient method for temporal modeling, crucial for applications like activity recognition and video surveillance.

Multimodal Factual Knowledge Acquisition

MBZUAI ·

Manling Li from UIUC proposes a new research direction: Event-Centric Multimodal Knowledge Acquisition, which transforms traditional entity-centric single-modal knowledge into event-centric multi-modal knowledge. The approach addresses challenges in understanding multimodal semantic structures using zero-shot cross-modal transfer (CLIP-Event) and long-horizon temporal dynamics through the Event Graph Model. Li's work aims to enable machines to capture complex timelines and relationships, with applications in timeline generation, meeting summarization, and question answering. Why it matters: This research pioneers a new approach to multimodal information extraction, moving from static entity-based understanding to dynamic, event-centric knowledge acquisition, which is essential for advanced AI applications in understanding complex scenarios.

Better models show how infectious diseases spread

KAUST ·

KAUST researchers developed a new model integrating SIR compartment modeling in time and a point process modeling approach in space-time, also considering age-specific contact patterns. They used a two-step framework to model infectious locations over time for different age groups. The model demonstrated improved predictive accuracy in simulations and a COVID-19 case study in Cali, Colombia, compared to existing models. Why it matters: This model can assist decision-makers in identifying high-risk locations and vulnerable populations for better disease control strategies in the region and globally.
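The temporal backbone of such models is the classic SIR compartment system. The sketch below shows only that standard piece, integrated with forward Euler; the KAUST model's spatiotemporal point process and age-specific contact patterns are omitted, and the parameter values are illustrative assumptions.

```python
# Minimal SIR sketch (forward-Euler integration; parameters are
# illustrative, not from the KAUST study).
def sir(beta=0.3, gamma=0.1, N=1_000_000, I0=10, days=160, dt=1.0):
    """Integrate dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I."""
    S, I, R = N - I0, float(I0), 0.0
    history = []
    for _ in range(int(days / dt)):
        new_inf = beta * S * I / N * dt   # S -> I transitions this step
        new_rec = gamma * I * dt          # I -> R transitions this step
        S, I, R = S - new_inf, I + new_inf - new_rec, R + new_rec
        history.append((S, I, R))
    return history

traj = sir()
peak_I = max(i for _, i, _ in traj)
print(f"peak infections: {peak_I:,.0f}")
```

Extending this to the two-step framework in the article would mean fitting the SIR dynamics in time first, then distributing the inferred infections over space with a point process, per age group.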

FancyVideo: Towards Dynamic and Consistent Video Generation via Cross-frame Textual Guidance

arXiv ·

FancyVideo, a new video generator, introduces a Cross-frame Textual Guidance Module (CTGM) to enhance text-to-video models. CTGM uses a Temporal Information Injector and a Temporal Affinity Refiner to achieve frame-specific textual guidance, improving the model's comprehension of temporal logic. Experiments on the EvalCrafter benchmark demonstrate FancyVideo's state-of-the-art performance in generating dynamic and consistent videos, and the method also supports image-to-video generation.
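Frame-specific textual guidance can be illustrated with a toy cross-attention in which each video frame queries the text tokens independently, so different frames weigh the prompt differently. This is only a sketch of the general mechanism, not FancyVideo's actual CTGM; all shapes and names below are assumptions.

```python
import numpy as np

# Toy frame-specific text conditioning via cross-attention (illustrative
# only; NOT FancyVideo's CTGM implementation).
rng = np.random.default_rng(0)
T, L, d = 4, 6, 8                   # frames, text tokens, embedding dim
frame_q = rng.normal(size=(T, d))   # one query vector per video frame
text_kv = rng.normal(size=(L, d))   # text-token keys/values

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# (T, L): each frame produces its own distribution over text tokens
attn = softmax(frame_q @ text_kv.T / np.sqrt(d))
# (T, d): frame-specific textual guidance vectors
guided = attn @ text_kv
print(attn.shape, guided.shape)  # (4, 6) (4, 8)
```

The point of the toy is that the attention matrix varies across the frame axis, so the text steers each frame individually instead of applying one pooled prompt embedding to the whole clip.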