GCC AI Research


Results for "surgical instrument segmentation"

RP-SAM2: Refining Point Prompts for Stable Surgical Instrument Segmentation

arXiv

Researchers from MBZUAI introduced RP-SAM2, a method to improve surgical instrument segmentation by refining point prompts for more stable results. RP-SAM2 uses a novel shift block and compound loss function to reduce sensitivity to point prompt placement, improving segmentation accuracy in data-constrained settings. Experiments on the Cataract1k and CaDIS datasets show that RP-SAM2 enhances segmentation accuracy and reduces variance compared to SAM2, with code available on GitHub.
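The paper's shift block and compound loss are not reproduced here, but the core problem it targets, sensitivity of a point-promptable segmenter to where the prompt lands, can be quantified with a simple jitter test. The sketch below is a hypothetical harness, not RP-SAM2 itself: `segment_fn` stands in for any model (e.g. SAM2) that maps a point prompt to a segmentation quality score such as Dice.

```python
import random
import statistics

def jitter(point, radius, rng):
    """Shift an (x, y) point prompt by up to `radius` pixels per axis."""
    x, y = point
    return (x + rng.uniform(-radius, radius),
            y + rng.uniform(-radius, radius))

def prompt_stability(segment_fn, point, n_trials=20, radius=5.0, seed=0):
    """Score a point-promptable segmenter under jittered prompts and
    report the mean and standard deviation across trials; a lower
    standard deviation means the model is more stable to prompt placement."""
    rng = random.Random(seed)
    scores = [segment_fn(jitter(point, radius, rng)) for _ in range(n_trials)]
    return statistics.mean(scores), statistics.stdev(scores)

# Toy stand-in segmenter: quality decays as the prompt drifts
# away from the instrument center.
center = (120.0, 80.0)
def toy_segment(p):
    dist = ((p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2) ** 0.5
    return max(0.0, 1.0 - 0.01 * dist)

mean_score, score_std = prompt_stability(toy_segment, center)
```

In this framing, RP-SAM2's reported gain over SAM2 corresponds to a higher mean and a lower standard deviation under the same jitter.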

AI-driven surgical skill optimization

MBZUAI

Researchers at Johns Hopkins are developing AI-driven video analysis tools to provide surgeons with unbiased skill assessments and personalized feedback. The system segments surgical procedures, detects instruments, and assesses skill in cataract surgery. Dr. Shameema Sikder is leading the development of technologies to improve ophthalmic surgical care standards internationally. Why it matters: AI-based surgical skill assessment could standardize training and improve patient outcomes in the region and globally.

Interpretable and synergistic deep learning for visual explanation and statistical estimations of segmentation of disease features from medical images

arXiv

The study compares deep learning models trained via transfer learning from ImageNet (TII-models) against those trained solely on medical images (LMI-models) for disease segmentation. Results show that combining outputs from both model types can improve segmentation performance by up to 10% in certain scenarios. A repository of models, code, and over 10,000 medical images is available on GitHub to facilitate further research.
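The study does not publish its fusion rule in this summary, but the simplest way to combine outputs from a TII-model and an LMI-model is a weighted average of their per-pixel probability maps followed by a threshold. The sketch below illustrates that idea; the function name, the equal weighting, and the 0.5 threshold are illustrative assumptions, not the paper's method.

```python
def combine_masks(prob_tii, prob_lmi, weight=0.5, threshold=0.5):
    """Fuse per-pixel probabilities from a transfer-learned (TII) model and
    a medical-images-only (LMI) model, then threshold into a binary mask."""
    fused = [
        [weight * a + (1.0 - weight) * b for a, b in zip(row_a, row_b)]
        for row_a, row_b in zip(prob_tii, prob_lmi)
    ]
    return [[1 if p >= threshold else 0 for p in row] for row in fused]

# Tiny 2x2 example: the models agree on one foreground pixel.
tii = [[0.9, 0.2], [0.6, 0.1]]
lmi = [[0.7, 0.4], [0.3, 0.2]]
mask = combine_masks(tii, lmi)
```

Averaging tends to suppress pixels only one model is confident about, which is one plausible route to the complementary gains the study reports.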

Image- and AI-guided robotics for minimally invasive surgery

MBZUAI

Researchers have developed robotic path-planning and control algorithms for minimally invasive surgery (MIS) that steer flexible needles, incorporating teleoperation and haptic feedback. An AI algorithm was designed to predict target motion due to respiratory movement, improving needle placement accuracy. GANs were used to generate synthetic images visualizing organ and tumor motion. Why it matters: This research shows how AI and robotics can enhance precision and adaptability in MIS, potentially reducing patient trauma and improving recovery times in the region and beyond.

Enhancing Pothole Detection and Characterization: Integrated Segmentation and Depth Estimation in Road Anomaly Systems

arXiv

Researchers at KFUPM have developed a system for pothole detection and characterization using a YOLOv8-seg model and depth estimation. A new dataset of images and depth maps was collected from roads in Al-Khobar, Saudi Arabia. The system combines segmentation and depth data to provide a more comprehensive pothole characterization, enhancing autonomous vehicle navigation and road maintenance.
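The KFUPM system's exact fusion step is not detailed in this summary, but combining a segmentation mask with a depth map to characterize a pothole reduces to masking the depth values and measuring them against the road plane. The sketch below is a minimal illustration under assumed units; the function and its parameters (`road_depth`, `pixel_area_m2`) are hypothetical, not the paper's API.

```python
def characterize_pothole(mask, depth, road_depth, pixel_area_m2):
    """Given a binary segmentation mask and a per-pixel depth map (metres),
    estimate pothole surface area and maximum depth below road level.
    Pothole pixels are farther from the camera than the road plane."""
    depths = [d - road_depth
              for m_row, d_row in zip(mask, depth)
              for m, d in zip(m_row, d_row) if m]
    area = len(depths) * pixel_area_m2      # masked pixels x area per pixel
    max_depth = max(depths, default=0.0)    # deepest point below road level
    return area, max_depth

# Toy 2x2 frame: three pothole pixels at 1 cm^2 per pixel.
mask = [[1, 1], [1, 0]]
depth = [[1.05, 1.08], [1.06, 1.00]]
area, max_depth = characterize_pothole(mask, depth,
                                       road_depth=1.00, pixel_area_m2=0.0001)
```

Reporting area and depth together is what lets a road-maintenance or navigation system rank potholes by severity rather than by image footprint alone.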

Deep Surface Meshes

MBZUAI

Pascal Fua from EPFL presented an approach to implementing convolutional neural nets that output complex 3D surface meshes. The method overcomes limitations in converting implicit representations to explicit surface representations. Applications include single-view reconstruction, physically driven shape optimization, and biomedical image segmentation. Why it matters: This research advances geometric deep learning by enabling end-to-end trainable models for 3D surface mesh generation, with potential impact on various applications in computer vision and biomedical imaging in the region.

Hybrid Deep Feature Extraction and ML for Construction and Demolition Debris Classification

arXiv

This paper introduces a hybrid deep learning and machine learning pipeline for classifying construction and demolition waste. A dataset of 1,800 images from UAE construction sites was created, and deep features were extracted using a pre-trained Xception network. The combination of Xception features with machine learning classifiers achieved up to 99.5% accuracy, demonstrating state-of-the-art performance for debris identification.
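The pipeline pattern described here, frozen deep features fed to a lightweight classifier, can be sketched without the Xception network itself. Below, tiny 2-D vectors stand in for Xception's 2048-D embeddings, and a nearest-centroid rule stands in for the paper's ML classifiers; both substitutions, along with the class names, are illustrative assumptions.

```python
import math

def fit_centroids(features, labels):
    """Compute the mean feature vector for each class."""
    sums, counts = {}, {}
    for vec, label in zip(features, labels):
        if label not in sums:
            sums[label] = list(vec)
            counts[label] = 1
        else:
            sums[label] = [s + v for s, v in zip(sums[label], vec)]
            counts[label] += 1
    return {label: [s / counts[label] for s in total]
            for label, total in sums.items()}

def predict(centroids, query):
    """Assign the class whose centroid is nearest in feature space."""
    return min(centroids, key=lambda label: math.dist(query, centroids[label]))

# Toy "deep features" for two debris classes (hypothetical data).
feats = [[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 5.0]]
labels = ["concrete", "wood"][0:1] * 2 + ["wood"] * 2
labels = ["concrete", "concrete", "wood", "wood"]
centroids = fit_centroids(feats, labels)
pred = predict(centroids, [5.0, 6.0])
```

Because the heavy network only runs once per image to produce features, swapping or retraining the downstream classifier is cheap, which is the practical appeal of this hybrid design.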