Researchers at Johns Hopkins are developing AI-driven video analysis tools to provide surgeons with unbiased skill assessments and personalized feedback. The system segments surgical procedures, detects instruments, and assesses skill in cataract surgery. Dr. Shameema Sikder is leading the development of technologies to improve ophthalmic surgical care standards internationally. Why it matters: AI-based surgical skill assessment could standardize training and improve patient outcomes in the region and globally.
MBZUAI researchers developed a new deep learning method for rapid and accurate estimation of clinical measurements from echocardiograms. The method focuses on improving the measurement of the left ventricular ejection fraction, a key indicator of heart health. Their deep learning approach improves upon previous methods through a better-organized data representation, enhancing both performance and transferability. Why it matters: The AI-driven solution can potentially reduce analysis time for cardiologists, improve patient care, and be particularly beneficial in regions with limited healthcare resources.
Researchers from MBZUAI have developed EchoCoTr, a novel spatiotemporal deep learning method for estimating left ventricular ejection fraction (LVEF) from echocardiograms. EchoCoTr combines CNNs and vision transformers to overcome the limitations of each when applied to medical video data. The method achieves state-of-the-art results on the EchoNet-Dynamic dataset, demonstrating improved accuracy compared to existing approaches, with code available on GitHub.
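The general pattern behind such hybrid models can be sketched as follows. This is a minimal illustrative PyTorch sketch, not EchoCoTr's actual architecture: a small CNN encodes each frame, a transformer encoder models the temporal order of the frame embeddings, and a linear head regresses a single scalar such as an ejection-fraction estimate. All layer sizes and names here are assumptions for illustration.

```python
import torch
import torch.nn as nn

class HybridVideoRegressor(nn.Module):
    """Illustrative CNN + transformer video regressor (not the published model).

    A per-frame CNN produces one embedding per frame; a transformer encoder
    attends across the frame sequence; a linear head outputs one scalar.
    """
    def __init__(self, embed_dim=64, n_frames=16, n_heads=4, n_layers=2):
        super().__init__()
        self.cnn = nn.Sequential(                       # tiny per-frame encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )
        self.pos = nn.Parameter(torch.zeros(1, n_frames, embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads, batch_first=True)
        self.temporal = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(embed_dim, 1)

    def forward(self, clips):                           # clips: (B, T, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)  # (B, T, D)
        feats = self.temporal(feats + self.pos)               # temporal attention
        return self.head(feats.mean(dim=1)).squeeze(-1)       # (B,) predictions

model = HybridVideoRegressor()
preds = model(torch.randn(2, 16, 1, 64, 64))  # 2 clips of 16 grayscale frames
```

The division of labor is the point: the CNN handles spatial structure within a frame cheaply, while the transformer captures long-range temporal dependencies across the cardiac cycle that a CNN's local receptive field handles poorly.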
This paper introduces a self-supervised contrastive learning method for segmenting the left ventricle in echocardiography images when limited labeled data is available. The approach uses contrastive pretraining to improve the performance of UNet and DeepLabV3 segmentation networks. Experiments on the EchoNet-Dynamic dataset show the method achieves a Dice score of 0.9252, outperforming existing approaches, with code available on GitHub.
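The Dice score reported above is the standard overlap metric for segmentation masks: twice the intersection of the predicted and ground-truth regions, divided by the sum of their sizes. A minimal NumPy sketch (the function name and epsilon are illustrative, not from the paper):

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((4, 4), dtype=int); a[:2] = 1   # top two rows (8 pixels)
b = np.zeros((4, 4), dtype=int); b[1:3] = 1  # middle two rows (8 pixels)
score = dice_score(a, b)                     # 4 shared pixels → 2*4/(8+8) = 0.5
```

A score of 1.0 means perfect overlap, so the reported 0.9252 indicates the predicted ventricle boundary agrees closely with the expert annotation.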
Self-powered dental braces (KAUST): the source page contains only the title, a copyright notice, and a link to KAUST, so no summary is available for this item.
The paper introduces MedPromptX, a clinical decision support system using multimodal large language models (MLLMs), few-shot prompting (FP), and visual grounding (VG) for chest X-ray diagnosis, integrating imagery with EHR data. MedPromptX refines few-shot data dynamically for real-time adjustment to new patient scenarios and narrows the search area in X-ray images. The study introduces MedPromptX-VQA, a new visual question answering dataset, and demonstrates state-of-the-art performance with an 11% improvement in F1-score compared to baselines.
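The dynamic few-shot refinement described above can be illustrated with a simple retrieval-then-format loop: rank prior cases by embedding similarity to the new patient, keep the top k, and lay them out as exemplars ahead of the new question. This is a hedged sketch of the general technique, not MedPromptX's actual pipeline; the field names, prompt template, and cosine retrieval are all assumptions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def build_few_shot_prompt(query_emb, query_findings, cases, k=2):
    """Select the k prior cases most similar to the query embedding and
    format them as few-shot exemplars ahead of the new case."""
    ranked = sorted(cases, key=lambda c: cosine(query_emb, c["emb"]), reverse=True)
    lines = [f"Findings: {c['findings']}\nDiagnosis: {c['diagnosis']}"
             for c in ranked[:k]]
    lines.append(f"Findings: {query_findings}\nDiagnosis:")  # model completes this
    return "\n\n".join(lines)

cases = [
    {"emb": [1.0, 0.0], "findings": "Bilateral opacities", "diagnosis": "Edema"},
    {"emb": [0.0, 1.0], "findings": "Clear lung fields", "diagnosis": "Normal"},
]
prompt = build_few_shot_prompt([0.9, 0.1], "New patient findings", cases, k=1)
```

Re-ranking the exemplar pool per query is what lets the prompt adapt to each new patient instead of relying on a fixed set of examples.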
Researchers have developed robotic path-planning and control algorithms for minimally invasive surgery (MIS) that steer flexible needles, incorporating teleoperation and haptic feedback. An AI algorithm was designed to predict target motion due to respiratory movement, improving needle placement accuracy. GANs were used to generate synthetic images visualizing organ and tumor motion. Why it matters: This research demonstrates the potential of AI and robotics to enhance precision and adaptability in MIS, potentially reducing patient trauma and improving recovery times in the region and beyond.
This paper introduces Pulmonary Embolism Detection using Contrastive Learning (PECon), a supervised contrastive pretraining strategy using both CT scans and EHR data to improve feature alignment between modalities for better PE diagnosis. PECon pulls sample features of the same class together while pushing away features of other classes. The approach achieves state-of-the-art results on the RadFusion dataset, with an F1-score of 0.913 and AUROC of 0.943.
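The pull-together/push-apart objective described above is the supervised contrastive loss: for each anchor sample, all other samples sharing its label act as positives, and everything else in the batch acts as negatives. A NumPy sketch of the standard formulation (illustrative, not PECon's exact implementation; the temperature value is an assumption):

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss over a batch of feature vectors.

    For each anchor i, positives are the other samples with the same label;
    the softmax denominator runs over all samples except the anchor itself.
    """
    labels = np.asarray(labels)
    z = features / np.linalg.norm(features, axis=1, keepdims=True)  # L2-normalize
    sim = z @ z.T / temperature                                     # similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)
    logits = sim - sim.max(axis=1, keepdims=True)       # numeric stability
    exp = np.exp(logits) * not_self                     # exclude the anchor
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))
    positives = (labels[:, None] == labels[None, :]) & not_self
    per_anchor = -(log_prob * positives).sum(axis=1) / np.maximum(
        positives.sum(axis=1), 1)
    return per_anchor.mean()

# A batch whose same-class features align yields a lower loss than one
# whose same-class features point in different directions.
labels = np.array([0, 0, 1, 1])
aligned = np.array([[1.0, 0.0], [1.0, 0.05], [0.0, 1.0], [0.05, 1.0]])
scattered = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.05], [0.05, 1.0]])
```

Minimizing this loss is what aligns the CT-scan and EHR feature embeddings by class before the downstream PE classifier is trained on the fused representation.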