The paper introduces MedNNS, a neural network search framework for medical imaging that jointly addresses architecture selection and weight initialization. MedNNS builds a meta-space that encodes datasets and models according to model performance, using a supernetwork-based approach that expands the model zoo 51-fold. The framework combines a rank loss and a Fréchet Inception Distance (FID) loss to capture inter-model and inter-dataset relationships, improving alignment in the meta-space; the resulting models outperform ImageNet-pretrained deep learning models and state-of-the-art NAS methods.
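The rank loss mentioned above can be illustrated with a minimal pairwise hinge formulation: if one model truly outperforms another on a dataset, its predicted score should exceed the other's by a margin. This is a generic sketch of that idea, not MedNNS's actual loss; the function name and margin value are assumptions.

```python
import numpy as np

def pairwise_rank_loss(scores, perfs, margin=0.1):
    """Hinge-style pairwise rank loss (illustrative sketch).

    scores: predicted scores for each model (e.g. derived from
            meta-space distances).
    perfs:  observed performance (e.g. accuracy) for each model.
    Penalizes every pair where the truly better model's predicted
    score does not beat the worse model's score by `margin`."""
    loss, pairs = 0.0, 0
    for i in range(len(perfs)):
        for j in range(len(perfs)):
            if perfs[i] > perfs[j]:  # model i should rank above model j
                loss += max(0.0, margin - (scores[i] - scores[j]))
                pairs += 1
    return loss / max(pairs, 1)

# Predictions in the correct order with gaps wider than the margin
# incur zero loss.
print(pairwise_rank_loss(np.array([0.9, 0.5, 0.1]),
                         np.array([0.8, 0.6, 0.4])))  # → 0.0
```

A violated ordering (a worse model scored higher) contributes a positive penalty proportional to how badly the pair is misranked.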
MBZUAI researchers are introducing MedNNS, a system to be presented at MICCAI 2025 that recommends the best AI architecture and weight initialization for a given medical imaging task. MedNNS replaces the inefficient trial-and-error of building medical imaging AI by reframing model selection as a retrieval problem. The system employs a Once-For-All ResNet-like supernetwork and a learned meta-space of 720k model-dataset pairs, using dataset embeddings to predict which models will perform best. Why it matters: by automating model selection, MedNNS promises to significantly reduce the time and resources required to develop effective AI solutions for healthcare, particularly in medical imaging.
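"Model selection as retrieval" can be sketched in a few lines: embed the new dataset, find its nearest neighbor among stored dataset embeddings, and return the model that performed best there. The embeddings, model names, and nearest-neighbor rule below are toy assumptions, not MedNNS's actual meta-space.

```python
import numpy as np

# Toy meta-space: each row is a stored dataset's embedding, and for
# each stored dataset we record the zoo model that performed best on
# it (names are hypothetical).
dataset_embs = np.array([[0.9, 0.1],
                         [0.2, 0.8],
                         [0.5, 0.5]])
best_model_for = ["resnet_d18_w1.0", "resnet_d34_w0.5", "resnet_d50_w1.0"]

def recommend(query_emb):
    """Retrieve the nearest stored dataset in the meta-space and
    return its best-performing model."""
    dists = np.linalg.norm(dataset_embs - query_emb, axis=1)
    return best_model_for[int(np.argmin(dists))]

# A query embedding close to the first stored dataset retrieves its
# best model.
print(recommend(np.array([0.85, 0.15])))  # → resnet_d18_w1.0
```

In practice the retrieved model's weights (drawn from the supernetwork) would also serve as the initialization, which is what makes retrieval cheaper than training candidates from scratch.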
MBZUAI doctoral student Mai A. Shaaban and colleagues developed MedPromptX, a system that analyzes chest X-rays and patient data to aid lung disease diagnosis. MedPromptX combines multimodal large language models with visual grounding and few-shot prompting, supported by a new dataset of 6,000 patient records (MedPromptX-VQA) derived from MIMIC-IV and MIMIC-CXR. The system addresses incomplete electronic health records by leveraging the knowledge embedded in large language models to interpret lab results. Why it matters: this research advances AI-driven medical diagnostics by integrating diverse data sources and filling data gaps, potentially leading to quicker and more accurate diagnoses.
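The few-shot prompting idea can be sketched as simple prompt assembly: a handful of worked examples pairing imaging findings with EHR snippets and diagnoses, followed by the query case left open for the model to complete. The field names and format below are illustrative assumptions, not MedPromptX's actual prompt template.

```python
def build_prompt(examples, query):
    """Assemble a few-shot prompt pairing imaging findings with lab
    values; field names are hypothetical, not MedPromptX's schema."""
    parts = []
    for ex in examples:
        parts.append(f"Findings: {ex['findings']}\n"
                     f"Labs: {ex['labs']}\n"
                     f"Diagnosis: {ex['diagnosis']}")
    # The query case ends at "Diagnosis:" so the LLM completes it.
    parts.append(f"Findings: {query['findings']}\n"
                 f"Labs: {query['labs']}\n"
                 f"Diagnosis:")
    return "\n\n".join(parts)

shots = [{"findings": "bilateral infiltrates",
          "labs": "WBC 14.2",
          "diagnosis": "pneumonia"}]
print(build_prompt(shots, {"findings": "clear lung fields",
                           "labs": "WBC 7.1"}))
```

The completed examples give the model in-context demonstrations of how lab values and imaging findings combine, which is how few-shot prompting compensates for gaps in any single record.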