GCC AI Research

Web2Code: A new dataset to enhance multimodal LLM performance presented at NeurIPS

MBZUAI · Significant research

Summary

MBZUAI researchers introduced Web2Code, a new dataset suite, at NeurIPS to enhance multimodal LLM performance in webpage understanding and HTML generation. The suite includes a fine-tuning dataset and two benchmark datasets. Instruction tuning with Web2Code improved performance on these specialized tasks without degrading general capabilities. Why it matters: This contribution addresses a key limitation in current multimodal LLMs, potentially boosting productivity in web design and development by providing targeted training data.
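
For readers unfamiliar with this kind of instruction tuning, one training example in a webpage-to-code dataset can be thought of as a rendered screenshot paired with an instruction and an HTML target. The sketch below is purely illustrative; the field names and file path are assumptions, not Web2Code's released schema.

```python
# Illustrative example of a single webpage-to-HTML instruction-tuning record.
# Field names and the screenshot path are hypothetical, not Web2Code's format.
example = {
    "image": "screenshots/page_00042.png",  # rendered screenshot of a webpage
    "instruction": "Write the HTML code that reproduces the webpage shown in the image.",
    "response": "<!DOCTYPE html><html><head><title>...</title></head>"
                "<body><h1>...</h1></body></html>",
}
```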

Keywords

MBZUAI · Web2Code · multimodal LLM · HTML · NeurIPS


Related

Web2Code: A Large-scale Webpage-to-Code Dataset and Evaluation Framework for Multimodal LLMs

arXiv

MBZUAI researchers introduce Web2Code, a new large-scale dataset and evaluation framework for training and benchmarking multimodal LLMs on webpage understanding and HTML code generation. The dataset includes webpage images, HTML code, and QA pairs about webpage content. Experiments demonstrate the dataset's utility in webpage understanding, code generation, and general visual domain tasks, with code and data available on GitHub.

Cultural awareness in AI: New visual question answering benchmark shared in oral presentation at NeurIPS

MBZUAI

MBZUAI researchers, in collaboration with over 70 researchers, have created the Culturally diverse Visual Question Answering (CVQA) benchmark to evaluate cultural understanding in multimodal LLMs. The CVQA dataset includes over 10,000 questions in 31 languages and 13 scripts, testing models on images of local dishes, personalities, and monuments. Testing of several multimodal LLMs on the CVQA benchmark revealed significant challenges, even for top models. Why it matters: This benchmark highlights the need for AI models to better understand diverse cultures, promoting fairness and relevance across different languages and regions.

Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models

arXiv

Video-ChatGPT is a new multimodal model that combines a video-adapted visual encoder with a large language model (LLM) to enable detailed video understanding and conversation. The authors introduce a new dataset of 100,000 video-instruction pairs for training the model. They also develop a quantitative evaluation framework for video-based dialogue models.
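
The general pattern such models follow is to pool frame-level features from a visual encoder and project them into the language model's embedding space, where they are prepended to the text tokens. The snippet below is a minimal PyTorch sketch of that pattern with illustrative dimensions; it is not the released Video-ChatGPT implementation.

```python
# Minimal sketch of the "visual encoder + projection + LLM" pattern used by
# video-language models. Module choices and dimensions are illustrative
# assumptions, not the Video-ChatGPT code.
import torch
import torch.nn as nn

class VideoToLLMAdapter(nn.Module):
    def __init__(self, vis_dim: int = 1024, llm_dim: int = 4096):
        super().__init__()
        # Linear layer mapping visual features into the LLM's token embedding space.
        self.proj = nn.Linear(vis_dim, llm_dim)

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (num_frames, num_patches, vis_dim) from a frozen image encoder.
        temporal = frame_feats.mean(dim=0)  # average over frames  -> (num_patches, vis_dim)
        spatial = frame_feats.mean(dim=1)   # average over patches -> (num_frames, vis_dim)
        video_tokens = torch.cat([temporal, spatial], dim=0)
        return self.proj(video_tokens)      # (num_patches + num_frames, llm_dim)

feats = torch.randn(16, 256, 1024)          # 16 frames, 256 patches per frame
print(VideoToLLMAdapter()(feats).shape)     # torch.Size([272, 4096])
```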

Solving complex problems with LLMs: A new prompting strategy presented at NeurIPS

MBZUAI

Researchers from MBZUAI and King's College London have developed a new prompting strategy called self-guided exploration to improve LLM performance on combinatorial problems. The method was tested on complex challenges like the traveling salesman problem. The findings will be presented at the 38th Annual Conference on Neural Information Processing Systems (NeurIPS) in Vancouver. Why it matters: This research could lead to practical applications of LLMs in industries like logistics, planning, and scheduling by offering new approaches to computationally complex problems.
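
As a concrete point of reference for the kind of problem involved, the snippet below scores a candidate travelling-salesman tour. It is an illustrative harness of the sort one might use to check tours proposed by an LLM during iterative prompting, not the self-guided exploration method from the paper.

```python
# Score a candidate TSP tour proposed by an LLM: validate that it visits
# every city exactly once and compute its total length. Purely illustrative;
# not the authors' self-guided exploration procedure.
import math

def tour_length(cities, tour):
    """Total length of a closed tour; `cities` maps city name -> (x, y)."""
    assert sorted(tour) == sorted(cities), "tour must visit every city exactly once"
    dist = lambda a, b: math.dist(cities[a], cities[b])
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

cities = {"A": (0, 0), "B": (0, 3), "C": (4, 3), "D": (4, 0)}
print(tour_length(cities, ["A", "B", "C", "D"]))  # 14.0
```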