WebMMU: Multimodal and Multilingual Evaluation of Agent Reasoning on Web
Welcome to AI Research Bites. This series of short, informative talks showcases cutting-edge research from the ServiceNow AI Research team. AI Research Bites is open to all, especially those interested in keeping up with the fast-paced AI research community.
Modern web agents can read, but few can see holistically. Despite rapid progress in multimodal LLMs, today's models falter when asked to visually ground UI elements, reason over DOM structures, or edit complex layouts across diverse languages and domains. WebMMU is our attempt to course-correct: a benchmark born from the belief that data, not just models, is the real bottleneck. With real-world website screenshots and three challenging tasks (VQA, sketch-to-UI, and code editing), WebMMU stresses the visual, structural, and multilingual reasoning skills agents must master to operate robustly on the real web. In this talk, Sai Rajeswar Mudumba will dive into what current models get wrong, how our dataset is designed to test (not trick) them, and why better evals are essential for building general-purpose digital agents that can truly see, reason, and build.
Paper: https://webmmu-paper.github.io/assets/WebMMU_2025.pdf
Dataset: https://huggingface.co/collections/mair-lab/webmmu-686777551dc5d822264e36f2
Website: https://webmmu-paper.github.io/
ServiceNow AI Research team: https://www.servicenow.com/research/