Humans can rapidly learn the meaning of words without explicit feedback: through a process known as fast mapping, children can acquire a new word after only a few passive exposures. This word learning capability is believed to be the most fundamental building block of multimodal understanding and reasoning. Despite recent progress in multimodal learning, a systematic evaluation of human-like word learning in machines is still missing. To fill this gap, we introduce the MEWL benchmark to assess how machines learn word meaning in grounded visual scenes. MEWL covers humans' core cognitive toolkits in word learning: cross-situational reasoning, bootstrapping, and pragmatic learning. Specifically, MEWL is a few-shot benchmark suite of nine tasks that probe distinct word learning capabilities. These tasks are carefully designed to align with children's core abilities in word learning and to echo theories from the developmental literature. By evaluating multimodal and unimodal agents and comparing their performance with that of humans, we observe a sharp divergence between human and machine word learning. We further discuss this gap and call for human-like few-shot word learning in machines.
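To make the few-shot evaluation setting concrete, the sketch below shows one way a MEWL-style episode might be scored: an agent sees a handful of context panels pairing a grounded scene with a novel word, then must pick the right word for a query scene from a set of candidates. All data structures and names here (`Panel`, `Episode`, the `dax`/`wug` vocabulary) are illustrative assumptions, not the benchmark's actual data format; a uniform-random baseline stands in for a real multimodal agent.

```python
import random
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical structures for a MEWL-style few-shot episode.
# The real benchmark's schema may differ.

@dataclass
class Panel:
    scene: str       # placeholder identifier for a grounded visual scene
    utterance: str   # novel word(s) paired with that scene

@dataclass
class Episode:
    context: List[Panel]   # few-shot passive exposures
    query: Panel           # held-out scene to be named
    choices: List[str]     # candidate words
    answer: int            # index of the correct choice

def evaluate(agent: Callable[[Episode], int], episodes: List[Episode]) -> float:
    """Fraction of episodes in which the agent selects the correct word."""
    correct = sum(agent(ep) == ep.answer for ep in episodes)
    return correct / len(episodes)

def random_agent(ep: Episode) -> int:
    # Chance-level baseline: guess uniformly among the candidates.
    return random.randrange(len(ep.choices))

# Tiny synthetic episode: two exposures to a novel word, then a query.
ep = Episode(
    context=[Panel("scene_red_cube", "dax"), Panel("scene_red_ball", "dax")],
    query=Panel("scene_red_cone", ""),
    choices=["dax", "wug", "blick"],
    answer=0,
)
random.seed(0)
acc = evaluate(random_agent, [ep] * 300)
print(f"chance-level accuracy ~ {acc:.2f}")
```

A real agent would replace `random_agent` with a model that reads the context panels and reasons cross-situationally; accuracy well above the chance level (here roughly one third) is what would indicate genuine few-shot word learning.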