What can babies teach us about contrastive methods?

Abstract

Contrastive methods in machine learning have emerged as a highly successful strategy for pretraining representations from unsupervised data alone. Unfortunately, their success is not yet fully understood theoretically. In this work, we draw connections between contrastive methods and infant learning. Since most infant learning takes place without supervision, insights into how infants form representations have the potential to shed new light on the success of contrastive methods. We present a set of eye-tracking experiments that investigate the impact of different learning conditions on infant category learning. The results reveal that comparison plays a key role in supporting the emergence of robust category representations, particularly when it involves dissimilar items. This closely resembles the objectives used in contrastive methods. Thus, infant learning can provide an alternative explanation for the success of contrastive methods.
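For readers unfamiliar with the objectives the abstract refers to, below is a minimal sketch of an InfoNCE-style contrastive loss in PyTorch. It is an illustration of the general idea (pull a similar "positive" item toward an anchor while pushing dissimilar "negatives" away), not the specific method or experimental setup studied in the paper; all function and variable names are hypothetical.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """Sketch of an InfoNCE-style contrastive objective.

    anchor:    (d,)   embedding of the query item
    positive:  (d,)   embedding of a similar item (pulled closer)
    negatives: (n, d) embeddings of dissimilar items (pushed away)
    """
    # Cosine similarities between the anchor and each candidate.
    anchor = F.normalize(anchor, dim=0)
    pos_sim = torch.dot(anchor, F.normalize(positive, dim=0)) / temperature
    neg_sim = F.normalize(negatives, dim=1) @ anchor / temperature

    # Cross-entropy over [positive, negatives]: the loss falls as the positive
    # is rated more similar to the anchor than any of the dissimilar items.
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim])
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))

# Toy usage with random 64-dimensional embeddings and 16 dissimilar items.
loss = info_nce_loss(torch.randn(64), torch.randn(64), torch.randn(16, 64))
```

The key parallel to the findings above is that the loss depends on contrasting the anchor against dissimilar items: without negatives, the objective provides no pressure to form discriminative category representations.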

Publication
NeurIPS
Jelena Sucevic