IJN Colloquium

The Vector Grounding Problem

Practical information
18 November 2022, 11am
Place: ENS, conference room Pavillon Jardin, 29 rue d'Ulm, 75005 Paris


The impressive performance of current artificial language models in complex linguistic tasks has generated considerable debate about how to understand their abilities. Are their surprisingly compelling linguistic outputs mere statistical parroting of the huge trove of text they were trained on, lacking grounding in the real world, and thus devoid of intrinsic meaning, unable to be about the world?

In this paper, our aim is twofold. First, we will distinguish different senses of representational grounding: referential, sensorimotor, communicative, and epistemic. We will argue that the referential sense of grounding is the central one for assessing whether language models are more than mere stochastic parrots. Second, we will argue that, in light of their architecture and training, artificial language models could satisfy the minimal conditions for the relevant notion of representational grounding. Language models are typically trained on datasets whose implicit structure essentially depends on causal interactions between humans and the world, and they have to learn to represent and exploit that structure to produce their outputs. We will show that our best theories of representational content can make room for representational grounding in language models, and even more so in visuo-linguistic models.

2022-2023 Nicod Philosophy Colloquium

Slots for external participants are limited. If you would like to attend a session, please email Denis Buehler about one week before that session.