Each human language contains an unbounded number of different sentences. How can something so large and complex possibly be learnt? Over the past two decades we have learnt how to define probability distributions over grammars and the linguistic structures they generate, making it possible to build statistical models that learn the regularities of complex linguistic structures. Bayesian approaches are particularly attractive because they can exploit "prior" (e.g., innate) knowledge as well as learn statistical generalisations from the input. Here we use computational models to investigate "synergies" in language acquisition, where a "joint model" can solve "chicken-and-egg" problems that are challenging for conventional "staged learning" models, in which one component must be learnt before the next.
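One classic instance of such a chicken-and-egg problem is word segmentation: a learner needs a lexicon to segment continuous speech into words, but needs segmented words to build the lexicon. The toy sketch below (an illustrative assumption, not the models discussed above) jointly samples word boundaries and lexicon counts with a simplified Gibbs sampler over a Dirichlet-multinomial unigram lexicon; the base distribution and the independence approximation in the split probability are deliberate simplifications.

```python
import random
from collections import Counter

def segment(utterances, alpha=1.0, iters=100, seed=0):
    """Toy joint inference of word boundaries and a lexicon.

    Word counts come from the current segmentation, and each boundary
    decision uses the current word counts -- the 'chicken-and-egg'
    coupling that a joint Bayesian model resolves by sampling both."""
    rng = random.Random(seed)
    # bounds[u][i] == True means a word break after character i of utterance u
    bounds = [[rng.random() < 0.5 for _ in range(len(u) - 1)] for u in utterances]

    def words_of(u, b):
        out, start = [], 0
        for i, cut in enumerate(b):
            if cut:
                out.append(u[start:i + 1])
                start = i + 1
        out.append(u[start:])
        return out

    counts = Counter(w for u, b in zip(utterances, bounds) for w in words_of(u, b))
    total = sum(counts.values())

    def pred(w):
        # Dirichlet-multinomial predictive with a crude letter-string base
        # distribution (an assumption chosen only to penalise long words)
        base = (1.0 / 26) ** len(w)
        return (counts[w] + alpha * base) / (total + alpha)

    for _ in range(iters):
        for u, b in zip(utterances, bounds):
            for i in range(len(b)):
                # find the word span containing position i, given other boundaries
                l = i
                while l > 0 and not b[l - 1]:
                    l -= 1
                r = i + 1
                while r < len(b) and not b[r]:
                    r += 1
                left, right, whole = u[l:i + 1], u[i + 1:r + 1], u[l:r + 1]
                # remove the affected token(s) before resampling this boundary
                if b[i]:
                    counts[left] -= 1
                    counts[right] -= 1
                    total -= 2
                else:
                    counts[whole] -= 1
                    total -= 1
                # simplified split probability (treats the two words as independent)
                p_split = pred(left) * pred(right)
                p_merge = pred(whole)
                b[i] = rng.random() < p_split / (p_split + p_merge)
                if b[i]:
                    counts[left] += 1
                    counts[right] += 1
                    total += 2
                else:
                    counts[whole] += 1
                    total += 1
    return [words_of(u, b) for u, b in zip(utterances, bounds)]
```

A staged learner would have to commit to one segmentation before estimating the lexicon; here the sampler revisits both repeatedly, so evidence about recurring word forms can correct earlier segmentation decisions.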
Research in Paris Programme - Mairie de Paris
Ecole Normale Supérieure - Département d'études cognitives