Construction grammar grew out of the need to model the whole of language rather than distinguishing core linguistic expressions from peripheral ones, and it has since established itself as the grammatical embodiment of cognitive-functional linguistics. Its central claim, that all linguistic knowledge can be represented as form-meaning mappings called constructions, has been embraced across all areas of linguistics.
As is often the case, however, it takes time before the potential of an innovation is fully explored and understood. Early movies, for example, strongly mimicked theatre, relying on long, static shots, before filmmakers developed their own cinematic “grammar”. A similar process happens in science, and while construction grammar is already too mature to be directly compared to early cinema, the formal and computational properties of its most important data structure have not yet been fully worked out. As a result, construction grammar has become an umbrella term for all linguistic studies that roughly agree on what Bill Croft dubbed “vanilla” construction grammar, but more precision is needed to prevent a babelesque confusion from taking hold in the field and impeding much-needed breakthroughs.
In this presentation, I will try to offer a more precise perspective on what constructions are and what they can do. More specifically, I will examine the representational and algorithmic properties of constructions. The goal of the presentation is therefore not to favor one analysis over another, but simply to bring more clarity about which analyses are possible, and about which criticisms of constructional analyses are valid concerns and which are not. To substantiate my claims, all analyses are accompanied by a concrete computational implementation in Fluid Construction Grammar, an open-source computational platform for exploring issues in constructional language processing and learning.