Creativity is progressively acknowledged as the main driver of progress in all sectors of humankind’s activities: arts, science, technology, business, and social policies. Nowadays, many creative processes rely on many actors collectively contributing to an outcome. The same is true when groups of people collaborate in the solution of a complex problem. Despite the critical importance of collective actions in human endeavors, few works have tackled this topic extensively and quantitatively. Here we report on an experimental setting to single out some of the key determinants of efficient teams committed to an open-ended creative task. In this experiment, dynamically forming teams were challenged to create several artworks using LEGO bricks. The growth rate of the artworks, the dynamical network of social interactions, and the interaction patterns between the participants and the artworks were monitored in parallel. The experiment revealed that larger working teams build at faster rates and that higher commitment leads to higher growth rates. Even more importantly, there exists an optimal number of weak ties in the social network of creators that maximizes the growth rate. Finally, the presence of influencers within the working team dramatically enhances the building efficiency. The generality of the approach makes it suitable for application in very different settings, both physical and online, whenever a creative collective outcome is required.
Learning features from data has been shown to be more successful than using hand-crafted features for many machine learning tasks. In music information retrieval (MIR), features learned from windowed spectrograms are highly variant to transformations like transposition or time-shift. Such variances are undesirable when they are irrelevant for the respective MIR task. We propose an architecture called Complex Autoencoder (CAE) which learns features invariant to orthogonal transformations. Mapping signals onto complex basis functions learned by the CAE results in a transformation-invariant “magnitude space” and a transformation-variant “phase space”. The phase space is useful to infer transformations between data pairs. When exploiting the invariance property of the magnitude space, we achieve state-of-the-art results in audio-to-score alignment and repeated section discovery for audio. A PyTorch implementation of the CAE, including the repeated section discovery method, is available online.
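As an illustration only (the DFT is a hand-crafted complex basis, whereas the CAE learns its basis from data), the magnitude/phase split described above can be sketched with a circular time-shift, which is an orthogonal transformation: the DFT magnitudes are invariant to it, while the phases encode the shift.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)       # toy signal
x_shift = np.roll(x, 5)           # circular time-shift: an orthogonal transform

# "Magnitude space": invariant under the transformation.
mag = np.abs(np.fft.fft(x))
mag_shift = np.abs(np.fft.fft(x_shift))
print(np.allclose(mag, mag_shift))        # True

# "Phase space": varies with the transformation and can be used to
# infer the shift between the two signals.
phase_diff = np.angle(np.fft.fft(x_shift)) - np.angle(np.fft.fft(x))
```

The same split carries over to a learned basis: invariant magnitudes feed the alignment and repetition-discovery tasks, while phases recover the transformation between data pairs.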
Exponential families and mixture families are parametric probability models that can be geometrically studied as smooth statistical manifolds with respect to any statistical divergence like the Kullback–Leibler (KL) divergence or the Hellinger divergence. When equipping a statistical manifold with the KL divergence, the induced manifold structure is dually flat, and the KL divergence between distributions amounts to an equivalent Bregman divergence on their corresponding parameters. In practice, the corresponding Bregman generators of mixture/exponential families require definite integral calculus that can either be too time-consuming (for the exponentially large discrete support case) or may not admit a closed-form formula (for the continuous support case). In these cases, the dually flat construction remains theoretical and cannot be used by information-geometric algorithms. To bypass this problem, we consider performing stochastic Monte Carlo (MC) estimation of those integral-based mixture/exponential family Bregman generators. We show that, under natural assumptions, these MC generators are almost surely Bregman generators. We define a series of dually flat information geometries, termed Monte Carlo Information Geometries (MCIGs), which increasingly finely approximate the intractable geometry. The advantage of the MCIG is that it allows a practical use of the Bregman algorithmic toolbox on a wide range of probability distribution families. We demonstrate our approach with a clustering task on a mixture family manifold. We then show how to generate an MCIG for an arbitrary separable statistical divergence between distributions belonging to the same parametric family of distributions.
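A minimal sketch of the idea, assuming a hypothetical one-dimensional exponential family with log-normalizer F(θ) = log E_q[exp(θ x)]: one fixed Monte Carlo sample replaces the intractable integral, and the resulting MC generator still induces a valid (non-negative) Bregman divergence, because the log-mean-exp of a fixed sample remains convex in θ.

```python
import numpy as np

rng = np.random.default_rng(0)

# One fixed sample from a proposal q (standard normal); reusing the same
# sample for every theta is what keeps the MC generator convex.
X = rng.standard_normal(10_000)

def F_mc(theta):
    """MC estimate of the log-normalizer F(theta) = log E_q[exp(theta * X)]."""
    return np.log(np.mean(np.exp(theta * X)))

def grad_F_mc(theta, eps=1e-5):
    # Numerical gradient of the MC generator.
    return (F_mc(theta + eps) - F_mc(theta - eps)) / (2 * eps)

def bregman(theta1, theta2):
    """Bregman divergence B_F(theta1 : theta2) induced by the MC generator."""
    return F_mc(theta1) - F_mc(theta2) - grad_F_mc(theta2) * (theta1 - theta2)

print(bregman(1.0, 0.2))   # non-negative, since F_mc is convex
```

With such an MC generator in hand, any Bregman-divergence algorithm (e.g. Bregman k-means for the clustering task mentioned above) can run on the approximate geometry.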
Inpainting-based generative modeling allows for stimulating human-machine interactions by letting users perform stylistically coherent local edits to an object using a statistical model. We present NONOTO, a new interface for interactive music generation based on inpainting models. It is aimed both at researchers, by offering a simple and flexible API allowing them to connect their own models with the interface, and at musicians, by providing industry-standard features such as audio playback, real-time MIDI output and straightforward synchronization with DAWs using Ableton Link.
The origin and meaning of facial beauty represent a longstanding puzzle. Despite the profuse literature devoted to facial attractiveness, its very nature, its determinants and the nature of inter-person differences remain controversial issues. Here we tackle such questions by proposing a novel experimental approach in which human subjects, instead of rating natural faces, are allowed to efficiently explore the face-space and “sculpt” their favorite variation of a reference facial image. The results reveal that different subjects prefer distinguishable regions of the face-space, highlighting the essential subjectivity of the phenomenon. The different sculpted facial vectors exhibit strong correlations among pairs of facial distances, characterising the underlying universality and complexity of the cognitive processes, and the relative relevance and robustness of the different facial distances.
Railways are a key infrastructure for any modern country. The reliability and resilience of this peculiar transportation system may be challenged by different shocks such as disruptions, strikes and adverse weather conditions. These events compromise the correct functioning of the system and trigger the spreading of delays into the railway network on a daily basis. Despite their importance, a general theoretical understanding of the underlying causes of these disruptions is still lacking. In this work, we analyse the Italian and German railway networks by leveraging train schedules and actual delay data retrieved during the year 2015. We use these data to infer simple statistical laws ruling the emergence of localized delays in different areas of the network, and we model the spreading of these delays throughout the network by exploiting a framework inspired by epidemic spreading models. Our model offers a fast and easy tool for the preliminary assessment of the effectiveness of traffic-handling policies and of railway network criticalities.
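The epidemic-inspired framework can be sketched as SI-like dynamics on a toy network; the station graph and the infection probability β below are illustrative placeholders, not the parameters inferred from the 2015 data.

```python
import random

random.seed(42)

# Toy railway network as an adjacency list (stations and direct connections).
network = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C", "E"],
    "E": ["D"],
}

def spread_delays(network, seed_station, beta=0.5, steps=10):
    """SI-like dynamics: a delayed station propagates its delay to each
    neighbouring station with probability beta at every time step."""
    delayed = {seed_station}
    for _ in range(steps):
        newly = {n for s in delayed for n in network[s]
                 if n not in delayed and random.random() < beta}
        if not newly:
            break
        delayed |= newly
    return delayed

print(spread_delays(network, "A"))
```

Running such dynamics from stations where localized delays statistically emerge gives a cheap first-pass estimate of which parts of the network a traffic-handling policy must protect.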
Human language users are capable of proficiently learning new constructions and using a language for everyday communication even if they have only acquired a basic linguistic inventory. This paper argues that such robustness can best be achieved through a constructional processing model in which grammatical structures may emerge spontaneously as a side effect of how constructions are combined with each other. This claim is substantiated by a fully operational precision model for Basic English in Fluid Construction Grammar, which is available for online testing. The precision model is the first ever to incorporate key properties from construction grammar in a large-scale setting, such as argument structure constructions and the surface generalization hypothesis, and is therefore a milestone achievement in the field of construction grammar.
Creative industries constantly strive for fame and popularity. Though highly desirable, popularity is not the only achievement artistic creations might ever acquire. Leaving a longstanding mark in the global production and influencing future works is an even more important achievement, usually acknowledged by experts and scholars. ‘Significant’ or ‘influential’ works are not always well known to the public or have sometimes been long forgotten by the vast majority. In this paper, we focus on the duality between what is successful and what is significant in the musical context. To this end, we consider a user-generated set of tags collected through an online music platform, whose evolving co-occurrence network mirrors the growing conceptual space underlying music production. We define a set of general metrics aiming at characterizing music albums throughout history, and their relationships with the overall musical production. We show how these metrics allow us to classify albums according to their current popularity or their belonging to expert-made lists of important albums. In this way, we provide the scientific community and the public at large with quantitative tools to tell apart popular albums from culturally or aesthetically relevant artworks. The generality of the methodology presented here lends itself to use in all those fields where innovation and creativity are at play.
The quest for information is one of the most common activities of human beings. Despite the impressive progress of search engines, finding the needed piece of information can still be very hard, as can acquiring specific competences and knowledge by shaping and following the proper learning paths. Indeed, the need to find sensible paths in information networks is one of the biggest challenges of our societies and, to effectively address it, it is important to investigate the strategies adopted by human users to cope with the cognitive bottleneck of finding their way in a growing sea of information. Here we focus on the case of Wikipedia and investigate a recently released dataset about users’ clicks on the English Wikipedia, namely the English Wikipedia Clickstream. We perform a semantically charged analysis to uncover the general patterns followed by information seekers in the multi-dimensional space of Wikipedia topics/categories. We discover the existence of well-defined strategies in which users tend to start from very general, i.e., semantically broad, pages and progressively narrow down the scope of their navigation, while keeping a growing semantic coherence. This is unlike strategies associated with tasks with predefined search goals, namely the case of the Wikispeedia game. In that case users first move from the ‘particular’ to the ‘universal’ before focusing down again on the required target. The clear picture offered here represents a very important stepping stone towards a better design of information networks and recommendation strategies, as well as the construction of radically new learning paths.
We introduce a Maximum Entropy model able to capture the statistics of melodies in music. The model can be used to generate new melodies that emulate the style of a given musical corpus. Instead of using the n–body interactions of (n−1)–order Markov models, traditionally used in automatic music generation, we use a k-nearest-neighbour model with pairwise interactions only. In that way, we keep the number of parameters low and avoid the over-fitting problems typical of Markov models. We show that long-range musical phrases do not need to be explicitly enforced using high-order Markov interactions, but can instead emerge from multiple, competing, pairwise interactions. We validate our Maximum Entropy model by assessing how much the generated sequences capture the style of the original corpus without plagiarizing it. To this end we use a data-compression approach to discriminate the levels of borrowing and innovation featured by the artificial sequences. Our modelling scheme outperforms both fixed-order and variable-order Markov models. This shows that, despite being based only on pairwise interactions, our scheme opens the possibility to generate musically sensible alterations of the original phrases, providing a way to generate innovation.
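A minimal sketch of generation from a pairwise energy model via Metropolis sampling; the couplings J here are random placeholders, whereas in the actual model they would be fitted to a corpus by Maximum Entropy.

```python
import math
import random

random.seed(1)

K = 4    # toy pitch alphabet size
R = 3    # interaction range: pairwise couplings up to distance R
L = 16   # melody length

# Placeholder couplings J[d-1][a][b] between pitches a, b at distance d;
# in the real model these would be learned from the corpus.
J = [[[random.gauss(0, 1) for _ in range(K)] for _ in range(K)] for _ in range(R)]

def energy(seq):
    """Pairwise MaxEnt energy: two-body terms only, no high-order Markov terms."""
    return -sum(J[d - 1][seq[i]][seq[i + d]]
                for i in range(L) for d in range(1, R + 1) if i + d < L)

def metropolis(steps=20_000, beta=1.0):
    """Sample a melody from the Boltzmann distribution exp(-beta * E)."""
    seq = [random.randrange(K) for _ in range(L)]
    e = energy(seq)
    for _ in range(steps):
        i, new = random.randrange(L), random.randrange(K)
        trial = seq[:i] + [new] + seq[i + 1:]
        e_trial = energy(trial)
        if e_trial <= e or random.random() < math.exp(-beta * (e_trial - e)):
            seq, e = trial, e_trial
    return seq

print(metropolis())
```

Even though each term couples only two notes, overlapping couplings at several distances can jointly favour longer coherent phrases, which is the point the abstract makes about long-range structure emerging from competing pairwise interactions.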
The reconstruction of phylogenies of cultural artefacts represents an open problem that mixes theoretical and computational challenges. Existing benchmarks rely on simulated phylogenies, where hypotheses on the underlying evolutionary mechanisms are unavoidable, or on real-data phylogenies, for which no true evolutionary history is known. Here we introduce a web-based game, Copystree, where users create phylogenies of manuscripts, through successive copying actions, in a fully monitored setup. While players enjoy the experience, Copystree allows us to build artificial phylogenies whose evolutionary processes do not obey any pre-defined theoretical mechanisms, being generated instead with the unpredictability of human creativity. We present the analysis of the data gathered during the first set of experiments and use the artificial phylogenies gathered for a first test of existing phylogenetic algorithms.
The study of the dynamics behind the emergence of novelties and innovation is a relatively recent field of study in complex systems, fostered by the abundance of data about the creation and sharing of artworks and about online activity in general. Despite its youth, many works in this field have been able to discover and characterise several interesting statistical patterns related to the emergence of new creative elements, and a very general mathematical framework describing the collective process of discovering and sharing novelties has emerged. However, much remains to be discovered concerning the conditions, both historical and social, fostering the emergence of creative elements from a group of interacting individuals. From a social perspective, many hypotheses have been developed and tested concerning the relations between individuals, like the presence of ‘weak ties’ in social networks or the ‘folding’ of different social groups into a larger one sharing a common goal. Complex Systems Science has given little contribution to the understanding of how the dynamics behind social interactions contributes to fostering the emergence of creativity. This thesis work is devoted to the analysis of data collected during a collective social experiment in which individuals were asked to collaborate in the realisation of a set of LEGO brick sculptures. The participants in the experiment were provided with particular RFID tags, developed in the framework of the SOCIOPATTERNS project, that enabled a quite precise mapping of the social interactions occurring during their activity within the experiment. Interactions with the LEGO sculptures were similarly mapped by means of other RFID tags placed around the sculptures, and their growth in volume was recorded with the aid of infrared depth sensors. The RFID sensors allowed for a reconstruction of the dynamical network of social interactions between the participants in the experiment.
We looked for correlations between the evolving structure of this social network and the growing patterns of the sculptures, identifying the local social structures most conducive to rapid volume growth over both short and long time scales. In this way, we were able to identify the social patterns most fruitful in terms of ‘local consensus’ around the development of the collective artwork, indicating a shared vision of the actions to be performed on it. Moreover, we were able to identify how the presence of ‘influential individuals’, characterised by means of information spreading models, favoured the growth of the sculptures in the long term. The novelty of the proposed approach could contribute to shedding light on the phenomena related to creativity and could be useful in conceiving and designing new collective creativity experiments.
This paper proposes an evaluation model to analyze the impact of microgrid topologies on self‐sufficiency for a given size of batteries and photovoltaic (PV) panels (resources). Three topologies are evaluated for a community of 19 houses: centralized resources (ideal case), stand‐alone resources, and a multi‐microgrid topology with autonomous exchange. Depending on the ratio of PV and battery size, the topology with stand‐alone resources has a clear disadvantage in terms of self‐sufficiency compared to the centralized, ideal topology. To counteract this, we propose a hybrid topology: households are interconnected so that they can exchange energy between each other based on an autonomous energy exchange algorithm we developed. We show that for a well‐chosen ratio of batteries and PV, the interconnected system can improve the stand‐alone design by up to 10% without requiring any additional resources. This topology can approach performance similar to that of a centralized microgrid but its design is more flexible and resilient to failures or accidents. The evaluation model computes the self‐sufficiency ratio (SSR) for the three topologies for 0–20 kWh batteries and 1–14 kWp PV sizes. Furthermore, seasonal differences in SSR per topology are analyzed for an actual community with real resources. We also calculate the savings in PV and battery due to the interconnected topology. Finally, the third topology’s feasibility is demonstrated on a full‐scale platform in Okinawa on which the autonomous energy exchange software was tested for over a year in a community of 19 houses. © 2017 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc.
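A sketch of how a self-sufficiency ratio can be computed for a single household; the hourly battery model and the 95% charging efficiency below are simplifying assumptions for illustration, not the paper's exact evaluation model.

```python
def self_sufficiency_ratio(consumption, pv, battery_kwh, eff=0.95):
    """Hourly simulation of one household: PV covers the load first, surplus
    charges the battery, deficits are served from the battery, and the rest
    comes from the grid. SSR = locally supplied energy / total consumption."""
    soc, supplied, total = 0.0, 0.0, 0.0
    for load, gen in zip(consumption, pv):
        total += load
        direct = min(load, gen)          # PV consumed directly
        deficit = load - direct
        surplus = gen - direct
        soc = min(battery_kwh, soc + surplus * eff)
        from_batt = min(deficit, soc)
        soc -= from_batt
        supplied += direct + from_batt
    return supplied / total if total else 0.0

# Toy day: constant 1 kWh load per hour, PV only around midday.
load = [1.0] * 24
pv = [0.0] * 8 + [3.0] * 8 + [0.0] * 8
print(self_sufficiency_ratio(load, pv, battery_kwh=10.0))
```

Sweeping `battery_kwh` and scaling `pv` over a grid (0–20 kWh, 1–14 kWp in the paper) and comparing per-household versus pooled resources reproduces the kind of topology comparison described above.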
Rules are an efficient feature of natural languages which allow speakers to use a finite set of instructions to generate a virtually infinite set of utterances. Yet, for many regular rules, there are irregular exceptions. There has been lively debate in cognitive science about how individual learners acquire rules and exceptions; for example, how they learn that the past tense of preach is preached, but for teach it is taught. However, for most population- or language-level models of language structure, particularly from the perspective of language evolution, the goal has generally been to examine how languages evolve stable structure, neglecting the fact that in many cases languages exhibit exceptions to structural rules. We examine the dynamics of regularity and irregularity across a population of interacting agents to investigate how, for example, the irregular teach coexists beside the regular preach in a dynamic language system. Models show that in the absence of individual biases towards either regularity or irregularity, the outcome of the system is determined entirely by the initial condition. On the other hand, in the presence of individual biases, rule systems exhibit frequency-dependent patterns of regularity reminiscent of patterns found in natural language. We implement individual biases towards regularity in two ways: through child agents who have a preference to generalise using the regular form, and through a memory constraint wherein an agent can only remember an irregular form for a finite time period. We provide theoretical arguments for the prediction of a critical frequency, below which irregularity cannot persist, in terms of the duration of the finite time period which constrains agent memory. Further, within our framework we also find stable irregularity, arguably a feature of most natural languages not accounted for in many other cultural models of language structure.
After several decades in scientific purgatory, language evolution has reclaimed its place as one of the most important branches in linguistics. This renewed interest is accompanied by powerful new methods for making empirical observations. At the same time, construction grammar is increasingly embraced in all areas of linguistics as a fruitful way of making sense of all these new data, and it has enthused formal and computational linguists, who have developed sophisticated tools for exploring issues in language processing and learning. Separately, linguists and computational linguists are able to explain which changes take place in language and how these changes are possible. When working together, however, they can also address the question of why language evolves over time and how it emerged in the first place. This special issue therefore brings together key contributions from both fields to put evidence and methods from both perspectives on the table.
Word order, argument structure and unbounded dependencies are among the most important topics in linguistics because they touch upon the core of the syntax-semantics interface. One question is whether ‘marked’ word order patterns, such as The man I talked to vs. I talked to the man, require special treatment by the grammar or not. Mainstream linguistics answers this question affirmatively: in the marked order, some mechanism is necessary for ‘extracting’ the man from its original argument position, and a special placement rule (e.g. topicalization) is needed for putting the constituent in clause-preceding position. This paper takes an opposing view and argues that such formal complexity is only required for analyses that are based on syntactic trees. A tree is a rigid data structure that only allows information to be shared between local nodes, hence it is inadequate for non-local dependencies and can only allow restricted word order variations. A construction, on the other hand, offers a more powerful representation device that allows word order variations, even unbounded dependencies, to be analyzed as the side-effect of how language users combine the same rules in different ways in order to satisfy their communicative needs. This claim is substantiated through a computational implementation of English argument structure constructions in Fluid Construction Grammar that can handle both comprehension and formulation.
Human languages have multiple strategies that allow us to discriminate objects in a vast variety of contexts. Colours have been extensively studied from this point of view. In particular, previous research in artificial language evolution has shown how artificial languages may emerge based on specific strategies to distinguish colours. Still, it has not been shown how several strategies of diverse complexity can be autonomously managed by artificial agents. We propose an intrinsic motivation system that allows agents in a population to create a shared artificial language and progressively increase its expressive power. Our results show that with such a system agents successfully regulate their language development, which indicates a relation between population size and consistency in the emergent communicative systems.
Long-distance dependencies belong to the most controversial challenges in linguistics. These patterns seem to contain constituents that have left their original position in a sentence and that have landed in a different place. A typical example is the relative clause the person I have talked to yesterday, in which the direct object (the person) is not situated in an argument position following the verb, but instead is located at the beginning of the utterance. Upon closer inspection, however, all problems related to long-distance dependencies can be reduced to the limits of phrase structural analyses. A phrase structure tree is a rigid data structure in which information is only shared between local nodes. These analyses therefore need to resort to more complex formal machinery in order to overcome this locality constraint, such as using transformations or positing filler-gap constructions. However, there exists a more intuitive alternative within the tradition of cognitive-functional linguistics in which long-distance dependencies do not require special treatment. Instead, these patterns are simply the side effect of how grammatical constructions combine with each other in order to satisfy the communicative needs of language users. Through a computational implementation in Fluid Construction Grammar, this article demonstrates that it is perfectly feasible to formalize this alternative in a model that is capable of both formulating and comprehending utterances.
The complex organization of syntax in hierarchical structures is one of the core design features of human language. Duality of patterning refers, for instance, to the organization of the meaningful elements of a language at two distinct levels: a combinatorial level, where meaningless forms are combined into meaningful forms, and a compositional level, where meaningful forms are composed into larger lexical units. The question remains wide open regarding how such a structure could have emerged. Furthermore, a clear mathematical framework to quantify this phenomenon is still lacking. The aim of this paper is to address these two aspects in a self-consistent way. First, we introduce suitable measures to quantify the level of combinatoriality and compositionality in a language, and present a framework to estimate these observables in human natural languages. Second, we show that the theoretical predictions of a multi-agent modelling scheme, namely the Blending Game, are in surprisingly good agreement with empirical data. In the Blending Game, a population of individuals plays language games aiming at success in communication. It is remarkable that the two sides of duality of patterning emerge simultaneously as a consequence of a purely cultural dynamics in a simulated environment that contains meaningful relations, provided a simple constraint on message transmission fidelity is also considered.
The understanding and characterisation of individual mobility patterns in urban environments is important in order to improve the liveability and planning of big cities. In relatively recent times, the availability of data regarding human movements has fostered the emergence of a new branch of social studies, with the aim of unveiling and studying those patterns thanks to data collected by means of geolocalisation technologies. In this paper we analyse a large dataset of GPS tracks of cars collected in Rome (Italy). Dividing the drivers into classes according to the number of trips they perform in a day, we show that the sequence of travelled distances connecting consecutive stops follows a precise pattern: the shortest trips are performed in the middle of the sequence, while the longest occur at the beginning and at the end, when drivers head back home. We show that this behaviour is consistent with the idea of an optimisation process in which the total travel time is minimised, under the effect of spatial constraints such that the starting point is on the border of the space in which the dynamics takes place.
We present a numerical model for the evolution of pathogens organised in discrete antigenic clusters, where individuals in the same cluster have the same fitness. The fitness of each cluster is a decreasing function of the total number of cluster members that have appeared in the population. Cluster transition is modelled both with and without dynamical epistatic effects. In both cases we observe a continuous transition, driven by the mutation rate, from a dynamics with single clusters alternating in time to the coexistence of many clusters in the population. The transition between the two regimes is investigated in terms of the key parameters of the model. We find that the location and the scaling of this transition can be explained in terms of the time of first appearance of a new cluster in the population. The presence of dynamical epistatic effects results in a shift of the value of the mutation rate at which the transition occurs.
The recent spread of social networks and ICT systems has allowed for a huge availability of data on social phenomena and collective behaviour. This has induced a deep change in the field of social dynamics, which has moved from an essentially theoretical approach to a strongly data-driven one. In this framework, the present work aims at exploring the collaboration dynamics and the organisational structures within the GitHub platform. Moreover, the purpose is to use success and popularity as feedback to check whether particular structures exist that are associated with more efficiency, better results and subsequently more innovative features in the development of the code. GitHub is based on the Git revision control system and is currently the most important platform for open-source coding, counting millions of repositories and active users. Moreover, the complete timeline of GitHub activity is publicly accessible on the GitHub Archive website. GitHub is therefore a particularly suitable system to observe and analyse collective social behaviours and collaborative dynamics. The collaboration among users fosters an uninterrupted flow of new ideas which actualise in many different events, such as the creation of new projects and the updating of existing ones through code modifications. The analysis required a preliminary selection of the data downloaded from GitHub Archive in order to create a database containing all the necessary information about project activity. The analysis carried out on this database was mostly inspired by previous research on innovation dynamics in the framework of complex systems. Every project was mapped into a network structure in order to dynamically observe the development and the modifications of the code. Some metrics were defined to estimate the collaboration degree among users and the organisation of the workload within the developing branches.
Other metrics were chosen in order to evaluate both the success and popularity reached by a project and its potential innovation. Correlation analysis between the above-mentioned metrics and indexes allows for some evaluations about the interdependence between attention received and the structural features of the projects. This thesis work follows up on several quantitative analyses of GitHub presented in the literature and proposes a new visualisation of internal structures and collaborative dynamics within GitHub projects. Moreover, identifying successful patterns could help in highlighting the most influential and pioneering projects and in encouraging their development.
It is a common opinion that many innovations are triggered by serendipity, a notion associated with fortuitous events leading to unintended consequences. One might argue that this interpretation is due to a poor understanding of the dynamics of innovation. Very little is known, in fact, about how innovation proceeds and samples the space of potential novelties. This space is usually referred to as the adjacent possible, a concept originally introduced in the study of biological systems to indicate the set of possibilities that are one step away from what actually exists. In this paper we focus on the problem of defining the adjacent possible space, and analyzing its dynamics, for a particular system, namely the cultural system of the network of movies. To this end we synthesized the graph emerging from the Internet Movie Database (IMDb) and looked at the static and dynamical properties of this network. We deal, in particular, with the subtle mechanism of the adjacent possible by measuring the expansion and the coverage of this elusive space during the global evolution of the system. Finally, we introduce the concept of adjacent possibilities at the level of a single node and try to elucidate its nature by looking at the correlations with topological and user-annotation metrics.
The dynamics of political votes has been widely studied, both for its practical interest and as a paradigm of the dynamics of mass opinions and collective phenomena, where theoretical predictions can be easily tested. However, the vote outcome is often influenced by many factors beyond the bare opinion on the candidate, and in most cases it is bound to a single preference. The voters’ perception of the political space is still to be elucidated. We here propose a web experiment (laPENSOcosì) where we explicitly investigate participants’ opinions on political entities (parties, coalitions, individual candidates) of the Italian political scene. As a main result, we show that political perception follows a Weber-Fechner-like law, i.e., when ranking political entities according to the user’s expressed preferences, the perceived distance of the user from a given entity scales as the logarithm of this rank.
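The Weber-Fechner-like scaling can be checked with an ordinary least-squares fit of d(r) = a + b·log(r); the rank/distance values below are made up for illustration and are not the experiment's data.

```python
import math

# Hypothetical perceived-distance data indexed by preference rank r = 1..8,
# fabricated to lie close to d(r) = log(r).
ranks = list(range(1, 9))
distances = [0.02, 0.71, 1.12, 1.37, 1.62, 1.81, 1.94, 2.07]

# Closed-form simple linear regression of d on log(r).
x = [math.log(r) for r in ranks]
n = len(x)
mx, my = sum(x) / n, sum(distances) / n
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, distances)) / \
    sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

print(f"perceived distance ≈ {a:.2f} + {b:.2f} * log(rank)")
```

A slope b close to 1 on such data is what a Weber-Fechner-like law predicts: equal multiplicative steps in rank produce equal additive steps in perceived distance.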
The emergence of novelties and their rise and fall in popularity is a ubiquitous phenomenon in human activities. The coexistence of perennially popular milestones with novel and sometimes ephemeral trends pervades technological, scientific and artistic production. By introducing suitable statistical measures, we demonstrate that different systems of human activity, i.e., the creation of hashtags on Twitter, the interaction with online program code repositories, the creation of texts and the listening of songs on an online platform, exhibit surprisingly similar properties. We then introduce a general framework to explain those regularities. We propose a simple mathematical model based on the expansion into the adjacent possible, which has been proven to be a very general and powerful mechanism able to explain many of the statistical patterns emerging in innovation dynamics, to which we add two crucial elements. On the one hand, we quantify the idea that, while exploring a conceptual or physical space, inertia exists towards already-discovered elements. On the other hand, we highlight the role of collective dynamics, where many users interact directly or indirectly, in the emergence and diffusion of novelties and innovations.
We propose and implement a dc microgrid with a fully decentralized control system, using the ICT concepts of network overlays and peer-to-peer (P2P) networks. Decentralization concerns not only the physical systems and control logic but also the control structure that provides the network infrastructure on which energy management is carried out. In this study, we show how such decentralization can be achieved using P2P frameworks as underlying control structures, and we implemented a pure P2P architecture to eliminate single points of failure. For this, a Direct Current Open Energy System (DC-OES) made of the interconnection of standalone dc nanogrids is used as the underlying microgrid. The power flows between nanogrids are controlled by a decentralized exchange strategy: each household can request or respond to energy deals with its neighbours without requiring system-wide knowledge or control. Using dc combined with layered, modular software allows loose coupling, which increases flexibility and dependability. The system has been implemented and tested on a full-scale platform in Okinawa including 19 inhabited houses. Real data analysis as well as simulations demonstrate improvements in self-sufficiency compared to other types of systems. Resilience against utility blackouts is proven in practice.
Rules are an efficient feature of natural languages which allow speakers to use a finite set of instructions to generate a virtually infinite set of utterances. Yet, for many regular rules, there are irregular exceptions. There has been lively debate in cognitive science about how individual learners acquire rules and exceptions; for example, how they learn that the past tense of preach is preached, but for teach it is taught. In this paper, we take a different perspective, examining the dynamics of regularity and irregularity across a population of interacting agents to investigate how inflectional rules are applied to verbs. We show that in the absence of biases towards either regularity or irregularity, the outcome is determined by the initial condition, irrespective of the frequency of usage of the given lemma. On the other hand, in the presence of biases, rule systems exhibit frequency-dependent patterns of regularity reminiscent of patterns in natural language corpora. We examine the case where individuals are biased towards linguistic regularity in two ways: either as child learners, or through a memory constraint wherein irregular forms can only be remembered by an individual agent for a finite time period. We provide theoretical arguments predicting a critical frequency, expressed in terms of the duration of the finite memory period, below which irregularity cannot persist.
Studies in literature and narrative have begun to argue more forcefully for considering human evolution as central to understanding stories and storytelling more generally (Sugiyama, 2001; Hernadi, 2002). However, empirical studies in language evolution have focused primarily on language structure or the language faculty, leaving the evolution of stories largely unexplored (although see Von Heiseler, 2014). Stories are unique products of human culture enabled principally by human language. Given this, the dynamics of creativity in stories, and the traits which make successful stories, are of crucial interest to understanding the evolution of language in the context of human evolution more broadly. The current work aims to illuminate how stories emerge, evolve, and change in the context of a collaborative cultural effort. We present results from a novel experimental paradigm centered around a story game where players write short continuations (between 60 and 120 characters) of existing stories. These continuations then become open to other players to continue in turn. Stories are subject to player selection, allowing for variation and speciation of the resulting narratives, and evolve as a result of collaborative effort between players. The game starts with a seed of over 60 potential stories, and players choose which stories to continue, providing a player-driven story selection mechanism. In this way, stories which are creative, intriguing, and open ended spawn more stories, and eventually lead to longer story paths as play continues. The game also introduces further limitations by constraining a player's view of the story path: players have access only to a story and its parent, meaning knowledge of the existing narrative is limited. We present data from hundreds of players and stories, creating large story trees which explore the space of different possible narratives which grow out of a confined set of starting points.
These data allow us to investigate several aspects of the growing story trees to illuminate not only what makes a story successful, but how creative stories trigger new stories, and what makes individual storytellers successful. Given the selection mechanism central to game play, we identify the most successful stories by their number of offspring. Particularly successful storytellers emerge, measured both by how many children their stories have spawned, and also by how long their story path extends. We also show that coherent stories often emerge, despite the fact that they are authored by several different players, and any given player only sees a limited snapshot of the story path. We contextualise the results of the game and connect them to language evolution in two ways. First, we look for detectable triggers of innovation and creativity within the story trees, and identify these as expanding the adjacent possible (e.g., new adaptations open the space of other possible adaptations in the future; Tria, Loreto, Servedio, & Strogatz, 2014). We argue that this concept can be extended to stories, using evidence from the game bolstered by evidence from more traditional literature (the Gutenberg Corpus). Second, we frame the results in terms of recurring themes found in storytelling cross-culturally (Tehrani, 2013). We suggest that the most successful triggers of innovation in stories combine original novelty and a firm grounding in existing recurring story frameworks in human culture. This indicates that much like other cultural and biological systems, stories are subject to competing pressures for stability and conservation on the one hand, and innovation and novelty on the other.
Creole languages offer an invaluable opportunity to study the processes leading to the emergence and evolution of Language, thanks to the short – typically a few generations – and reasonably well-defined time-scales involved in their emergence. Another well-known case of a very fast emergence of a Language, though referring to a much smaller population size and different ecological conditions, is that of the Nicaraguan Sign Language. What these two phenomena have in common is that in both cases what emerges is a contact language, i.e., a language born out of the non-trivial interaction of two (or more) parent languages. This is a typical case of what is known in biology as horizontal transmission. In many well-documented cases, creoles emerged in large segregated sugarcane or rice plantations on which the slave labourers were the overwhelming majority. Lacking a common substrate language, slaves were naturally brought to shift to the economically and politically dominant European language (often referred to as the lexifier) to bootstrap an effective communication system among themselves. Here, we focus on the emergence of creole languages originated in the contacts of European colonists and slaves during the 17th and 18th centuries in exogenous plantation colonies, especially of the Atlantic and Indian Oceans, where detailed census data are available. Census data for several states of the USA can be found at http://www.census.gov/history, while those for Central America and the Caribbean are available at http://www.jamaicanfamilysearch.com/Samples/1790al11.htm. Without entering into the details of creole formation at a fine-grained linguistic level, we aim at uncovering some of the general mechanisms that determine the emergence of contact languages, and that successfully apply to the case of creole formation.
Coping with the complexities of the social world in the 21st century requires deeper quantitative and predictive understanding. Forty-three internationally acclaimed scientists and thinkers share their vision for complexity science in the next decade in this invaluable book. Topics cover how complexity and big data science could help society to tackle the great challenges ahead, and how the newly established Complexity Science Hub Vienna might be a facilitator on this path.
Natural languages enable humans to engage in highly complex social and conversational interactions with each other. Alife approaches to the origins and emergence of language typically manage this complexity by carefully staging the learning paths that embodied artificial agents need to follow in order to bootstrap their own communication system from scratch. This paper investigates how these scaffolds introduced by the experimenter can be removed by allowing agents to autonomously set their own challenges when they are driven by intrinsic motivation and have the capacity to self-assess their own skills at achieving their communicative goals. The results suggest that intrinsic motivation not only allows agents to spontaneously develop their own learning paths, but also that they are able to make faster transitions from one learning phase to the next.
Sign languages (SL) require a fundamental rethinking of many basic assumptions about human language processing because instead of using linear speech, sign languages coarticulate facial expressions, shoulder and hand movements, eye gaze and usage of a three-dimensional space. SL researchers have therefore advocated SL-specific approaches that do not start from the biases of models that were originally developed for vocal languages. Unfortunately, there are currently no processing models that adequately achieve both language comprehension and formulation, and the SL-specific developments run the risk of becoming alienated from other linguistic research. This paper explores the hypothesis that a construction grammar architecture offers a solution to these problems because constructions are able to simultaneously access and manipulate information coming from many different sources. This claim is illustrated by a proof-of-concept implementation of a basic grammar for French Sign Language in Fluid Construction Grammar.
One of the most salient hallmarks of construction grammar is its approach to argument structure and coercion: rather than positing many different verb senses in the lexicon, the same lexical construction may freely interact with multiple argument structure constructions. This view has however been criticized from within the construction grammar movement for leading to overgeneration. This paper argues that this criticism falls flat for two reasons: (1) lexicalism, which is the alternative solution proposed by the critics, has already been proven to overgenerate itself, and (2) the argument of overgeneration becomes void if grammar is implemented as a problem-solving model rather than as a generative competence model; a claim that the paper substantiates through a computational operationalization of argument structure and coercion in Fluid Construction Grammar. The paper thus shows that the current debate on argument structure is hiding a much more fundamental rift between practitioners of construction grammar that touches upon the role of grammar itself.
Air Transportation represents a very interesting example of a complex techno-social system whose importance has grown considerably over time and whose management requires a careful understanding of the subtle interplay between technological infrastructure and human behavior. Despite competition with other transportation systems, air traffic in Europe is still expected to grow in the coming years. The increase in traffic load could push the current Air Traffic Network above its capacity limits, so that safety standards and performance might no longer be guaranteed. Lacking the possibility of a direct investigation of this scenario, we resort to computer simulations in order to quantify the disruptive potential of an increase in traffic load. To this end we model the Air Transportation system as a complex dynamical network of flights controlled by humans who have to solve potentially dangerous conflicts by redirecting aircraft trajectories. The model is driven and validated through historical data of flight schedules in a European national airspace. While correctly reproducing actual statistics of the Air Transportation system, e.g., the distribution of delays, the model also allows for theoretical predictions. Upon an increase of the traffic load injected into the system, the model predicts a transition from a phase in which all conflicts can be successfully resolved to a phase in which many conflicts can no longer be resolved. We highlight how the current flight density of the Air Transportation system is well below the transition, provided that controllers make use of a special re-routing procedure. While the congestion transition displays a universal scaling behavior, its threshold depends on the conflict-solving strategy adopted. Finally, the generality of the modeling scheme introduced makes it a flexible general tool to simulate and control Air Transportation systems in realistic and synthetic scenarios.
Each sphere of knowledge and information could be depicted as a complex mesh of correlated items. By properly exploiting these connections, innovative and more efficient navigation strategies could be defined, possibly leading to a faster learning process and a more enduring retention of information. In this work we investigate how the topological structure embedding the items to be learned can affect the efficiency of the learning dynamics. To this end we introduce a general class of algorithms that simulate the exploration of knowledge/information networks, building on well-established findings on educational scheduling, namely the spacing and lag effects. While constructing their learning schedules, individuals move along connections, periodically revisiting some concepts, and sometimes jumping to very distant ones. In order to investigate the effect of networked information structures on the proposed learning dynamics, we focused both on synthetic and real-world graphs, such as subsections of Wikipedia and word-association graphs. We highlight the existence of optimal topological structures for the simulated learning dynamics, whose efficiency is affected by the balance between hubs and the least connected items. Interestingly, the real-world graphs we considered lead naturally to almost optimal learning performance.
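The exploration dynamics described above can be sketched minimally: a walker on a knowledge graph that mostly follows edges, occasionally revisits an earlier concept (mimicking the spacing effect), and sometimes jumps to a distant node. The probabilities `p_back` and `p_jump` and the ring-graph toy below are illustrative assumptions, not the paper's actual algorithm.

```python
import random

def explore(adj, steps, p_back=0.2, p_jump=0.05, seed=0):
    """Toy learning walk on a knowledge graph given as an adjacency dict.
    Mostly follows edges; with probability p_back revisits a previously
    seen concept (spacing effect); with probability p_jump leaps to a
    random node anywhere in the graph."""
    rng = random.Random(seed)
    nodes = list(adj)
    current = nodes[0]
    history = [current]
    for _ in range(steps):
        r = rng.random()
        if r < p_jump:
            current = rng.choice(nodes)         # long-range jump
        elif r < p_jump + p_back:
            current = rng.choice(history)       # spaced revisit
        else:
            current = rng.choice(adj[current])  # follow a local edge
        history.append(current)
    return history

# toy example: a 10-node ring (a hypothetical stand-in for a Wikipedia subgraph)
ring = {i: [(i - 1) % 10, (i + 1) % 10] for i in range(10)}
walk = explore(ring, 200)
```

On richer topologies one would additionally track, for instance, inter-visit times per concept to score how well a given graph structure supports spaced repetition.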
We introduce a model for music generation where melodies are seen as a network of interacting notes. Starting from the principle of maximum entropy we assign to this network a probability distribution, which is learned from an existing musical corpus. We use this model to generate novel musical sequences that mimic the style of the corpus. Our main result is that this model can reproduce high-order patterns despite having a polynomial sample complexity. This is in contrast with the more traditionally used Markov models that have an exponential sample complexity.
Contact languages are born out of the non-trivial interaction of two (or more) parent languages. Nowadays, the enhanced possibilities of mobility and communication allow for a strong mixing of languages and cultures, thus raising the issue of whether there are any pure languages or cultures that are unaffected by contact with others. As with bacteria or viruses in biological evolution, the evolution of languages is marked by horizontal transmission; but to date no reliable quantitative tools to investigate these phenomena have been available. An interesting and well documented example of contact language is the emergence of creole languages, which originated in the contacts of European colonists and slaves during the 17th and 18th centuries in exogenous plantation colonies, especially of the Atlantic and Indian Oceans. Here, we focus on the emergence of creole languages to demonstrate a dynamical process that mimics the process of creole formation in American and Caribbean plantation ecologies. Inspired by the Naming Game (NG), our modeling scheme incorporates demographic information about the colonial population in the framework of a non-trivial interaction network including three populations: Europeans, Mulattos/Creoles, and Bozal slaves. We show how this sole information makes it possible to discriminate territories that produced modern creoles from those that did not, with a surprising accuracy. The generality of our approach provides valuable insights for further studies on the emergence of languages in contact ecologies as well as to test specific hypotheses about the peopling and the population structures of the relevant territories. We submit that these tools could be relevant to addressing problems related to contact phenomena in many cultural domains: e.g., emergence of dialects, language competition and hybridization, globalization phenomena.
Empirical evidence shows that the rate of irregular usage of English verbs exhibits discontinuity as a function of their frequency: the most frequent verbs tend to be totally irregular. We aim to qualitatively understand the origin of this feature by studying simple agent-based models of language dynamics, where each agent adopts an inflectional state for a verb and may change it upon interaction with other agents. At the same time, agents are replaced at some rate by new agents adopting the regular form. In models with only two inflectional states (regular and irregular), we observe that either all verbs regularise irrespective of their frequency, or a continuous transition occurs between a low-frequency state, where the lemma becomes fully regular, and a high-frequency one, where both forms coexist. Introducing a third (mixed) state, wherein agents may use either form, we find that a third, qualitatively different behaviour may emerge, namely, a discontinuous transition in frequency. We introduce and solve analytically a very general class of three-state models that allows us to fully understand these behaviours in a unified framework. Realistic sets of interaction rules, including the well-known naming game (NG) model, result in a discontinuous transition, in agreement with recent empirical findings. We also point out that the distinction between speaker and hearer in the interaction has no effect on the collective behaviour. The results for the general three-state model, although discussed in terms of language dynamics, are widely applicable.
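A hedged sketch of this kind of agent-based dynamics, using naming-game-style inventories over a regular form 'R' and an irregular form 'I', with turnover injecting regular-only newcomers, can illustrate the simulation scaffold. All parameter names and values below are illustrative, not the calibrated model of the paper.

```python
import random

def verb_fraction_irregular(freq, agents=200, steps=50000,
                            replacement=0.01, seed=7):
    """Simplified naming-game-like model of verb inflection. Each agent
    holds an inventory drawn from {'R', 'I'}; new agents arrive knowing
    only the regular form, while usage (at rate freq) spreads forms via
    NG interactions. Returns the final fraction of agents knowing 'I'."""
    rng = random.Random(seed)
    inv = [{'I'} for _ in range(agents)]      # everyone starts irregular
    for _ in range(steps):
        if rng.random() < replacement:        # turnover: newcomer is regular
            inv[rng.randrange(agents)] = {'R'}
        if rng.random() < freq:               # the lemma is actually used
            s, h = rng.sample(range(agents), 2)
            form = rng.choice(sorted(inv[s]))
            if form in inv[h]:                # success: both align on the form
                inv[s] = {form}
                inv[h] = {form}
            else:                             # failure: hearer learns the form
                inv[h].add(form)
    return sum('I' in a for a in inv) / agents

hi = verb_fraction_irregular(1.0)    # frequently used lemma
lo = verb_fraction_irregular(0.002)  # rarely used lemma
```

With this setup, rarely used lemmas drift towards the regular form injected by turnover, while heavy usage lets the consensus dynamics defend the irregular majority, which is the qualitative ingredient behind the frequency dependence discussed above.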
Several recent theories have suggested that an increase in the number of non-native speakers in a language can lead to changes in morphological rules. We examine this experimentally by contrasting the performance of native and non-native English speakers in a simple Wug-task, showing that non-native speakers are significantly more likely to provide non -ed (i.e., irregular) past-tense forms for novel verbs than native speakers. Both groups are sensitive to sound similarities between new words and existing words (i.e., are more likely to provide irregular forms for novel words which sound similar to existing irregulars). Among both natives and non-natives, irregularizations are non-random; that is, rather than presenting as truly irregular inflectional strategies, they follow identifiable sub-rules present in the highly frequent set of irregular English verbs. Our results shed new light on how native and non-native learners can affect language structure.
The comprehension of vehicular traffic in urban environments is crucial to achieving good management of the complex processes arising from people's collective motion. Even allowing for the great complexity of human beings, human behavior turns out to be subject to strong constraints – physical, environmental, social, economic – that induce the emergence of common patterns. The observation and understanding of those patterns is key to setting up effective strategies to optimize the quality of life in cities while not frustrating the natural need for mobility. In this paper we focus on vehicular mobility with the aim of revealing the underlying patterns and uncovering the human strategies determining them. To this end we analyze a large dataset of GPS vehicle tracks collected in the district of Rome (Italy) over one month. We demonstrate the existence of a local optimization of travel times that drivers perform while choosing their journey. This finding is mirrored by two additional important facts, i.e., the observation that the average vehicle velocity increases with increasing travel length, and the emergence of a universal scaling law for the distribution of travel times at fixed traveled length. A simple modeling scheme confirms this scenario, opening the way to further predictions.
In this study we examine microgrid topologies that combine solar panels and batteries for a community of 20 residential houses. In the first case we consider a system with centralized PV panels and batteries that distributes energy to the 20 homes; in the second, 20 standalone home systems with roof-top PV panels and batteries. Using real electricity consumption and solar irradiation data, we simulated, for both topologies, the share of overall energy demand that could be replaced by solar energy. The centralized-resources approach achieves better performance, but it requires extended planning and high initial investments, while the distributed approach can be built bottom-up gradually. We analyze the additional resource investment needed for the distributed topology to reach the same electricity savings as the centralized one. Finally, we compare both to a hybrid approach named Open Energy Systems (OES), a two-layered microgrid made of interconnected nanogrids, and show that it improves the solar replacement ratio by autonomously exchanging energy with neighbors.
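The bookkeeping behind such a comparison can be illustrated with a toy hourly energy balance. This is a deliberate simplification of the actual simulations, which use real consumption and irradiation data; all numbers below are made up.

```python
def solar_fraction(load, pv, capacity):
    """Toy hourly energy balance (all values in kWh). Surplus PV charges
    the battery up to `capacity`; deficits draw on the battery first,
    then on the grid. Returns the share of demand met without the grid."""
    soc, solar_used = 0.0, 0.0
    for demand, supply in zip(load, pv):
        direct = min(demand, supply)                # PV consumed on the spot
        soc = min(capacity, soc + supply - direct)  # store surplus, clip at capacity
        from_batt = min(demand - direct, soc)       # discharge to cover deficit
        soc -= from_batt
        solar_used += direct + from_batt
    return solar_used / sum(load)

# one synthetic day: flat 1 kWh/h demand, PV only during 12 daylight hours
ratio = solar_fraction([1.0] * 24, [2.0] * 12 + [0.0] * 12, capacity=5.0)
# → 17/24: 12 kWh used directly plus 5 kWh recovered from the battery
```

Running the same balance once for a pooled system (summed load and PV, one large battery) and once per house gives exactly the centralized versus distributed comparison described above.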
Language universals have long been attributed to an innate Universal Grammar. An alternative explanation states that linguistic universals emerged independently in every language in response to shared cognitive or perceptual biases. A computational model has recently shown how this could be the case, focusing on the paradigmatic example of the universal properties of colour naming patterns, and producing results in quantitative agreement with the experimental data. Here we investigate the role of an individual perceptual bias in the framework of the model. We study how, and to what extent, the structure of the bias influences the corresponding linguistic universal patterns. We show that the cultural history of a group of speakers introduces population-specific constraints that act against the pressure for uniformity arising from the individual bias, and we clarify the interplay between these two forces.
We describe the general concept and practical feasibility of a dc-based open energy system (OES) that proposes an alternative way of exchanging intermittent energy between houses in a local community. Each house is equipped with a dc nanogrid, including photovoltaic panels and batteries. We extend these nanogrids with a bidirectional dc–dc converter and a network controller so that power can be exchanged between houses over an external dc power bus. In this way, demand-response fluctuations are absorbed not only by the local battery, but can be spread over all batteries in the system. By using a combination of voltage- and current-controlled units, we implemented higher-level control software independent from the physical process. A further software layer for autonomous control handles power exchange based on a distributed multiagent system, using a peer-to-peer-like architecture. In parallel to the software, we made a physical model of a four-node OES on which different power exchange strategies can be simulated and compared. First results show an improved solar replacement ratio, and thus a reduction of ac grid consumption thanks to power interchange. The concept's feasibility has been demonstrated on the first three houses of a full-scale OES platform in Okinawa.
Long-distance dependencies are notoriously difficult to analyze in a formally explicit way because they involve constituents that seem to have been extracted from their canonical position in an utterance. The most widespread solution is to identify a GAP at an EXTRACTION SITE and to communicate information about that gap to its FILLER, as in What_FILLER did you see_GAP? This paper rejects the filler-gap solution and proposes a cognitive-functional alternative in which long-distance dependencies spontaneously emerge as a side effect of how grammatical constructions interact with each other for expressing different conceptualizations. The proposal is supported by a computational implementation in Fluid Construction Grammar that works for both parsing and production.
Computational experiments in cultural language evolution are important because they help to reveal the cognitive mechanisms and cultural processes that continuously shape and reshape the structure and knowledge of language. However, understanding the intricate relations between these mechanisms and processes can be a daunting challenge. This paper proposes to recruit the concept of fitness landscapes from evolutionary biology and computer science for visualizing the “linguistic fitness” of particular language systems. Through a case study on the German paradigm of definite articles, the paper shows how such landscapes can shed a new and unexpected light on non-trivial cases of language evolution. More specifically, the case study falsifies the widespread assumption that the paradigm is the accidental by-product of linguistic erosion. Instead, it has evolved to optimize the cognitive and perceptual resources that language users employ for achieving successful communication.
This thesis is devoted to the study of transportation systems by means of Complex Systems and Complex Network theories. Complex networks are tools of inestimable value in human transportation studies since, in most cases, the means of transportation that individuals use to move in space are bound to a complex network. The topological properties of transportation networks can influence both the ability of individuals to move and their behavior in the environment; thus a characterization of the network is mandatory in order to understand the properties of the considered system. The two transportation systems studied in this work are the Air Transport System and the mobility of cars in an urban environment. The analysis and modeling of the Air Transport System is the first and most extensive part of this thesis. In particular, we try to characterize and study the networks in which aircraft fly, exploiting these results to build a data-driven model of Air Traffic Control. The second part of the thesis is a continuation of the studies performed by Pierpaolo Mastroianni during his Master's thesis. His work concerned the analysis of GPS track data in the city of Rome and the inference of statistical laws characterizing the behavior of car drivers. My contribution to this work is the development of a model capable of explaining some of the results presented in the Master's thesis.
The introduction of a new SESAR scenario in the European airspace will impact the functioning and performance of the current Air Traffic Management (ATM) system. Understanding the features and limits of the current system is crucial in order to improve and design the structure of the future ATM. In this paper we present some results of the “Assessment of Critical Delay Patterns and Avalanche Dynamics” PhD project from the ComplexWorld Network. During this project we developed a model of Air Traffic Control (ATC) based on complex network theory, capable of reproducing the features of real ATC in three European national airspaces. We then developed an optimization algorithm based on “Extremal Optimization” in order to build efficient and globally optimized planned trajectories. The ATC model is applied to study the efficiency of these new planned trajectories when subject to external perturbations and to compare them to the current situation.
Novelties are a familiar part of daily life. They are also fundamental to the evolution of biological systems, human society, and technology. By opening new possibilities, one novelty can pave the way for others in a process that Kauffman has called “expanding the adjacent possible”. The dynamics of correlated novelties, however, have yet to be quantified empirically or modeled mathematically. Here we propose a simple mathematical model that mimics the process of exploring a physical, biological, or conceptual space that enlarges whenever a novelty occurs. The model, a generalization of Polya’s urn, predicts statistical laws for the rate at which novelties happen (Heaps’ law) and for the probability distribution on the space explored (Zipf’s law), as well as signatures of the process by which one novelty sets the stage for another. We test these predictions on four data sets of human activity: the edit events of Wikipedia pages, the emergence of tags in annotation systems, the sequence of words in texts, and listening to new songs in online music catalogues. By quantifying the dynamics of correlated novelties, our results provide a starting point for a deeper understanding of the adjacent possible and its role in biological, cultural, and technological evolution.
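The generalization of Polya's urn described above can be illustrated in a few lines. In the notation commonly used for this family of models (adopted here as an assumption), every draw is reinforced with `rho` copies, and the first draw of any colour triggers `nu + 1` brand-new colours, the expanding adjacent possible.

```python
import random

def urn_with_triggering(steps, rho=2, nu=1, seed=0):
    """Urn model with triggering: drawing any colour adds rho copies
    of it (reinforcement); drawing a never-seen colour additionally
    adds nu + 1 entirely new colours to the urn (triggering).
    Returns the times at which novelties occurred."""
    rng = random.Random(seed)
    urn = [0, 1]                 # two initial colours
    next_colour = 2
    seen = set()
    novelty_times = []
    for t in range(steps):
        c = rng.choice(urn)
        urn.extend([c] * rho)                    # reinforce the drawn colour
        if c not in seen:                        # first occurrence in the sequence
            seen.add(c)
            novelty_times.append(t)
            urn.extend(range(next_colour, next_colour + nu + 1))
            next_colour += nu + 1
    return novelty_times

novelties = urn_with_triggering(5000)
```

Counting how many novelties have occurred up to time t and plotting against t on log-log axes exhibits the sublinear, Heaps-like growth, with an exponent controlled by the ratio of `nu` to `rho`.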
Human languages are rule governed, but almost invariably these rules have exceptions in the form of irregularities. Since rules in language are efficient and productive, the persistence of irregularity is an anomaly. How does irregularity linger in the face of internal (endogenous) and external (exogenous) pressures to conform to a rule? Here we address this problem by taking a detailed look at simple past tense verbs in the Corpus of Historical American English. The data show that the language is open, with many new verbs entering. At the same time, existing verbs might tend to regularize or irregularize as a consequence of internal dynamics, but overall, the amount of irregularity sustained by the language stays roughly constant over time. Despite continuous vocabulary growth, and presumably, an attendant increase in expressive power, there is no corresponding growth in irregularity. We analyze the set of irregulars, showing they may adhere to a set of minority rules, allowing for increased stability of irregularity over time. These findings contribute to the debate on how language systems become rule governed, and how and why they sustain exceptions to rules, providing insight into the interplay between the emergence and maintenance of rules and exceptions in language.
Fluid Construction Grammar (FCG) is an open-source computational grammar formalism that is becoming increasingly popular for studying the history and evolution of language. This demonstration shows how FCG can be used to operationalise the cultural processes and cognitive mechanisms that underlie language evolution and change.
Construction Grammar has reached a stage of maturity where many researchers are looking for an explicit formal grounding of their work. Recently, there have been exciting developments to cater for this demand, most notably in Sign-Based Construction Grammar (SBCG) and Fluid Construction Grammar (FCG). Unfortunately, like playing a musical instrument, the formalisms used by SBCG and FCG take time and effort to master, and linguists who are unfamiliar with them may not always appreciate the far-reaching theoretical consequences of adopting this or that approach. This paper strips SBCG and FCG down to their bare essentials, and offers a linguist-friendly comparison that looks at how both approaches define constructions, linguistic knowledge and language processing.
The German definite article paradigm, which is notorious for its case syncretism, is widely considered to be the accidental by-product of diachronic changes. This paper argues instead that the evolution of the paradigm has been motivated by the needs and constraints of language usage. This hypothesis is supported by experiments that compare the current paradigm to its Old High German ancestor (OHG; 900–1100 AD) in terms of linguistic assessment criteria such as cue reliability, processing efficiency and ease of articulation. Such a comparison has been made possible by “bringing back alive” the OHG system through a computational reconstruction in the form of a processing model. The experiments demonstrate that syncretism has made the New High German system more efficient for processing, pronunciation and perception than its historical predecessor, without harming the language's strength at disambiguating utterances.
The naming game (NG) describes the agreement dynamics of a population of N agents interacting locally in pairs, leading to the emergence of a shared vocabulary. The model is relevant to the novel field of semiotic dynamics, and specifically to opinion formation and language evolution. Its applications range from spreading and leader-election algorithms in wireless sensor networks to user-based social tagging systems. In this paper, we introduce the concept of overhearing: at every time step of the game, a random set of N·delta individuals is chosen from the population who overhear the word transmitted by the speaker and reshape their inventories accordingly. When delta = 0 one recovers the behavior of the original NG. As delta increases, the population of agents reaches agreement faster and with a significantly lower memory requirement. The convergence time to reach global consensus scales as log N as delta approaches 1. Copyright (C) EPLA, 2013
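The dynamics described above can be sketched in a short simulation. This is a minimal naming game with overhearing, not the authors' exact implementation; in particular, the update rule for overhearers (they behave like additional hearers) and the consensus check are simplifying assumptions.

```python
import random

def naming_game(n_agents=50, delta=0.0, max_steps=200_000, seed=0):
    """Minimal naming game with overhearing.

    delta is the fraction of the population that overhears each
    interaction.  Returns the number of games played until global
    consensus, or None if max_steps is reached."""
    rng = random.Random(seed)
    inventories = [set() for _ in range(n_agents)]
    next_word = 0
    for step in range(1, max_steps + 1):
        speaker, hearer = rng.sample(range(n_agents), 2)
        if not inventories[speaker]:
            inventories[speaker].add(next_word)      # invent a brand-new word
            next_word += 1
        word = rng.choice(sorted(inventories[speaker]))
        success = word in inventories[hearer]
        # the addressed hearer plus a random set of overhearers
        listeners = {hearer}
        others = [a for a in range(n_agents) if a not in (speaker, hearer)]
        k = min(int(delta * n_agents), len(others))
        listeners.update(rng.sample(others, k))
        for a in listeners:
            if word in inventories[a]:
                inventories[a] = {word}              # known word: collapse to it
            else:
                inventories[a].add(word)             # unknown word: memorize it
        if success:
            inventories[speaker] = {word}            # successful speaker also collapses
        # global consensus: every agent holds exactly the same single word
        if all(len(inv) == 1 for inv in inventories) and \
                len({min(inv) for inv in inventories}) == 1:
            return step
    return None

steps = naming_game(n_agents=30, delta=0.0, seed=1)
print("consensus after", steps, "games")
```

Running the same population with a nonzero `delta` lets each transmitted word reach several agents at once, which is the mechanism the paper credits for faster agreement.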
Despite centuries of research, the origins of grammatical case are more mysterious than ever. This paper addresses some unanswered questions through language game experiments in which a multi-agent population self-organizes a morphosyntactic case system. The experiments show how the formal part of grammatical constructions may pressure such emergent systems to become more economical.
Case has fascinated linguists for centuries without however revealing its most important secrets. This paper offers operational explanations for case through language game experiments in which autonomous agents describe real-world events to each other. The experiments demonstrate (a) why a language may develop a case system, (b) how a population can self-organize a case system, and (c) why and how an existing case system may take on new functions in a language.
German case syncretism is often assumed to be the accidental by-product of historical development. This paper contradicts this claim and argues that the evolution of German case is driven by the need to optimize the cognitive effort and memory required for processing and interpretation. This hypothesis is supported by a novel kind of computational experiment that reconstructs and compares attested variations of the German definite article paradigm. The experiments show how the intricate interaction between those variations and the rest of the German “linguistic landscape” may direct language change.
Linguistic utterances are full of errors and novel expressions, yet linguistic communication is remarkably robust. This paper presents a double-layered architecture for open-ended language processing, in which “diagnostics” and “repairs” operate on a meta-level for detecting and solving problems that may occur during habitual processing on a routine layer. Through concrete operational examples, this paper demonstrates how such an architecture can directly monitor and steer linguistic processing, and how language can be embedded in a larger cognitive system.
Almost all languages in the world have a way to formulate commands. Commands specify actions that the body should undertake (such as “stand up”), possibly involving other objects in the scene (such as “pick up the red block”). Action language involves various competences, in particular (i) the ability to perform an action and recognize which action has been performed by others (the so-called mirror problem), and (ii) the ability to identify which objects are to participate in the action (e.g. “the red block” in “pick up the red block”) and understand what role objects play, for example whether an object is the agent or undergoer of the action, or the patient or target (as in “put the red block on top of the green one”). This chapter describes evolutionary language game experiments exploring how these competences can originate and how they can be carried out and acquired by real robots, using a whole-systems approach.
Cognitive linguistics has reached a stage of maturity where many researchers are looking for an explicit formal grounding of their work. Unfortunately, most current models of deep language processing incorporate assumptions from generative grammar that are at odds with the cognitive movement in linguistics. This demonstration shows how Fluid Construction Grammar (FCG), a fully operational and bidirectional unification-based grammar formalism, caters for this increasing demand. FCG features many of the tools that were pioneered in computational linguistics in the 1970s–90s, but combines them in an innovative way. This demonstration highlights the main differences between FCG and related formalisms.
This chapter introduces a new experimental paradigm for studying issues in the grounding of language in robots, and the integration of all aspects of intelligence into a single system. The paradigm is based on designing and implementing artificial agents so that they are able to play language games about situations they perceive and act upon in the real world. The agents are not pre-programmed with an existing language but with the necessary cognitive functions to self-organize communication systems from scratch, to learn them from human language users if there are sufficiently frequent interactions, and to participate in the on-going cultural evolution of language.
This chapter introduces very briefly the framework and tools for lexical and grammatical processing that have been used in the evolutionary language game experiments reported in this book. This framework is called Fluid Construction Grammar (FCG) because it rests on a constructional approach to language and emphasizes flexible grammar application. Construction grammar organizes the knowledge needed for parsing or producing utterances in terms of bi-directional mappings between meaning and form. In line with other contemporary linguistic formalisms, FCG uses feature structures and unification and includes several innovations which make the formalism more adapted to implement flexible and robust language processing systems on real robots. This chapter is an introduction to the formalism and how it is used in processing.
This chapter introduces the computational infrastructure that is used to bridge the gap between results from sensorimotor processing and language. It consists of a system called Incremental Recruitment Language (IRL) that is able to configure a network of cognitive operations to achieve a particular communicative goal. IRL contains mechanisms for finding such networks, chunking subnetworks for more efficient later reuse, and completing partial networks (as possibly derived from incomplete or only partially understood sentences).
This chapter describes key aspects of a visual perception system as a core component of language game experiments on physical robots. The vision system is responsible for segmenting the continuous flow of incoming visual stimuli into segments and computing a variety of features for each segment. This happens through a combination of bottom-up processing that works on the incoming signal and top-down processing based on expectations about what was seen before or on objects stored in memory. This chapter consists of two parts. The first is concerned with extracting and maintaining world models of spatial scenes, without any prior knowledge of the possible objects involved. The second part deals with the recognition of gestures and actions, which establishes the joint attention and pragmatic feedback that are an important aspect of language games.
This chapter explores a semantics-oriented approach to the origins of syntactic structure. It reports on preliminary experiments whereby speakers introduce hierarchical constructions and grammatical markers to express which conceptualization strategy hearers are supposed to invoke. This grammatical information helps hearers to avoid semantic ambiguity or errors in interpretation. A simulation study is performed for spatial grammar using robotic agents that play language games about objects in their shared world. The chapter uses a reconstruction of a fragment of German spatial language to identify the niche of spatial grammar, and then reports on acquisition and formation experiments in which agents seeded with a ‘pidgin German’ without grammar are made to interact until rudiments of hierarchical structure and grammatical marking emerge.
Grounding language in sensorimotor spaces is an important and difficult task. In order for robots to be able to interpret and produce utterances about the real world, they have to link symbolic information to continuous perceptual spaces. This requires dealing with inherent vagueness, noise and differences in perspective in the perception of the real world. This paper presents two case studies for spatial language and quantification that show how cognitive operations – the building blocks of grounded procedural semantics – can be efficiently grounded in sensorimotor spaces.
Russian requires speakers of the language to conceptualize events using temporal language devices such as Aktionsarten and aspect, which relate to particular profiles and characteristics of events such as whether the event just started, whether it is ongoing or it is a repeated event. This chapter explores how such temporal features of events can be processed and learned by robots through grounded situated interactions. We use a whole-systems approach, tightly integrating perception, conceptualization, grammatical processing and learning, and demonstrate how a system of Aktionsarten can be acquired.
Basic postures such as sit, stand and lie are ubiquitous in human interaction. In order to build robots that aid and support humans in their daily life, we need to understand how posture categories can be learned and recognized. This paper presents an unsupervised learning approach to posture recognition for a biped humanoid robot. The approach is based on Slow Feature Analysis (SFA), a biologically inspired algorithm for extracting slowly changing signals from signals varying on a fast time scale. Two experiments are carried out: First, we consider the problem of recognizing static postures in a multimodal sensory stream which consists of visual and proprioceptive stimuli. Second, we show how to extract a low-dimensional representation of the sensory state space which is suitable for posture recognition in a more complex setting. We point out that the beneficial performance of SFA in this task can be related to the fact that SFA computes manifolds which are used in robotics to model invariants in motion and behavior. Based on this insight, we also propose a method for using SFA components for guided exploration of the state space.
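Linear Slow Feature Analysis, the core of the approach described above, can be sketched compactly: whiten the input signal, then project onto the directions along which its temporal derivative has minimal variance. The toy signal below is an illustrative assumption, not data from the posture experiments.

```python
import numpy as np

def slow_feature_analysis(x, n_features=1):
    """Linear SFA: project the centred, whitened signal onto the
    directions along which its temporal derivative varies least."""
    x = x - x.mean(axis=0)
    cov = np.cov(x, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    keep = evals > 1e-10                     # drop degenerate directions
    white = evecs[:, keep] / np.sqrt(evals[keep])
    z = x @ white                            # whitened signal
    dcov = np.cov(np.diff(z, axis=0), rowvar=False)
    _, devecs = np.linalg.eigh(dcov)         # eigh sorts ascending: slowest first
    return z @ devecs[:, :n_features]

# toy demo: a slowly varying latent signal mixed with a fast one
t = np.linspace(0, 4 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(20 * t)
mix = np.stack([slow + 0.5 * fast, fast + 0.5 * slow], axis=1)
y = slow_feature_analysis(mix)[:, 0]         # recovers the slow component up to sign and scale
```

The slowest direction is found as the smallest-eigenvalue eigenvector of the derivative covariance, which is why the demo recovers the slow sinusoid from the mixture without supervision.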
This chapter introduces the modular humanoid robot Myon, covering its mechatronic design, embedded low-level software, distributed processing architecture, and the complementary experimental environment. The Myon humanoid is the descendant of various robotic hardware platforms which have been built over the years and therefore combines the latest research results on the one hand, and the expertise of how a robot has to be built for experiments on embodiment and language evolution on the other hand. In contrast to many other platforms, the Myon humanoid can be used as a whole or in parts. Both the underlying architecture and the supportive application software allow for ad hoc changes in the experimental setup.
This chapter studies how basic spatial categories such as left-right, front-back, far-near or north-south can emerge in a population of robotic agents in co-evolution with terms that express these categories. It introduces various language strategies and tests them first in reconstructions of German spatial terms, then in acquisition experiments to demonstrate the adequacy of the strategy for learning these terms, and finally in language formation experiments showing how a spatial vocabulary and the concepts expressed by it can emerge in a population of embodied agents from scratch.
This chapter investigates how a vocabulary for talking about body actions can emerge in a population of grounded autonomous agents instantiated as humanoid robots. The agents play a Posture Game in which the speaker asks the hearer to take on a certain posture. The speaker either signals success if the hearer indeed performs an action to achieve the posture or he shows the posture himself so that the hearer can acquire the name. The challenge of emergent body language raises not only fundamental issues in how a perceptually grounded lexicon can arise in a population of autonomous agents but also more general questions of human cognition, in particular how agents can develop a body model and a mirror system so that they can recognize actions of others as being the same as their own.
This chapter explores a possible language strategy for verbalizing aspect: the encoding of Aktionsarten by means of morphological markers. The Russian tense-aspect system is used as a model. We first operationalize this system and reconstruct the learning operators needed for acquiring it. Then we perform a first language formation experiment in which a novel system of Aktionsarten emerges and gets coordinated between the agents, driven by a need for higher expressivity.
Language change is increasingly recognized as one of the most crucial sources of evidence for understanding human cognition. Unfortunately, despite sophisticated methods for documenting which changes have taken place, the question of why languages evolve over time remains open for speculation. This paper presents a novel research method that addresses this issue by combining agent-based experiments with deep language processing, and demonstrates the approach through a case study on German definite articles. More specifically, two populations of autonomous agents are equipped with a model of Old High German (500–1100 AD) and Modern High German definite articles respectively, and a set of self-assessment criteria for evaluating their own linguistic performances. The experiments show that inefficiencies detected in the grammar by the Old High German agents correspond to grammatical forms that have actually undergone the most important changes in the German language. The results thus suggest that the question of language change can be reformulated as an optimization problem in which language users try to achieve their communicative goals while allocating their cognitive resources as efficiently as possible.
The question of how a shared vocabulary can arise in a multi-agent population, despite the fact that each agent autonomously invents and acquires words, has been solved. The solution is based on alignment: agents score all associations between words and meanings in their lexicons and update these preference scores based on communicative success. A positive feedback loop between success and use thus arises, which causes the spontaneous self-organization of a shared lexicon. The same approach has been proposed for explaining how a population can arrive at a shared grammar, where we face the same problem of variation because each agent invents and acquires their own grammatical constructions. However, a problem arises if constructions reuse parts that can also exist on their own. This happens particularly when frequent usage patterns, which are based on compositional rules, are stored as such. The problem is how to maintain systematicity. This paper identifies this problem and proposes a solution in the form of multilevel alignment. Multilevel alignment means that the updating of preference scores is not restricted to the constructions that were used in the utterance but extends downward and upward in the subsumption hierarchy.
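The score-updating rule described above, often called lateral inhibition, can be sketched for the single-level lexical case. The function and parameter values below are illustrative assumptions; the paper's multilevel variant would apply the same update to constructions up and down the subsumption hierarchy.

```python
def lateral_inhibition(lexicon, meaning, word, success,
                       reward=0.1, penalty=0.1):
    """Update preference scores for one meaning after a language game.
    lexicon maps meaning -> {word: score}; scores are clipped to [0, 1]."""
    scores = lexicon.setdefault(meaning, {})
    if success:
        scores[word] = min(1.0, scores.get(word, 0.0) + reward)
        for w in scores:
            if w != word:
                scores[w] = max(0.0, scores[w] - penalty)   # inhibit competitors
    else:
        scores[word] = max(0.0, scores.get(word, 0.0) - penalty)
    return lexicon

lex = {"BALL": {"bolo": 0.5, "wabu": 0.5}}
lateral_inhibition(lex, "BALL", "bolo", success=True)
# repeated successes drive "bolo" toward 1.0 while "wabu" is inhibited
```

Because success both rewards the used association and punishes its competitors, repeated use creates the positive feedback loop between success and use that the abstract describes.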
Becoming a proficient speaker of a language requires more than just learning a set of words and grammar rules; it also implies mastering the ways in which speakers of that language typically innovate: stretching the meaning of words, introducing new grammatical constructions, introducing a new category, and so on. This paper demonstrates that such meta-knowledge can be represented and applied by reusing representations and processing techniques similar to those needed for routine linguistic processing, which makes it possible for language processing to employ computational reflection.
The fascinating question of the origins and evolution of language has been drawing a lot of attention recently, not only from linguists, but also from anthropologists, evolutionary biologists, and brain scientists. This groundbreaking book explores the cultural side of language evolution. It proposes a new overarching framework based on linguistic selection and self-organization and explores it in depth through sophisticated computer simulations and robotic experiments. Each case study investigates how a particular type of language system can emerge in a population of language game playing agents and how it can continue to evolve in order to cope with changes in ecological conditions. Case studies cover, on the one hand, the emergence of concepts and words for proper names, color terms, names for bodily actions, spatial terms and multi-dimensional words. On the other hand, a second set of experiments focuses on the emergence of grammar, specifically case grammar for expressing argument structure, functional grammar for expressing different uses of spatial relations, internal agreement systems for marking constituent structure, morphological expression of aspect, and quantifiers expressed as articles. The book is ideally suited as study material for an advanced course on language evolution and it will be of interest to anyone who wonders how human languages may have originated.
Written by leading international experts, this volume presents contributions establishing the feasibility of human language-like communication with robots. The book explores the use of language games for structuring situated dialogues in which contextualized language communication and language acquisition can take place. Within the text are integrated experiments demonstrating the extensive research which targets artificial language evolution. Language Grounding in Robots uses the design layers necessary to create a fully operational communicating robot as a framework for the text, focusing on the following areas: Embodiment; Behavior; Perception and Action; Conceptualization; Language Processing; Whole Systems Experiments. This book serves as an excellent reference for researchers interested in further study of artificial language evolution.
The lexicons of human languages organize their units at two distinct levels. At a first combinatorial level, meaningless forms (typically referred to as phonemes) are combined into meaningful units (typically referred to as morphemes). Thanks to this, many morphemes can be obtained by relatively simple combinations of a small number of phonemes. At a second compositional level of the lexicon, morphemes are composed into larger lexical units, the meaning of which is related to the individual meanings of the composing morphemes. This duality of patterning is not a necessity for lexicons, and the question remains wide open as to how a population of individuals is able to bootstrap such a structure, and what the evolutionary advantages of its emergence are. Here we address this question in the framework of a multi-agent model, where a population of individuals plays simple naming games in a conceptual environment modeled as a graph. We demonstrate that errors in communication create the conditions for the emergence of duality of patterning, which can thus be explained in a purely cultural way. Compositional lexicons turn out to lead to successful communication faster than purely combinatorial lexicons, suggesting that meaning played a crucial role in the evolution of language.
One of the fundamental problems in cognitive science is how humans categorize the visible color spectrum. The empirical evidence of the existence of universal or recurrent patterns in color naming across cultures is paralleled by the observation that color names begin to be used by individual cultures in a relatively fixed order. The origin of this hierarchy is largely unexplained. Here we resort to multiagent simulations, where a population of individuals, subject to a simple perceptual constraint shared by all humans, namely the human Just Noticeable Difference, categorizes and names colors through a purely cultural negotiation in the form of language games. We found that the time needed for a population to reach consensus on a color name depends on the region of the visible color spectrum. If color spectrum regions are ranked according to this criterion, a hierarchy with [red, (magenta)-red], [violet], [green/yellow], [blue], [orange], and [cyan], appearing in this order, is recovered, featuring an excellent quantitative agreement with the empirical observations of the World Color Survey (WCS). Our results demonstrate a clear possible route to the emergence of hierarchical color categories, confirming that the theoretical modeling in this area has now attained the required maturity to make significant contributions to the ongoing debates concerning language universals.
All languages of the world have a way to talk about space and spatial relations of objects. Cross-culturally, immense variation in how people conceptualize space for language has been attested. Different spatial conceptualization strategies, such as proximal, projective and absolute, have been identified to underlie people's conception of spatial reality. This paper argues that spatial conceptualization strategies are negotiated in a cultural process of linguistic selection. Conceptualization strategies originate in the cognitive capabilities of agents. The ecological conditions and the structure of the environment influence the conceptualization strategy agents invent and which corresponding system of lexicon and ontology of spatial relations is selected for. The validity of these claims is explored using populations of humanoid robots.
This paper compares two prominent approaches in artificial language evolution: Iterated Learning and Social Coordination. More specifically, the paper contrasts experiments in both approaches on how populations of artificial agents can autonomously develop a grammatical case marking system for indicating event structure (i.e. “who does what to whom”). The comparison demonstrates that only the Social Coordination approach leads to a shared communication system in a multi-agent population. The paper concludes with an analysis and discussion of the results, and argues that Iterated Learning in its current form cannot explain the emergence of more complex natural language-like phenomena.
The paper surveys recent research on language evolution, focusing in particular on models of cultural evolution and how they are being developed and tested using agent-based computational simulations and robotic experiments. The key challenges for evolutionary theories of language are outlined and some example results are discussed, highlighting models explaining how linguistic conventions get shared, how conceptual frameworks get coordinated through language, and how hierarchical structure could emerge. The main conclusion of the paper is that cultural evolution is a much more powerful process than usually assumed, implying that fewer innate structures or biases are required and consequently that human language evolution has to rely less on genetic evolution.
This paper presents a design pattern for handling argument structure and offers a concrete operationalization of this pattern in Fluid Construction Grammar. Argument structure concerns the mapping between “participant structure” (who did what to whom) and instances of “argument realization” (the linguistic expression of participant structures). This mapping is multilayered and indirect, which poses great challenges for grammar design. In the proposed design pattern, lexico-phrasal constructions introduce their semantic and syntactic potential of linkage. Argument structure constructions, then, select from this potential the values that they require and implement the actual linking.
This paper illustrates the use of “feature matrices”, a technique for handling ambiguity and feature indeterminacy in feature structure grammars using unification as the single mechanism for processing. Both phenomena involve forms that can be mapped onto multiple, often conflicting values. This paper illustrates their respective challenges through German case agreement, which has become the litmus test for demonstrating how well a grammar formalism deals with multifunctionality. After reviewing two traditional solutions, the paper demonstrates how complex grammatical categories can be represented as feature matrices instead of single-valued features. Feature matrices allow a free flow of constraints on possible feature-values coming from any part of an utterance, and they postpone commitment to any particular value until sufficient constraints have been identified. All examples in this paper are operationalized in Fluid Construction Grammar, but the design principle can be extended to other unification grammars as well.
Natural languages are fluid. New conventions may arise and there is never absolute consensus in a population. How can human language users nevertheless have such a high rate of communicative success? And how do they deal with the incomplete sentences, false starts, errors and noise that are common in normal discourse? Fluidity, ungrammaticality and error are key problems for formal descriptions of language and for computational implementations of language processing because these seem to be necessarily rigid and mechanical. This chapter discusses how these issues are approached within the framework of Fluid Construction Grammar. Fluidity is not achieved by a single mechanism but through a combination of intelligent grammar design and flexible processing principles.
One of the key components for achieving flexible, robust, adaptive and open-ended language-based communication between humans and robots – or between robots and robots – is rich deep semantics. AI has a long tradition of work in the representation of knowledge, most of it within the logical tradition. This tradition assumes that an autonomous agent is able to derive formal descriptions of the world which can then be the basis of logical inference and natural language understanding or production. This paper outlines some difficulties with this logical stance and reports alternative research on the development of an “embodied cognitive semantics” that is grounded in the world through a robot's sensorimotor system and is evolutionary in the sense that the conceptual frameworks underlying language are assumed to be adapted by agents in the course of dialogs and thus undergo constant change.