Super Linguistics Colloquium Series

Fall 2020 Schedule

Note: All talks will take place on Zoom. 

 

Friday, September 25, 12 noon-2:15pm (CEST)

Lauren Gawne (La Trobe University)

Title: Emoji as Digital Gesture: Digital multimodality

Abstract: In the space of a decade, emoji have gone from being unavailable outside of Japan to active use by over 90% of the world's online population. Their sudden rise in use is often attributed to the way they allow users to convey in writing what is usually done with tone of voice and body language in face-to-face interaction, but the specific implementation of this general claim has been under-explored. In this talk I discuss an ongoing collaboration with Gretchen McCulloch, in which we look at the functional parallels between emoji and co-speech gestures. In addition to the obvious similarities between certain emoji and certain gestures (e.g., winking, thumbs up), gestures are commonly grouped into subcategories according to how codified their meaning is and how much they depend on surrounding speech.

 

Friday, October 2, 3:15-5:30pm (CEST)

Neil Cohn (Tilburg University)

Title: The grammar of visual narratives: The structure of sequential images

Abstract: Sequences of images are all around us - from historical scrolls and cave paintings to instruction manuals and contemporary comics. Just how do we comprehend these sequences of images? Recent research has shown that the comprehension of visual narratives extends beyond the meaningful relationships between images and uses a “narrative grammar” that organizes this semantic information. I will show that this structure, based on contemporary construction grammars from linguistics, packages meaning into categorical roles within hierarchical constituents to account for phenomena like long-distance dependencies and structural ambiguities. In addition, using measurements of brainwaves (EEG/ERPs), I will show that this grammar is independent of meaning (e.g., N400) and engages similar neurocognitive processing as syntax in language (e.g., anterior negativities, P600). Finally, I will show that sequential image processing is modulated by a person's fluency in the specific narrative grammars found in different “visual languages” of the world. Altogether, this work introduces emerging research from the linguistic and cognitive sciences that challenges conventional wisdom with a new paradigm for thinking about the connections between language and graphic communication.

 

Friday, October 16, 3:15-5:30pm (CEST)

Lilia Rissman (University of Wisconsin-Madison)

Title: From gesture to sign language: grammaticalization of agent-backgrounding

Abstract: How is gesture similar to language and how is gesture different? I address this question by observing language emergence in the manual modality, comparing how English-speaking gesturers, adult Nicaraguan homesigners, child Guatemalan homesigners, and adult signers of Nicaraguan Sign Language describe change-of-state events. I found that all groups used morphosyntactic devices to encode agency. Nonetheless, only those individuals who received signing input from an older peer used morphosyntactic devices to encode agent-backgrounding (viewing an event from the perspective of the patient rather than the agent, as in the passive sentence "a book was tipped over"). These results suggest that symbolic mappings between forms and meanings are shared across gesture and language. At the same time, gesture appears to lack the network of systematic mappings found in language, accounting for why gesturers encode agency but not agent-backgrounding.

 

Friday, November 6, 3:15-5:30pm (CET)

Martina Wiltschko (ICREA, Pompeu Fabra University) 

Title: The grammar of emotions and the absence thereof

Abstract: At least since Frege, philosophers and linguists alike have recognized a difference between descriptive and emotive language. In the empirical part of this talk, I explore the grammar of emotive language through a case study of response markers. I conclude that emotions are never encoded with grammatical means, only with words (love, hate, angry) or interjections (yikes, wohoooo). There are no grammatical markers for fear or disgust, etc. The only “emotion” expressed with grammatical markers is surprise (e.g., via miratives or exclamatives). But it has been argued on independent grounds that “surprise” is in fact not an emotion, leaving us with the conclusion that emotions are not part of grammar, only part of the lexicon. In other words, there is no dedicated EmotivePhrase. This raises the question as to why this should be the case.

In the theoretical part of the talk, I provide a brief overview of recent research on emotions. It has been argued that the classic view of emotions as consisting of a few basic emotions (like sadness, happiness, anger, disgust, …) cannot be upheld. Rather, according to the work of Feldman Barrett and colleagues, emotions are concepts that are constructed by the brain. Accordingly, we can conclude - based on the linguistic findings reported in the first part of the talk - that the system that constructs language (and thought, i.e., UG) is in complementary distribution with the system that constructs emotions (and feelings). And given that, for a linguist, complementarity is the hallmark of identity, I speculate that the universal spine which configures language (Wiltschko 2014) also configures emotions, and therefore (seemingly paradoxically) emotions are not part of grammar. I end with a preliminary discussion of the implications of this hypothesis for our understanding of emotions, the human mind, and the evolution of language.

 

Friday, November 13, 3:15-5:30pm (CET)

Julia Fischer (University of Göttingen / German Primate Center / Leibniz ScienceCampus Primate Cognition)

Title: How the choice of different research frameworks affects our conception of nonhuman primate communication

Abstract: Studies of the communicative abilities of animals are often framed within an anthropocentric research program: We begin by identifying specific traits of interest in humans and then check whether these traits, or 'proto' versions thereof, may be found in other animals. Evolutionary research programs, in contrast, tend to focus on the function of traits, and use bottom-up comparative approaches to reconstruct the evolution of a given trait. Using the case of the alarm call system of members of the genus Chlorocebus (vervet monkeys and green monkeys), I will exemplify the strengths and weaknesses of the respective approaches. I will focus on two aspects of the vervet monkey alarm call system that have been deemed central in the quest to understand the evolution of speech, namely 'meaning' and 'vocal learning'. Our study on green monkey vocal responses to an unknown flying object (a research drone) and the subsequent comparison of the acoustic structure of the alarm calls in the genus provides strong support for the notion that the calls are innate responses to different types of predators. Vocal responses may then be subject to learning by experience, resulting in variation in call usage. Listeners, in contrast, can and need to learn what different sounds refer to, and they are able to do so after minimal exposure. The similarities in the vocal communication of (these) nonhuman primates and humans appear to be on the side of the listener. I will argue that there is a greater need to be aware of the perhaps unintended consequences of favoring one research approach over the other.

 

Friday, November 20, 3:15-5:30pm (CET)

David Poeppel (New York University / Max Planck Institute)

Title: The missing link(ing hypotheses)

Abstract: How language, music, and other complex sequences are represented and computed in the human brain is a fundamental area of brain research that continues to stimulate as much research as it does vigorous debate. Some classical questions (and persistent puzzles) - highlighting the tension between neuroscience and cognitive science research - concern the role of structure and abstraction. Recent findings from human neuroscience, across various techniques (e.g., fMRI, MEG, ECoG), suggest that the brain supports hierarchically structured abstract representations. New data on the role of brain rhythms show that such neural activity appears to underpin the tracking of structure-building operations. If the new approaches are on the right track, they invite closer relations between fields and better linking hypotheses between the foundational questions that animate both the neurosciences and the cognitive sciences.

 

Wednesday, November 25, 3:15-5:30pm (CET)

Andrea Ravignani (Max Planck Institute)

Title: Rhythm and timing from an evolutionary, comparative perspective

Abstract: Capacities for rhythm, beat perception, and synchronization are key in human music, dance, speech, and action. As recently as a decade ago, many thought that humans were quite unique in their rhythmic capacities. Since 2009, however, this theoretical landscape has started to change: research on a parrot (Snowball the cockatoo) and other animals provided evidence that other species can perform predictive and flexible rhythmic synchronization. In this talk, I will take a broad comparative approach, showing the previously underestimated richness of animal rhythms. In particular, I will focus on my work on non-human primates and seals. These data suggest that rhythmic abilities in other mammals are more developed than previously surmised. I will discuss how evidence for rhythm in other species can inform the evolution of rhythmic capacities in our own.

Published Sep. 14, 2020 1:13 PM - Last modified Nov. 16, 2020 12:34 PM