Super Linguistics Colloquium Series

2021

A semantics of face emoji in discourse

Friday, January 22nd, 5.00pm - 7.00pm (please note that we start at 5pm sharp)

Gabriel Greenberg (UCLA), Patrick G. Grosz (UiO), Elsi Kaiser (USC), Christian De Leon (UCLA)

In this talk, we argue that face emoji (😀, 😟) are a part of multi-modal discourse and express emotional attitudes. We argue that they comment on the text that they accompany, and that they are more constrained - in semantically interesting ways - than one might initially expect. We hypothesize that they denote functions from individuals x (attitude holders), propositions p, and questions Q to sets of situations in which the individual x holds an emotional attitude (e.g. happy or unhappy) about how the proposition p resolves the question Q. The discourse contribution of an emoji is to add this denotation, as applied to a proposition typically expressed by the preceding clause, to the author’s discourse commitments. We argue that this analysis derives a range of interesting constraints, including [i.] positioning constraints of emoji with regard to the text that they accompany, [ii.] apparently direct interactions of emoji with lexical material in the accompanying text (including the scalar operator ‘only’), and [iii.] a range of apparent mixed-emotion uses of face emoji.
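
As a minimal sketch of the hypothesized denotation (our schematic notation, reconstructed from the abstract rather than the authors' official formalism), the happy face might be rendered as:

⟦😀⟧ = λx. λp. λQ. {s : in s, x is happy about how p resolves Q}

On this sketch, appending 😀 to a clause that expresses a proposition p adds ⟦😀⟧(author)(p)(Q) to the author's discourse commitments, for a contextually salient question Q.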

The Social Origins of Language

Friday, February 5th, 3.00pm - 5.00pm 

Robert Seyfarth (University of Pennsylvania)

Despite their differences, human language and the vocal communication of nonhuman primates share many features. Both constitute forms of coordinated activity, rely on many shared neural mechanisms, and involve discrete, combinatorial cognition that includes rich pragmatic inference. These common features suggest that during evolution the ancestors of all modern primates faced similar social problems and responded with similar systems of communication and cognition. When language later evolved from this common foundation, many of its distinctive features were already present.

Meaning-First meets Super Linguistics

Friday, February 12th, 3.00pm - 5.00pm  

Artemis Alexiadou (Humboldt-Universität zu Berlin/Leibniz-ZAS) and Uli Sauerland (Leibniz-ZAS)

In a recent paper, we presented a Meaning-First approach (MFA) to grammar (Sauerland & Alexiadou 2020, doi:10.3389/fpsyg.2020.571295). In this talk, we discuss the potential this view might have to generate perspectives and research questions for super-linguistic phenomena. The three relevant assumptions of the MFA are the following: i) complex thought-structure generation is independent of language and occurs in species other than humans, ii) humans can communicate thoughts by compression into an articulable form, and iii) cognitive systems other than logical thought can (and do) intrude in the compression/articulation process, adding a socio-emotive dimension. After introducing the MFA, we argue that phenomena involving different communicative modalities can easily be accommodated in the MFA because language-independent representations are central to it. We then discuss two specific applications of the MFA: a) an account of multi-modal code-blending as parallel compression (Branchini & Donati 2016, doi:10.5334/gjgl.29) and b) the interaction between ellipsis and the intrusion of socio-emotive content.

The Extension Dogma

Friday, March 19th, 3.00pm - 5.00pm 

Paul Pietroski (Rutgers University)

In studies of meaning, linguists and philosophers have often followed Donald Davidson and David Lewis in assuming that whatever meanings are--if there are any--they determine extensions, at least relative to contexts. After reviewing some reasons for rejecting this assumption, which is especially unfriendly to mentalistic conceptions of meaning, I'll suggest that this assumption became prevalent for bad reasons. As time permits, I'll conclude by reviewing some work which suggests that even if we focus on quantificational determiners, mentalistic conceptions of meaning are motivated and The Extension Dogma should be abandoned.

Multiple contexts in drama: Henry V

Friday, March 26th, 3.00pm-5.00pm

Sigrid Beck (Universität Tübingen); joint work with Mathias Bauer (Universität Tübingen)

We develop an analysis of how multiple contexts interact in literary dialogue, as we witness it in drama. Minimally, we must consider an internal context, in which the characters of the play interact, as well as an external context, in which the hearer is the play's audience. Pragmatic mechanisms like presupposition, anti-presupposition and implicature, as we find them in Shakespeare’s Henry V, are shown to exploit the multiple contexts involved, in such a way as to inform the literary interpretation of the play.

Children create design features of language

Friday, April 16th, 3.00pm - 5.00pm  

Sotaro Kita (University of Warwick)

Why does language have the universal properties that it has? I will provide evidence for the idea that some of the design features of language (Hockett, 1956) have emerged (partly) due to children's tendency to shape communication systems into "language-like" ones. I will discuss evidence from an emerging sign language (Nicaraguan Sign Language), children's gestural communication when speech is not available, and children's use of sound symbolic words, in which the word sounds like what it means.

Connectedness: a cognitive primitive as revealed by language, and found elsewhere (namely, with baboons)

Friday, May 21st, 3.00pm - 5.00pm  

Emmanuel Chemla (CNRS)

Imagine a word, say 'blicket', that would mean "apple or banana": apples are blickets, and bananas are blickets. Intuitively, 'blicket' is a strange word: it refers to a concept that is unnatural. Why? It has been claimed that words must correspond to "connected" concepts: if apples are blickets and bananas are blickets, then anything in between an apple and a banana should also be a blicket; so if 'blicket' were to be a more traditional word, it may have to include all fruits, not only apples and bananas. By and large, simple "content words", concrete nouns and adjectives, have connected meanings (cf. extensive philosophical work by Gärdenfors, and much work in other domains such as computational psychology, language acquisition, or computer science).

Starting from there, we will formalize a notion of connectedness that applies to any type of word, not only content words. We will find that logical words (in particular quantifiers, such as 'all', 'some', 'none' in English) appear to also be connected across languages. We will provide evidence that non-human animals (specifically, baboons, Papio papio) tend to form categories that are connected in the same sense, and argue that this tendency may reveal what are natural classes of objects (content-word-like) or natural classes of patterns (function-word-like).
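
A minimal sketch of such a definition (our reconstruction from the abstract, assuming a betweenness relation over a conceptual space in the style of Gärdenfors):

C is connected iff for all x, y ∈ C, every z that lies between x and y is also in C.

On this definition, the 'blicket' concept {apples, bananas} fails to be connected, since fruits in between an apple and a banana are excluded.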

Syntactically-integrated co-speech gestures: some preliminary evidence from the languages of Southern Italy

Friday, June 4th, 3.00pm - 5.00pm 

Valentina Colasanti (Trinity College Dublin - University of Dublin)

Are gestures syntactically integrated? In recent years gestures have been a topic of much interest in formal linguistics, especially with respect to their semantic and pragmatic contribution (Ebert and Ebert 2014; Schlenker 2018; Esipova 2019; i.a.). A consistent observation within this literature is that the semantic content of gestures is integrated into the meaning of spoken utterances; hence, gesture can behave like speech, e.g. in presenting the same kind of semantic behaviour (taking scope, projecting, etc.).

One way to explain the semantic integration of gestures is to treat them as part of the grammar: namely, if gestures can participate in semantic relations, it is because they appear in syntactic representations. In particular, since gestures are performed with the same articulators as sign languages (e.g. hands, eyebrows), this would mean that syntactic features are externalised at the PF interface as gesture (visual-gestural modality) rather than as speech (auditory modality). From this we would expect syntax to be modality-blind, a result that appears to be correct.

In this talk, I will present preliminary results from an ongoing experiment on the status of a particular co-speech gesture, Mano a Borsa (MAB) or ‘pursed hand’, in Neapolitan (Italo-Romance). MAB arises frequently in interrogative contexts, but its precise syntactic, semantic, and lexical properties are unclear. This experiment pursues the following research questions: (A) what is the clause-type distribution of MAB? (B) where may it be aligned temporally within the spoken utterance? (C) is MAB an underspecified wh-item? Starting with the last question, I will present early results suggesting that MAB exhibits the same syntactic distribution as a wh-phrase, raising questions about its lexical status. While the simplest conclusion might be that MAB is a sort of underspecified wh-item, in the talk I will discuss whether MAB might instead be the realization of a particular flavour of interrogative C, consistent with its preference for interrogative environments (cf. question A) and its apparent ability to align with the beginning of the clause, even in wh-in-situ contexts (cf. question B).

2020

Is Schumann a scam? How music tricks our brain into thinking it's worthy of emotions

Friday 24th January, at 2.15-4pm, Henrik Wergelands hus, Room 536. (This presentation will be given via video conferencing.)

Jean-Julien Aucouturier (CNRS/IRCAM)

Music holds tremendous power over our emotions. Through a particularly touching phrase, a forceful chord or even a single note, musical sounds trigger powerful subjective reactions. For scientists, these strong reactions are vexing facts, because such emotional reactions are typically understood as survival reflexes: our increased heart rates, suddenly-sweaty hands or deeper breath are responses preparing our organism to e.g. fight or run away if we stumble into a bear in the woods. Stumbling into music, be it a violin or a flute, a C or a C#, hardly seems a similar matter of life or death. This talk will review recent scientific experiments, from the fields of musicology, psychology and neuroscience, which are trying to dissect musical sounds to see what exactly makes our brains think them worthy of such strong reactions – perhaps because they mimic the dissonant roar of a predator, reproduce the accents and prosody of emotional speech, or the spectral patterns of certain environmental sounds.

Bio: Jean-Julien Aucouturier is a CNRS researcher in cognitive science at IRCAM (Institut de Recherche et Coordination Acoustique/Musique) in Paris, where he leads the CREAM music neuroscience lab (http://cream.ircam.fr).

Emoji Resolution: Indexicality and Anaphoricity 🤔

Wednesday 6th May, at 5.15pm-7pm

Patrick Georg Grosz (UiO), Elsi Heilala Kaiser (USC), and Francesco Pierini (ENS)

Abstract

Emojis are an emerging object of study in linguistics and beyond (Bai et al. 2019), and it has been suggested that they are digital counterparts of speech-accompanying gestures in computer-mediated communication (Gawne & McCulloch 2019, Pierini 2019). In this talk, we focus on two subsets of emojis, namely non-face emojis that denote activities (such as the ‘basketball’ or ‘soccer ball’; henceforth ‘activity emojis’), and affective emojis, which include face emojis (such as the ‘grinning face’ and the ‘angry face’) as well as a set of affective non-face emojis (such as ‘thumbs up’ and ‘heart’). We argue that both the activity emojis and the affective emojis are typically anchored to an individual with a role such as Agent or Experiencer. Moreover, we provide evidence for a view where activity emojis are anaphoric (and often exhibit properties similar to 3rd-person pronouns), while affective emojis exhibit 1st-person indexicality. The central paradigm is given in (1ab)-(2ab), where (1ab) exhibit 3rd-person anaphoricity, whereas (2ab) exhibit 1st-person indexicality. We propose a formal semantic analysis, where activity emojis denote separate discourse units, connected to the accompanying text via salient discourse relations, whereas affective emojis are expressive modifiers (similar to adverbs like ‘damn’ and interjections like ‘oh my’).

(1a) kate said sue impressed ann 🏀 [basketball-emoji]

–> agent of basketball-event = Sue

(1b) kate said sue admired ann 🏀 [basketball-emoji]

–> agent of basketball-event = Ann

(2a) kate said sue impressed ann 😲 [astonished-face-emoji]

–> experiencer of astonished-state = author (speaker)

(2b) kate said sue admired ann 😲 [astonished-face-emoji]

–> experiencer of astonished-state = author (speaker)
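
Schematically, the proposed analysis treats the two emoji types as follows (our paraphrase of the paradigm above, not the authors' exact formalism): in (1a), 🏀 introduces a separate discourse unit, roughly play-basketball(y), linked to the preceding text by a salient discourse relation (e.g., Explanation), with its agent y resolved anaphorically to Sue; in (2a)-(2b), 😲 contributes an indexical expressive condition, roughly astonished(author, p), where p is the proposition expressed by the accompanying text.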

Animals Have No Language, and Humans Are Animals Too

Friday June 5th, at 4.15pm-6pm

Yosef Prat (Institut de Biologie, UniversitĂ© de NeuchĂątel)

Do nonhuman animals have language? In humans, language is prominently manifested by vocal communication (i.e., speech). However, while vocal communication is ubiquitous across the animal kingdom, studies to date have found only elementary parallels to speech in nonhuman animals. These modest linguistic capacities of other species have fortified our belief that language is uniquely human. But have we really tested this uniqueness claim? By adopting methods that are commonly used in bioacoustics, I demonstrate that, surprisingly, a truly impartial comparison between human speech and other animal vocalizations has not yet been conducted. Oddly, studying human speech with the same methods used to study other species' vocalizations would in fact be expected to provide us with no evidence for human uniqueness.

The Point of Pointing

Friday June 19th, 4.15pm – 6pm

Dorothy Ahn (Rutgers)

Pointing occurs frequently in both spoken and signed languages, though the discussion and the analysis of it in the two language modalities have developed rather separately. In this talk, I point out the similarities between the co-speech pointing gesture in spoken languages and the indexical handshape (IX) used for referent tracking in signed languages. I propose a unified analysis of pointing, where both the co-speech gesture and IX are analyzed as a modifier that provides a locational restriction. I discuss the main implications of this analysis and how it relates to other recent studies on exophoric demonstratives, co-speech gestures, and loci use in sign languages.
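
As a rough illustration of what such a modifier analysis might look like (our schematic rendering, not necessarily the talk's exact lexical entry), pointing to a location ℓ could denote the property λx. located(x, ℓ), which intersects with the meaning of the nominal it accompanies, restricting reference to entities at ℓ.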

Emoji as Digital Gesture: Digital multimodality

Friday, September 25, 12 noon-2:15pm (CEST)

Lauren Gawne (La Trobe University)

Abstract 

In the space of a decade emoji have gone from being unavailable outside of Japan to active use by over 90% of the world's online population. Their sudden rise in use is often attributed to the way they allow users to convey in writing what is usually done with tone of voice and body language in face-to-face interaction, but the specific implementation of this general claim has been under-explored. In this talk I discuss an ongoing collaboration with Gretchen McCulloch, where we look at the functional parallels between emoji and co-speech gestures. In addition to the obvious similarities between certain emoji and certain gestures (e.g., winking, thumbs up), gestures are commonly grouped into subcategories according to how codified their meaning is and how much they depend on the surrounding speech.

The grammar of visual narratives: The structure of sequential images

Friday, October 2, 3:15-5:30pm (CEST)

Neil Cohn (Tilburg University)

Abstract 

Sequences of images are all around us - from historical scrolls and cave paintings to instruction manuals and contemporary comics. Just how do we comprehend these sequences of images? Recent research has shown that the comprehension of visual narratives extends beyond the meaningful relationships between images and uses a “narrative grammar” that organizes this semantic information. I will show that this structure, based on contemporary construction grammars from linguistics, packages meaning into categorical roles within hierarchical constituents to account for phenomena like long-distance dependencies and structural ambiguities. In addition, using measurements of brainwaves (EEG/ERPs), I will show that this grammar is independent of meaning (e.g., N400), and engages similar neurocognitive processing as syntax in language (e.g., anterior negativities, P600). Finally, I will show that sequential image processing is modulated by a person's fluency in the specific narrative grammars found in different “visual languages” of the world. Altogether, this work introduces emerging research from the linguistic and cognitive sciences that challenges conventional wisdom with a new paradigm of thinking about the connections between language and graphic communication.

From gesture to sign language: grammaticalization of agent-backgrounding

Friday, October 16, 3:15-5:30pm (CEST)

Lilia Rissman (University of Wisconsin-Madison)

Abstract

How is gesture similar to language and how is gesture different? I address this question by observing language emergence in the manual modality, comparing how English-speaking gesturers, adult Nicaraguan homesigners, child Guatemalan homesigners, and adult signers of Nicaraguan Sign Language describe change-of-state events. I found that all groups used morphosyntactic devices to encode agency. Nonetheless, only those individuals who received signing input from an older peer used morphosyntactic devices to encode agent-backgrounding (viewing an event from the perspective of the patient rather than the agent, as in the passive sentence "a book was tipped over"). These results suggest that symbolic mappings between forms and meanings are shared across gesture and language. At the same time, gesture appears to lack the network of systematic mappings found in language, accounting for why gesturers encode agency but not agent-backgrounding.

The grammar of emotions and the absence thereof

Friday, November 6, 3:15-5:30pm (CET)

Martina Wiltschko (ICREA, Pompeu Fabra University) 

Abstract

At least since Frege, philosophers and linguists alike have recognized a difference between descriptive and emotive language. In the empirical part of this talk, I explore the grammar of emotive language through a case study of response markers. I conclude that emotions are never encoded with grammatical means, only with words (love, hate, angry) or interjections (yikes, wohoooo). There are no grammatical markers for fear or disgust, etc. The only “emotion” expressed with grammatical markers is surprise (e.g., via miratives or exclamatives). But it has been argued on independent grounds that “surprise” is in fact not an emotion, leaving us with the conclusion that emotions are not part of grammar, only part of the lexicon. In other words, there is no dedicated EmotivePhrase. This raises the question as to why this should be the case.

In the theoretical part of the talk I provide a brief overview of recent research on emotions. It has been argued that the classic view of emotions as consisting of a few basic emotions (like sadness, happiness, anger, disgust, …) cannot be upheld. Rather, according to the work of Feldman Barrett and colleagues, emotions are concepts that are constructed by the brain. Accordingly, we can conclude - based on the linguistic findings reported on in the first part of the talk - that the system that constructs language (and thought, i.e., UG) is in complementary distribution with the system that constructs emotions (and feelings). And given that, for a linguist, complementarity is the hallmark of identity, I speculate that the universal spine which configures language (Wiltschko 2014) also configures emotions, and therefore (seemingly paradoxically) emotions are not part of grammar. I end with a preliminary discussion of the implications of this hypothesis for our understanding of emotions, the human mind, and the evolution of language.

How the choice of different research frameworks affects our conception of nonhuman primate communication

Friday, November 13, 3:15-5:30pm (CET)

Julia Fischer (University of Göttingen / German Primate Center / Leibniz ScienceCampus Primate Cognition)

Abstract 

Studies of the communicative abilities of animals are often framed within an anthropocentric research program: We begin by identifying specific traits of interest in humans and then check whether these traits, or 'proto' versions thereof, may be found in other animals. Evolutionary research programs, in contrast, tend to focus on the function of traits, and use bottom-up comparative approaches to reconstruct the evolution of a given trait. Using the case of the alarm call system of members of the genus Chlorocebus (vervet monkeys and green monkeys), I will exemplify the strengths and weaknesses of the respective approaches. I will focus on two aspects of the vervet monkey alarm call system that have been deemed central in the quest to understand the evolution of speech, namely 'meaning' and 'vocal learning'. Our study on green monkey vocal responses to an unknown flying object (a research drone) and the subsequent comparison of the acoustic structure of the alarm calls in the genus provides strong support for the notion that the calls are innate responses to different types of predators. Vocal responses may then be subject to learning by experience, resulting in variation in call usage. Listeners, in contrast, can and need to learn what different sounds refer to, and they are able to do so after minimal exposure. The similarities in the vocal communication of (these) nonhuman primates and humans appear to be on the side of the listener. I will argue that there is a greater need to be aware of the perhaps unintended consequences of favoring one research approach over the other.

The missing link(ing hypotheses)

Friday, November 20, 3:15-5:30pm (CET)

David Poeppel (New York University / Max Planck Institute)

Abstract 

How language, music, and other complex sequences are represented and computed in the human brain is a fundamental area of brain research that continues to stimulate as much research as it does vigorous debate. Some classical questions (and persistent puzzles) - highlighting the tension between neuroscience and cognitive science research - concern the role of structure and abstraction. Recent findings from human neuroscience, across various techniques (e.g. fMRI, MEG, ECoG), suggest that the brain supports hierarchically structured abstract representations. New data on the role of brain rhythms show that such neural activity appears to underpin the tracking of structure-building operations. If the new approaches are on the right track, they invite closer relations between fields and better linking hypotheses between the foundational questions that animate both the neurosciences and the cognitive sciences. 

Rhythm and timing from an evolutionary, comparative perspective

Wednesday, November 25, 3:15-5:30 (CET)

Andrea Ravignani (Max Planck Institute)

Abstract 

Capacities for rhythm, beat perception and synchronization are key in human music, dance, speech and action. As recently as a decade ago, many thought that humans were quite unique in their rhythmic capacities. Since 2009, however, this theoretical landscape has started changing: research on a parrot (Snowball the cockatoo) and other animals provided evidence that other species can perform predictive and flexible rhythmic synchronization. In this talk, I will take a broad comparative approach, showing the previously underestimated richness of animal rhythms. In particular I will focus on my work on non-human primates and seals. These data suggest that rhythmic abilities in other mammals are more developed than previously surmised. I will discuss how evidence for rhythm in other species can inform the evolution of rhythmic capacities in our own.

2019

Limitations on the drive for ease of articulation in (sign) languages and dance: Dance isn't language.

Friday 18th January Donna Jo Napoli (Swarthmore)

All languages exhibit the drive for ease of articulation (EOA), so far as we know. This drive is limited by the need to communicate, so the changed form must be recognizable, which, in sign languages, means iconicity must be largely preserved. Even poetic language may exhibit this drive, although it also exhibits enhancement (using more energy than ordinary language). Dance is different. Dance has both participatory and performative contexts, where participatory dance gives clear evidence of the drive for EOA. But performative dance does not. It has some properties in common with poetry, but few in common with conversational language. Why? Language has sense independent of articulation; in dance one cannot know the intention without witnessing the articulation. Please come witness sign and dance with me.

Geometric and functional constraints in spatial language

Friday 22nd of February: Jurgis Skilters (University of Latvia)

There are two main views regarding the semantics of spatial language. According to the first view, spatial language contains some underlying geometric primitives that are inherent to all spatial meaning (e.g., Herskovits, 1998); according to the second view, functional knowledge (i.e., routines or force-dynamics of everyday spatial use of objects) constrains or even replaces geometric knowledge (Coventry & Garrod, 2005). In my talk I will show experimental evidence that the two views are complementary. Despite the role of functional knowledge, there are strong geometric principles modulating the use of spatial knowledge. Based on data from Latvian (a morphologically rich, case-marked language), I will show some fascinating examples from different experiments indicating rich interaction between geometric principles and functional knowledge.

Interdependencies between ‘on’ and the case-marked locative, ‘on’ and ‘above’, and ‘at / by / on’ and ‘next to’ will be discussed in more detail. Some specific forms of the Latvian locative (such as the inverse locative, which is used for the relation between clothing and the human body) will be discussed in detail. In my talk, I will also clarify the concept of functional dependency in spatial language and spatial cognition.

Coventry, K., & Garrod, S. (2005). Towards a classification of extra-geometric influences on the comprehension of spatial prepositions. In L. Carlson & E. van der Zee (Eds.), Functional features in language and space: Insights from perception, categorization, and development (pp. 149-162). Oxford: Oxford University Press.

Herskovits, A. (1998). Schematization. In P. Olivier & K.-P. Gapp (Eds.), Representation and processing of spatial expressions (pp. 149-162). Mahwah, NJ: Lawrence Erlbaum.

Gestural Origins: Linguistic Features of Great Ape Gestural Communication

Friday 1st March: Cat Hobaiter (University of St. Andrews)

Language appears to be the most complex system of animal communication described to date. However, the emergence of language within the human lineage through a single recent genetic leap is extremely implausible. Instead, its precursors were likely present in the communication of our evolutionary ancestors, and are likely shared by our modern great ape cousins. All great apes, including humans, employ a rich repertoire of vocalizations, facial expressions, and gestures to communicate. Great ape gestural repertoires are particularly elaborate, with non-human apes employing over 80 different gesture types intentionally: that is, towards a recipient and with a specific goal in mind. Intentional usage is a key feature of language and has rarely been described in other species. It allows us to ask not only what information is encoded in ape gestures, but also what apes mean when they use them. By employing a Pan-centric approach, one that uses chimpanzee behaviour to define chimpanzee gesture types, we may be better able to describe their communicative capacities. I will review recent research on the gestural communication of great apes, with a particular focus on comparisons between wild Pan populations, and including recent data on human infants. Children aged 1-2 years, on the cusp of acquiring language, were found to employ 52 gesture types. Over 90% of the child gestural repertoire was shared with the repertoires of non-human apes. I will also explore some recent evidence for the presence of linguistic principles, including Zipf's and Menzerath's laws.

The Composition of a Theatrical Sign Language

Friday 26th April: Wendy Sandler (University of Haifa)

Two distinguishing characteristics make sign languages especially illuminating for understanding the nature of human language. First, as I will show, visible actions of bodily components often correspond directly to linguistic components, making their compositional structure easier to identify and study. Second, sign languages, unlike spoken languages, can be born at any time, and the course of their emergence can be traced empirically. I will briefly review how these components emerge in the young Al-Sayyid Bedouin Sign Language. We will then take the study of the Grammar of the Body one step further, into the realm of sign language theatre.

The deaf actors of the Ebisu Sign Language Theatre Laboratory have created a communicative medium that draws on sign language and visual theatre. Their goal is to reach deaf and hearing audiences alike – directly, without interpreters. I will show how the actors weave together components from sign language, gesture, and mime, to produce complex compositional arrays pushing the bounds of human expression in the service of art.

Grouping in Music and Language

Monday 13th May: Jonah Katz (West Virginia University)

In this talk, I review evidence concerning the nature of grouping in music and language, and attempt to draw from these areas some general lessons about music and language. The two domains both involve correspondence between auditory discontinuities and group boundaries, reflecting the Gestalt principles of proximity and similarity, as well as a nested, hierarchical organization of constituents. There are also obvious differences between musical and linguistic grouping. Grappling with those differences requires one to think in detail about modularity, information flow, and the functional nature of cognitive domains. I conclude that deep structural, perceptual, and functional similarities exist between musical and linguistic grouping, but there is less evidence that these properties distinguish them from other complex cognitive activities.

Mélissa Berthet

Friday 24th May 

Mélissa Berthet (Institut Jean Nicod, ENS)

Organizer

Pritty Patel-Grosz

Multi-modal expression of emotion

Friday 13th September Beau Sievers (Harvard)

People express emotion using their voice, face and movement, as well as through abstract forms as in art, architecture and music. The structure of these expressions often seems intuitively linked to their meaning. For example, romantic poetry is written in flowery curlicues, while the logos of death metal bands use spiky script. Similarly, people expressing anger flail and yell, while people expressing sadness slouch and sob. This talk presents evidence that these correspondences arise because emotions are signaled using a multi-sensory code: variation in low-level features that are shared across the senses is used to express emotional meaning. First, we show that emotional arousal is communicated using the central tendency of the frequency spectrum across sounds, shapes, speech, and human body movements. Second, we show that prototypical emotions such as happiness, sadness, and anger are represented the same way in music and in movement, and that this correspondence holds across cultures. Finally, we show that these feature configurations are represented using a single neural code that is shared by both auditory and visual brain areas. We frame these findings in terms of cognitive ethology and statistical learning: to maximize legibility and therefore survival, systems for expressing and perceiving emotion are tuned to fit one another.

Objecting to discourse moves: Presupposition denials with even and beyond

Friday 27th September Naomi Clair Francis (MIT)

There are several ways of denying a presupposition. This talk will explore two properties of one of them, namely presupposition denials that contain even. The first property is a puzzling polarity-based asymmetry: even-like items in several languages are acceptable in negative presupposition denials but not in positive ones, as shown in (1) for English.

1.  A: Did Kenji's wife come to the picnic? (Presupposes: Kenji has a wife, i.e., is married)

     B: Kenji isn't even married!

     B': #Kenji's even unmarried/a bachelor!

The contrast between sentences like (1B) and (1B') is not straightforwardly reducible to independent properties of even or of presupposition denial, but instead reflects something about how even and presupposition denial interact. I propose a solution to the puzzle that makes crucial use of i) the controversial additive presupposition of even, ii) presuppositions triggered within the salient focus alternatives, and iii) an independently motivated mechanism for denying presuppositions under negation. I explore crosslinguistic predictions of the proposed analysis and discuss what the puzzle can teach us about focus-sensitive operators, presuppositions, and focus alternatives in discourse.

The second property of presupposition denials with even explored in this talk is the facial expressions that accompany them. Presupposition denials with even are felt to be particularly natural when accompanied by a distinctive facial expression and prosodic melody. I outline two properties of these co-speech gestures that are of potential interest to super linguists: i) their acceptability appears to depend at least in part on the linguistic material that is used to deny the presupposition, and ii) they can replace overt linguistic material.

Multimodal Communication: A Discourse Approach

Friday 25th October Malihe Alikhani (Rutgers)

The integration of textual and visual information is fundamental to the way people communicate. My hypothesis is that despite the differences between visual and linguistic communication, the two have similar intentional, inferential and contextual properties, which can be modeled with similar representations and algorithms. I present three successful case studies where natural language techniques provide a useful foundation for supporting user engagement with visual communication. Finally, I propose using these findings for designing interactive systems that can communicate with people using a broad range of appropriate modalities.

Language evolution and the brain

Friday 8th November Robert A. Barton (Durham University)

Recent evidence suggests new hypotheses about the neural basis of language evolution. First, a key role has been postulated for the cerebellum in the control and comprehension of sequences, including speech. Second, humans and other great apes appear to be specialized for such control and comprehension in the context of extractive foraging and tool use. Third, our phylogenetic analyses demonstrate a marked acceleration in the rate of cerebellar expansion and genes involved in cerebellar development in these species. I suggest that below-branch locomotion and route planning may have been a cognitive pre-adaptation for syntactical behaviours, and that this illustrates how the distinction between motor and cognitive control is blurred.

Gesture as a window onto the mind? Exploring reference and deception in chimpanzee gestures

Wednesday 13th November, 4.15-6pm, HW536. Cat Hobaiter (University of St Andrews)

A consistent problem in decoding the communication of others is that we have no direct access to thought. Instead we make a series of inferences based on behaviour and context. When studying the communication of other species this becomes particularly challenging, but with sufficient data, over time, we can find consistent patterns in behaviour that allow us to infer cognitive states. Our research group has used this method to explore meaning in the communication of great apes. Now that simple imperative meanings seem relatively well established - what next? I will provide some examples of possible use of reference and deception in chimpanzee gestures and discuss what other aspects gesture may allow us to explore.

2018

Super Linguistics - an introduction

Time and place: Dec. 10, 2018 10:00 AM–6:00 PM, 12th floor NT Hus

Speakers:

  • Philippe Schlenker (Institut Jean-Nicod, CNRS; New York University) & Pritty Patel-Grosz (University of Oslo) - What is Super Linguistics?
  • Cornelia Ebert (ZAS) - A Closer Look at Semantic Iconic Effects of Co-Speech Gestures and Signs
  • Jeremy Kuhn (Institut Jean-Nicod, CNRS) - Formal Semantics of Primate Communication
  • Uli Sauerland (ZAS) - Animal Communication

Organizer

Pritty Patel-Grosz
