UnImplicit: The Second Workshop on
Understanding Implicit and Underspecified Language

at NAACL 2022, Seattle



Implicitness and underspecification are ubiquitous in language. Utterances may contain empty or fuzzy elements, such as missing units of measurement, as in she is 30 vs. it costs 30 (30 what?); bridges and other missing links, as in she tried to enter the car, but the door was stuck (the door of what?); implicit semantic roles, as in I met her while driving (who was driving?); and various sorts of gradable phenomena: is a small elephant smaller than a big bee? Although these phenomena increase the risk of misunderstanding, our conversational partners usually manage to interpret our utterances correctly, because they take into account context-specific elements such as time, culture, background knowledge, and previous utterances in the conversation.

Implicit domain restrictions, in particular, illustrate how the interpretation of utterances depends on context (Stanley and Gendler Szabó, 2000). While certain expressions might not be underspecified per se, their interpretation is implicitly restricted by the broader discourse they appear in. This holds especially for quantified expressions such as every marble is red, which is only true for certain sets of marbles. The problem of underspecified language also extends to pragmatics: presuppositions and implicatures are by definition not explicitly stated (Stalnaker et al., 1977; Beaver, 1997). In addition, datasets from collaborative games have revealed that speakers often employ underspecified language (Djalali et al., 2012).
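One common way to make this context dependence explicit, sketched here in the spirit of Stanley and Gendler Szabó's analysis, is to let the quantifier range only over a contextually supplied domain; the predicate C below is a hypothetical placeholder for that domain restriction:

% "Every marble is red" evaluated relative to a contextual domain C:
% the quantifier ranges only over the contextually salient marbles,
% not over all marbles whatsoever.
\forall x \,\bigl(\mathrm{marble}(x) \land C(x) \rightarrow \mathrm{red}(x)\bigr)

On this sketch, the truth conditions of the sentence vary with the context because C does, even though nothing in the overt form of the sentence signals the restriction.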

Despite recent advances on various semantic tasks, modeling implicitness and underspecification remains a challenging problem in NLP because the relevant elements are not realized at the surface level. Instead of relying on superficial patterns, models are required to leverage contextual aspects of discourse that go beyond the sentence level. Often, resources for training such models are scarce, since they require human annotation. Multiple datasets and tasks targeting implicit phenomena have been proposed in recent years, including implicit semantic roles and event arguments (Gerber and Chai, 2010; Ruppenhofer et al., 2010; Moor et al., 2013; Ebner et al., 2020; Cheng and Erk, 2018, 2019), bridging and noun phrase linking (Rösiger et al., 2018; Elazar et al., 2021), and other empty elements (Elazar and Goldberg, 2019; McMahan and Stone, 2020). These endeavours have usually been narrow in scope, and it remains unclear whether current models can appropriately capture how linguistic meaning is shaped and influenced by context.

The first workshop on Implicit and Underspecified Language (held at ACL 2021) brought together a diverse group of NLP practitioners and theoreticians to address the challenges that implicit and underspecified language poses for NLP. More specifically, the workshop brought to light the wide range of phenomena that fall under this topic and showed how researchers from different perspectives tackle them. It also demonstrated that there is strong interest in implicit and underspecified language within the NLP community.

The goal of the second edition of the workshop is to continue fostering progress on implicit and underspecified language, with a strong focus on annotation and on aspects of these phenomena that go beyond the sentence level. As in the first edition, we welcome theoretical and practical contributions (long, short, and non-archival) on all aspects of the computational modeling (benchmarks, evaluation schemes, annotation, applications) of phenomena such as implicit arguments, fuzzy elements, zero anaphora, metonymy, and discourse markers. In addition, we specifically encourage papers with a strong focus on the interpretation of these elements at the discourse, pragmatic, and cognitive levels of language understanding.

Judith Degen

Stanford University
How can neural language models be leveraged for pragmatic theory-building?
Utterances are notoriously underspecified with respect to the speaker's intended meaning. Linguistic pragmatics has long been devoted to characterizing the principles underlying listeners' contextual reasoning about intended meanings. The advent of Bayesian computational modeling has brought about a qualitative shift in our understanding of pragmatic reasoning by virtue of allowing for precise formalization of these principles. However, Bayesian models suffer from scaling and intractability issues. In this talk, I propose that NLP and pragmatic theory can mutually inform each other. I review a key set of phenomena for neural language models to capture, and lay out a path for neural language models to be leveraged in pragmatic theory-building.
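As a concrete illustration of the kind of Bayesian pragmatic model the abstract refers to, here is a minimal sketch of a Rational Speech Acts (RSA) computation in Python. The toy reference game (two objects, two utterances) and the rationality parameter alpha are illustrative assumptions, not material from the talk.

import numpy as np

# Truth-conditional semantics: lexicon[u][s] = 1 iff utterance u is true of state s.
states = ["blue_circle", "blue_square"]
utterances = ["blue", "square"]
lexicon = np.array([[1.0, 1.0],   # "blue" is true of both objects
                    [0.0, 1.0]])  # "square" is true only of the square

alpha = 1.0  # speaker rationality (soft-max temperature); illustrative choice

def normalize(m, axis):
    return m / m.sum(axis=axis, keepdims=True)

# Literal listener L0: P(s | u) proportional to truth of u in s (uniform state prior).
L0 = normalize(lexicon, axis=1)

# Pragmatic speaker S1: P(u | s) proportional to exp(alpha * log L0(s | u)).
S1 = normalize(np.exp(alpha * np.log(L0.T + 1e-10)), axis=1)

# Pragmatic listener L1: P(s | u) proportional to S1(u | s), uniform prior over s.
L1 = normalize(S1.T, axis=1)

# "blue" is literally true of both objects, but L1 strengthens it toward
# the circle (~0.75): a speaker who saw the square would have said "square".
print(dict(zip(states, L1[0])))

The intractability issue the abstract mentions is visible even in this sketch: the listener must enumerate all states and utterances, which does not scale to open-ended language.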

Nathan Schneider

Georgetown University
A pandemic’s worth of plastic utensils: a Spragmatic view on meaning
How do we think about meaning? It is easy to look at words that are written down and conclude that those are the source of most textual meaning, and any meaning that goes beyond compositional semantics is the exception. But another view is that this is precisely backwards: our understanding of language is dominated by extrasemantic machinery, with the explicit linguistic cues as scaffolding. My remarks will explore the consequences of this "Spragmatic" perspective for meaning representation and models of meaning.

Michael Franke

University of Tübingen
Helpfulness of answers and goal-signaling questions
Relevance is a central notion in the study of communication, but it is multifaceted, if not elusive, and therefore hard to approach formally. This talk tries to get closer to a formalization of useful notions of relevance by looking at experimental data against which different information-theoretic concepts of the relevance of an answer can be compared. It then introduces a probabilistic model of question and answer choice which pivots around an action-based notion of relevance of information and which predicts how the helpfulness of an answer is informed by the goal-signaling quality of relevant questions.
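To make the action-based notion of relevance concrete, here is a minimal decision-theoretic sketch in Python: an answer's relevance is measured as the gain in the questioner's best expected utility after updating on that answer. The toy decision problem and all numbers are illustrative assumptions, not material from the talk.

import numpy as np

# World states and a uniform prior: P(rain), P(no rain).
prior = np.array([0.5, 0.5])

# Utility table U[action, state]: take vs. leave the umbrella.
U = np.array([[1.0, 0.0],   # take umbrella: good iff rain
              [0.0, 1.0]])  # leave umbrella: good iff no rain

def best_expected_utility(belief):
    """Expected utility of the best available action under a given belief."""
    return (U @ belief).max()

def answer_relevance(likelihood):
    """Utility gain from updating on an answer with P(answer | state)
    given by `likelihood`, relative to acting on the prior alone."""
    posterior = likelihood * prior
    posterior /= posterior.sum()
    return best_expected_utility(posterior) - best_expected_utility(prior)

# A fully resolving answer ("it will rain") is maximally relevant here;
# an answer that leaves the belief unchanged has zero relevance.
print("resolving:    ", answer_relevance(np.array([1.0, 0.0])))   # 0.5
print("uninformative:", answer_relevance(np.array([0.5, 0.5])))   # 0.0

On this action-based view, relevance depends on the questioner's decision problem, not just on how much information an answer carries, which is what distinguishes it from purely information-theoretic measures.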

Judith Degen's talk is unfortunately cancelled due to unforeseen circumstances. We've moved breakout session II to an earlier time (see below).

08:30 Opening
08:45 Invited talk (Room: 701 Clallum, also streamed on zoom)
Michael Franke
09:30 Virtual Poster Session (Gather.town)
Pre-trained Language Models' Interpretation of Evaluativity Implicature: Evidence from Gradable Adjectives Usage in Context
Yan Cong
Pragmatic and Logical Inferences in NLI Systems: The Case of Conjunction Buttressing
Paolo Pedinotti, Emmanuele Chersoni, Enrico Santus and Alessandro Lenci
"Devils are in the Details'' — Annotating Specificity of Clinical Advice from Medical Literature
Yingya Li and Bei Yu
Searching for PETs: Using Distributional and Sentiment-Based Methods to Find Potentially Euphemistic Terms
Patrick Lee, Martha Gavidia, Anna Feldman and Jing Peng
Generating Discourse Connectives with Pre-trained Language Models: Discourse Relations Help Yet Again
Symon Stevens-Guille, Aleksandre Maskharashvili, Xintong Li and Michael White
Looking Beyond Syntax: Detecting Implicit Semantic Arguments
Paul Roit, Valentina Pyatkin, Yoav Goldberg and Ido Dagan
ULN: Towards Underspecified Vision-and-Language Navigation
Weixi Feng, Tsu-Jui Fu, Yujie Lu and William Yang Wang
Bridging the Gap: Recovering Elided VPs in Coordination Structures
Royi Rassin, Yoav Goldberg and Reut Tsarfaty
Inferring Implicit Relations with Language Models for Question Answering
Uri Katz, Mor Geva and Jonathan Berant
Life after BERT: What do Other Muppets Understand about Language?
Vladislav Lialin, Kevin Zhao, Namrata Shivagunde and Anna Rumshisky
10:30 Oral Session (Room: 701 Clallum, also streamed on zoom)
Pre-trained Language Models' Interpretation of Evaluativity Implicature: Evidence from Gradable Adjectives Usage in Context
Yan Cong
Searching for PETs: Using Distributional and Sentiment-Based Methods to Find Potentially Euphemistic Terms
Patrick Lee, Martha Gavidia, Anna Feldman and Jing Peng
11:00 In-Person Poster Session (Regency Ballroom, 7th floor)
13:30 Breakout Session I (discussion / presentation) (Room: 701 Clallum or zoom) What is the range of implicit phenomena? And how should we use ML-based modeling of these phenomena and tasks?
14:15 Invited talk (Room: 701 Clallum, also streamed on zoom)
Nathan Schneider
15:30 Breakout Session II (discussion / presentation) (Room: 701 Clallum or zoom) What are the next steps in implicit and underspecified language research?
16:15 (Official) closing (Room: 701 Clallum, also streamed on zoom)