UnImplicit: The Second Workshop on
Understanding Implicit and Underspecified Language

at NAACL 2022, Seattle



Implicitness and underspecification are ubiquitous in language. Specifically, language utterances may contain empty or fuzzy elements, such as the following: units of measurement, as in she is 30 vs. it costs 30 (30 what?); bridges and other missing links, as in she tried to enter the car, but the door was stuck (the door of what?); implicit semantic roles, as in I met her while driving (who was driving?); and various sorts of gradable phenomena: is a small elephant smaller than a big bee? Even though these phenomena increase the chance of misunderstanding, our conversational partners usually manage to understand our utterances because they take into account context-specific elements, such as time, culture, background knowledge and previous utterances in the conversation.

In particular, (implicit) domain restrictions illustrate the context dependence of utterance interpretation (Stanley and Gendler Szabó, 2000). While certain expressions might not be underspecified per se, their interpretation is implicitly restricted by the broader discourse they appear in. This is especially true of quantified expressions such as every marble is red, which is only true for certain sets of marbles. The problem of underspecified language also extends to pragmatics: presuppositions and implicatures are by definition not explicitly stated (Stalnaker et al., 1977; Beaver, 1997). In addition, datasets from collaborative games have revealed that speakers often employ underspecified language (Djalali et al., 2012).

Despite recent advances on various semantic tasks, modeling implicitness and underspecification remains a challenging problem in NLP because the relevant elements are not realized on the surface level. Instead of relying on superficial patterns, models are required to leverage contextual aspects of discourse that go beyond the sentence level. Often, there is a lack of resources to train such models due to the need for human annotation. Multiple datasets and tasks targeting implicit phenomena have been proposed in recent years, including implicit semantic role labeling and event arguments (Gerber and Chai, 2010; Ruppenhofer et al., 2010; Moor et al., 2013; Ebner et al., 2020; Cheng and Erk, 2018, 2019), bridging and noun phrase linking (Rösiger et al., 2018; Elazar et al., 2021) and other empty elements (Elazar and Goldberg, 2019; McMahan and Stone, 2020). These endeavours, however, have usually been narrow in scope, and it remains unclear how well current models can capture how linguistic meaning is shaped and influenced by context.

The first workshop on Implicit and Underspecified Language (held at ACL 2021) brought together a diverse group of NLP practitioners and theoreticians to address the challenges that implicit and underspecified language poses for NLP. More specifically, the workshop brought to light the wide range of phenomena that fall under this topic and how researchers from different perspectives tackle them. In addition, the workshop demonstrated a strong interest in implicit and underspecified language within the NLP community.

The goal of the second edition of the workshop is to continue driving progress on implicit and underspecified language, with a strong focus on the annotation and modeling of aspects of these phenomena that go beyond the sentence level. As in the first edition, we welcome theoretical and practical contributions (long, short and non-archival) on all aspects of the computational modeling (benchmarks, evaluation schemes, annotation, applications) of phenomena such as implicit arguments, fuzzy elements, zero anaphora, metonymy, and discourse markers. In addition, we specifically encourage papers with a strong focus on the interpretation of these elements at the discourse, pragmatic and cognitive levels of language understanding.

Judith Degen

Stanford University
TBD
TBD

Nathan Schneider

Georgetown University
TBD
TBD

Michael Franke

University of Tübingen
TBD
TBD