Implicitness and underspecification are ubiquitous in language. Specifically, utterances may contain empty or fuzzy elements, such as the following: units of measurement, as in she is 30 vs. it costs 30 (30 what?); bridges and other missing links, as in she tried to enter the car, but the door was stuck (the door of what?); implicit semantic roles, as in I met her while driving (who was driving?); and various sorts of gradable phenomena (is a small elephant smaller than a big bee?). Even though these phenomena increase the chance of misunderstanding, our conversational partners usually manage to understand our utterances because they take context-specific elements into account, such as time, culture, background knowledge and previous utterances in the conversation.
In particular, (implicit) domain restrictions illustrate how the interpretation of utterances depends on context (Stanley and Gendler Szabó, 2000). While certain expressions might not be underspecified per se, their interpretation is implicitly restricted by the broader discourse they appear in. This is especially true of expressions with quantifiers, such as every marble is red, which is only true for certain sets of marbles. The problem of underspecified language also extends to pragmatics: presuppositions and implicatures are by definition not explicitly stated (Stalnaker et al., 1977; Beaver, 1997). In addition, collaborative games datasets have revealed that speakers often employ underspecified language (Djalali et al., 2012).
Despite recent advances on various semantic tasks, modeling implicitness and underspecification remains a challenging problem in NLP because the relevant elements are not realized on the surface level. Instead of relying on superficial patterns, models are required to leverage contextual aspects of discourse that go beyond the sentence level. Often, there is a lack of resources to train such models due to the need for human annotation. Multiple datasets and tasks targeting implicit phenomena have been proposed in recent years, including implicit semantic role labels and event arguments (Gerber and Chai, 2010; Ruppenhofer et al., 2010; Moor et al., 2013; Ebner et al., 2020; Cheng and Erk, 2018, 2019), bridging and noun phrase linking (Rösiger et al., 2018; Elazar et al., 2021) and other empty elements (Elazar and Goldberg, 2019; McMahan and Stone, 2020). These endeavours have usually been narrow in scope, and it remains unclear how well current models can capture how linguistic meaning is shaped and influenced by context.
The first workshop on Implicit and Underspecified Language (held at ACL 2021) brought together a diverse group of NLP practitioners and theoreticians to address the challenges that implicit and underspecified language poses for NLP. More specifically, the workshop brought to light the wide range of phenomena that fall under this topic and how researchers from different perspectives tackle them. In addition, the workshop demonstrated that there is strong interest in implicit and underspecified language within the NLP community.
The goal of the second edition of the workshop is to continue eliciting progress on implicit and underspecified language, with a strong focus on annotation and on aspects of these phenomena that go beyond the sentence level. As in the first edition, we welcome theoretical and practical contributions (long, short and non-archival) on all aspects of the computational modeling (benchmarks, evaluation schemes, annotation, applications) of phenomena such as implicit arguments, fuzzy elements, zero anaphora, metonymy, and discourse markers. In addition, we specifically encourage papers with a strong focus on the interpretation of these elements at the discourse, pragmatic and cognitive levels of language understanding.
Judith Degen's talk is unfortunately cancelled due to unforeseen circumstances. We have moved breakout session II to an earlier time (see below).
08:30  Opening
08:45  Invited talk: Michael Franke (Room: 701 Clallum, also streamed on Zoom)
09:30  Virtual Poster Session (Gather.town)
       Papers:
       - Pre-trained Language Models' Interpretation of Evaluativity Implicature: Evidence from Gradable Adjectives Usage in Context. Yan Cong
       - Pragmatic and Logical Inferences in NLI Systems: The Case of Conjunction Buttressing. Paolo Pedinotti, Emmanuele Chersoni, Enrico Santus and Alessandro Lenci
       - "Devils are in the Details": Annotating Specificity of Clinical Advice from Medical Literature. Yingya Li and Bei Yu
       - Searching for PETs: Using Distributional and Sentiment-Based Methods to Find Potentially Euphemistic Terms. Patrick Lee, Martha Gavidia, Anna Feldman and Jing Peng
       Abstracts:
       - Generating Discourse Connectives with Pre-trained Language Models: Discourse Relations Help Yet Again. Symon Stevens-Guille, Aleksandre Maskharashvili, Xintong Li and Michael White
       - Looking Beyond Syntax: Detecting Implicit Semantic Arguments. Paul Roit, Valentina Pyatkin, Yoav Goldberg and Ido Dagan
       - ULN: Towards Underspecified Vision-and-Language Navigation. Weixi Feng, Tsu-Jui Fu, Yujie Lu and William Yang Wang
       - Bridging the Gap: Recovering Elided VPs in Coordination Structures. Royi Rassin, Yoav Goldberg and Reut Tsarfaty
       - Inferring Implicit Relations with Language Models for Question Answering. Uri Katz, Mor Geva and Jonathan Berant
       - Life after BERT: What do Other Muppets Understand about Language? Vladislav Lialin, Kevin Zhao, Namrata Shivagunde and Anna Rumshisky
10:30  Oral Session (Room: 701 Clallum, also streamed on Zoom)
       - Pre-trained Language Models' Interpretation of Evaluativity Implicature: Evidence from Gradable Adjectives Usage in Context. Yan Cong
       - Searching for PETs: Using Distributional and Sentiment-Based Methods to Find Potentially Euphemistic Terms. Patrick Lee, Martha Gavidia, Anna Feldman and Jing Peng
11:00  In-Person Poster Session (Regency Ballroom, 7th floor)
13:30  Breakout Session I (discussion / presentation) (Room: 701 Clallum or Zoom): What is the range of implicit phenomena? And how should we use ML-based modeling of these phenomena and tasks?
14:15  Invited talk: Nathan Schneider (Room: 701 Clallum, also streamed on Zoom)
15:30  Breakout Session II (discussion / presentation) (Room: 701 Clallum or Zoom): What are the next steps in implicit and underspecified language research?
16:15  (Official) closing (Room: 701 Clallum, also streamed on Zoom)
Please submit your papers at https://www.softconf.com/naacl2022/UnImplicit2022/
At this link you will find two submission options: one for papers that have already been reviewed at ARR, and one for direct submissions.
Please use the ACL style templates.
If you have any questions, please email us at unimplicitworkshop -AT- gmail.com.