The Second International Workshop on Designing Meaning Representations (DMR 2020)

September 14, 2020, in Barcelona, Spain, in conjunction with COLING 2020

Background: After a successful inaugural workshop in 2019 in Florence, Italy, the Second International Workshop on Designing Meaning Representations (DMR 2020) will be held on September 14, 2020, in Barcelona, Spain, in conjunction with COLING 2020.

While deep learning methods have led to many breakthroughs in practical natural language applications, most notably in Machine Translation, Machine Reading, Question Answering, and Recognizing Textual Entailment, there is still a sense among many NLP researchers that we have a long way to go before we can develop systems that truly “understand” human language and explain the decisions they make. Indeed, “understanding” natural language entails many different human-like capabilities, including but not limited to the ability to track entities in a text, understand the relations between these entities, track events and their participants, understand how events unfold in time, and distinguish events that have actually happened from events that are planned or intended, are uncertain, or did not happen at all. “Understanding” also entails the human-like ability to perform qualitative and quantitative reasoning, possibly with knowledge acquired about the real world. We believe a critical step in achieving natural language understanding is to design meaning representations for text that contain the meaning “ingredients” needed to support these capabilities.

There has been a growing body of research devoted to the design, annotation, and parsing of meaning representations in recent years. The meaning representations used in semantic parsing research are developed with different linguistic perspectives and practical goals in mind and have different formal properties. Formal meaning representation frameworks such as Minimal Recursion Semantics (MRS) and Discourse Representation Theory (as exemplified in the Groningen Meaning Bank and the Parallel Meaning Bank) are developed with the goal of supporting logical inference in reasoning-based AI systems and are therefore readily translatable into first-order logic, requiring proper representation of semantic components such as quantification, negation, tense, and modality. Other meaning representation frameworks, such as Abstract Meaning Representation (AMR), the Tectogrammatical Representation (TR) of the Prague Dependency Treebanks, and the Universal Conceptual Cognitive Annotation (UCCA), put more emphasis on the representation of core predicate-argument structure, lexical semantic information such as semantic roles and word senses, and named entities and relations. The automatic parsing of natural language text into these meaning representations, and to a lesser degree the generation of natural language text from them, are also very active areas of research, and a wide range of technical approaches and learning methods have been applied to these problems. In addition, there have been early attempts to use these meaning representations in natural language applications.
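
To make the predicate-argument emphasis of a framework like AMR concrete, here is a minimal, purely illustrative sketch (not part of the workshop materials) that decodes the canonical AMR for “The boy wants to go” using the open-source penman Python library, an assumed dependency, and prints the graph as triples:

    # Illustrative sketch only: read an AMR graph written in PENMAN notation.
    # Assumes the third-party "penman" library (pip install penman).
    import penman

    # Canonical AMR example from Banarescu et al. (2013): "The boy wants to go".
    amr_string = """
    (w / want-01
       :ARG0 (b / boy)
       :ARG1 (g / go-02
                :ARG0 b))
    """

    graph = penman.decode(amr_string)
    print(graph.top)  # 'w' -- the root concept variable
    for source, role, target in graph.triples:
        # Instance triples carry the ':instance' role; the repeated ':ARG0'
        # shows the boy as the shared agent of both wanting and going.
        print(source, role, target)

The triple view makes the reentrancy explicit: the same variable b fills :ARG0 of both want-01 and go-02, which is exactly the kind of predicate-argument structure such symbolic meaning representations encode directly.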

This workshop intends to bring together researchers who are producers and consumers of meaning representations so that, through their interaction, we can gain a deeper understanding of the key elements of meaning representations that are most valuable to the NLP community. The workshop will also provide an opportunity for meaning representation researchers to critically examine existing frameworks with the goal of using their findings to inform the design of next-generation meaning representations. A third goal of the workshop is to explore opportunities and identify challenges in the design and use of meaning representations in multilingual settings. A final goal is to understand the relationship between distributed meaning representations learned from large data sets with neural network models and the symbolic meaning representations that are carefully designed and annotated by CL researchers, and to gain a deeper understanding of the areas where each type of representation is most effective.

Solicitation: We solicit papers that address the design, annotation, parsing, or application of meaning representations. Full list of topics: TBA.

Submission Information: TBA.

Invited Speakers: TBA.

Co-organizers: TBA.