The First International Workshop on Designing Meaning Representations (DMR)

will be held on August 1, 2019, in Florence, Italy, in conjunction with ACL 2019.

Workshop overview:

While deep learning methods have led to many breakthroughs in practical natural language applications, most notably in Machine Translation and Machine Reading/Question Answering, there is still a sense among many NLP researchers that we have a long way to go before we can develop systems that can actually “understand” human language. “Understanding” natural language entails many different human-like capabilities, including but not limited to the ability to track entities in a text, understand the relations between these entities, track events and their participants described in a text, understand how events unfold in time, and distinguish events that have actually happened from events that are planned or intended, are uncertain, or did not happen at all. We believe a critical step in achieving natural language understanding is to design meaning representations for text that have the necessary meaning “ingredients” to support these capabilities. Such meaning representations also need to be easy to produce, so that sufficient amounts of data can be generated to train accurate meaning representation parsers.

There has been a growing body of research in recent years devoted to the design, annotation, and parsing of meaning representations. The meaning representations used in meaning representation parsing research were developed with different linguistic perspectives and practical goals in mind and have different formal properties [17]. Earlier meaning representation frameworks such as Minimal Recursion Semantics (MRS) [7, 11] and Discourse Representation Theory [15] (as exemplified in the Groningen Meaning Bank [32] and the Parallel Meaning Bank [33]) were developed with the goal of supporting logical inference in reasoning-based AI systems. They are therefore easily translatable into first-order logic, requiring proper representation of semantic components such as quantification, negation, tense, and modality. More recent meaning representation frameworks such as Abstract Meaning Representation (AMR) [4], the Tectogrammatical Representation (TR) in the Prague Dependency Treebanks [14], and Universal Conceptual Cognitive Annotation (UCCA) [1] put more emphasis on the representation of lexical semantic information such as semantic roles and word senses, or entities and relations. The automatic parsing of natural language text into these meaning representations [34, 10, 30, 31, 2, 5, 12, 29, 3, 6, 16, 23, 21, 20], and to a lesser degree the generation of natural language text from these meaning representations [9, 16, 27, 26, 28], are also very active areas of research, and a wide range of technical approaches and learning methods have been applied to these problems. In addition, there have also been early attempts to use these meaning representations in natural language applications [22, 18, 13, 25, 8, 24, 19].

This workshop intends to bring together researchers who are producers and consumers of meaning representations, so that through their interaction they can gain a deeper understanding of the key elements of meaning representations that are most valuable to the NLP community. The workshop will also provide an opportunity for meaning representation researchers to critically examine existing meaning representations, with the goal of using their findings to inform the design of next-generation meaning representations. A third goal of the workshop is to explore opportunities and identify challenges in the design and use of meaning representations in multilingual settings. A final goal of the workshop is to understand the relationship between distributed meaning representations trained on large data sets using neural network models and the symbolic meaning representations that are carefully designed and annotated by CL researchers, and to gain a deeper understanding of the areas where each type of meaning representation is most effective. The workshop solicits papers that address any one, or any combination, of the following topics:

Submission information can be found here.

The workshop program is now available for download.

Invited speakers:


Program committee: