Probabilistic Representations of Linguistic Meaning (PRELIM)

This team will first undertake an open-ended and substantive deliberation on meaning representations for linguistic processing, and will then focus on a pragmatic problem in semantic processing by machines.

Goal 1: Deliberate upon representations of linguistic meaning. “Deep” natural-language understanding will eventually need more sophisticated semantic representations. What representations should we be using in 10 years? How will they relate to non-linguistic processing? How can we start to recover them from text or other linguistic resources?

Linguists currently rely on modal logic as the foundation of semantics. However, semantics and knowledge representation must connect to reasoning and pragmatics, which are increasingly regarded by the AI and cognitive science communities as involving probabilistic inference and not just logical inference. Can we find a probabilistic foundation to integrate the whole enterprise? What is the role of probability distributions over semantic representations and within semantic representations?
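
As a purely illustrative aside (not part of the proposal), the two roles of probability mentioned above can be sketched in a few lines of Python: a distribution over candidate semantic representations for an ambiguous sentence, versus graded probabilistic content within a single representation (here, a hypothetical Gaussian-threshold treatment of the vague adjective "tall"). All names, formulas, and numbers below are made up for illustration.

```python
from math import erf, sqrt

# Probability OVER representations: hypothetical posterior weights on two
# candidate logical forms for the scope-ambiguous sentence
# "Every student speaks a language."
over_representations = {
    "forall x. student(x) -> exists y. language(y) & speaks(x, y)": 0.7,
    "exists y. language(y) & forall x. student(x) -> speaks(x, y)": 0.3,
}

# Probability WITHIN a representation: the vague predicate "tall" modeled
# as the chance that an uncertain height threshold, drawn from a
# hypothetical Normal(180 cm, 7 cm), falls below the observed height.
def p_tall(height_cm, mean=180.0, sd=7.0):
    # Gaussian CDF computed via the error function.
    return 0.5 * (1.0 + erf((height_cm - mean) / (sd * sqrt(2.0))))

print(p_tall(190.0))  # ~0.92: probably "tall", but not categorically so
```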

The team includes leaders from multiple communities: linguistics, natural language processing, machine learning, and computational cognitive science. We hope to make progress toward an acceptable theory by integrating the constraints and formal techniques contributed by all of these communities.

This week-long immersive exercise, which will take a broad perspective on meaning and its representation, is expected to inform the long-term thinking of all workshop participants, even as they pursue near-term practical uses of meaning representations.

Goal 2: Explore semantic proto-roles, from both a theoretical and an empirical perspective.

This research is motivated by linguists such as David Dowty, who have considered the meta-question of which (if any) of the semantic role theories espoused in the literature are well-founded. Instead of the traditional coarse role labels, the team will create a binary feature-structure representation by collecting human responses to questions about proto-role properties (e.g., “does the subject of this verb have a causal role in the event?” or “does the object of this verb change location as a result of the event?”).
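
As a concrete illustration (with made-up values, not the team's actual annotation scheme or question inventory), such a feature structure might look like the following, where each (predicate, argument) pair receives a bundle of binary proto-role properties rather than a single coarse label:

```python
# Hypothetical proto-role feature structures for "The boy broke the window."
# Property names loosely follow Dowty's proto-agent/proto-patient
# properties; the specific keys and values here are illustrative only.
proto_roles = {
    ("break", "subject"): {         # "the boy"
        "instigated_event": True,    # proto-agent property
        "acted_volitionally": True,  # proto-agent property
        "changed_state": False,      # proto-patient property
        "changed_location": False,   # proto-patient property
    },
    ("break", "object"): {           # "the window"
        "instigated_event": False,
        "acted_volitionally": False,
        "changed_state": True,       # it ends up broken
        "changed_location": False,
    },
}
```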

From a computational perspective, the team will build a classifier to automatically predict these binary features, adapting recent work led by Van Durme on models for PropBank semantic role classification. They will also perform corpus-based studies of how these feature structures correlate with existing resources such as FrameNet and PropBank. Since PropBank is a precursor to the current work on AMR, this should lead to interesting discussions with the CLAMR team.
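
To make the classification task concrete, here is a hedged toy sketch, assuming scikit-learn and one independent binary classifier per proto-role property. It is not the team's actual model (which adapts the PropBank role-classification work mentioned above); the features, the chosen property, and the training examples are all placeholders.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy (predicate, argument) feature dicts with gold answers for ONE
# proto-role property, "changed_state"; a real system would train one
# such classifier per property, with far richer features.
X = [
    {"predicate": "break", "arg_position": "object",  "arg_head": "window"},
    {"predicate": "break", "arg_position": "subject", "arg_head": "boy"},
    {"predicate": "melt",  "arg_position": "subject", "arg_head": "ice"},
]
y = [True, False, True]  # did the argument change state?

# One-hot encode the symbolic features, then fit a logistic regression.
changed_state_clf = make_pipeline(DictVectorizer(), LogisticRegression())
changed_state_clf.fit(X, y)

# Predict the property for an unseen (predicate, argument) pair.
print(changed_state_clf.predict(
    [{"predicate": "shatter", "arg_position": "object", "arg_head": "glass"}]
))
```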

Team Members

Team Leaders
Jason Eisner, Johns Hopkins University
Benjamin Van Durme, Johns Hopkins University

Senior Members
Oren Etzioni, University of Washington and Allen Institute
Craig Harman, Johns Hopkins University
Shalom Lappin, King's College London
Staffan Larsson, University of Gothenburg
Dan Lassiter, Stanford University
Percy Liang, Stanford University
David McAllester, Toyota Technological Institute at Chicago
James Pustejovsky, Brandeis University
Kyle Rawlins, Johns Hopkins University

Graduate Students
Nicholas Andrews, Johns Hopkins University
Frank Ferraro, Johns Hopkins University
Drew Reisinger, Johns Hopkins University
Darcey Riley, Johns Hopkins University
Rachel Rudinger, Johns Hopkins University
