<p><strong>Tom McCoy (Johns Hopkins University) “Opening the Black Box of Deep Learning: Representations, Inductive Biases, and Robustness”</strong></p>
<p><strong>Abstract</strong></p>
<p>Natural language processing has been revolutionized by neural networks, which perform impressively well in applications such as machine translation and question answering. Despite their success, neural networks still have some substantial shortcomings: their internal workings are poorly understood, and they are notoriously brittle, failing on example types that are rare in their training data. In this talk, I will use the unifying thread of hierarchical syntactic structure to discuss approaches for addressing these shortcomings. First, I will argue for a new evaluation paradigm based on targeted, hypothesis-driven tests that better illuminate what models have learned; using this paradigm, I will show that even state-of-the-art models sometimes fail to recognize the hierarchical structure of language (e.g., wrongly concluding that “The book on the table is blue” implies “The table is blue”). Second, I will show how these behavioral failings can be explained through analysis of models’ inductive biases and internal representations, focusing on the puzzle of how neural networks represent discrete symbolic structure in continuous vector space. I will close by showing how insights from these analyses can be used to make models more robust through approaches based on meta-learning, structured architectures, and data augmentation.</p>
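<p>To make the idea of a targeted, hypothesis-driven test concrete, here is a minimal sketch. The “model” below is a deliberately naive lexical-overlap heuristic invented for illustration (it is not any system evaluated in the talk): it predicts entailment whenever every word of the hypothesis appears in the premise, ignoring syntactic structure. A hypothesis-driven test pairs examples where this shortcut succeeds with examples where only hierarchical structure gives the right answer.</p>

```python
def overlap_model(premise: str, hypothesis: str) -> str:
    """Naive baseline: predict entailment iff all hypothesis words occur in the premise."""
    premise_words = set(premise.lower().rstrip(".").split())
    hypothesis_words = set(hypothesis.lower().rstrip(".").split())
    return "entailment" if hypothesis_words <= premise_words else "non-entailment"

# Targeted test cases: every hypothesis word occurs in the premise, but the
# gold label depends on hierarchical structure, not lexical overlap.
cases = [
    ("The book on the table is blue", "The book is blue", "entailment"),
    ("The book on the table is blue", "The table is blue", "non-entailment"),
]

for premise, hypothesis, gold in cases:
    pred = overlap_model(premise, hypothesis)
    status = "ok" if pred == gold else "FAILS"
    print(f"{hypothesis!r}: predicted {pred}, gold {gold} -> {status}")
```

<p>The heuristic passes the first case and fails the second, illustrating how a small targeted test suite can expose a structural blind spot that aggregate accuracy on natural data would hide.</p>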
<p><strong>Biography</strong></p>
<p>Tom McCoy is a PhD candidate in the Department of Cognitive Science at Johns Hopkins University. As an undergraduate, he studied computational linguistics at Yale. His research combines natural language processing, cognitive science, and machine learning to study how we can achieve robust generalization in models of language, as this remains one of the main areas where current AI systems fall short. In particular, he focuses on inductive biases and representations of linguistic structure, since these are two of the major components that determine how learners generalize to novel types of input.</p>
<p>January 31, 2022, 12:00–1:15 PM | Seminars | Ames Hall 234 @ 3400 N. Charles Street, Baltimore, MD 21218 | Free | <a href="https://www.clsp.jhu.edu/events/tom-mccoy-johns-hopkins-university-opening-the-black-box-of-deep-learning-representations-inductive-biases-and-robustness/">Event page</a></p>
<p><strong>David Chiang (University of Notre Dame) “Exact Recursive Probabilistic Programming” with Colin McDonald, Darcey Riley, Kenneth Sible (Notre Dame) and Chung-chieh Shan (Indiana)</strong></p>
<p><strong>Abstract</strong></p>
<div dir="ltr">Recursive calls over recursive data are widely useful for generating probability distributions, and probabilistic programming allows computations over these distributions to be expressed in a modular and intuitive way. Exact inference is also useful, but unfortunately, existing probabilistic programming languages do not perform exact inference on recursive calls over recursive data, forcing programmers to code many applications manually. We introduce a probabilistic language in which a wide variety of recursion can be expressed naturally, and inference carried out exactly. For instance, probabilistic pushdown automata and their generalizations are easy to express, and polynomial-time parsing algorithms for them are derived automatically. We eliminate recursive data types using program transformations related to defunctionalization and refunctionalization. These transformations are assured correct by a linear type system, and a successful choice of transformations, if there is one, is guaranteed to be found by a greedy algorithm. I will also describe the implementation of this language in two phases: first, compilation to a factor graph grammar, and second, computing the sum-product of the factor graph grammar.</div>
<div dir="ltr"></div>
<div dir="ltr"><strong>Biography</strong></div>
<div dir="ltr"><span dir="ltr">David Chiang (PhD, University of Pennsylvania, 2004) is an associate professor in the Department of Computer Science and Engineering at the University of Notre Dame. His research is on computational models for learning human languages, particularly how to translate from one language to another. His work on applying formal grammars and machine learning to translation has been recognized with two best paper awards (at ACL 2005 and NAACL HLT 2009). He has received research grants from DARPA, NSF, Google, and Amazon, has served on the executive board of NAACL and the editorial board of Computational Linguistics and JAIR, and is currently on the editorial board of Transactions of the ACL.</span></div>
<p>October 17, 2022, 12:00–1:15 PM | Seminars | Hackerman Hall B17 @ 3400 N. Charles Street, Baltimore, MD 21218 | Free | <a href="https://www.clsp.jhu.edu/events/david-chiang-university-of-notre-dame/">Event page</a></p>