BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20120@www.clsp.jhu.edu
DTSTAMP:20240328T212716Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nRobotics@Google’s mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics\, designed for generalization across diverse environments and instructions. This model is focused on scalable data-driven learning\, which is task-agnostic\, leverages simulation\, learns from past experience\, and can be quickly adapted to work in the real world through limited interactions. In this talk\, we’ll share some of our recent work in this direction in both manipulation and locomotion applications.\nBiography\nCarolina Parada is a Senior Engineering Manager at Google Robotics. She leads the robot-mobility group\, which focuses on improving robot motion planning\, navigation\, and locomotion using reinforcement learning. Prior to that\, she led the camera perception team for self-driving cars at Nvidia for 2 years. She was also a lead with Speech @ Google for 7 years\, where she drove multiple research and engineering efforts that enabled Ok Google\, the Google Assistant\, and Voice Search. Carolina grew up in Venezuela and moved to the US to pursue B.S. and M.S. degrees in Electrical Engineering at the University of Washington and her Ph.D. at Johns Hopkins University at the Center for Language and Speech Processing (CLSP).
DTSTART;TZID=America/New_York:20210423T120000
DTEND;TZID=America/New_York:20210423T131500
LOCATION:via Zoom
SEQUENCE:0
SUMMARY:Carolina Parada (Google AI) “State of Robotics @ Google”
URL:https://www.clsp.jhu.edu/events/carolina-parada-google-ai/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,April\,Parada
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21072@www.clsp.jhu.edu
DTSTAMP:20240328T212716Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nEmotion has intrigued researchers for generations. This fascination has permeated the engineering community\, motivating the development of affective computing methods. However\, human emotion remains notoriously difficult to detect accurately. As a result\, emotion classification techniques are not always effective when deployed. This is a problem because we are missing out on the potential that emotion recognition provides: the opportunity to automatically measure an aspect of behavior that provides critical insight into our health and wellbeing\, insight that is not always easily accessible. In this talk\, I will discuss our efforts in developing emotion recognition approaches that are effective in natural environments and demonstrate how these approaches can be used to support mental health.\nBiography\nEmily Mower Provost is an Associate Professor in Computer Science and Engineering and a Toyota Faculty Scholar at the University of Michigan. She received her Ph.D. in Electrical Engineering from the University of Southern California (USC)\, Los Angeles\, CA in 2010. She has been awarded a National Science Foundation CAREER Award (2017)\, the Oscar Stern Award for Depression Research (2015)\, and a National Science Foundation Graduate Research Fellowship (2004-2007). She is a co-author of the paper “Say Cheese vs. Smile: Reducing Speech-Related Variability for Facial Emotion Recognition\,” winner of Best Student Paper at ACM Multimedia 2014\, and a co-author of the winner of the Classifier Sub-Challenge event at the Interspeech 2009 Emotion Challenge. Her research interests are in human-centered speech and video processing\, multimodal interface design\, and speech-based assistive technology. The goals of her research are motivated by the complexities of the perception and expression of human behavior.
DTSTART;TZID=America/New_York:20211206T120000
DTEND;TZID=America/New_York:20211206T131500
LOCATION:Maryland Hall 110 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Emily Mower Provost (University of Michigan) “Automatically Measuring Emotion from Speech: New Methods to Move from the Lab to the Real World”
URL:https://www.clsp.jhu.edu/events/emily-mower-provost-university-of-michigan/
X-COST-TYPE:free
END:VEVENT
BEGIN:VEVENT
DESCRIPTION:Abstract\nAs AI-driven language interfaces (such as chatbots) become more integrated into our lives\, they need to become more versatile and reliable in their communication with human users. How can we make progress toward building more “general” models that are capable of understanding a broader spectrum of language commands\, given practical constraints such as the limited availability of labeled data?\nIn this talk\, I will describe my research toward addressing this question along two dimensions of generality. First\, I will discuss progress in “breadth” — models that address a wider variety of tasks and abilities\, drawing inspiration from existing statistical learning techniques such as multi-task learning. In particular\, I will showcase a system that works well on several QA benchmarks\, achieving state-of-the-art results on 10 of them. Furthermore\, I will show its extension to tasks beyond QA (such as text generation or classification) that can be “defined” via natural language. In the second part\, I will focus on progress in “depth” — models that can handle complex inputs such as compositional questions. I will introduce Text Modular Networks\, a general framework that casts problem-solving as natural language communication among simpler “modules.” Applying this framework to compositional questions by leveraging discrete optimization and existing non-compositional closed-box QA models results in a model with strong empirical performance on multiple complex QA benchmarks while providing human-readable reasoning.\nI will conclude with future research directions toward broader NLP systems\, addressing the limitations of the presented ideas and the other missing elements needed to move toward more general-purpose interactive language understanding systems.\nBiography\nDaniel Khashabi is a postdoctoral researcher at the Allen Institute for Artificial Intelligence (AI2)\, Seattle. Previously\, he completed his Ph.D. in Computer and Information Sciences at the University of Pennsylvania in 2019. His interests lie at the intersection of artificial intelligence and natural language processing\, with a vision toward more general systems through unified algorithms and theories.
X-TAGS;LANGUAGE=en-US:2022\,February\,Khashabi
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23886@www.clsp.jhu.edu
DTSTAMP:20240328T212716Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nThe arms race to build increasingly large\, powerful language models (LMs) in the past year has been remarkable. Yet incorporating LMs effectively into practical applications that facilitate manual workflows remains challenging. I will discuss LMs’ limiting factors and our efforts to overcome them. I will start with challenges surrounding efficient and robust LM alignment. I will share insights from our recent paper “Self-Instruct” (ACL 2023)\, where we used vanilla (unaligned) LMs to align themselves\, an approach that has yielded some success. Then\, I will move on to the challenge of tracing the output of LMs to reliable sources\, a weakness that makes them prone to hallucinations. I will discuss our recent approach of ‘according-to’ prompting\, which steers LMs to quote directly from sources observed in their pre-training. If time permits\, I will discuss our ongoing project to adapt LMs to interact with web pages. Throughout the presentation\, I will highlight our progress and end with open questions about future directions.\nBiography\nDaniel Khashabi is an assistant professor in computer science at Johns Hopkins University and a member of the Center for Language and Speech Processing (CLSP). He is interested in building reasoning-driven modular NLP systems that are robust\, transparent\, and communicative\, particularly those that use natural language as the communication medium. Khashabi has published over 40 papers on natural language processing and AI in top-tier venues. His research has won the ACL 2023 Outstanding Paper Award and the NAACL 2022 Best Paper Award\, and has been supported by research gifts from the Allen Institute for AI and an Amazon Research Award (2023). Before joining Hopkins\, he was a postdoctoral fellow at the Allen Institute for AI (2019-2022) and obtained a Ph.D. from the University of Pennsylvania in 2019.
DTSTART;TZID=America/New_York:20230908T120000
DTEND;TZID=America/New_York:20230908T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Daniel Khashabi (Johns Hopkins University) “Building More Helpful Language Models”
URL:https://www.clsp.jhu.edu/events/daniel-khashabi-johns-hopkins-university/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,Khashabi\,September
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24157@www.clsp.jhu.edu
DTSTAMP:20240328T212716Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nIn this talk\, I will present a simple extension of image-based Masked Autoencoders (MAE) to self-supervised representation learning from audio spectrograms. Following the Transformer encoder-decoder design in MAE\, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio\, feeding only the non-masked tokens through encoder layers. The decoder then re-orders and decodes the encoded context padded with mask tokens in order to reconstruct the input spectrogram. We find it beneficial to incorporate local window attention in the decoder\, as audio spectrograms are highly correlated in local time and frequency bands. We then fine-tune the encoder with a lower masking ratio on target datasets. Empirically\, Audio-MAE sets new state-of-the-art performance on six audio and speech classification tasks\, outperforming other recent models that use external supervised pre-training.\nBio\nFlorian Metze is a Research Scientist Manager at Meta AI in New York\, supporting a team of researchers and engineers working on multi-modal (image\, video\, audio\, text) content understanding for Meta’s Family of Apps (Instagram\, Threads\, Facebook\, WhatsApp). He was previously an Associate Research Professor at Carnegie Mellon University in the School of Computer Science’s Language Technologies Institute\, where he remains an Adjunct Professor. He is also a co-founder of Abridge\, a company working on extracting information from doctor-patient conversations. His work covers many areas of speech recognition and multimedia analysis with a focus on end-to-end deep learning. Currently\, he focuses on multi-modal processing of videos and on using that information to recommend unconnected content. In the past\, he has worked on low-resource and multilingual speech processing\, speech recognition with articulatory features\, large-scale multimedia retrieval and summarization\, information extraction from medical interviews\, and recognition of personality or similar metadata from speech.\nFor more information\, please see http://www.cs.cmu.edu/directory/fmetze
DTSTART;TZID=America/New_York:20231110T120000
DTEND;TZID=America/New_York:20231110T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Florian Metze (CMU) “Masked Autoencoders that Listen”
URL:https://www.clsp.jhu.edu/events/florian-metze-cmu/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,Metze\,November
END:VEVENT
END:VCALENDAR