BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-20120@www.clsp.jhu.edu DTSTAMP:20240329T224137Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nRobotics@Google’s mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics\, designed for generalization across diverse environments and instructions. This model is focused on scalable data-driven learning\, which is task-agnostic\, leverages simulation\, learns from past experience\, and can be quickly adapted to work in the real world through limited interactions. In this talk\, we’ll share some of our recent work in this direction in both manipulation and locomotion applications.
\nBiography
\nCarolina
Abstract
\nThe growing power in computing and AI promises a near-term future of human-machine teamwork. In this talk\, I will present my research group’s efforts in understanding the complex dynamics of human-machine interaction and designing intelligent machines aimed to assist and collaborate with people. I will focus on 1) tools for onboarding machine teammates and authoring machine assistance\, 2) methods for detecting\, and broadly managing\, errors in collaboration\, and 3) building blocks of knowledge needed to enable ad hoc human-machine teamwork. I will also highlight our recent work on designing assistive\, collaborative machines to support older adults aging in place.
\nBiography
\nChien-Ming Huang is the John C. Malone Assistant Professor in the Department of Computer Science at the Johns Hopkins University. His research focuses on designing interactive AI aimed to assist and collaborate with people. He publishes in top-tier venues in HRI\, HCI\, and robotics including Science Robotics\, HRI\, CHI\, and CSCW. His research has received media coverage from MIT Technology Review\, Tech Insider\, and Science Nation. Huang completed his postdoctoral training at Yale University and received his Ph.D. in Computer Science at the University of Wisconsin–Madison. He is a recipient of the NSF CAREER award. https://www.cs.jhu.edu/~cmhuang/
DTSTART;TZID=America/New_York:20230915T120000 DTEND;TZID=America/New_York:20230915T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Chien-Ming Huang (Johns Hopkins University) “Becoming Teammates: Designing Assistive\, Collaborative Machines” URL:https://www.clsp.jhu.edu/events/chien-ming-huang-johns-hopkins-university/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,Huang\,September END:VEVENT BEGIN:VEVENT UID:ai1ec-24479@www.clsp.jhu.edu DTSTAMP:20240329T224137Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract
\nThe speech field is evolving to solve more challenging scenarios\, such as multi-channel recordings with multiple simultaneous talkers. Given the many types of microphone setups out there\, we present the UniX-Encoder. It’s a universal encoder designed for multiple tasks that works with any microphone array\, in both solo and multi-talker environments. Our research enhances previous multichannel speech processing efforts in four key areas: 1) Adaptability: Contrasting traditional models constrained to certain microphone array configurations\, our encoder is universally compatible. 2) Multi-Task Capability: Beyond the single-task focus of previous systems\, UniX-Encoder acts as a robust upstream model\, adeptly extracting features for diverse tasks including ASR and speaker recognition. 3) Self-Supervised Training: The encoder is trained without requiring labeled multi-channel data. 4) End-to-End Integration: In contrast to models that first beamform and then process single channels\, our encoder offers an end-to-end solution\, bypassing explicit beamforming or separation. To validate its effectiveness\, we tested the UniX-Encoder on a synthetic multi-channel dataset derived from the LibriSpeech corpus. Across tasks like speech recognition and speaker diarization\, our encoder consistently outperformed combinations like the WavLM model with the BeamformIt frontend.
DTSTART;TZID=America/New_York:20240311T200500 DTEND;TZID=America/New_York:20240311T210500 SEQUENCE:0 SUMMARY:Zili Huang (JHU) “UniX-Encoder: A Universal X-Channel Speech Encoder for Ad-Hoc Microphone Array Speech Processing” URL:https://www.clsp.jhu.edu/events/zili-huang-jhu-unix-encoder-a-universal-x-channel-speech-encoder-for-ad-hoc-microphone-array-speech-processing/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,Huang\,March END:VEVENT BEGIN:VEVENT UID:ai1ec-24481@www.clsp.jhu.edu DTSTAMP:20240329T224137Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nNatural language provides an intuitive and powerful interface to access knowledge at scale. Modern language systems draw information from two rich knowledge sources: (1) information stored in their parameters during massive pretraining and (2) documents retrieved at inference time. Yet\, we are far from building systems that can reliably provide information from such knowledge sources. In this talk\, I will discuss paths for more robust systems. In the first part of the talk\, I will present a module for scaling retrieval-based knowledge augmentation. We learn a compressor that maps retrieved documents into textual summaries prior to in-context integration. This not only reduces the computational costs but also filters irrelevant or incorrect information. In the second half of the talk\, I will discuss the challenges of updating knowledge stored in model parameters and propose a method to prevent models from reciting outdated information by identifying facts that are prone to rapid change. I will conclude my talk by proposing an interactive system that can elicit information from users when needed.
\nBiography
\nEunsol Choi is an assistant professor in the Computer Science department at the University of Texas at Austin. Prior to UT\, she spent a year at Google AI as a visiting researcher. Her research area spans natural language processing and machine learning. She is particularly interested in interpreting and reasoning about text in a dynamic real-world context. She is a recipient of a Facebook research fellowship\, a Google faculty research award\, a Sony faculty award\, and an outstanding paper award at EMNLP. She received a Ph.D. in computer science and engineering from the University of Washington and a B.A. in mathematics and computer science from Cornell University.
\nDTSTART;TZID=America/New_York:20240315T120000 DTEND;TZID=America/New_York:20240315T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Eunsol Choi (University of Texas at Austin) “Knowledge-Rich Language Systems in a Dynamic World” URL:https://www.clsp.jhu.edu/events/eunsol-choi-university-of-texas-at-austin-knowledge-rich-language-systems-in-a-dynamic-world/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,Choi\,March END:VEVENT END:VCALENDAR