BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20120@www.clsp.jhu.edu
DTSTAMP:20240329T045013Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nRobotics@Google’s mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics\, designed for generalization across diverse environments and instructions. This model is focused on scalable data-driven learning\, which is task-agnostic\, leverages simulation\, learns from past experience\, and can be quickly adapted to work in the real world through limited interactions. In this talk\, we’ll share some of our recent work in this direction in both manipulation and locomotion applications.\nBiography\nCarolina Parada is a Senior Engineering Manager at Google Robotics. She leads the robot-mobility group\, which focuses on improving robot motion planning\, navigation\, and locomotion\, using reinforcement learning. Prior to that\, she led the camera perception team for self-driving cars at Nvidia for 2 years. She was also a lead with Speech @ Google for 7 years\, where she drove multiple research and engineering efforts that enabled Ok Google\, the Google Assistant\, and Voice Search. Carolina grew up in Venezuela and moved to the US to pursue B.S. and M.S. degrees in Electrical Engineering at the University of Washington and her PhD at Johns Hopkins University at the Center for Language and Speech Processing (CLSP).
DTSTART;TZID=America/New_York:20210423T120000
DTEND;TZID=America/New_York:20210423T131500
LOCATION:via Zoom
SEQUENCE:0
SUMMARY:Carolina Parada (Google AI) “State of Robotics @ Google”
URL:https://www.clsp.jhu.edu/events/carolina-parada-google-ai/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:
\\n<strong>Abstract</strong>
\nRobotics@Google’s mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics\, designed for generalization across diverse environments and instructions. This model is focused on scalable data-driven learning\, which is task-agnostic\, leverages simulation\, learns from past experience\, and can be quickly adapted to work in the real world through limited interactions. In this talk\, we’ll share some of our recent work in this direction in both manipulation and locomotion applications.
\n<strong>Biography</strong>
\nCarolina Parada is a Senior Engineering Manager at Google Robotics. She leads the robot-mobility group\, which focuses on improving robot motion planning\, navigation\, and locomotion\, using reinforcement learning. Prior to that\, she led the camera perception team for self-driving cars at Nvidia for 2 years. She was also a lead with Speech @ Google for 7 years\, where she drove multiple research and engineering efforts that enabled Ok Google\, the Google Assistant\, and Voice Search. Carolina grew up in Venezuela and moved to the US to pursue B.S. and M.S. degrees in Electrical Engineering at the University of Washington and her PhD at Johns Hopkins University at the Center for Language and Speech Processing (CLSP).
\n
X-TAGS;LANGUAGE=en-US:2021\,April\,Parada
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21267@www.clsp.jhu.edu
DTSTAMP:20240329T045013Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nIn this talk\, I present a multipronged strategy for zero-shot cross-lingual Information Extraction\, that is\, the construction of an IE model for some target language\, given existing annotations exclusively in some other language. This work is part of the JHU team’s effort under the IARPA BETTER program. I explore data augmentation techniques including data projection and self-training\, and how different pretrained encoders impact them. We find through extensive experiments and extensions of techniques that a combination of approaches\, both new and old\, leads to better performance than any one cross-lingual strategy in particular.\nBiography\nMahsa Yarmohammadi is an assistant research scientist at CLSP\, JHU\, who leads state-of-the-art research in cross-lingual language and speech applications and algorithms. A primary focus of Yarmohammadi’s research is using deep learning techniques to transfer existing resources into other languages and to learn representations of language from multilingual data. She also works in automatic speech recognition and speech translation. Yarmohammadi received her PhD in computer science and engineering from Oregon Health & Science University (2016). She joined CLSP as a postdoctoral fellow in 2017.
DTSTART;TZID=America/New_York:20220204T120000
DTEND;TZID=America/New_York:20220204T131500
LOCATION:Ames 234 Presented Virtually via Zoom https://wse.zoom.us/j/96735183473
SEQUENCE:0
SUMMARY:Mahsa Yarmohammadi (Johns Hopkins University) “Data Augmentation for Zero-shot Cross-Lingual Information Extraction”
URL:https://www.clsp.jhu.edu/events/mahsa-yarmohammadi-johns-hopkins-university-data-augmentation-for-zero-shot-cross-lingual-information-extraction/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n<strong>Abstract</strong>
\nIn this talk\, I present a multipronged strategy for zero-shot cross-lingual Information Extraction\, that is\, the construction of an IE model for some target language\, given existing annotations exclusively in some other language. This work is part of the JHU team’s effort under the IARPA BETTER program. I explore data augmentation techniques including data projection and self-training\, and how different pretrained encoders impact them. We find through extensive experiments and extensions of techniques that a combination of approaches\, both new and old\, leads to better performance than any one cross-lingual strategy in particular.
\n<strong>Biography</strong>
\n