BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20120@www.clsp.jhu.edu
DTSTAMP:20240329T074543Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract
\nRobotics@Google’s mission
is to make robots useful in the real world through machine learning. We a
re excited about a new model for robotics\, designed for generalization ac
ross diverse environments and instructions. This model is focused on scala
ble data-driven learning\, which is task-agnostic\, leverages simulation\,
learns from past experience\, and can be quickly adapted to work in the
real world through limited interactions. In this talk\, we’ll share some of
our recent work in this direction in both manipulation and locomotion app
lications.
\nBiography
\nCarolina Parada is a Senior Engineering Manager at Goo
gle Robotics. She leads the robot-mobility group\, which focuses on improv
ing robot motion planning\, navigation\, and locomotion\, using reinforcem
ent learning. Prior to that\, she led the camera perception team for self-
driving cars at Nvidia for 2 years. She was also a lead with Speech @ Goog
le for 7 years\, where she drove multiple research and engineering efforts
that enabled Ok Google\, the Google Assistant\, and Voice-Search. Carolina
grew up in Venezuela and moved to the US to pursue B.S. and M.S. degrees
in Electrical Engineering at the University of Washington and her PhD at
Johns Hopkins University at the Center for Language and Speech Processing
(CLSP).
DTSTART;TZID=America/New_York:20210423T120000
DTEND;TZID=America/New_York:20210423T131500
LOCATION:via Zoom
SEQUENCE:0
SUMMARY:Carolina Parada (Google AI) “State of Robotics @ Google”
URL:https://www.clsp.jhu.edu/events/carolina-parada-google-ai/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,April\,Parada
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21615@www.clsp.jhu.edu
DTSTAMP:20240329T074543Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract
\nWe conside
r a problem of data collection for semantically rich NLU tasks\, where det
ailed semantics of documents (or utterances) are captured using a complex
meaning representation. Previously\, data collection for such tasks was
handled either at the cost of extensive annotator training (e.g. in FrameNet
or PropBank) or by simplifying the meaning representation (e.g. in QA-SRL or
Overnight). In this talk\, we present two systems [1\, 2] that aim to suppor
t fast\, accurate\, and expressive semantic annotations by pairing human w
orkers with a trained model in the loop.
\n
\n
The firs
t system\, called Guided K-best [1]\, is an annotation toolkit for convers
ational semantic parsing. Instead of typing annotations from scratch\, da
ta specialists choose a correct parse from the K-best output of a few-shot
prototyped model. As the K-best list can be large (e.g. K=100)\, we guid
e the annotators’ exploration of the K-best list via explainable hierarchi
cal clustering. In addition\, we experiment with RoBERTa-based reranking
of the K-best list to recalibrate the few-shot model towards Accuracy@K.
The final system allows annotating data up to 35% faster than the standard\,
non-guided K-best and improves the few-shot model’s top-1 accuracy by
up to 18%. The second system\, called SchemaBlocks [2]\, is an annotation
toolkit for schemas\, or structured descriptions of frequent real-world s
cenarios (e.g.\, cooking a meal). It represents schemas in the annotation
UI as nested blocks. Using a novel Causal ARM model\, we further speed u
p the annotation process and guide data specialists towards expressive and
diverse schemas. As part of this work\, we collect 232 schemas\, evaluat
ing their internal coherence and their coverage on large-scale newswire co
rpora.
DTSTART;TZID=America/New_York:20220311T120000
DTEND;TZID=America/New_York:20220311T131500
LOCATION:Virtual Seminar
SEQUENCE:0
SUMMARY:Student Seminar – Anton Belyy “Systems for Human-AI Cooperation on
Collecting Semantic Annotations”
URL:https://www.clsp.jhu.edu/events/student-seminar-anton-belyy-systems-for
-human-ai-cooperation-on-collecting-semantic-annotations/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Belyy\,March
END:VEVENT
END:VCALENDAR