BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-22400@www.clsp.jhu.edu
DTSTAMP:20240328T183522Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nModern learning architectures for natural language processing have been very successful in incorporating a huge amount of text into their parameters. However\, by and large\, such models store and use knowledge in distributed and decentralized ways. This proves unreliable and makes the models ill-suited for knowledge-intensive tasks that require reasoning over factual information in linguistic expressions. In this talk\, I will give a few examples of exploring alternative architectures to tackle those challenges. In particular\, we can improve the performance of such (language) models by representing\, storing and accessing knowledge in a dedicated memory component.\nThis talk is based on several joint works with Yury Zemlyanskiy (Google Research)\, Michiel de Jong (USC and Google Research)\, William Cohen (Google Research and CMU) and our other collaborators in Google Research.\nBiography\nFei is a research scientist at Google Research. Before that\, he was a Professor of Computer Science at University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing\, computer vision\, robotics and recently weather forecasting and climate modeling. He has a PhD (2007) in Computer and Information Science from U. of Pennsylvania and a B.Sc and M.Sc in Biomedical Engineering from Southeast University (Nanjing\, China).
DTSTART;TZID=America/New_York:20221024T120000
DTEND;TZID=America/New_York:20221024T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Fei Sha (University of Southern California) “Extracting Information from Text into Memory for Knowledge-Intensive Tasks”
URL:https://www.clsp.jhu.edu/events/fei-sha-university-of-southern-california/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstract
\nModern learning architectures for natural language processing have been very successful in incorporating a huge amount of text into their parameters. However\, by and large\, such models store and use knowledge in distributed and decentralized ways. This proves unreliable and makes the models ill-suited for knowledge-intensive tasks that require reasoning over factual information in linguistic expressions. In this talk\, I will give a few examples of exploring alternative architectures to tackle those challenges. In particular\, we can improve the performance of such (language) models by representing\, storing and accessing knowledge in a dedicated memory component.
\nThis talk is based on several joint works with Yury Zemlyanskiy (Google Research)\, Michiel de Jong (USC and Google Research)\, William Cohen (Google Research and CMU) and our other collaborators in Google Research.\n
Biography
\nFei is a research scientist at Google Research. Before that\, he was a Professor of Computer Science at University of Southern California. His primary research interests are machine learning and its application to various AI problems: speech and language processing\, computer vision\, robotics and recently weather forecasting and climate modeling. He has a PhD (2007) in Computer and Information Science from U. of Pennsylvania and a B.Sc and M.Sc in Biomedical Engineering from Southeast University (Nanjing\, China).
\n
X-TAGS;LANGUAGE=en-US:2022\,October\,Sha
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23505@www.clsp.jhu.edu
DTSTAMP:20240328T183522Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nRecent advances in large pretrained language models have unlocked exciting new applications of Natural Language Generation for creative tasks\, such as lyrics or humour generation. In this talk we will discuss recent works by our team at Alexa AI and discuss current challenges: (1) Pun understanding and generation: We release new datasets for pun understanding and the novel task of context-situated pun generation\, and demonstrate the value of our annotations for pun classification and generation tasks. (2) Song lyric generation: we design a hierarchical lyric generation framework that enables us to generate pleasantly-singable lyrics without training on melody-lyric aligned data\, and show that our approach is competitive with strong baselines supervised on parallel data. (3) Create with Alexa: a multimodal story creation experience recently launched on Alexa devices\, which leverages story text generation models in tandem with story visualization and background music generation models to produce multimodal stories for kids.\nBiography\nAlessandra Cervone is an Applied Scientist in the Natural Understanding team at Amazon Alexa AI. Alessandra holds an MSc in Speech and Language Processing from University of Edinburgh and a PhD in CS from University of Trento (Italy). During her PhD\, Alessandra worked on computational models of coherence in open-domain dialogue advised by Giuseppe Riccardi. In the first year of the PhD\, she was the team leader of one of the teams selected to compete in the first edition of the Alexa Prize. More recently\, her research interests have been focused on natural language generation and its evaluation\, in particular in the context of creative AI applications.
DTSTART;TZID=America/New_York:20230317T120000
DTEND;TZID=America/New_York:20230317T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Alessandra Cervone (Amazon) “Controllable Text Generation for Creative Applications”
URL:https://www.clsp.jhu.edu/events/alexxandra-cervone-amazon/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nRecent advances in large pretrained language models have unlocked exciting new applications of Natural Language Generation for creative tasks\, such as lyrics or humour generation. In this talk we will discuss recent works by our team at Alexa AI and discuss current challenges: (1) Pun understanding and generation: We release new datasets for pun understanding and the novel task of context-situated pun generation\, and demonstrate the value of our annotations for pun classification and generation tasks. (2) Song lyric generation: we design a hierarchical lyric generation framework that enables us to generate pleasantly-singable lyrics without training on melody-lyric aligned data\, and show that our approach is competitive with strong baselines supervised on parallel data. (3) Create with Alexa: a multimodal story creation experience recently launched on Alexa devices\, which leverages story text generation models in tandem with story visualization and background music generation models to produce multimodal stories for kids.
\nBiography
\nAlessandra Cervone is an Applied Scientist in the Natural Understanding team at Amazon Alexa AI. Alessandra holds an MSc in Speech and Language Processing from University of Edinburgh and a PhD in CS from University of Trento (Italy). During her PhD\, Alessandra worked on computational models of coherence in open-domain dialogue advised by Giuseppe Riccardi. In the first year of the PhD\, she was the team leader of one of the teams selected to compete in the first edition of the Alexa Prize. More recently\, her research interests have been focused on natural language generation and its evaluation\, in particular in the context of creative AI applications.
\n
END:VEVENT
END:VCALENDAR