BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-23586@www.clsp.jhu.edu
DTSTAMP:20240329T075559Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:
DTSTART;TZID=America/New_York:20230410T120000
DTEND;TZID=America/New_York:20230410T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Student Seminar – Ruizhe Huang
URL:https://www.clsp.jhu.edu/events/student-seminar-ruizhe-huang/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2023\,April\,Huang
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-23892@www.clsp.jhu.edu
DTSTAMP:20240329T075559Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nThe growing power in computing and AI promises a near-term future of human-machine teamwork. In this talk\, I will present my research group’s efforts in understanding the complex dynamics of human-machine interaction and designing intelligent machines aimed to assist and collaborate with people. I will focus on 1) tools for onboarding machine teammates and authoring machine assistance\, 2) methods for detecting\, and broadly managing\, errors in collaboration\, and 3) building blocks of knowledge needed to enable ad hoc human-machine teamwork. I will also highlight our recent work on designing assistive\, collaborative machines to support older adults aging in place.\nBiography\nChien-Ming Huang is the John C. Malone Assistant Professor in the Department of Computer Science at the Johns Hopkins University.
His research focuses on designing interactive AI aimed to assist and collaborate with people. He publishes in top-tier venues in HRI\, HCI\, and robotics including Science Robotics\, HRI\, CHI\, and CSCW. His research has received media coverage from MIT Technology Review\, Tech Insider\, and Science Nation. Huang completed his postdoctoral training at Yale University and received his Ph.D. in Computer Science at the University of Wisconsin–Madison. He is a recipient of the NSF CAREER award. https://www.cs.jhu.edu/~cmhuang/
DTSTART;TZID=America/New_York:20230915T120000
DTEND;TZID=America/New_York:20230915T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Chien-Ming Huang (Johns Hopkins University) “Becoming Teammates: Designing Assistive\, Collaborative Machines”
URL:https://www.clsp.jhu.edu/events/chien-ming-huang-johns-hopkins-university/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n
\\nAbstract
\nThe growing power in computing and AI promises a near-term future of human-machine teamwork. In this talk\, I will present my research group’s efforts in understanding the complex dynamics of human-machine interaction and designing intelligent machines aimed to assist and collaborate with people. I will focus on 1) tools for onboarding machine teammates and authoring machine assistance\, 2) methods for detecting\, and broadly managing\, errors in collaboration\, and 3) building blocks of knowledge needed to enable ad hoc human-machine teamwork. I will also highlight our recent work on designing assistive\, collaborative machines to support older adults aging in place.
\nBiography
\nChien-Ming Huang is the John C. Malone Assistant Professor in the Department of Computer Science at the Johns Hopkins University. His research focuses on designing interactive AI aimed to assist and collaborate with people. He publishes in top-tier venues in HRI\, HCI\, and robotics including Science Robotics\, HRI\, CHI\, and CSCW. His research has received media coverage from MIT Technology Review\, Tech Insider\, and Science Nation. Huang completed his postdoctoral training at Yale University and received his Ph.D. in Computer Science at the University of Wisconsin–Madison. He is a recipient of the NSF CAREER award. https://www.cs.jhu.edu/~cmhuang/
\n
X-TAGS;LANGUAGE=en-US:2023\,Huang\,September
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24479@www.clsp.jhu.edu
DTSTAMP:20240329T075559Z
CATEGORIES;LANGUAGE=en-US:Student Seminars
CONTACT:
DESCRIPTION:Abstract\nThe speech field is evolving to solve more challenging scenarios\, such as multi-channel recordings with multiple simultaneous talkers. Given the many types of microphone setups out there\, we present the UniX-Encoder. It’s a universal encoder designed for multiple tasks that works with any microphone array\, in both solo and multi-talker environments. Our research enhances previous multichannel speech processing efforts in four key areas: 1) Adaptability: Contrasting traditional models constrained to certain microphone array configurations\, our encoder is universally compatible. 2) Multi-Task Capability: Beyond the single-task focus of previous systems\, UniX-Encoder acts as a robust upstream model\, adeptly extracting features for diverse tasks including ASR and speaker recognition. 3) Self-Supervised Training: The encoder is trained without requiring labeled multi-channel data. 4) End-to-End Integration: In contrast to models that first beamform and then process single channels\, our encoder offers an end-to-end solution\, bypassing explicit beamforming or separation. To validate its effectiveness\, we tested the UniX-Encoder on a synthetic multi-channel dataset from the LibriSpeech corpus. Across tasks like speech recognition and speaker diarization\, our encoder consistently outperformed combinations like the WavLM model with the BeamformIt frontend.
DTSTART;TZID=America/New_York:20240311T200500
DTEND;TZID=America/New_York:20240311T210500
SEQUENCE:0
SUMMARY:Zili Huang (JHU) “UniX-Encoder: A Universal X-Channel Speech Encoder for Ad-Hoc Microphone Array Speech Processing”
URL:https://www.clsp.jhu.edu/events/zili-huang-jhu-unix-encoder-a-universal-x-channel-speech-encoder-for-ad-hoc-microphone-array-speech-processing/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nThe speech field is evolving to solve more challenging scenarios\, such as multi-channel recordings with multiple simultaneous talkers. Given the many types of microphone setups out there\, we present the UniX-Encoder. It’s a universal encoder designed for multiple tasks that works with any microphone array\, in both solo and multi-talker environments. Our research enhances previous multichannel speech processing efforts in four key areas: 1) Adaptability: Contrasting traditional models constrained to certain microphone array configurations\, our encoder is universally compatible. 2) Multi-Task Capability: Beyond the single-task focus of previous systems\, UniX-Encoder acts as a robust upstream model\, adeptly extracting features for diverse tasks including ASR and speaker recognition. 3) Self-Supervised Training: The encoder is trained without requiring labeled multi-channel data. 4) End-to-End Integration: In contrast to models that first beamform and then process single channels\, our encoder offers an end-to-end solution\, bypassing explicit beamforming or separation. To validate its effectiveness\, we tested the UniX-Encoder on a synthetic multi-channel dataset from the LibriSpeech corpus. Across tasks like speech recognition and speaker diarization\, our encoder consistently outperformed combinations like the WavLM model with the BeamformIt frontend.
\n
X-TAGS;LANGUAGE=en-US:2024\,Huang\,March
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-24507@www.clsp.jhu.edu
DTSTAMP:20240329T075559Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nHistory repeats itself\, sometimes in a bad way. Preventing natural or man-made disasters requires being aware of these patterns and taking pre-emptive action to address and reduce them\, or ideally\, eliminate them. Emerging events\, such as the COVID pandemic and the Ukraine Crisis\, require a time-sensitive comprehensive understanding of the situation to allow for appropriate decision-making and effective action response. Automated generation of situation reports can significantly reduce the time\, effort\, and cost for domain experts when preparing their official human-curated reports. However\, AI research toward this goal has been very limited\, and no successful trials have yet been conducted to automate such report generation and “what-if” disaster forecasting. Pre-existing natural language processing and information retrieval techniques are insufficient to identify\, locate\, and summarize important information\, and lack detailed\, structured\, and strategic awareness. In this talk I will present SmartBook\, a novel framework for a task that cannot be solved by large language models alone\, to consume large volumes of multimodal multilingual news data and produce a structured situation report with multiple hypotheses (claims) summarized and grounded with rich links to factual evidence through multimodal knowledge extraction\, claim detection\, fact checking\, misinformation detection and factual error correction. Furthermore\, SmartBook can also serve as a novel news event simulator\, or an intelligent prophetess.
Given “What-if” conditions and dimensions elicited from a domain expert user concerning a disaster scenario\, SmartBook will induce schemas from historical events\, and automatically generate a complex event graph along with a timeline of news articles that describe new simulated events and character-centric stories based on a new Λ-shaped attention mask that can generate text with infinite length. By effectively simulating disaster scenarios in both event graph and natural language format\, we expect SmartBook will greatly assist humanitarian workers and policymakers to exercise reality checks\, and thus better prevent and respond to future disasters.\nBio\nHeng Ji is a professor in the Computer Science Department\, and an affiliated faculty member of the Electrical and Computer Engineering Department and the Coordinated Science Laboratory of the University of Illinois Urbana-Champaign. She is an Amazon Scholar. She is the Founding Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE). She received her B.A. and M.A. in Computational Linguistics from Tsinghua University\, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing\, especially on Multimedia Multilingual Information Extraction\, Knowledge-enhanced Large Language Models\, Knowledge-driven Generation and Conversational AI. She was selected as a Young Scientist to attend the 6th World Laureates Association Forum\, and selected to participate in DARPA AI Forward in 2023. She was selected as “Young Scientist” and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017.
The awards she received include Women Leaders of Conversational AI (Class of 2023) by Project Voice\, “AI’s 10 to Watch” Award by IEEE Intelligent Systems in 2013\, NSF CAREER award in 2009\, PACLIC2012 Best paper runner-up\, “Best of ICDM2013” paper award\, “Best of SDM2013” paper award\, ACL2018 Best Demo paper nomination\, ACL2020 Best Demo Paper Award\, NAACL2021 Best Demo Paper Award\, Google Research Award in 2009 and 2014\, IBM Watson Faculty Award in 2012 and 2014\, and Bosch Research Award in 2014-2018. She was invited to testify to the U.S. House Cybersecurity\, Data Analytics\, & IT Committee as an AI expert in 2023. She was invited by the Secretary of the U.S. Air Force and AFRL to join the Air Force Data Analytics Expert Panel to inform the Air Force Strategy 2030\, and invited to speak at the Federal Information Integrity R&D Interagency Working Group (IIRD IWG) briefing in 2023. She is the lead of many multi-institution projects and tasks\, including the U.S. ARL projects on information fusion and knowledge networks construction\, DARPA ECOLE MIRACLE team\, DARPA KAIROS RESIN team and DARPA DEFT Tinker Bell team. She has coordinated the NIST TAC Knowledge Base Population task 2010-2022. She was the associate editor for IEEE/ACM Transactions on Audio\, Speech\, and Language Processing\, and served as the Program Committee Co-Chair of many conferences including NAACL-HLT2018 and AACL-IJCNLP2022. She was elected as the North American Chapter of the Association for Computational Linguistics (NAACL) secretary for 2020-2023. Her research has been widely supported by U.S. government agencies (DARPA\, NSF\, DoE\, ARL\, IARPA\, AFRL\, DHS) and industry (Apple\, Amazon\, Google\, Facebook\, Bosch\, IBM\, Disney).
DTSTART;TZID=America/New_York:20240405T120000
DTEND;TZID=America/New_York:20240405T131500
LOCATION:Hackerman Hall B17 @ 3400 N.
Charles Street\, Baltimore\, Maryland 21218
SEQUENCE:0
SUMMARY:Heng Ji (University of Illinois Urbana-Champaign) “SmartBook: an AI Prophetess for Disaster Reporting and Forecasting”
URL:https://www.clsp.jhu.edu/events/heng-ji-university-of-illinois-urbana-champaign-smartbook-an-ai-prophetess-for-disaster-reporting-and-forecasting/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:\\n\\n\\nAbstract
\nHistory repeats itself\, sometimes in a bad way. Preventing natural or man-made disasters requires being aware of these patterns and taking pre-emptive action to address and reduce them\, or ideally\, eliminate them. Emerging events\, such as the COVID pandemic and the Ukraine Crisis\, require a time-sensitive comprehensive understanding of the situation to allow for appropriate decision-making and effective action response. Automated generation of situation reports can significantly reduce the time\, effort\, and cost for domain experts when preparing their official human-curated reports. However\, AI research toward this goal has been very limited\, and no successful trials have yet been conducted to automate such report generation and “what-if” disaster forecasting. Pre-existing natural language processing and information retrieval techniques are insufficient to identify\, locate\, and summarize important information\, and lack detailed\, structured\, and strategic awareness. In this talk I will present SmartBook\, a novel framework for a task that cannot be solved by large language models alone\, to consume large volumes of multimodal multilingual news data and produce a structured situation report with multiple hypotheses (claims) summarized and grounded with rich links to factual evidence through multimodal knowledge extraction\, claim detection\, fact checking\, misinformation detection and factual error correction. Furthermore\, SmartBook can also serve as a novel news event simulator\, or an intelligent prophetess. Given “What-if” conditions and dimensions elicited from a domain expert user concerning a disaster scenario\, SmartBook will induce schemas from historical events\, and automatically generate a complex event graph along with a timeline of news articles that describe new simulated events and character-centric stories based on a new Λ-shaped attention mask that can generate text with infinite length.
By effectively simulating disaster scenarios in both event graph and natural language format\, we expect SmartBook will greatly assist humanitarian workers and policymakers to exercise reality checks\, and thus better prevent and respond to future disasters.
\nBio
\nHeng Ji is a professor in the Computer Science Department\, and an affiliated faculty member of the Electrical and Computer Engineering Department and the Coordinated Science Laboratory of the University of Illinois Urbana-Champaign. She is an Amazon Scholar. She is the Founding Director of the Amazon-Illinois Center on AI for Interactive Conversational Experiences (AICE). She received her B.A. and M.A. in Computational Linguistics from Tsinghua University\, and her M.S. and Ph.D. in Computer Science from New York University. Her research interests focus on Natural Language Processing\, especially on Multimedia Multilingual Information Extraction\, Knowledge-enhanced Large Language Models\, Knowledge-driven Generation and Conversational AI. She was selected as a Young Scientist to attend the 6th World Laureates Association Forum\, and selected to participate in DARPA AI Forward in 2023. She was selected as “Young Scientist” and a member of the Global Future Council on the Future of Computing by the World Economic Forum in 2016 and 2017. The awards she received include Women Leaders of Conversational AI (Class of 2023) by Project Voice\, “AI’s 10 to Watch” Award by IEEE Intelligent Systems in 2013\, NSF CAREER award in 2009\, PACLIC2012 Best paper runner-up\, “Best of ICDM2013” paper award\, “Best of SDM2013” paper award\, ACL2018 Best Demo paper nomination\, ACL2020 Best Demo Paper Award\, NAACL2021 Best Demo Paper Award\, Google Research Award in 2009 and 2014\, IBM Watson Faculty Award in 2012 and 2014\, and Bosch Research Award in 2014-2018. She was invited to testify to the U.S. House Cybersecurity\, Data Analytics\, & IT Committee as an AI expert in 2023. She was invited by the Secretary of the U.S. Air Force and AFRL to join the Air Force Data Analytics Expert Panel to inform the Air Force Strategy 2030\, and invited to speak at the Federal Information Integrity R&D Interagency Working Group (IIRD IWG) briefing in 2023.
She is the lead of many multi-institution projects and tasks\, including the U.S. ARL projects on information fusion and knowledge networks construction\, DARPA ECOLE MIRACLE team\, DARPA KAIROS RESIN team and DARPA DEFT Tinker Bell team. She has coordinated the NIST TAC Knowledge Base Population task 2010-2022. She was the associate editor for IEEE/ACM Transactions on Audio\, Speech\, and Language Processing\, and served as the Program Committee Co-Chair of many conferences including NAACL-HLT2018 and AACL-IJCNLP2022. She was elected as the North American Chapter of the Association for Computational Linguistics (NAACL) secretary for 2020-2023. Her research has been widely supported by U.S. government agencies (DARPA\, NSF\, DoE\, ARL\, IARPA\, AFRL\, DHS) and industry (Apple\, Amazon\, Google\, Facebook\, Bosch\, IBM\, Disney).
\n
X-TAGS;LANGUAGE=en-US:2024\,April\,Ji
END:VEVENT
END:VCALENDAR