BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-21057@www.clsp.jhu.edu
DTSTAMP:20240328T202156Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nThis talk will outline the major challenges in porting mainstream speech technology to the domain of clinical applications\; in particular\, the need for personalised systems\, the challenge of working in an inherently sparse data domain\, and developing meaningful collaborations with all stakeholders. The talk will give an overview of recent state-of-the-art research from current projects in the areas of recognition of disordered speech\, automatic processing of conversations\, and the automatic detection and tracking of paralinguistic information at the University of Sheffield (UK)’s Speech and Hearing (SPandH) & Healthcare lab.\nBiography\nHeidi is a Senior Lecturer (associate professor) in Computer Science at the University of Sheffield\, United Kingdom. Her research interests are in the application of AI-based voice technologies to healthcare\; in particular\, the detection and monitoring of people’s physical and mental health\, including verbal and non-verbal traits for expressions of emotion\, anxiety\, depression\, and neurodegenerative conditions in\, e.g.\, therapeutic or diagnostic settings.
DTSTART;TZID=America/New_York:20211119T120000
DTEND;TZID=America/New_York:20211119T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Heidi Christensen (University of Sheffield\, UK) Virtual Seminar “Automated Processing of Pathological Speech: Recent Work and Ongoing Challenges”
URL:https://www.clsp.jhu.edu/events/heidi-christensen-university-of-sheffield-uk-virtual-seminar-automated-processing-of-pathological-speech-recent-work-and-ongoing-challenges/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:Abstract
\nThis talk will outline the major challenges in porting mainstream speech technology to the domain of clinical applications\; in particular\, the need for personalised systems\, the challenge of working in an inherently sparse data domain\, and developing meaningful collaborations with all stakeholders. The talk will give an overview of recent state-of-the-art research from current projects in the areas of recognition of disordered speech\, automatic processing of conversations\, and the automatic detection and tracking of paralinguistic information at the University of Sheffield (UK)’s Speech and Hearing (SPandH) & Healthcare lab.
\nBiography
\nHeidi is a Senior Lecturer (associate professor) in Computer Science at the University of Sheffield\, United Kingdom. Her research interests are in the application of AI-based voice technologies to healthcare\; in particular\, the detection and monitoring of people’s physical and mental health\, including verbal and non-verbal traits for expressions of emotion\, anxiety\, depression\, and neurodegenerative conditions in\, e.g.\, therapeutic or diagnostic settings.
X-TAGS;LANGUAGE=en-US:2021\,Christensen\,November
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-21267@www.clsp.jhu.edu
DTSTAMP:20240328T202156Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nIn this talk\, I present a multipronged strategy for zero-shot cross-lingual Information Extraction\, that is\, the construction of an IE model for some target language\, given existing annotations exclusively in some other language. This work is part of the JHU team’s effort under the IARPA BETTER program. I explore data augmentation techniques including data projection and self-training\, and how different pretrained encoders impact them. We find through extensive experiments and extensions of these techniques that a combination of approaches\, both new and old\, leads to better performance than any one cross-lingual strategy in particular.\nBiography\nMahsa Yarmohammadi is an assistant research scientist at CLSP\, JHU\, who leads state-of-the-art research in cross-lingual language and speech applications and algorithms. A primary focus of Yarmohammadi’s research is using deep learning techniques to transfer existing resources into other languages and to learn representations of language from multilingual data. She also works in automatic speech recognition and speech translation. Yarmohammadi received her PhD in computer science and engineering from Oregon Health & Science University (2016). She joined CLSP as a post-doctoral fellow in 2017.
DTSTART;TZID=America/New_York:20220204T120000
DTEND;TZID=America/New_York:20220204T131500
LOCATION:Ames 234 Presented Virtually via Zoom https://wse.zoom.us/j/96735183473
SEQUENCE:0
SUMMARY:Mahsa Yarmohammadi (Johns Hopkins University) “Data Augmentation for Zero-shot Cross-Lingual Information Extraction”
URL:https://www.clsp.jhu.edu/events/mahsa-yarmohammadi-johns-hopkins-university-data-augmentation-for-zero-shot-cross-lingual-information-extraction/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:Abstract
\nIn this talk\, I present a multipronged strategy for zero-shot cross-lingual Information Extraction\, that is\, the construction of an IE model for some target language\, given existing annotations exclusively in some other language. This work is part of the JHU team’s effort under the IARPA BETTER program. I explore data augmentation techniques including data projection and self-training\, and how different pretrained encoders impact them. We find through extensive experiments and extensions of these techniques that a combination of approaches\, both new and old\, leads to better performance than any one cross-lingual strategy in particular.
\nBiography
\nAbstract
\nSpeech communication represents a core domain for education\, team problem solving\, social engagement\, and business interactions. The ability of speech technology to extract layers of knowledge and assess engagement content represents the next generation of advanced speech solutions. Today\, the emergence of big data and machine learning\, as well as voice-enabled speech systems\, has driven the need for effective voice capture and automatic speech/speaker recognition. The ability to employ speech and language technology to assess human-to-human interactions offers new research paradigms with profound impact on assessing human interaction. In this talk\, we will focus on big-data naturalistic audio processing relating to (i) child learning spaces\, and (ii) the NASA APOLLO lunar missions. ML-based technology advancements include automatic audio diarization\, speech recognition\, and speaker recognition. Child-teacher assessment of conversational interactions is explored\, including keywords and “WH-words” (e.g.\, who\, what\, etc.). Diarization processing solutions are applied both to classroom/learning-space child speech and to the massive APOLLO data. CRSS-UTDallas is expanding our original Apollo-11 corpus\, resulting in a massive multi-track audio processing challenge to make available 150\,000 hours of Apollo mission data to be shared with science communities: (i) speech/language technology\, (ii) STEM/science and team-based researchers\, and (iii) education/historical/archiving specialists. Our goal here is to provide resources that allow researchers to better understand how people work and learn collaboratively together\; for Apollo\, that meant accomplishing one of mankind’s greatest scientific/technological challenges of the last century.
\nBiography
\nJohn H.L. Hansen received his Ph.D. and M.S. degrees from the Georgia Institute of Technology\, and his B.S.E.E. from Rutgers Univ. He joined the Univ. of Texas at Dallas (UTDallas) in 2005\, where he currently serves as Associate Dean for Research\, Prof. of ECE\, and Distinguished Univ. Chair in Telecom. Engineering\, and directs the Center for Robust Speech Systems (CRSS). He is an ISCA Fellow and IEEE Fellow\, has served as Member and TC-Chair of the IEEE Signal Proc. Society Speech & Language Proc. Tech. Comm. (SLTC)\, and as Technical Advisor to the U.S. Delegate for NATO (IST/TG-01). He served as ISCA President (2017-21)\, continues to serve on the ISCA Board (2015-23) as Treasurer\, has supervised 99 PhD/MS thesis candidates (EE\, CE\, BME\, TE\, CS\, Ling.\, Cog.Sci.\, Spch.Sci.\, Hear.Sci.)\, and was recipient of the 2020 UT-Dallas Provost’s Award for Grad. PhD Research Mentoring. He is author/co-author of 865 journal/conference papers\, including 14 textbooks in the field of speech/language/hearing processing & technology\, among them the co-authored textbook Discrete-Time Processing of Speech Signals (IEEE Press\, 2000)\, and he is lead author of the report “The Impact of Speech Under ‘Stress’ on Military Speech Technology” (NATO RTO-TR-10\, 2000). He served as Organizer\, Chair/Co-Chair/Tech. Chair for ISCA INTERSPEECH-2022\, IEEE ICASSP-2010\, IEEE SLT-2014\, and ISCA INTERSPEECH-2002\, and as Tech. Chair for IEEE ICASSP-2024. He received the 2022 IEEE Signal Processing Society Leo Beranek Meritorious Service Award.
X-TAGS;LANGUAGE=en-US:2023\,Hansen\,March
END:VEVENT
END:VCALENDAR