Events

Mental Programs in Humans and Machines

January 16, 2025
When: January 24, 2025 @ 12:00 pm – 1:15 pm

Abstract: How do humans efficiently learn new rules, causal laws, and mental algorithms, and how could AI systems do the same? From the perspective of human behavior, I will present results suggesting that representing knowledge[…]

Nanyun Peng (UCLA) “Controllable and Creative Natural Language Generation”

November 25, 2024
When: December 6, 2024 @ 12:00 pm – 1:15 pm
Where: Hackerman Hall B17, 3400 N CHARLES ST, Baltimore, MD 21218

Abstract: Recent advances in large language models (LLMs) have achieved remarkable results across a wide range of natural language processing (NLP) applications, including text classification, summarization, machine translation, and dialogue systems. As LLMs grow increasingly[…]

Jieyu Zhao (USC) “Trustworthy LLMs — our efforts on mitigating issues regarding social bias, safety and reliability”

November 5, 2024
When: November 8, 2024 @ 12:00 pm – 1:15 pm
Where: Hackerman Hall B17, 3400 N CHARLES ST, Baltimore, MD 21218

Abstract: The rapid advancement of large language models (LLMs) has unlocked a myriad of possibilities for positive societal impact, ranging from enhancing accessibility and communication to supporting disaster response and public health initiatives. However, the[…]

Adam Byerly (JHU) “How Effective Is Self-Consistency for Long-Context Problems?”

October 31, 2024
When: November 4, 2024 @ 12:00 pm – 1:15 pm
Where: Hackerman Hall B17, 3400 N CHARLES ST, Baltimore, MD 21218

Abstract: Self-consistency (SC) has been demonstrated to enhance the performance of large language models (LLMs) across various tasks and domains involving short content. However, does this evidence support its effectiveness for long-context problems? In this talk, we examine the[…]
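
For readers unfamiliar with the technique named in this abstract, the sketch below shows the standard self-consistency recipe: sample several reasoning chains at a nonzero temperature, extract each chain's final answer, and take a majority vote. This is only a minimal illustration; the sampling and answer-extraction callables are hypothetical placeholders and are not drawn from the talk itself.

```python
# Minimal self-consistency sketch: majority vote over sampled reasoning chains.
from collections import Counter
from typing import Callable, List


def self_consistency(
    prompt: str,
    sample_completion: Callable[[str, float], str],  # hypothetical LLM call: (prompt, temperature) -> text
    extract_answer: Callable[[str], str],            # pulls the final answer out of a reasoning chain
    num_samples: int = 10,
    temperature: float = 0.7,
) -> str:
    """Return the majority-vote answer over independently sampled reasoning chains."""
    answers: List[str] = []
    for _ in range(num_samples):
        chain = sample_completion(prompt, temperature)  # one chain-of-thought sample
        answers.append(extract_answer(chain))
    # The most common final answer wins; ties resolve by Counter ordering.
    return Counter(answers).most_common(1)[0][0]


if __name__ == "__main__":
    # Toy demonstration with a fake sampler that returns canned chains.
    import random

    fake_chains = ["... so the answer is 42", "... therefore 42", "... hence 41"]
    answer = self_consistency(
        prompt="What is 6 * 7? Think step by step.",
        sample_completion=lambda p, t: random.choice(fake_chains),
        extract_answer=lambda chain: chain.rsplit(" ", 1)[-1],
        num_samples=5,
    )
    print(answer)
```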

Yen-ju Lu (JHU) “CA-SSLR: Condition-Aware Self-Supervised Learning Representation for Generalized Speech Processing”

October 31, 2024
When: November 1, 2024 @ 12:00 pm – 1:15 pm
Where: Hackerman Hall B17, 3400 N CHARLES ST, Baltimore, MD 21218

Abstract: We introduce Condition-Aware Self-Supervised Learning Representation (CA-SSLR), a generalist conditioning model broadly applicable to various speech-processing tasks. Compared to standard fine-tuning methods that optimize for downstream models, CA-SSLR integrates language and speaker embeddings from[…]
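
The truncated abstract does not specify how the language and speaker embeddings are integrated, so the sketch below is only a generic illustration of conditioning frozen self-supervised speech features on such embeddings, here via a FiLM-style scale-and-shift. It is not the CA-SSLR architecture; all module names and dimensions are assumptions for illustration.

```python
# Generic conditioning of frozen SSL speech features (illustrative only, not CA-SSLR).
import torch
import torch.nn as nn


class ConditionedSSLFeatures(nn.Module):
    """FiLM-style conditioning: scale and shift frozen SSL features per utterance."""

    def __init__(self, feat_dim: int = 768, lang_dim: int = 32, spk_dim: int = 192):
        super().__init__()
        cond_dim = lang_dim + spk_dim
        # Predict a per-dimension scale (gamma) and shift (beta) from the conditioning vector.
        self.to_gamma = nn.Linear(cond_dim, feat_dim)
        self.to_beta = nn.Linear(cond_dim, feat_dim)

    def forward(self, ssl_feats: torch.Tensor, lang_emb: torch.Tensor, spk_emb: torch.Tensor) -> torch.Tensor:
        # ssl_feats: (batch, time, feat_dim) frozen SSL representations
        # lang_emb: (batch, lang_dim); spk_emb: (batch, spk_dim)
        cond = torch.cat([lang_emb, spk_emb], dim=-1)
        gamma = self.to_gamma(cond).unsqueeze(1)  # broadcast over the time axis
        beta = self.to_beta(cond).unsqueeze(1)
        return ssl_feats * (1.0 + gamma) + beta


if __name__ == "__main__":
    model = ConditionedSSLFeatures()
    feats = torch.randn(2, 100, 768)  # stand-in for frozen SSL encoder output
    out = model(feats, torch.randn(2, 32), torch.randn(2, 192))
    print(out.shape)  # torch.Size([2, 100, 768])
```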
