The limitations of AI-generated text

The capabilities of autoregressive AI models will always be limited by their inability to reason like humans, says PhD candidate Chu-Cheng Lin.


Upcoming Seminars

Jan 30 (Mon), 12:00 pm – 1:15 pm
Daniel Fried (CMU) @ Hackerman Hall B17
Feb 3 (Fri), 12:00 pm – 1:15 pm
Sasha Rush (Cornell University) ... @ Hackerman Hall B17
Abstract: Transformers are essential to pretraining. As we approach 5 years of BERT, the connection between attention as architecture and transfer learning remains key to this central thread in NLP. Other architectures such as CNNs and RNNs[...]
Feb 6 (Mon), 12:00 pm – 1:15 pm
Sharon Levy (University of Calif... @ Hackerman Hall B17
Abstract: While large language models have advanced the state of the art in natural language processing, these models are trained on large-scale datasets, which may include harmful information. Studies have shown that as a result, the models exhibit[...]
Feb 20 (Mon), 12:00 pm – 1:15 pm
Hanjie Chen (University of Virgi... @ Hackerman Hall B17
Abstract: Advanced neural language models have grown ever larger and more complex, pushing forward the limits of language understanding and generation while diminishing interpretability. The black-box nature of deep neural networks blocks humans from understanding[...]
Feb 27 (Mon), 12:00 pm – 1:15 pm
Saadia Gabriel (University of Wa... @ Hackerman Hall B17
Abstract: Understanding the implications underlying a text is critical to assessing its impact, in particular the social dynamics that may result from a reading of the text. This requires endowing artificial intelligence (AI) systems with[...]

Center for Language and Speech Processing