CLSP Student Seminar – David Mueller
Abstract Pretrained language models (LMs) encode implicit representations of knowledge in their parameters. Despite this observation, our best methods for interpreting these representations yield few actionable insights into how to manipulate this parameter space for […]
Abstract Any valuable NLP dataset has traditionally been shipped with crowdsourced categorical labels. Instructions for collecting these labels are easy to communicate, and the labels themselves are easy to annotate. However, as self-supervision-based methods […]
Abstract The field of NLP is in the midst of a disruptive shift, fueled most recently by the advent of large language models (LLMs), with impacts on our methodologies, funding, and public perception. While the […]
Abstract The use of NLP in the realm of financial technology is broad and complex, with applications ranging from sentiment analysis and named entity recognition to question answering. Large language models (LLMs) have been shown […]