Lara J. Martin (University of Maryland, Baltimore County) “Neurosymbolic AI or: How I Learned to Stop Worrying and Love the Large Language Model”
3400 N. Charles Street, Baltimore, MD 21218
Abstract
Large language models like ChatGPT have shown extraordinary writing abilities. While impressive at first glance, large language models are not perfect and often make mistakes that humans would not. The core architecture behind ChatGPT does not differ substantially from that of earlier neural networks and, as a consequence, carries some of the same limitations. My work combines neural networks like those behind ChatGPT with symbolic methods from early AI, exploring how these two families of methods can be joined to create more robust AI. I discuss some of the neurosymbolic methods I have used for applications in story generation and understanding, with the goal of eventually creating AI that can play Dungeons & Dragons. I also discuss pain points I have found in improving accessible communication and show how large language models can supplement such communication.
Biography