Large language models (LLMs) have ushered in exciting capabilities in language understanding and text generation, with systems like ChatGPT holding fluent dialogs with users that are almost indistinguishable from human conversation. While this has obviously raised conversational systems and chatbots to a new level, it also presents exciting new opportunities for building artificial agents with improved decision-making capabilities. Specifically, the ability to reason with language can allow us to build agents that can 1) execute complex action sequences to effect change in the world, 2) learn new skills by ‘reading’ in addition to ‘doing’, and 3) be more easily personalized and controlled in their behavior. In this talk, I will demonstrate how we can build such language-enabled agents that exhibit the above traits across various use cases such as multi-hop question answering, web interaction, and robotic tool manipulation. In the end, I will also discuss some dangers of using these LLM-based systems and some challenges that lie ahead in ensuring their safe use.
Karthik Narasimhan is an assistant professor in the Computer Science department at Princeton University and a co-director of the Princeton NLP group. His research spans the areas of natural language processing and reinforcement learning, with the goal of building intelligent agents that learn to operate in the world through both their own experience (“doing things”) and leveraging existing human knowledge (“reading about things”). Karthik received his PhD from MIT in 2017 and spent a year as a visiting research scientist at OpenAI contributing to the GPT language model, prior to joining Princeton in 2018. His research has been recognized by an NSF CAREER award, a Google Research Scholar Award, an Amazon Research Award (2019), a Bell Labs Prize runner-up award, and outstanding paper awards at EMNLP (2015, 2016) and NeurIPS (2022).