Abstract: While large language models (LLMs) have revolutionized AI, their token-level processing contrasts sharply with the multi-level abstraction of human cognition. This talk explores moving beyond token-based manipulation to reasoning in a latent space. We introduce the[...]
Abstract: Current natural language processing systems rely heavily on large language models (LLMs), which are trained rather naively on large amounts of text using autoregressive next-token prediction. Often, they are fine-tuned to mimic human[...]