9:00-9:20 EDT: Project overview: Tasks, approaches, datasets and metrics. (Alex Rudnicky and Markus Mueller)
9:20-9:35 EDT: Exploration of End-to-End SLU Models. (Xuandi Fu and Sahar Ghannay)
9:35-9:50 EDT: End-to-End Models – Dealing with Dearth of Data. (Bhuvan Agrawal, Muqiao Yang, Roshan Sharma and Ian Lane)
9:50-10:05 EDT: Beyond Utterances: Context-aware NLU with BERT. (Brandon Liang)
10:05-10:10 EDT: Stretch/Caffeine break
10:10-10:25 EDT: Dataset preparation for training modern dialogue systems. (Mario Rodriguez-Cantelar, Luis Fernando D’Haro and Rafael Banchs)
10:25-10:40 EDT: Reference-free Multi-dimensional Automatic Dialogue Evaluation. (Zhang Chen, Grandee Lee and Luis Fernando D’Haro)
10:40-10:55 EDT: Learning to predict dialogue breakdown from turn transition probabilities in embedded spaces. (Andrew Koh and Rafael Banchs)
10:55-11:10 EDT: Turn and dialogue level evaluation metrics for dialogue quality and AMT HIT designs. (Hugo Boulanger, Sam Davidson, Mario Rodriguez-Cantelar and Mathilde Veron)
11:10-11:25 EDT: Collection of large-scale human-machine dialogue data for public release. (Sam Davidson and Anant Kaushik)
11:25-11:30 EDT: Stretch/Caffeine break
11:30-11:45 EDT: Project summary and take-home messages. (Alex Rudnicky and Markus Mueller)
Abstract: How do humans efficiently learn new rules, causal laws, and mental algorithms, and how could AI systems do the same? From the perspective of human behavior, I will present results suggesting that representing knowledge[...]