People love to argue. In recent years, Artificial Intelligence has made great advances in modelling natural language argumentation. While analysing and creating arguments is a highly complex (and enjoyable!) task at which even humans are not good, let alone perfect, we describe our natural language processing (NLP) research to identify arguments, their stance and aspects; to aggregate arguments into topically coherent clusters; and finally, even to generate new arguments given a desired topic, aspect and stance. The talk tells the story of how the ArgumenText project was conceptualized as a set of novel NLP tasks and highlights their main research outcomes. Argument mining has a tremendous number of possible applications, of which the talk discusses a few selected ones.
Iryna Gurevych (PhD 2003, U. Duisburg-Essen, Germany) is a professor of Computer Science and director of the Ubiquitous Knowledge Processing (UKP) Lab at the Technical University (TU) of Darmstadt in Germany. She joined TU Darmstadt in 2005 (tenured as full professor in 2009). Her main research interest is machine learning for large-scale language understanding, including text analysis for the social sciences and humanities. She is one of the co-founders of the field of computational argumentation, which has many applications, such as identifying fake news and supporting decision-making. Iryna’s work has received numerous awards, e.g. a highly competitive Lichtenberg Professorship Award from the Volkswagen Foundation and a DFG Emmy Noether Young Researcher’s Excellence Career Award. Iryna was elected President of SIGDAT, one of the most important scientific bodies in the ACL community. She was program co-chair of ACL 2018, the Annual Meeting of the Association for Computational Linguistics and the community’s most important conference, and she is General Chair of *SEM 2020, the 9th Joint Conference on Lexical and Computational Semantics.
Never before was it so easy to write a powerful NLP system, and never before did such a system have so much potential impact. However, these systems are now increasingly used in applications they were not intended for, by people who treat them as interchangeable black boxes. The results range from simple performance drops to systematic biases against various user groups.
Fueling deep learning models with big, curated datasets can yield unprecedented results in recognizing objects, scenes, human activities, and attributes. However, as we continue to push the boundary of visual recognition and the number of classes scales up, long tails become the elephant in the room, since object frequency in the real world often follows a power law. That is the challenge of long-tailed learning. During inference in the wild, model robustness becomes crucial, because in-the-wild data often falls outside the training distribution (e.g., adversarial examples, data from new domains, etc.).
In this talk, I will present our recent work on long-tailed visual recognition and compound domain adaptation. We develop novel methods by drawing inspiration from meta-learning, memory networks, adversarial training, and curriculum learning. I will also present empirical studies that verify our approaches’ effectiveness and demonstrate their application to query-efficient black-box adversarial attacks.
Boqing Gong is a research scientist at Google, Seattle, and a principal investigator at ICSI, Berkeley. His research in machine learning and computer vision focuses on sample-efficient learning (e.g., domain adaptation, few-shot, reinforcement, webly-supervised, and self-supervised learning) and the visual analytics of objects, scenes, human activities, and their attributes. Before joining Google in 2019, he worked at Tencent and was a tenure-track Assistant Professor at the University of Central Florida (UCF). He received an NSF CRII award in 2016 and an NSF BIGDATA award in 2017, both of which were the first of their kind ever granted to UCF. He is/was a (senior) area chair of NeurIPS, ICML, CVPR, ICCV, ECCV, AAAI, AISTATS, and WACV. He earned a Ph.D. degree in 2015 at the University of Southern California, where the Viterbi Fellowship partially supported his work.
Our world faces increasingly complex challenges: we have destabilized the climate, have not beaten all diseases, and have not spread the values of democracy and freedom to large parts of the globe, where violence and riots reign supreme. Everyone would agree the world must be fixed in our generation. But in order to take action and build a plan, we need to see the complete picture and empower decision makers with tools to make those changes. This decade, we have finally reached a critical mass of data to facilitate the creation of such tools.
My work is inspired by a quote attributed to Mark Twain: “The past does not repeat itself, but it rhymes.” Although future events have unique circumstances, they typically follow familiar past patterns. Over the past few years, I have devoted myself to developing prediction techniques. My system inferred that cholera outbreaks in landlocked areas are more likely to occur following storms, especially when preceded by a long drought. Another inference is that genocides tend to occur after local opinion makers describe minority groups as pests. Such patterns are composed of several abstractions, span variable-length temporal extents, and must be selected from a large number of possible causes. The algorithms I developed deal with the complexity of discovering such patterns.
Large-scale digital histories, social and real-time media, and human web behavior are harvested and augmented with human knowledge mined from the web to afford real-time estimates of the likelihood of future events. Most recently, these algorithms accurately predicted the first cholera outbreak reported in Cuba in fifty years. These kinds of actionable predictions, which enable preventive measures, have drawn the attention of a UN genocide-prevention organization and the Gates Foundation, and they illustrate the vast potential for real impact on the state of humanity.
In the last few years I have been focusing on applying similar techniques to healthcare and pharma, leveraging large amounts of data from electronic medical records (EMRs) and other medical research results in a quest to create an AI system for automated medical research and breakthroughs.
Dr. Kira Radinsky is the chairperson and CTO of Diagnostic Robotics, where the most advanced artificial intelligence technologies are harnessed to make healthcare better, cheaper, and more widely available. Dr. Radinsky founded SalesPredict, acquired by eBay in 2016, and served as eBay’s Chief Scientist (Israel). She gained international recognition for her work at the Technion and Microsoft Research developing predictive algorithms that recognize the early warning signs of globally impactful events, such as disease epidemics and political unrest. In 2013, she was named one of MIT Technology Review’s 35 Young Innovators Under 35, and in 2015 Forbes included her in its “30 Under 30 Rising Stars in Enterprise Tech” list. She is a frequent presenter at global tech and industry conferences, including TEDx, Wired, Strata Data Science, and TechCrunch, and she publishes in HBR. Radinsky also serves as a board member of the Israel Securities Authority and on the technology board of HSBC bank. She also holds a visiting professor position at the Technion, focusing on the application of predictive data mining in medicine.