BEGIN:VCALENDAR VERSION:2.0 PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9// CALSCALE:GREGORIAN METHOD:PUBLISH X-FROM-URL:https://www.clsp.jhu.edu X-WR-TIMEZONE:America/New_York BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:20231105T020000 TZOFFSETFROM:-0400 TZOFFSETTO:-0500 RDATE:20241103T020000 TZNAME:EST END:STANDARD BEGIN:DAYLIGHT DTSTART:20240310T020000 TZOFFSETFROM:-0500 TZOFFSETTO:-0400 RDATE:20250309T020000 TZNAME:EDT END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:ai1ec-20117@www.clsp.jhu.edu DTSTAMP:20240328T180247Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nNeural sequence generation systems oftentimes generate sequences by searching for the most likely sequence under the learnt probability distribution. This assumes that the most likely sequence\, i.e. the mode\, under such a model must also be the best sequence it has to offer (often in a given context\, e.g. conditioned on a source sentence in translation). Recent findings in neural machine translation (NMT) show that the true most likely sequence oftentimes is empty under many state-of-the-art NMT models. This follows a large list of other pathologies and biases observed in NMT and other sequence generation models: a length bias\, larger beams degrading performance\, exposure bias\, and many more. Many of these works blame the probabilistic formulation of NMT or maximum likelihood estimation. We provide a different view on this: it is mode-seeking search\, e.g. beam search\, that introduces many of these pathologies and biases\, and such a decision rule is not suitable for the type of distributions learnt by NMT systems. We show that NMT models spread probability mass over many translations\, and that the most likely translation oftentimes is a rare event. We further show that translation distributions do capture important aspects of translation well in expectation. Therefore\, we advocate for decision rules that take into account the entire probability distribution and not just its mode. We provide one example of such a decision rule\, and show that this is a fruitful research direction.
\nBiography
\nI am an assistant professor (UD) in natural language processing at the Institute for Logic\, Language and Computation where I lead the Probabilistic Language Learning group.
\nMy work concerns the design of models and algorithms that learn to represent\, understand\, and generate language data. Examples of specific problems I am interested in include language modelling\, machine translation\, syntactic parsing\, textual entailment\, text classification\, and question answering.
\nI also develop techniques to approach general machine learning problems such as probabilistic inference\, gradient and density estimation.
\nMy interests sit at the intersection of disciplines such as statistics\, machine learning\, approximate inference\, global optimization\, formal languages\, and computational linguistics.
\n\n
DTSTART;TZID=America/New_York:20210419T120000 DTEND;TZID=America/New_York:20210419T131500 LOCATION:via Zoom SEQUENCE:0 SUMMARY:Wilker Aziz (University of Amsterdam) “The Inadequacy of the Mode in Neural Machine Translation” URL:https://www.clsp.jhu.edu/events/wilker-aziz-university-of-amsterdam/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2021\,April\,Aziz END:VEVENT BEGIN:VEVENT UID:ai1ec-20120@www.clsp.jhu.edu DTSTAMP:20240328T180247Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:
Abstract
\nRobotics@Google’s mission is to make robots useful in the real world through machine learning. We are excited about a new model for robotics\, designed for generalization across diverse environments and instructions. This model is focused on scalable data-driven learning\, which is task-agnostic\, leverages simulation\, learns from past experience\, and can be quickly adapted to work in the real-world through limited interactions. In this talk\, we’ll share some of our recent work in this direction in both manipulation and locomotion applications.
\nBiography
\nCarolina
Abstract
\nOne of the keys to success in machine learning applications is to improve each user’s personal experience via personalized models. A personalized model can be a more resource-efficient solution than a general-purpose model\, too\, because it focuses on a particular sub-problem\, for which a smaller model architecture can be good enough. However\, training a personalized model requires data from the particular test-time user\, which are not always available due to their private nature and technical challenges. Furthermore\, such data tend to be unlabeled as they can be collected only during the test time\, once after the system is deployed to user devices. One could rely on the generalization power of a generic model\, but such a model can be too computationally/spatially complex for real-time processing in a resource-constrained device. In this talk\, I will present some techniques to circumvent the lack of labeled personal data in the context of speech enhancement. Our machine learning models will require zero or few data samples from the test-time users\, while they can still achieve the personalization goal. To this end\, we will investigate modularized speech enhancement models as well as the potential of self-supervised learning for personalized speech enhancement. Because our research achieves the personalization goal in a data- and resource-efficient way\, it is a step towards a more available and affordable AI for society.
\nBiography
\nMinje Kim is an associate professor in the Dept. of Intelligent Systems Engineering at Indiana University\, where he leads his research group\, Signals and AI Group in Engineering (SAIGE). He is also an Amazon Visiting Academic\, consulting for Amazon Lab126. At IU\, he is affiliated with various programs and labs such as Data Science\, Cognitive Science\, Dept. of Statistics\, and Center for Machine Learning. He earned his Ph.D. in the Dept. of Computer Science at the University of Illinois at Urbana-Champaign. Before joining UIUC\, he worked as a researcher at ETRI\, a national lab in Korea\, from 2006 to 2011. Before then\, he received his Master’s and Bachelor’s degrees in the Dept. of Computer Science and Engineering at POSTECH (Summa Cum Laude) and in the Division of Information and Computer Engineering at Ajou University (with honor) in 2006 and 2004\, respectively. He is a recipient of various awards including the NSF CAREER Award (2021)\, IU Trustees Teaching Award (2021)\, IEEE SPS Best Paper Award (2020)\, and Google and Starkey’s grants for outstanding student papers in ICASSP 2013 and 2014\, respectively. He is an IEEE Senior Member and also a member of the IEEE Audio and Acoustic Signal Processing Technical Committee (2018-2023). He is serving as an Associate Editor for the EURASIP Journal of Audio\, Speech\, and Music Processing\, and as a Consulting Associate Editor for the IEEE Open Journal of Signal Processing. He is also a reviewer\, program committee member\, or area chair for the major machine learning and signal processing venues. He filed more than 50 patent applications as an inventor.
DTSTART;TZID=America/New_York:20221202T120000 DTEND;TZID=America/New_York:20221202T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Minje Kim (Indiana University) “Personalized Speech Enhancement: Data- and Resource-Efficient Machine Learning” URL:https://www.clsp.jhu.edu/events/minje-kim-indiana-university/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,December\,Kim END:VEVENT BEGIN:VEVENT UID:ai1ec-22422@www.clsp.jhu.edu DTSTAMP:20240328T180247Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nZipf’s law is commonly glossed by the aphorism “infrequent words are frequent\,” but in practice\, it has often meant that there are three types of words: frequent\, infrequent\, and out-of-vocabulary (OOV). Speech recognition solved the problem of frequent words in 1970 (with dynamic time warping). Hidden Markov models worked well for moderately infrequent words\, but the problem of OOV words was not solved until sequence-to-sequence neural nets de-reified the concept of a word. Many other social phenomena follow power-law distributions. The number of native speakers of the N’th most spoken language\, for example\, is 1.44 billion over N to the 1.09. In languages with sufficient data\, we have shown that monolingual pre-training outperforms multilingual pre-training. In less-frequent languages\, multilingual knowledge transfer can significantly reduce phone error rates. In languages with no training data\, unsupervised ASR methods can be proven to converge\, as long as the eigenvalues of the language model are sufficiently well separated to be measurable. Other systems of social categorization may follow similar power-law distributions. Disability\, for example\, can cause speech patterns that were never seen in the training database\, but not all disabilities need do so. The inability of speech technology to work for people with even common disabilities is probably caused by a lack of data\, and can probably be solved by finding better modes of interaction between technology researchers and the communities served by technology.
\nBiography
\nMark Hasegawa-Johnson is a William L. Everitt Faculty Fellow of Electrical and Computer Engineering at the University of Illinois in Urbana-Champaign. He has published research in speech production and perception\, source separation\, voice conversion\, and low-resource automatic speech recognition.
DTSTART;TZID=America/New_York:20221209T120000 DTEND;TZID=America/New_York:20221209T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Mark Hasegawa-Johnson (University of Illinois Urbana-Champaign) “Zipf’s Law Suggests a Three-Pronged Approach to Inclusive Speech Recognition” URL:https://www.clsp.jhu.edu/events/mark-hasegawa-johnson-university-of-illinois-urbana-champaign/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2022\,December\,Hasegawa-Johnson END:VEVENT BEGIN:VEVENT UID:ai1ec-23515@www.clsp.jhu.edu DTSTAMP:20240328T180247Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION:Abstract
\nHow important are different temporal speech modulations for speech recognition? We answer this question from two complementary perspectives. Firstly\, we quantify the amount of phonetic information in the modulation spectrum of speech by computing the mutual information between temporal modulations and frame-wise phoneme labels. Looking from another perspective\, we ask which speech modulations an Automatic Speech Recognition (ASR) system prefers for its operation. Data-driven weights are learned over the modulation spectrum and optimized for an end-to-end ASR task. Both methods unanimously agree that speech information is mostly contained in slow modulations. Maximum mutual information occurs around 3-6 Hz\, which also happens to be the range of modulations most preferred by the ASR. In addition\, we show that the incorporation of this knowledge into ASRs significantly reduces their dependency on the amount of training data.
\n\n
Learning How to Play With The Machines: Taking Stock of Where the Collaboration Between Computational and Social Science Stands
\nSpeakers: Jeff Gill\, Ernesto Calvo\, Hale Sirin and Antonios Anastasopoulos
DTSTART;TZID=America/New_York:20230407T120000 DTEND;TZID=America/New_York:20230407T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street SEQUENCE:0 SUMMARY:JHU CLSP APSA Roundtable on Learning How to Play with the Machines URL:https://www.clsp.jhu.edu/events/jhu-clsp-apsa-roundtable-on-learning-how-to-play-with-the-machines/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,APSA Roundtable END:VEVENT BEGIN:VEVENT UID:ai1ec-23586@www.clsp.jhu.edu DTSTAMP:20240328T180247Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20230410T120000 DTEND;TZID=America/New_York:20230410T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Ruizhe Huang URL:https://www.clsp.jhu.edu/events/student-seminar-ruizhe-huang/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,Huang END:VEVENT BEGIN:VEVENT UID:ai1ec-23588@www.clsp.jhu.edu DTSTAMP:20240328T180247Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nAdvances in open domain Large Language Models (LLMs) starting with BERT and more recently with GPT-4\, PaLM\, and LLaMA have facilitated dramatic improvements in conversational systems. These improvements include an unprecedented breadth of conversational interactions between humans and machines while maintaining and sometimes surpassing the accuracy of systems trained specifically for known\, closed domains. However\, many applications still require higher levels of accuracy than pre-trained LLMs can provide. There are many studies underway to accomplish this. Broadly speaking\, the methods assume the pre-trained models are fixed (due to cost/time)\, and instead look to various augmentation methods including prompting strategies and model adaptation/fine-tuning.
\nOne augmentation strategy leverages the context of the conversation. For example\, who are the participants and what is known about these individuals (personal context)\, what was just said (dialogue context)\, where is the conversation taking place (geo context)\, what time of day and season is it (time context)\, etc. A powerful form of context is the shared visual setting of the conversation between the human(s) and machine. The shared visual scene may be from a device (phone\, smart glasses) or represented on a screen (browser\, maps\, etc.) The elements in the visual context can be exploited by grounding the natural language conversational interaction\, thereby changing the priors of certain concepts and increasing the accuracy of the system. In this talk\, I will present some of my historical work in this area as well as my recent work in the AI Virtual Assistant (AVA) Lab at Georgia Tech.
\nBio
\nDr. Larry Heck is a Professor with a joint appointment in the School of Electrical and Computer Engineering and the School of Interactive Computing at the Georgia Institute of Technology. He holds the Rhesa S. Farmer Distinguished Chair of Advanced Computing Concepts and is a Georgia Research Alliance Eminent Scholar. He received the BSEE from Texas Tech University (1986)\, and MSEE and PhD EE from the Georgia Institute of Technology (1989\, 1991). He is a Fellow of the IEEE\, was inducted into the Academy of Distinguished Engineering Alumni at Georgia Tech\, and received the Distinguished Engineer Award from the Texas Tech University Whitacre College of Engineering. He was a Senior Research Engineer with SRI (1992-98)\, Vice President of R&D at Nuance (1998-2005)\, Vice President of Search and Advertising Sciences at Yahoo! (2005-2009)\, Chief Scientist of the Microsoft Speech products and Distinguished Engineer in Microsoft Research (2009-2014)\, Principal Scientist with Google Research (2014-2017)\, and CEO of Viv Labs and SVP at Samsung (2017-2021).
\n\n
Abstract
\nOur models achieve state-of-the-art performance and lay important groundwork towards realizing a universal translation system. At the same time\, we keep making open-source contributions for everyone to keep advancing the research for the languages they care about.
\nPaco is a Research Scientist Manager supporting translation teams in Meta AI (FAIR). He works in the field of machine translation with a focus on low-resource translation (e.g. NLLB\, FLORES) and the aim to break language barriers. He joined Meta in 2016. His research has been published in top-tier NLP venues like ACL and EMNLP. He was the co-chair of the Research track at AMTA (2020-2022). He has organized several research competitions focused on low-resource translation and data filtering. Paco obtained his PhD from the ITESM in Mexico\, was a visiting scholar at the LTI-CMU from 2008-2009\, and participated in DARPA’s GALE evaluation program. Paco was a post-doc and scientist at the Qatar Computing Research Institute in Qatar in 2012-2016.
DTSTART;TZID=America/New_York:20230417T120000 DTEND;TZID=America/New_York:20230417T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Paco Guzman (Meta AI) “Building a Universal Translation System to Break Down Language Barriers” URL:https://www.clsp.jhu.edu/events/paco-guzman-meta-ai/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,Guzman END:VEVENT BEGIN:VEVENT UID:ai1ec-23592@www.clsp.jhu.edu DTSTAMP:20240328T180247Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nLarge language models (LLMs) have ushered in exciting capabilities in language understanding and text generation\, with systems like ChatGPT holding fluent dialogs with users and being almost indistinguishable from humans. While this has obviously raised conversational systems and chatbots to a new level\, it also presents exciting new opportunities for building artificial agents with improved decision making capabilities. Specifically\, the ability to reason with language can allow us to build agents that can 1) execute complex action sequences to effect change in the world\, 2) learn new skills by ‘reading’ in addition to ‘doing’\, and 3) allow for easier personalization and control over their behavior. In this talk\, I will demonstrate how we can build such language-enabled agents that exhibit the above traits across various use cases such as multi-hop question answering\, web interaction\, and robotic tool manipulation. In the end\, I will also discuss some dangers of using these LLM-based systems and some challenges that lie ahead in ensuring their safe use.
\nBiography
\nKarthik Narasimhan is an assistant professor in the Computer Science department at Princeton University and a co-Director of the Princeton NLP group. His research spans the areas of natural language processing and reinforcement learning\, with the goal of building intelligent agents that learn to operate in the world through both their own experience (“doing things”) and leveraging existing human knowledge (“reading about things”). Karthik received his PhD from MIT in 2017\, and spent a year as a visiting research scientist at OpenAI contributing to the GPT language model\, prior to joining Princeton in 2018. His research has been recognized by an NSF CAREER award\, a Google Research Scholar Award\, an Amazon research award (2019)\, the Bell Labs runner-up prize\, and outstanding paper awards at EMNLP (2015\, 2016) and NeurIPS (2022).
DTSTART;TZID=America/New_York:20230421T120000 DTEND;TZID=America/New_York:20230421T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Karthik Narasimhan (Princeton University) “Towards General-Purpose Language-Enabled Agents: Machines that can Read\, Think and Act” URL:https://www.clsp.jhu.edu/events/karthik-narasimhan-princeton-university/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,Narasimhan END:VEVENT BEGIN:VEVENT UID:ai1ec-23606@www.clsp.jhu.edu DTSTAMP:20240328T180247Z CATEGORIES;LANGUAGE=en-US:Student Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20230424T120000 DTEND;TZID=America/New_York:20230424T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Student Seminar – Brian Lu URL:https://www.clsp.jhu.edu/events/student-seminar-brian-lu/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,Lu END:VEVENT BEGIN:VEVENT UID:ai1ec-23608@www.clsp.jhu.edu DTSTAMP:20240328T180247Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nAutomated analysis of student writing has the potential to provide alternatives to selected-response questions such as multiple choice\, and to enable teachers and instructors to assess students’ reasoning skills based on their long-form writing. Further\, automated support to assess both short answers and long passages could provide students with a smoother trajectory towards mastery of written communication. Our methods focus on the specific ideas students express to support formative assessment through different kinds of feedback\, which aims to scaffold their abilities to reason and communicate. In this talk I review our work in the PSU NLP lab on methods for automated assessment of different forms of student writing\, from younger and older students. I will briefly illustrate highly curated datasets created in collaboration with researchers in STEM education\, results from deployment of an older content analysis tool on middle school physics essays\, and very preliminary results on assessment of college students’ physics lab reports. I will also present our current work on short answer assessment using a novel recurrent relation network that incorporates contrastive learning.
\nBio
\nBecky Passonneau has been a Professor in the Department of Computer Science and Engineering at Penn State University since 2016\, when she joined as the first NLP researcher. Since that time the NLP faculty has grown to include Rui Zhang and Wenpeng Yin. Becky’s research in natural language processing addresses computational pragmatics\, meaning the investigation of language as a system of interactive behavior that serves a wide range of purposes. She received her PhD in Linguistics from the University of Chicago in 1985\, and worked at several academic and industry research labs before joining Penn State. Her work is reported in over 140 publications in journals and refereed conference proceedings\, and has been funded through 27 sponsored projects from 16 sources\, including government agencies\, corporate sponsors\, corporate gifts\, and foundations.
DTSTART;TZID=America/New_York:20230428T120000 DTEND;TZID=America/New_York:20230428T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Becky Passonneau (Penn State University) “Automated Support to Scaffold Students’ Short- and Long-form STEM Writing” URL:https://www.clsp.jhu.edu/events/becky-passonneau-penn-state-university-automated-support-to-scaffold-students-short-and-long-form-stem-writing/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,April\,Passonneau END:VEVENT BEGIN:VEVENT UID:ai1ec-24167@www.clsp.jhu.edu DTSTAMP:20240328T180247Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nPre-trained speech representation models have become ubiquitous in speech processing over the past few years. They have both improved the state of the art and made it feasible to learn task-specific models with very little labeled data. However\, it is not well understood what linguistic information is encoded in pre-trained models and how best to apply them to downstream tasks. In this talk I will describe recent work that begins to build an understanding of the layer-wise information learned by pre-trained speech models. We consider a number of popular pre-trained models and investigate the extent to which their layers encode spectral\, phonetic\, and word-level information. The results of these analyses also suggest some ways to improve or simplify the application of pre-trained models for downstream tasks. Finally\, I will describe our efforts to benchmark model performance on a variety of spoken language understanding tasks\, in order to broaden our understanding of the capabilities of state-of-the-art models.
\nThis talk is based in part on work presented in
\nA. Pasad et al.\, “Comparative layer-wise analysis of self-supervised speech models\,” ICASSP 2023.
\nA. Pasad et al.\, “What do self-supervised speech models know about words?\,” arXiv:2307.00162\, 2023.
\nS. Shon et al.\, “SLUE Phase-2: A Benchmark Suite of Diverse Spoken Language Understanding Tasks\,” ACL 2023.
\nBio
\nKaren Livescu is a Professor at TTI-Chicago. She completed her PhD at MIT in 2005. She is an ISCA Fellow and a recent IEEE Distinguished Lecturer. She has served as a program chair/co-chair for ICLR\, Interspeech\, and ASRU\, and is an Associate Editor for TACL and IEEE T-PAMI. Her group’s work spans a variety of topics in spoken\, written\, and signed language processing.
DTSTART;TZID=America/New_York:20231201T120000 DTEND;TZID=America/New_York:20231201T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Karen Livescu (Toyota Technological Institute at Chicago) “What Do Pre-Trained Speech Representation Models Know? Layer-Wise Analysis and Benchmarking” URL:https://www.clsp.jhu.edu/events/karen-livescu-toyota-technological-institute-at-chicago/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,December\,Livescu END:VEVENT BEGIN:VEVENT UID:ai1ec-24169@www.clsp.jhu.edu DTSTAMP:20240328T180247Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION:Abstract
\nFoundation models\, including ChatGPT and its many variants\, have come into prominence in the natural language processing (NLP) community thanks to the ubiquity of text data readily available on the internet and the design of modern transformer architectures that can effectively learn from such data. However\, the development of a foundation model for sequential decision-making (e.g.\, reinforcement learning\, planning) is faced with additional challenges not present in NLP. In this talk\, we discuss some of these challenges with the hope of informing future investments that funding agencies and the academic community should engage in. The problem of transfer learning in the context of sequential decision-making is also discussed and constitutes one of the challenges that foundation models must address.
\nBio
\nAlvaro Velasquez is a program manager at the Defense Advanced Research Projects Agency (DARPA)\, where he currently leads programs on neuro-symbolic AI. Before that\, Alvaro oversaw the machine intelligence portfolio for the Information Directorate of the Air Force Research Laboratory (AFRL). Alvaro is a recipient of the distinguished paper award from AAAI\, best paper and patent awards from AFRL\, and the National Science Foundation Graduate Research Fellowship. He has authored over 70 papers and two patents and serves as Associate Editor of the IEEE Transactions on Artificial Intelligence.
DTSTART;TZID=America/New_York:20231204T120000 DTEND;TZID=America/New_York:20231204T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Alvaro Velasquez (DARPA) “Foundation Models and the Transfer of Embodied Autonomy” URL:https://www.clsp.jhu.edu/events/alvaro-velasquez/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2023\,December\,Velasquez END:VEVENT BEGIN:VEVENT UID:ai1ec-24491@www.clsp.jhu.edu DTSTAMP:20240328T180247Z CATEGORIES;LANGUAGE=en-US:Seminars CONTACT: DESCRIPTION: DTSTART;TZID=America/New_York:20240401T120000 DTEND;TZID=America/New_York:20240401T131500 LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218 SEQUENCE:0 SUMMARY:Yuan Gong URL:https://www.clsp.jhu.edu/events/yuan-gong/ X-COST-TYPE:free X-TAGS;LANGUAGE=en-US:2024\,April\,Gong END:VEVENT END:VCALENDAR