Archived Seminars by Year


2001

February 6, 2001

“Geometric Source Separation: Merging convolutive source separation with geometric beamforming”   Video Available

Lucas Parra, Sarnoff Corporation

[abstract] [biography]

Abstract

Blind source separation of broadband signals in a multi-path environment remains a difficult problem. Robustness has been limited due to frequency permutation ambiguities. In principle, increasing the number of sensors allows improved performance but also introduces additional degrees of freedom in the separating filters that are not fully determined by separation criteria. We propose here to further shape the filters and improve the robustness of blind separation by including geometric information such as sensor positions and the assumption of localized sources. This allows us to combine blind source separation with notions from adaptive and geometric beamforming, leading to a number of novel algorithms that could be termed collectively "geometric source separation".
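
The central idea above, constraining blind separation filters with geometric knowledge, can be illustrated with a small sketch. The code below is not Parra's algorithm; it is only a hedged interpretation of the geometric ingredient: using assumed sensor positions and source directions to build steering vectors and a beamforming-style initialization of the per-frequency demixing matrix, which a convolutive separation criterion would then refine. The array geometry, angles, and function names are hypothetical example values.

import numpy as np

def steering_vector(freq_hz, sensor_pos_m, angle_rad, c=343.0):
    """Far-field steering vector of a linear array for one frequency/direction."""
    delays = sensor_pos_m * np.sin(angle_rad) / c        # per-sensor delays (s)
    return np.exp(-2j * np.pi * freq_hz * delays)        # shape (n_sensors,)

def geometric_init(freq_hz, sensor_pos_m, source_angles_rad):
    """Initialize the demixing matrix W(f) as the pseudo-inverse of the steering
    matrix D(f), so that W(f) @ D(f) equals the identity for a full-rank steering
    matrix, i.e. each output is steered toward one assumed source direction."""
    D = np.stack([steering_vector(freq_hz, sensor_pos_m, a)
                  for a in source_angles_rad], axis=1)   # (n_sensors, n_sources)
    return np.linalg.pinv(D)                             # (n_sources, n_sensors)

# Hypothetical setup: 4-microphone linear array with 5 cm spacing and two
# sources assumed at -30 and +45 degrees. A blind separation update would then
# refine W in each frequency bin, starting from (or constrained toward) this.
sensors = np.arange(4) * 0.05
W = geometric_init(1000.0, sensors, np.deg2rad([-30.0, 45.0]))
print(W.shape)   # (2, 4)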

Speaker Biography

Lucas C. Parra was born in Tucumán, Argentina. He received his Diploma in Physics in 1992 and his Doctorate in Physics in 1996 from the Ludwig-Maximilians-Universität, Munich, Germany. From 1992 to 1995 he worked at the Neural Networks Group at Siemens Central Research in Munich, Germany and at the Machine Learning Department at Siemens Corporate Research (SCR) in Princeton, NJ. During 1995-1997 he was a member of the Imaging Department at SCR and worked on medical image processing and novel reconstruction algorithms for nuclear medicine. Since 1997 he has been with Sarnoff Corp. His current research concentrates on probabilistic models in various image and signal processing areas.

February 27, 2001

“What's up with pronunciation variation? Why it's so hard to model and what to do about it”   Video Available

Dan Jurafsky, University of Colorado, Boulder Department of Linguistics, Department of Computer Science, Institute of Cognitive Science, & Center for Spoken Language Research

[abstract] [biography]

Abstract

Automatic recognition of human-to-machine speech has made fantastic progress in the last decades, and current systems achieve word error rates below 5% on many tasks. But recognition of human-to-human speech is much harder; error rates are often 30% or even higher. Many studies of human-to-human speech have shown that pronunciation variation is a key factor contributing to these high error rates. Previous models of pronunciation variation, however, have not had significant success in reducing error rates. In order to help understand why gains in pronunciation modeling have proven so elusive, we investigated which kinds of pronunciation variation are well captured by current triphone models, and which are not. By examining the change in behavior of a recognizer as it receives further triphone training, we show that many of the kinds of variation which previous pronunciation models attempt to capture, such as phone substitution or phone reduction due to neighboring phonetic contexts, are already well captured by triphones. Our analysis suggests rather that syllable deletion caused by non-phonetic factors is a major cause of difficulty for recognizers. We then investigated a number of such non-phonetic factors in a large database of phonetically hand-transcribed words from the Switchboard corpus. Using linear and logistic regression to control for phonetic context and rate of speech, we did indeed find very significant effects of non-phonetic factors. For example, words have extraordinarily long and full pronunciations when they occur near "disfluencies" (pauses, filled pauses, and repetitions), or initially or finally in turns or utterances, while words which have a high unigram, bigram, or reverse bigram (given the following word) probability have much more reduced pronunciations. These factors must be modeled with lexicons based on dynamic pronunciation probabilities; I describe our work-in-progress on building such a lexicon. This talk describes joint work with Wayne Ward, Alan Bell, Eric Fosler-Lussier, Dan Gildea, Cynthia Girand, Michelle Gregory, Keith Herold, Zhang Jianping, William D. Raymond, Zhang Sen, and Yu Xiuyang.
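
One of the predictability factors mentioned above, the reverse bigram (the probability of a word given the word that follows it), is easy to make concrete. The sketch below estimates it by relative frequency over a tiny invented word list; it only illustrates the measure, not the regression analysis or the Switchboard data used in the talk, and all names and the corpus are made up.

from collections import Counter

corpus = "i do not know i do not think so you know i think".split()

next_counts = Counter(zip(corpus[1:], corpus[:-1]))   # (following word, word)
follow_totals = Counter(corpus[1:])                   # counts of following words

def reverse_bigram(word, following):
    """Relative-frequency estimate of P(word | following word)."""
    if follow_totals[following] == 0:
        return 0.0
    return next_counts[(following, word)] / follow_totals[following]

# "know" is highly predictable before "i" in this toy corpus, so under the
# hypothesis above it would tend to have a more reduced pronunciation.
print(reverse_bigram("know", "i"))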

Speaker Biography

Dan Jurafsky is an assistant professor in the Linguistics and Computer Science departments, the Institute of Cognitive Science, and the Center for Spoken Language Research at the University of Colorado, Boulder. He was last at Hopkins for the JHU Summer 1997 Workshop managing the dialog act modeling group. Dan is the author with Jim Martin of the recent Prentice Hall textbook "Speech and Language Processing", and is teaching speech synthesis and recognition this semester at Boulder. Dan also plays the drums in mediocre pop bands and the corpse in local opera productions, and is currently working on his recipe for "Three Cups Chicken".

March 6, 2001

“Computational Anatomy: Computing Metrics on Anatomical Shapes”   Video Available

Mirza Faisal Beg, Department of Biomedical Engineering, Johns Hopkins University

[abstract] [biography]

Abstract

In this talk, I will present the problem of quantifying anatomical shape as represented in an image within the framework of the deformable template model. Briefly, the deformable template approach involves selecting a representative shape to serve as the reference, or template, representing prior knowledge of the shape of the anatomical sub-structures to be characterized, and comparing the anatomical shapes represented in given images, called the targets, to the image of the template. Comparison is done by computing extremely detailed, high-dimensional diffeomorphisms (smooth and invertible transformations) as a flow between the images that deforms the template image to match the target image. Such a diffeomorphic transformation is computed by minimizing a cost comprising a term representing the energy of the velocity of the flow field and a term representing the amount of mismatch between the images being compared. The construction of diffeomorphisms between the images allows metrics to be calculated for comparing shapes represented in image data. Transformations "far" from the identity represent larger deviations in shape from the template than those "close" to the identity transformation. The minimization procedure used to compute the diffeomorphic transformations is implemented via a standard steepest-descent technique. I will show some preliminary results on image matching and on the metrics computed on mitochondrial and hippocampal shapes using this approach. Possible clinical applications of this work include the diagnosis of neuropsychiatric disorders such as Alzheimer's disease, schizophrenia, and epilepsy by quantifying shape changes in the hippocampus.
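
As a rough illustration of the kind of cost being minimized, the sketch below solves a drastically simplified, one-dimensional version of the matching problem: a displacement field is found by steepest descent on a cost with a data-mismatch term and a deformation-smoothness term. It is not the high-dimensional diffeomorphic flow computation described in the talk; the signals, weights, and step sizes are made-up toy values.

import numpy as np

x = np.linspace(0.0, 1.0, 200)
template = np.exp(-((x - 0.45) / 0.15) ** 2)   # toy "template" image (1-D)
target   = np.exp(-((x - 0.55) / 0.15) ** 2)   # toy "target" image (1-D)

def mismatch(u):
    """Mean squared difference between the warped template and the target."""
    return float(np.mean((np.interp(x + u, x, template) - target) ** 2))

u = np.zeros_like(x)            # displacement field (identity transformation)
lam, step = 1e-4, 0.02          # smoothness weight and gradient-descent step
print("before:", mismatch(u))

grad_template = np.gradient(template, x)
for _ in range(800):
    warped = np.interp(x + u, x, template)
    # gradient of 0.5*||warped - target||^2 w.r.t. u (chain rule through the warp)
    data_grad = (warped - target) * np.interp(x + u, x, grad_template)
    # gradient of the smoothness term 0.5*lam*||du/dx||^2 is -lam * d2u/dx2
    smooth_grad = -lam * np.gradient(np.gradient(u, x), x)
    u -= step * (data_grad + smooth_grad)

print("after:", mismatch(u))    # should be noticeably smaller than "before"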

Speaker Biography

Mirza Faisal Beg, Ph.D. Candidate, Center for Imaging Science, Department of Biomedical Engineering, Johns Hopkins University

March 13, 2001

“Latent-Variable Representations for Speech Processing and Research”   Video Available

Miguel A. Carreira-Perpinan, Georgetown Institute for Computational and Cognitive Sciences, Georgetown University Medical Center

[abstract] [biography]

Abstract

Continuous latent variable models are probabilistic models that represent a distribution in a high-dimensional Euclidean space using a small number of continuous, latent variables. Examples include factor analysis, the generative topographic mapping (GTM) and independent factor analysis. This type of model is well suited for dimensionality reduction and sequential data reconstruction. In the first part of this talk I will introduce the theory of continuous latent variable models and show an example of their application to the dimensionality reduction of electropalatographic (EPG) data. In the second part I will present a new method for missing data reconstruction of sequential data that includes as a particular case the inversion of many-to-one mappings. The method is based on multiple pointwise reconstruction and constraint optimisation. Multiple pointwise reconstruction uses a Gaussian mixture joint density model for the data, conveniently implemented with a nonlinear continuous latent variable model (GTM). The modes of the conditional distribution of missing values given present values at each point in the sequence represent local candidate reconstructions. A global sequence reconstruction is obtained by efficiently optimising a constraint, such as continuity or smoothness, with dynamic programming. I derive two algorithms for exhaustive mode finding in Gaussian mixtures, based on gradient-quadratic search and fixed-point search, respectively, as well as estimates of error bars for each mode and a measure of distribution sparseness. I will demonstrate the method with synthetic data for a toy example and a robot arm inverse kinematics problem, and describe potential applications in speech, including the acoustic-to-articulatory mapping problem, audiovisual mappings for speech recognition and recognition of occluded speech.
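
The dynamic-programming step of the reconstruction method can be sketched independently of the latent variable model. In the fragment below, the per-frame candidate values stand in for the modes of p(missing | present) that GTM would supply, and a quadratic continuity penalty stands in for the constraint being optimised; everything here is an illustrative assumption rather than the talk's actual implementation.

import numpy as np

def reconstruct(candidates, scores, smooth_weight=1.0):
    """candidates[t], scores[t]: equal-length lists for frame t (scores play the
    role of -log p(mode)). Minimizes sum_t scores[t][k_t] plus a quadratic
    continuity penalty between consecutive chosen values, by dynamic programming."""
    T = len(candidates)
    cost = [np.asarray(scores[0], dtype=float)]
    back = []
    for t in range(1, T):
        cur = np.asarray(candidates[t])[:, None]
        prev = np.asarray(candidates[t - 1])[None, :]
        trans = smooth_weight * (cur - prev) ** 2               # (n_cur, n_prev)
        total = np.asarray(scores[t])[:, None] + trans + cost[-1][None, :]
        back.append(np.argmin(total, axis=1))                   # best predecessor
        cost.append(np.min(total, axis=1))
    k = int(np.argmin(cost[-1]))
    path = [k]
    for t in range(T - 2, -1, -1):
        k = int(back[t][k])
        path.append(k)
    path.reverse()
    return [candidates[t][path[t]] for t in range(T)]

# Two plausible "branches" (+sin and -sin) per frame; the continuity penalty
# makes the DP pick one branch consistently rather than hopping between them.
t = np.linspace(0, np.pi, 20)
cands = [[np.sin(v), -np.sin(v)] for v in t]
print(reconstruct(cands, [[0.0, 0.0]] * len(cands)))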

Speaker Biography

Miguel A. Carreira-Perpinan is a postdoctoral fellow at the Georgetown Institute for Computational and Cognitive Sciences, Georgetown University Medical Center. He has university degrees in computer science and in physics (Technical University of Madrid, Spain, 1991) and a PhD in computer science (University of Sheffield, UK, 2001). In 1993-94 he worked at the European Space Agency in Darmstadt, Germany, on real-time simulation of satellite thermal subsystems. His current research interests are statistical pattern recognition and computational neuroscience.

March 27, 2001

“Learning Probabilistic and Lexicalized Grammars for Natural Language Processing”   Video Available

Rebecca Hwa, University of Maryland

[abstract] [biography]

Abstract

This talk addresses two questions: what are the properties of a good grammar representation for natural language processing applications, and how can such grammars be constructed automatically and efficiently? I shall begin by describing a formalism called Probabilistic Lexicalized Tree Insertion Grammars (PLTIGs), which has several linguistically motivated properties that are helpful for processing natural languages. Next, I shall present a learning algorithm that automatically induces PLTIGs from human-annotated text corpora. I have conducted empirical studies showing that a trained PLTIG compares favorably with other formalisms on several kinds of tasks. Finally, I shall discuss ways of making grammar induction more efficient. In particular, I want to reduce the dependency of the induction process on human-annotated training data. I will show that by applying a learning technique called sample selection to grammar induction, we can significantly decrease the number of training examples needed, thereby reducing the human effort spent on annotating training data.

Speaker Biography

Rebecca Hwa is currently a postdoctoral research fellow at the University of Maryland, College Park. Her research interests include natural language processing, machine learning, and human-computer interaction.

April 10, 2001

“Deterministic Annealing for Clustering, Compression, Classification, Regression, and Speech Recognition”

Kenneth Rose, Department of Electrical and Computer Engineering, University of California at Santa Barbara

[abstract] [biography]

Abstract

The deterministic annealing approach to clustering and its extensions have demonstrated substantial performance improvement over standard supervised and unsupervised learning methods on a variety of important problems including compression, estimation, pattern recognition and classification, and statistical regression. It has found applications in a broad spectrum of disciplines ranging from various engineering fields to physics, biology, medicine and economics. The method offers three important features: ability to avoid many poor local optima; applicability to many different structures/architectures; and ability to minimize the right cost function even when its gradients vanish almost everywhere as in the case of the empirical classification error. It is derived within a probabilistic framework from basic information theoretic principles. The application-specific cost is minimized subject to a constraint on the randomness (Shannon entropy) of the solution, which is gradually lowered. We emphasize intuition gained from analogy to statistical physics, where this is an annealing process that avoids many shallow local minima of the specified cost and, at the limit of zero "temperature", produces a non-random (hard) solution. Phase transitions in the process correspond to controlled increase in model complexity. Alternatively, the method is derived within rate-distortion theory, where the annealing process is equivalent to computation of Shannon's rate-distortion function, and the annealing temperature is inversely proportional to the slope of the curve. This provides new insights into the method and its performance, as well as new insights into rate-distortion theory itself. The basic algorithm is extended by incorporating structural constraints to allow optimization of numerous popular structures including vector quantizers, decision trees, multilayer perceptrons, radial basis functions, mixtures of experts and hidden Markov models. Experimental results show considerable performance gains over standard structure-specific and application-specific training methods. After covering the basics, the talk will emphasize the speech recognition applications of the approach, including a brief summary of work in progress.
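
For readers unfamiliar with the basic algorithm, the fragment below is a minimal sketch of deterministic-annealing clustering on synthetic one-dimensional data: soft (Gibbs) assignments at temperature T, centroid re-estimation, and a gradual lowering of T toward a hard solution. Phase-transition detection, codevector splitting, and the structure-constrained extensions discussed in the talk are omitted, and all numerical settings are arbitrary toy choices.

import numpy as np

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 0.3, (100, 1)),
                       rng.normal(+2.0, 0.3, (100, 1))])
# two codevectors started (almost) on top of each other at the data mean
centers = np.tile(data.mean(axis=0), (2, 1)) + rng.normal(0, 1e-3, (2, 1))

T = 8.0                                                 # initial (high) temperature
while T > 0.01:
    d2 = ((data[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)   # (N, K)
    g = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / T)
    p = g / g.sum(axis=1, keepdims=True)                # soft (Gibbs) assignments
    centers = (p[:, :, None] * data[:, None, :]).sum(axis=0) / p.sum(axis=0)[:, None]
    T *= 0.9                                            # slow cooling schedule

print(np.sort(centers.ravel()))   # the two centers should end up near -2 and +2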

Speaker Biography

Kenneth Rose received the Ph.D. degree in electrical engineering from Caltech in 1991. He then joined the faculty of the Department of Electrical and Computer Engineering, University of California at Santa Barbara. His research interests are in information theory, source and channel coding, speech and general pattern recognition, image coding and processing, and nonconvex optimization in general. He currently serves as the editor for source-channel coding for the IEEE Transactions on Communications. In 1990 he received the William R. Bennett Prize-Paper Award from the IEEE Communications Society for the best paper in the Transactions.

April 17, 2001

“Acoustic-optical Phonetics and Audiovisual Speech Perception”

Lynne Bernstein

[abstract]

Abstract

Several sources of behavioral evidence show that speech perception is audiovisual when acoustic and optical speech signals are afforded to the perceiver. The McGurk effect and enhancements to auditory speech intelligibility in noise are two well-known examples. In comparison with acoustic phonetics and auditory speech perception, however, relatively little is known about optical phonetics and visual speech perception. Likewise, how optical and acoustic signals are related and how they are integrated perceptually remains an open question. We have been studying relationships between kinematic and acoustic recordings of speech. The kinematic recordings were made with an optical recording system that tracked movements on talkers' faces and with a magnetometer system that simultaneously tracked tongue and jaw movements. Speech samples included nonsense syllables and sentences from four talkers, prescreened for visual intelligibility. Mappings among the kinematic and acoustic signals show a perhaps surprisingly high degree of correlation. However, demonstrations of correlations in speech signals are not evidence about perceptual mechanisms responsible for audiovisual integration. Perceptual evidence from McGurk experiments has been used to hypothesize early phonetic integration of visual and auditory speech information, even though some of these experiments have also shown that the effect occurs despite relatively long crossmodal temporal asynchronies. The McGurk effect can be made to occur when acoustic /ba/ is combined in synchrony with a visual /ga/, typically resulting in the perceiver reporting having heard /da/. To investigate the time course and cortical location of audiovisual integration, we obtained event-related potentials (ERPs) from twelve adults, prescreened for McGurk susceptibility. Stimuli were presented in an oddball paradigm to evoke the mismatch negativity (MMN), a neurophysiological discrimination measure, most robustly demonstrated with acoustic contrasts. Conditions were audiovisual McGurk stimuli, visual-only stimuli from the McGurk condition, and auditory stimuli corresponding to the McGurk condition percepts (/ba/-/da/). The magnitude (area) of the MMN for the audiovisual condition was maximal at a latency > 300 ms, much later than the maximal magnitude of the auditory MMN (approximately 260 ms), suggesting that integration occurs later than auditory phonetic processing. Additional latency, amplitude, and dipole source analyses revealed similarities and differences between the auditory, visual, and audiovisual conditions. Results support an audiovisual integration neural network that is at least partly distinct from and operates at a longer latency than unimodal networks. In addition, results showed dynamic differences in processing across correlated and uncorrelated audiovisual combinations. These results point to a biocomplex system. We can consider the agents of complexity theory to be, in our case (non-inclusively), the unimodal sensory/perceptual systems, which have important heterogeneous characteristics. Auditory and visual perception have their own organization, and when combined they apparently participate in another organization. Apparently also, the dynamics of audiovisual organization vary depending on the correlation of the acoustic-optical phonetic signals. This view contrasts with views of audiovisual integration based primarily on consideration of algorithms or formats for information combination.

April 24, 2001

“Effects of sublexical pattern frequency on production accuracy in young children”

Mary Beckman, Department of Linguistics, Ohio State University

[abstract] [biography]

Abstract

A growing body of research on adult speech perception and production suggests that phonological processing is grounded in generalizations about sub-lexical patterns and the relative frequencies with which they occur in the lexicon. Much infant perception work suggests that the acquisition of attentional strategies appropriate for lexical access in the native language similarly is based on generalizations over the lexicon. Given this picture of the adult and the infant, we might expect to see the influence of phoneme frequency and phoneme sequence frequency on young children's production accuracy and fluency as well. We have done several experiments recently which document such influences. For example, we looked at the frequency of /k/ relative to /t/ in Japanese words, and its effect on word-initial lingual stops produced by young children acquiring the language. Both relative accuracy overall and error patterns differed from those observed for English-acquiring children, in ways that reflect the different relative frequencies of coronals and dorsals in the language. Another set of studies focused on the effect of phoneme sequence frequency in English words on the accuracy and fluency of non-word repetition by young English-speaking children. We had 87 children, aged 3-7, imitate nonsense forms containing diphone sequences that were controlled for transitional probability. For example, a non-word containing high-frequency /ft/ was matched to another non-word containing low-frequency /fk/. Children were generally less accurate on the low-frequency sequence, and the size of this effect was correlated with the size of the difference in transitional probability. It was also correlated with the child's vocabulary size. That is, the more words a child knows, the more robust the phonological abstraction. These results have important implications for models of phonological competence and its relationship to the lexicon.
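
The sequence-frequency measure used in these studies, the transitional probability of a diphone, can be illustrated with a toy calculation: the relative frequency of the second phone given the first, counted over a lexicon. The lexicon below is invented and unweighted, unlike the frequency-weighted lexicons used in the actual experiments.

from collections import Counter

# toy phonemic lexicon (space-separated phones), chosen so /ft/ is frequent
lexicon = ["s o f t", "l i f t", "g i f t", "l e f t", "b r e k f a s t"]

diphones = Counter()
first_phone = Counter()
for word in lexicon:
    phones = word.split()
    for a, b in zip(phones, phones[1:]):
        diphones[(a, b)] += 1
        first_phone[a] += 1

def transitional_probability(a, b):
    """Relative-frequency estimate of P(second phone b | first phone a)."""
    return diphones[(a, b)] / first_phone[a] if first_phone[a] else 0.0

print(transitional_probability("f", "t"))   # high: /ft/ occurs in several words
print(transitional_probability("f", "k"))   # low: /fk/ does not occur here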

Speaker Biography

Mary E. Beckman is a Professor of Linguistics and Speech & Hearing Science at Ohio State University in Columbus, OH, and also spends part of the year as a Professor at the Macquarie Centre for Cognitive Sciences in Sydney, Australia. Much of her work has focused on intonation and prosodic structure in English and Japanese. For example, she is co-author (with Janet Pierrehumbert) of the 1988 monograph Japanese Tone Structure, and she wrote the associated intonation synthesis program. She also has worked on speech kinematics, on effects of lexical frequency and sub-lexical pattern frequency on phonological processing, and on phonological acquisition.

June 1, 2001

“Geometric Source Separation: Merging convolutive source separation with geometric beamforming”   Video Available

Lucas Parra, Sarnoff Corporation

[abstract] [biography]

Abstract

Blind source separation of broadband signals in a multi-path environment remains a difficult problem. Robustness has been limited due to frequency permutation ambiguities. In principle, increasing the number of sensors allows improved performance but also introduces additional degrees of freedom in the separating filters that are not fully determined by separation criteria. We propose here to further shape the filters and improve the robustness of blind separation by including geometric information such as sensor positions and the assumption of localized sources. This allows us to combine blind source separation with notions from adaptive and geometric beamforming, leading to a number of novel algorithms that could be termed collectively "geometric source separation".

Speaker Biography

Lucas C. Parra was born in Tucumán, Argentina. He received his Diploma in Physics in 1992 and his Doctorate in Physics in 1996 from the Ludwig-Maximilians-Universität, Munich, Germany. From 1992 to 1995 he worked at the Neural Networks Group at Siemens Central Research in Munich, Germany and at the Machine Learning Department at Siemens Corporate Research (SCR) in Princeton, NJ. During 1995-1997 he was a member of the Imaging Department at SCR and worked on medical image processing and novel reconstruction algorithms for nuclear medicine. Since 1997 he has been with Sarnoff Corp. His current research concentrates on probabilistic models in various image and signal processing areas.

September 11, 2001

“Predictive clustering: smaller, faster language modeling”

Joshua Goodman, Microsoft Corp.

[abstract] [biography]

Abstract

I'll start with a brief overview of current research directions in several Microsoft Research groups, including the Speech Group (where I used to be), the Machine Learning Group (where I currently work) and the Natural Language Processing Group (with whom I kibitz). Then, I will go on to describe my own recent research in clustering. Clusters have been one of the staples of language modeling research for almost as long as there has been language modeling research. I will give a novel clustering approach that allows us to create smaller models, and to train maximum entropy models faster. First, I examine how to use clusters for language model compression, with a surprising result. I achieve my best results by first making the models larger using clustering, and then pruning them. This can result in a factor of three or more reduction in model size at the same perplexity. I then go on to examine a novel way of using clustering to speed up maximum entropy training. Maximum entropy is considered by many people to be one of the more promising avenues of language model research, but it is prohibitively expensive to train large models. I show how to use clustering to speed up training time by up to a factor of 35 over standard techniques, while slightly improving perplexity. The same approach can be used to speed up some other learning algorithms that try to predict a very large number of outputs.
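
The predictive-clustering decomposition at the heart of this kind of speed-up can be sketched with a toy bigram model: predict the word's cluster from the history, then the word given its cluster and the history. The corpus, clusters, and unsmoothed maximum-likelihood estimates below are illustrative assumptions only; the talk's models add smoothing and apply the same factorization to make maximum entropy training tractable.

from collections import Counter

corpus = "the cat sat on the mat the dog sat on the rug".split()
cluster = {"cat": "ANIMAL", "dog": "ANIMAL", "mat": "THING", "rug": "THING",
           "the": "FUNC", "sat": "VERB", "on": "FUNC"}

bigrams = Counter(zip(corpus, corpus[1:]))                  # (history, word)
hist_counts = Counter(corpus[:-1])                          # history counts
hist_cluster = Counter((h, cluster[w]) for h, w in bigrams.elements())

def p_clustered(w, h):
    """P(w | h) = P(cluster(w) | h) * P(w | cluster(w), h)."""
    p_cluster = hist_cluster[(h, cluster[w])] / hist_counts[h]
    p_word = bigrams[(h, w)] / hist_cluster[(h, cluster[w])]
    return p_cluster * p_word

def p_direct(w, h):
    return bigrams[(h, w)] / hist_counts[h]

# With exact counts the two factorizations agree; the practical gain comes from
# smoothing and from normalizing over clusters instead of the whole vocabulary.
print(p_direct("cat", "the"), p_clustered("cat", "the"))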

Speaker Biography

Joshua Goodman worked at Dragon Systems for two years, where he designed the speech recognition engine that was used until their recent demise, which he claims he had nothing to do with. He then went to graduate school at Harvard, where he studied statistical natural language processing, especially statistical parsing. Next, he joined the Microsoft Speech Technology Group, where he worked on language modeling, especially language model combination, and clustering. Recently, he switched to the Machine Learning and Applied Statistics Group, where he plans to do "something with language and probabilities."

September 25, 2001

“Time-frequency auditory processing in bat sonar”

James Simmons, Brown University

[abstract]

Abstract

I'm interested in understanding how the bat's sonar works and how the bat's brain makes sonar images. They make sounds, listen to echoes, and then see objects. To study echolocation, we go into the field and videotape bats using sonar for different purposes. These observations tell us in what situations bats use their sonar, and what sorts of sounds they use. If we know where the objects are in the videos, we can figure out what sounds get back to the bats. We then use a computer to generate these sounds and play them to the bats while we record responses from their brains. We want to know what the neurons in the bat's auditory system do to process the echoes to allow the brain to see. We also train bats in the lab to respond to computer-generated echoes, so we can tell something about the images the bat perceives. We are developing a computer model of how the bat's brain processes the echoes to see if the model produces the same kind of images the bat perceives. This model is part of a project to design new high-performance sonar for the U.S. Navy.

October 2, 2001

“Integrating behavioral, neuroscience, and computational methods in the study of human speech”   Video Available

Lynne Bernstein, Department of Communication Neuroscience, House Ear Institute

[abstract] [biography]

Abstract

Speech perception is a process that transforms speech stimuli into neural representations that are then projected onto word-form representations in the mental lexicon. This process is conventionally thought to involve the encoding of stimulus information as abstract linguistic categories, such as features or phonemes. We have been using a variety of methods to study auditory and visual speech perception and spoken word recognition. Across behavioral, brain imaging, electrophysiology, and computational methods, we are obtaining evidence for modality-specific speech processing. For example, computational modeling and behavioral testing suggest that modality-specific representations contact the mental lexicon during spoken word recognition; fMRI results indicate a visual phonetic processing route in human cortex that is distinct from the auditory phonetic processing route; and direct correlations between optical phonetic similarity measures and visual speech perception are high, approximately .80. An implication of these findings is that speech perception operates on modality-specific representations rather than being mediated by abstract, amodal representations, and that spoken language processing is far more highly distributed in the brain than heretofore thought.

Speaker Biography

Dr. Bernstein received her Ph.D. in Psycholinguistics from the University of Michigan. She holds current academic appointments at UCLA, the California Institute of Technology, and California State University. For more information, please visit her webpage at http://www.hei.org/research/scientists/bernbio.htm.

October 9, 2001

“Spatial Language and Spatial Cognition: The Case of Motion Events”

Barbara Landau, Johns Hopkins University

[abstract]

Abstract

For some years, our lab has been working on questions about the nature of semantic representations of space, in particular, representations of objects, motions, locations, and paths involved in motion events. One central question we address is the nature of the correspondence between these semantic representations of space and their possible non-linguistic counterparts. To what extent are there direct mappings between our non-linguistic representation of motion events and the language we use to express these? We have recently approached this question by studying the language of motion events (as well as other spatial language) in people with severely impaired spatial cognition. Individuals with Williams syndrome (a rare genetic defect) have an unusual cognitive profile, in which spatial cognition is severely impaired, but language (i.e. morphology, syntax) is spared. The crucial question is whether these individuals will show impaired spatial language, commensurate with their spatial impairment, or spared spatial language, commensurate with the rest of their linguistic system. Evidence suggests remarkable sparing in the language of motion events for this group, with rich structure in the nature of expressions for objects, motions, and paths. These findings suggest that the linguistic encoding of motion events may be insulated from the effects of more general spatial breakdown, and they raise questions about the nature of the mapping between spatial language and spatial cognition.

October 16, 2001

“Inductive Databases and Knowledge Scouts”   Video Available

Ryszard S. Michalski

[abstract] [biography]

Abstract

The development of very large databases and the WWW has created extraordinary opportunities for monitoring, analyzing and predicting economic, ecological, demographic, and other processes in the world. Our current technologies for data mining are, however, insufficient for such tasks. This talk will describe ongoing research in the GMU MLI Laboratory on the development of "inductive databases and knowledge scouts" that represent a new approach to the problem of semi-automatically deriving user-oriented knowledge from databases. If time permits, a demo will be presented that illustrates the methodology of natural induction that is at the heart of this research.

Speaker Biography

Ryszard S. Michalski is Planning Research Corporation Chaired Professor of Computational Sciences, and Director of the Machine Learning and Inference Laboratory at George Mason University. He is also a Fellow of AAAI, and a Foreign Member of the Polish Academy of Sciences. Dr. Michalski is viewed as a co-founder of the field of machine learning and has initiated a number of research directions, such as constructive induction, conceptual clustering, variable-precision logic (with Patrick Winston, MIT), human plausible reasoning (with Alan Collins, BBN), multistrategy learning, the LEM model of non-Darwinian evolutionary computation, and most recently inductive databases and knowledge scouts. He has authored, co-authored and co-edited 14 books/proceedings, and over 350 papers in the areas of his interest.

October 23, 2001

“Knowledge Discovery in Text”   Video Available

Dekang Lin

October 30, 2001

“How Important is "Starting Small" in Language Acquisition”   Video Available

David Plaut, Carnegie Mellon University

[abstract] [biography]

Abstract

Elman (1993, Cognition) reported that recurrent connectionist networks could learn the structure of English-like artificial grammars by performing implicit word prediction, but that learning was successful only when "starting small" (e.g., starting with limited memory that only gradually improves). This finding provided critical computational support for Newport's (1990, Cognitive Science) "less is more" account of critical period effects in language acquisition, namely that young children are aided rather than hindered by limited cognitive resources. The current talk presents connectionist simulations that indicate, to the contrary, that language learning by recurrent networks does not depend on starting small; in fact, such restrictions hinder acquisition as the languages are made more natural by introducing graded semantic constraints. Such networks can nonetheless exhibit apparent critical-period effects as a result of the entrenchment of representations learned in the service of performing other tasks, including other languages. Finally, although the word prediction task may appear unrelated to actual language processing, a preliminary large-scale simulation illustrates how performing implicit prediction during sentence comprehension can provide indirect training for sentence production. The results suggest that language learning may succeed in the absence of innate maturational constraints or explicit negative evidence by taking advantage of the indirect negative evidence that is available in performing online implicit prediction.

Speaker Biography

Dr. Plaut is an Associate Professor of Psychology and Computer Science at Carnegie Mellon University, with a joint appointment in the Center for the Neural Basis of Cognition. His research involves using connectionist modeling, complemented by empirical studies, to extend our understanding of normal and impaired cognitive processing in the domains of reading, language, and semantics. For more details, see his vitae at http://www.cnbc.cmu.edu/~plaut/vitae.html.
