The Center for Language and Speech Processing (CLSP) at the Johns Hopkins University seeks applicants for postdoctoral fellowship positions in speech and language processing and machine learning. Applicants must have a Ph.D. in a relevant discipline and a strong research record.
Johns Hopkins University is a private university located in Baltimore, Maryland. The campus provides easy access to a number of affordable, vibrant neighborhoods and waterfront dining options. Hopkins is also connected to Washington, DC (40 minutes), Philadelphia (1.5 hours), and New York City (2.5 hours) via direct trains and buses.
CLSP has a history of placing students in top academic and industry positions, with a large network of alumni at Google, Amazon, Microsoft Research, IBM Research, Facebook, Twitter, Nuance, BBN, and numerous startups.
The center has a number of postdoctoral positions available for the coming year. Possible research topics include:
Trends embedded in large streams of social media have been shown to capture important population information across a number of applications, including political polling, product sentiment, disaster response and public health. This project will investigate these and other applications by developing new machine learning and NLP algorithms for mining social media data.
We are seeking several postdocs to work on a wide range of problems in computational morphology. We will address a very large and diverse set of world languages using a broad spectrum of approaches, over large annotated and unannotated datasets of both text and speech. Both trained linguists and those without formal linguistic training but with an interest in morphological phenomena are welcome.
Unsupervised learning of useful features, or representations, is one of the most basic challenges of machine learning. Unsupervised representation learning techniques capitalize on unlabeled data, which is often cheap and abundant, and sometimes virtually unlimited. The goal of these ubiquitous techniques is to learn a representation that reveals intrinsic low-dimensional structure in data, disentangles underlying factors of variation by incorporating universal AI priors such as smoothness and sparsity, and is useful across multiple tasks and domains. This project aims to develop new theory and methods for representation learning that can easily scale to large datasets. To capitalize on massive amounts of unlabeled data, this project will develop appropriate computational approaches and study them in the “data-laden” regime.
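To give a concrete flavor of the setting this project addresses, here is a minimal sketch (not part of the project itself) of unsupervised representation learning: recovering intrinsic low-dimensional structure from unlabeled data via principal components. The synthetic data and dimensions are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data: 500 points with intrinsic 2-D structure
# linearly embedded in 10 dimensions, plus a little noise.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 10))
X = latent @ mixing + 0.05 * rng.normal(size=(500, 10))

# Center the data and take the top principal components:
# a linear representation learned without any labels.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T  # learned 2-D representation

# Fraction of variance captured by the 2-D representation.
explained = (S[:2] ** 2).sum() / (S ** 2).sum()
print(round(float(explained), 3))
```

Here the low-dimensional structure is linear and easy to recover; the project's interest lies in nonlinear, sparsity- and smoothness-regularized analogues of this idea that scale to massive unlabeled datasets.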
This project involves applications of Bayesian nonparametrics, causal inference, and approximate inference to large-scale time-series data in healthcare. Email for further details.
Applicants should apply using the online application: https://academicjobsonline.org/ajo/jobs/6930
Deadline: Apply by March 31, 2016 for full consideration, but applications will be accepted until positions are filled.
Questions should be directed to email@example.com or to the project contact, when available.
Application Material Required:
Please indicate projects of interest in your cover letter/research statement.
Applicants are not required to be US citizens or permanent residents.