BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20115@www.clsp.jhu.edu
DTSTAMP:20240328T124508Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nData science in small medical datasets usually mea
 ns doing precision guesswork on unreliable data provided by those with h
 igh expectations. The first part of this talk will focus on issues that
 data scientists and engineers have to address when working with this kin
 d of data (e.g. unreliable labels\, the effect of confounding factors\,
 the necessity of clinical interpretability\, difficulties with fusing mu
 ltiple data sets). The second part of the talk will include some real ex
 amples of this kind of data science in the field of neurology (predictio
 n of motor deficits in Parkinson’s disease based on acoustic analysis of
 speech\, diagnosis of Parkinson’s disease dysgraphia utilising online ha
 ndwriting\, exploring the Mozart effect in epilepsy based on music infor
 mation retrieval) and psychology (assessment of graphomotor disabilities
 in children with developmental dysgraphia).\nBiography\nJiri Mekyska is
 the head of the BDALab (Brain Diseases Analysis Laboratory) at the Brno
 University of Technology\, where he leads a multidisciplinary team of re
 searchers (signal processing engineers\, data scientists\, neurologists\
 , psychologists) with a special focus on the development of new digital
 endpoints and digital biomarkers that enable better understanding\, diag
 nosis and monitoring of neurodegenerative (e.g. Parkinson’s disease) and
 neurodevelopmental (e.g. dysgraphia) diseases.
DTSTART;TZID=America/New_York:20210329T120000
DTEND;TZID=America/New_York:20210329T131500
LOCATION:via Zoom
SEQUENCE:0
SUMMARY:Jiri Mekyska (Brno University of Technology) “Data Science in Small
Medical Data Sets: From Logistic Regression Towards Logistic Regression”
URL:https://www.clsp.jhu.edu/events/jiri-mekyska-brno-university-of-technol
ogy/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,March\,Mekyska
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22395@www.clsp.jhu.edu
DTSTAMP:20240328T124508Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nRecursive calls over recursive data are widely useful
for generating probability distributions\, and probabilistic programming
allows computations over these distributions to be expressed in a modular
and intuitive way. Exact inference is also useful\, but unfortunately\, ex
isting probabilistic programming languages do not perform exact inference
on recursive calls over recursive data\, forcing programmers to code many
applications manually. We introduce a probabilistic language in which a wi
de variety of recursion can be expressed naturally\, and inference carried
out exactly. For instance\, probabilistic pushdown automata and their gen
eralizations are easy to express\, and polynomial-time parsing algorithms
for them are derived automatically. We eliminate recursive data types usin
g program transformations related to defunctionalization and refunctionali
zation. These transformations are assured correct by a linear type system\
, and a successful choice of transformations\, if there is one\, is guaran
teed to be found by a greedy algorithm. I will also describe the implement
ation of this language in two phases: first\, compilation to a factor grap
h grammar\, and second\, computing the sum-product of the factor graph gra
mmar.\n\nBiography\nDavid Chiang (PhD\, University of Pennsylvania\, 2004)
is an associate professor in the Department of Computer Science and Engin
eering at the University of Notre Dame. His research is on computational m
odels for learning human languages\, particularly how to translate from on
e language to another. His work on applying formal grammars and machine le
arning to translation has been recognized with two best paper awards (at A
CL 2005 and NAACL HLT 2009). He has received research grants from DARPA\,
NSF\, Google\, and Amazon\, has served on the executive board of NAACL and
the editorial board of Computational Linguistics and JAIR\, and is curren
tly on the editorial board of Transactions of the ACL.
DTSTART;TZID=America/New_York:20221017T120000
DTEND;TZID=America/New_York:20221017T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:David Chiang (University of Notre Dame) “Exact Recursive Probabilis
tic Programming with Colin McDonald\, Darcey Riley\, Kenneth Sible (Notre
Dame) and Chung-chieh Shan (Indiana)”
URL:https://www.clsp.jhu.edu/events/david-chiang-university-of-notre-dame/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Chiang\,October
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22400@www.clsp.jhu.edu
DTSTAMP:20240328T124508Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nModern learning architectures for natural language
 processing have been very successful in incorporating a huge amount of t
 ext into their parameters. However\, by and large\, such models store an
 d use knowledge in distributed and decentralized ways. This proves unrel
 iable and makes the models ill-suited for knowledge-intensive tasks that
 require reasoning over factual information in linguistic expressions. In
 this talk\, I will give a few examples of exploring alternative architec
 tures to tackle those challenges. In particular\, we can improve the per
 formance of such (language) models by representing\, storing and accessi
 ng knowledge in a dedicated memory component.\nThis talk is based on sev
 eral joint works with Yury Zemlyanskiy (Google Research)\, Michiel de Jo
 ng (USC and Google Research)\, William Cohen (Google Research and CMU) a
 nd our other collaborators at Google Research.\nBiography\nFei is a rese
 arch scientist at Google Research. Before that\, he was a Professor of C
 omputer Science at the University of Southern California. His primary re
 search interests are machine learning and its application to various AI
 problems: speech and language processing\, computer vision\, robotics an
 d\, recently\, weather forecasting and climate modeling. He has a PhD (2
 007) in Computer and Information Science from the University of Pennsylv
 ania and a B.Sc. and M.Sc. in Biomedical Engineering from Southeast Univ
 ersity (Nanjing\, China).
DTSTART;TZID=America/New_York:20221024T120000
DTEND;TZID=America/New_York:20221024T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Fei Sha (University of Southern California) “Extracting Information
from Text into Memory for Knowledge-Intensive Tasks”
URL:https://www.clsp.jhu.edu/events/fei-sha-university-of-southern-californ
ia/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,October\,Sha
END:VEVENT
END:VCALENDAR