BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.25//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20231105T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20241103T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20240310T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20250309T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-20115@www.clsp.jhu.edu
DTSTAMP:20240329T093518Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nData science in small medical datasets usually means
doing precision guesswork on unreliable data provided by those with high e
xpectations. The first part of this talk will focus on issues that data sc
ientists and engineers have to address when working with this kind of data
(e.g. unreliable labels\, the effect of confounding factors\, necessity o
f clinical interpretability\, difficulties with fusing multiple data sets). Th
e second part of the talk will include some real examples of this kind of
data science in the field of neurology (prediction of motor deficits in Pa
rkinson’s disease based on acoustic analysis of speech\, diagnosis of Park
inson’s disease dysgraphia utilising online handwriting\, exploring the Mo
zart effect in epilepsy based on music information retrieval) and psyc
hology (assessment of graphomotor disabilities in children with developmen
tal dysgraphia).\nBiography\nJiri Mekyska is the head of the BDALab (Brain
Diseases Analysis Laboratory) at the Brno University of Technology\, wher
e he leads a multidisciplinary team of researchers (signal processing engi
neers\, data scientists\, neurologists\, psychologists) with a special foc
us on the development of new digital endpoints and digital biomarkers enab
ling better understanding\, diagnosis and monitoring of neurodegenerative (e.g. P
arkinson’s disease) and neurodevelopmental (e.g. dysgraphia) diseases.
DTSTART;TZID=America/New_York:20210329T120000
DTEND;TZID=America/New_York:20210329T131500
LOCATION:via Zoom
SEQUENCE:0
SUMMARY:Jiri Mekyska (Brno University of Technology) “Data Science in Small
Medical Data Sets: From Logistic Regression Towards Logistic Regression”
URL:https://www.clsp.jhu.edu/events/jiri-mekyska-brno-university-of-technol
ogy/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2021\,March\,Mekyska
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22394@www.clsp.jhu.edu
DTSTAMP:20240329T093518Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nModel robustness and spurious correlations have rec
eived increasing attention in the NLP community\, both in methods and eval
uation. The term “spurious correlation” is overloaded\, though\, and can refer
to any undesirable shortcut learned by the model\, as judged by domain e
xperts.\n\nWhen designing mitigation algorithms\, we often (implicitly)
assume that a spurious feature is irrelevant for prediction. However\, man
y features in NLP (e.g. word overlap and negation) are not spurious in the sa
me sense that the background of an image is spurious for classifying objects
. In contrast\, they carry important information that humans need to make pre
dictions. In this talk\, we argue that it is more productive to
characterize features in terms of their necessity and sufficiency for pred
iction. We then discuss the implications of this categorization in represe
ntation\, learning\, and evaluation.\nBiography\nHe He is an Assistant Pro
fessor in the Department of Computer Science and the Center for Data Scien
ce at New York University. She obtained her PhD in Computer Science at the
University of Maryland\, College Park. Before joining NYU\, she spent a y
ear at AWS AI and was a post-doc at Stanford University before that. She i
s interested in building robust and trustworthy NLP systems in human-cente
red settings. Her recent research focuses on robust language understandin
g\, collaborative text generation\, and understanding the capabilities an
d issues of large language models.
DTSTART;TZID=America/New_York:20221014T120000
DTEND;TZID=America/New_York:20221014T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:He He (New York University) “What We Talk about When We Talk about
Spurious Correlations in NLP”
URL:https://www.clsp.jhu.edu/events/he-he-new-york-university/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,He\,October
END:VEVENT
BEGIN:VEVENT
UID:ai1ec-22408@www.clsp.jhu.edu
DTSTAMP:20240329T093518Z
CATEGORIES;LANGUAGE=en-US:Seminars
CONTACT:
DESCRIPTION:Abstract\nAI-powered applications increasingly adopt Deep Neura
l Networks (DNNs) for solving many prediction tasks\, leading to more than
one DNN running on resource-constrained devices. Supporting many models
simultaneously on a device is challenging due to linearly increasing co
mputation\, energy\, and storage costs. An effective approach to address t
he problem is multi-task learning (MTL)\, where a set of tasks is learned j
ointly to allow some parameter sharing among tasks. MTL creates multi-task
models based on common DNN architectures and has been shown to significant
ly reduce inference costs and improve generalization performance in many machin
e learning applications. In this talk\, we will introduce our recent effor
ts to leverage MTL to improve accuracy and efficiency for edge computing
. The talk will present multi-task architecture design systems that can
automatically identify resource-efficient multi-task models with low infer
ence costs and high task accuracy.\n\nBiography\nHui Guan is an Assist
ant Professor in the College of Information and Computer Sciences (CICS) a
t the University of Massachusetts Amherst\, the flagship campus of the UMa
ss system. She received her Ph.D. in Electrical Engineering from North Car
olina State University in 2020. Her research lies at the intersection betw
een machine learning and systems\, with an emphasis on improving the speed
\, scalability\, and reliability of machine learning through innovations i
n algorithms and programming systems. Her current research focuses on both
algorithm and system optimizations of deep multi-task learning and graph
machine learning.
DTSTART;TZID=America/New_York:20221111T120000
DTEND;TZID=America/New_York:20221111T131500
LOCATION:Hackerman Hall B17 @ 3400 N. Charles Street\, Baltimore\, MD 21218
SEQUENCE:0
SUMMARY:Hui Guan (University of Massachusetts Amherst) “Towards Accurate an
d Efficient Edge Computing Via Multi-Task Learning”
URL:https://www.clsp.jhu.edu/events/hui-guan-university-of-massachusetts-am
herst/
X-COST-TYPE:free
X-TAGS;LANGUAGE=en-US:2022\,Guan\,November
END:VEVENT
END:VCALENDAR