BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//128.220.36.13//NONSGML kigkonsult.se iCalcreator 2.26.9//
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Center for Language and Speech Processing
X-WR-CALDESC:Johns Hopkins University
X-FROM-URL:https://www.clsp.jhu.edu
X-WR-TIMEZONE:America/New_York
BEGIN:VTIMEZONE
TZID:America/New_York
X-LIC-LOCATION:America/New_York
BEGIN:STANDARD
DTSTART:20201101T020000
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
RDATE:20211107T020000
TZNAME:EST
END:STANDARD
BEGIN:DAYLIGHT
DTSTART:20210314T020000
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
RDATE:20220313T020000
TZNAME:EDT
END:DAYLIGHT
END:VTIMEZONE
BEGIN:VEVENT
UID:ai1ec-14506@www.clsp.jhu.edu
DTSTAMP:20210309T041833Z
CATEGORIES;LANGUAGE=en-US:Seminars\,Workshops
DESCRIPTION:Abstract\nThe past few years have seen a dramatic increase in
 the performance of recognition systems thanks to the introduction of dee
 p networks for representation learning. However\, the mathematical reaso
 ns for this success remain elusive. A key issue is that the neural netwo
 rk training problem is non-convex\, and hence optimization algorithms ma
 y not return a global minimum. In addition\, the regularization properti
 es of algorithms such as dropout remain poorly understood. Building on i
 deas from convex relaxations of matrix factorizations\, this work propos
 es a general framework that allows for the analysis of a wide range of n
 on-convex factorization problems\, including matrix factorization\, tens
 or factorization\, and deep neural network training. The talk will descr
 ibe sufficient conditions under which a local minimum of the non-convex
 optimization problem is a global minimum\, and show that if the size of
 the factorized variables is large enough\, then from any initialization
 it is possible to find a global minimizer using a local descent algorith
 m. The talk will also present an analysis of the optimization and regula
 rization properties of dropout in the case of matrix factorization.\n \n
 Bio\nRené Vidal is the Herschel L. Seder Professor in the Department of
 Biomedical Engineering. He joined Johns Hopkins in 2004. He holds joint
 appointments in the departments of Electrical and Computer Engineering\,
 Computer Science\, and Mechanical Engineering. He is the director of the
 Mathematical Institute for Data Science and the Vision Dynamics and Lea
 rning Lab\, and is also a professor in the Institute for Computational M
 edicine\, the Center for Imaging Science\, and the Laboratory for Comput
 ational Sensing and Robotics.\nVidal’s research focuses on the developme
 nt of theory and algorithms for the analysis of complex high-dimensional
 datasets such as images\, videos\, time-series\, and biomedical data. H
 is lab creates new technologies for a variety of biomedical applications
 \, including detection\, classification\, and tracking of blood cells in
 holographic images\, classification of embryonic cardiomyocytes in opti
 cal images\, and assessment of surgical skill in surgical videos.
DTSTART;TZID=America/New_York:20180706T090000
DTEND;TZID=America/New_York:20180706T100000
LOCATION:Hackerman Hall\, Room B17
SEQUENCE:0
SUMMARY:René Vidal (Johns Hopkins University): Mathematics of Deep Learning
URL:https://www.clsp.jhu.edu/events/vidal-mathematics-deep-learning/
X-COST-TYPE:free
X-ALT-DESC;FMTTYPE=text/html:<h6>Abstract</h6><p>The past few years have
 seen a dramatic increase in the performance of recognition systems than
 ks to the introduction of deep networks for representation learning. Ho
 wever\, the mathematical reasons for this success remain elusive. A key
 issue is that the neural network training problem is non-convex\, and h
 ence optimization algorithms may not return a global minimum. In additi
 on\, the regularization properties of algorithms such as dropout remain
 poorly understood. Building on ideas from convex relaxations of matrix
 factorizations\, this work proposes a general framework that allows for
 the analysis of a wide range of non-convex factorization problems\, inc
 luding matrix factorization\, tensor factorization\, and deep neural ne
 twork training. The talk will describe sufficient conditions under whic
 h a local minimum of the non-convex optimization problem is a global mi
 nimum\, and show that if the size of the factorized variables is large
 enough\, then from any initialization it is possible to find a global m
 inimizer using a local descent algorithm. The talk will also present an
 analysis of the optimization and regularization properties of dropout i
 n the case of matrix factorization.</p><h6>Bio</h6><p>René Vidal is the
 Herschel L. Seder Professor in the Department of Biomedical Engineerin
 g. He joined Johns Hopkins in 2004. He holds joint appointments in the
 departments of Electrical and Computer Engineering\, Computer Science\,
 and Mechanical Engineering. He is the director of the Mathematical Inst
 itute for Data Science and the Vision Dynamics and Learning Lab\, and i
 s also a professor in the Institute for Computational Medicine\, the Ce
 nter for Imaging Science\, and the Laboratory for Computational Sensing
 and Robotics.</p><p>Vidal’s research focuses on the development of theo
 ry and algorithms for the analysis of complex high-dimensional datasets
 such as images\, videos\, time-series\, and biomedical data. His lab c
 reates new technologies for a variety of biomedical applications\, incl
 uding detection\, classification\, and tracking of blood cells in holog
 raphic images\, classification of embryonic cardiomyocytes in optical i
 mages\, and assessment of surgical skill in surgical videos.</p>
X-TAGS;LANGUAGE=en-US:Deep Learning\,Rene Vidal
END:VEVENT
END:VCALENDAR