NASIT 2019 Schedule

Schedule for the 2019 IEEE North American School of Information Theory (NASIT 2019)

Monday, July 1

Students arrive and check in.

Tuesday, July 2

08:30 – 10:00
Tutorial: TBA
Tara Javidi
UC San Diego

10:00 – 10:30
Coffee Break

10:30 – 12:00
Tutorial: TBA
Tara Javidi
UC San Diego

12:00 – 1:00
Lunch

1:00 – 2:30
Padovani Lecture: TBA
Kannan Ramchandran
UC Berkeley

2:30 – 3:00
Coffee Break

3:00 – 4:30
Padovani Lecture: TBA
Kannan Ramchandran
UC Berkeley

4:30 – 6:00
POSTER SESSION I

6:00 – 7:00
Break

7:00 – 9:00
Banquet

Wednesday, July 3

08:30 – 10:00
Tutorial: TBA
Adam Smith
Boston University

10:00 – 10:30
Coffee Break

10:30 – 12:00
Tutorial: TBA
Adam Smith
Boston University

12:00 – 1:30
Lunch

1:30 – 3:00
POSTER SESSION II

3:00 – 4:00
TBA

4:00 – onward
TBA

Thursday, July 4

08:30 – 10:00
Tutorial: TBA
Alexander Barg
University of Maryland, College Park

10:00 – 10:30
Coffee Break

10:30 – 12:00
Tutorial: TBA
Alexander Barg
University of Maryland, College Park

12:00 – 1:30
Lunch

1:30 – onward
FREE TIME
4th of July Fireworks Viewing TBA

Friday, July 5

08:30 – 10:00
Tutorial: Information, Concentration, and Learning
Maxim Raginsky
University of Illinois at Urbana-Champaign

10:00 – 10:30
Coffee Break

10:30 – 12:00
Tutorial: Information, Concentration, and Learning
Maxim Raginsky
University of Illinois at Urbana-Champaign

12:00 – 1:30
Lunch

1:30 – 3:30
TBA


Abstracts and Biographies


Kannan Ramchandran
UC Berkeley


Tara Javidi
UC San Diego


Adam Smith
Boston University


Alexander Barg
University of Maryland, College Park

Information, Concentration, and Learning
Maxim Raginsky
University of Illinois at Urbana-Champaign
Abstract: During the last two decades, concentration of measure has been a subject of various exciting developments in convex geometry, functional analysis, statistical physics, high-dimensional statistics, probability theory, information theory, communications and coding theory, computer science, and learning theory. One common theme that emerges in these fields is probabilistic stability: complicated, nonlinear functions of a large number of independent or weakly dependent random variables often tend to concentrate sharply around their expected values. Information theory plays a key role in the derivation of concentration inequalities. Indeed, the entropy method and the approach based on transportation-cost inequalities are two major information-theoretic paths toward proving concentration.
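One standard illustration of this phenomenon (given here for context; the tutorial's own examples may differ) is McDiarmid's bounded-differences inequality: if $f(x_1, \dots, x_n)$ changes by at most $c_i$ when only its $i$-th argument is changed, then for independent $X_1, \dots, X_n$,
$$\Pr\!\left( \left| f(X_1,\dots,X_n) - \mathbb{E}\, f(X_1,\dots,X_n) \right| \ge t \right) \le 2 \exp\!\left( -\frac{2t^2}{\sum_{i=1}^n c_i^2} \right),$$
so the fluctuations of $f$ are of order $\sqrt{\sum_i c_i^2}$ no matter how complicated $f$ is.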
Machine learning algorithms can be viewed as stochastic transformations (or channels, in information-theoretic parlance) that map training data to hypotheses. Following the classic paper of Bousquet and Elisseeff, we say that such an algorithm is stable if its output does not depend too much on any individual training example. Since stability is closely connected to generalization capabilities of learning algorithms, it is of theoretical and practical interest to obtain sharp quantitative estimates on the generalization bias of machine learning algorithms in terms of their stability properties. In this tutorial, I will survey a recent line of work aimed at deriving stability and/or generalization guarantees for learning algorithms based on mutual information, erasure mutual information, and related information-theoretic quantities.
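A representative bound from this line of work (stated here informally for orientation; assumptions and constants vary across the literature) is the mutual-information generalization bound: if the loss is $\sigma$-sub-Gaussian under the data distribution, then an algorithm that maps an $n$-sample training set $S$ to a hypothesis $W$ satisfies
$$\left| \mathbb{E}\big[\mathrm{gen}(S, W)\big] \right| \le \sqrt{\frac{2\sigma^2}{n}\, I(S; W)},$$
where $\mathrm{gen}(S,W)$ is the gap between population and empirical risk; in words, an algorithm that leaks little information about its training data cannot overfit by much.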
Bio: Maxim Raginsky received the B.S. and M.S. degrees in 2000 and the Ph.D. degree in 2002 from Northwestern University, all in Electrical Engineering. He has held research positions with Northwestern, the University of Illinois at Urbana-Champaign (where he was a Beckman Foundation Fellow from 2004 to 2007), and Duke University. In 2012, he returned to UIUC, where he is currently an Associate Professor and William L. Everitt Fellow with the Department of Electrical and Computer Engineering and the Coordinated Science Laboratory. He also holds a courtesy appointment with the Department of Computer Science. He received the CAREER award from the National Science Foundation in 2013. Prof. Raginsky's interests cover probability and stochastic processes, deterministic and stochastic control, machine learning, optimization, and information theory. Much of his recent research is motivated by fundamental questions in modeling, learning, and simulation of nonlinear dynamical systems, with applications to advanced electronics, autonomy, and artificial intelligence.
