
Statistical Principles and Deep Modeling

Posted: 2020-12-14
Speaker: Chuanhai Liu (刘传海)    DateTime: December 17, 2020 (Thursday), 10:30–11:30 AM
Brief Introduction to the Speaker:

Chuanhai Liu is a Professor in the Department of Statistics at Purdue University, USA. He received an M.S. in Probability and Statistics from Wuhan University in 1987, an M.A. in Statistics from Harvard University in 1990, and a Ph.D. in Statistics from Harvard University in 1994. His main research interests include Bayesian methods, computational methods for statistical inference, computer languages and environments for data analysis, missing data and multiple imputation, multiple comparisons, and time series. His awards and honors include being named a Fellow of the American Statistical Association (2007), election as a member of the International Statistical Institute (2006), the Outstanding Statistical Application Paper Award (Journal of the American Statistical Association, 2000), the 2000 Frank Wilcoxon Award, the 1998 Bell Labs President's Silver Award, and recognition as a Distinguished Teaching Fellow at Harvard University in 1994.


Place: Tencent Meeting (please contact Prof. 左国新 to obtain the meeting ID)
Abstract: While the development of machine learning methods has dominated recent research in computer-intensive data analysis, future high-quality research will arguably also require more principled approaches to data analysis. In this talk, we introduce two simple but fundamental principles, namely the {\it Validity} principle and the {\it Efficiency} principle. In their book entitled {\it Inferential Models --- Reasoning with Uncertainty}, Ryan Martin and Chuanhai Liu argued for these two principles in the context of making reliable and efficient inference based on postulated models. After a brief review of the two principles for statistical inference, we discuss their implications in a model-building setting. Implementation of the two principles for model building will be illustrated with an experimental method, which we call {\it Deep Modeling}, for analyzing the famous MNIST dataset.