
"Qianzhi" Academic Forum Lecture No. 121 - Delving into the Calibratability of Deep Neural Networks

Posted: 2025-05-13

    Title: Delving into the Calibratability of Deep Neural Networks

    Speaker: Min-Ling Zhang (张敏灵), Professor, Southeast University

    Time: May 15, 2025, 14:00

    Venue: Hall A, 8th Floor, Golden Eagle Shangmei Hotel, Ma'anshan; Tencent Meeting: 301-571-424

    Audience: Interested faculty, graduate students, and undergraduates

    Organizers: Anhui Provincial Key Laboratory of Power Electronics and Motion Control; School of Electrical and Information Engineering

    Speaker Biography:

Min-Ling Zhang is a professor in the School of Computer Science and Engineering at Southeast University. His main research areas are machine learning and data mining. He currently serves as Vice Chair of the Machine Learning Technical Committee of the Chinese Association for Artificial Intelligence and Vice President of the Jiangsu Association for Artificial Intelligence. He is on the editorial boards of Science China: Information Sciences, IEEE TPAMI, ACM TOIS, ACM TIST, Frontiers of Computer Science, and Machine Intelligence Research, serves on the steering committees of ACML and PAKDD, and has served more than 30 times as an area chair for international conferences including AAAI, IJCAI, ICML, ICLR, and KDD. His honors include the CCF-IEEE CS Young Scientist Award and a national-level talent title.


    Abstract: Reliable predictive models should be accurate when they are confident about their predictions and indicate high uncertainty when they are likely to be inaccurate. However, modern DNNs trained with the cross-entropy (CE) loss, despite being highly accurate, have recently been found to predict poorly calibrated probabilities, unlike traditional models trained with the same objective. In recent years, many approaches have been proposed to improve DNNs' calibration performance while maintaining their accuracy. Unlike prior research, our recent studies focus on calibratability, which refers to the extent to which a model can be calibrated during the post-calibration phase. Our studies reveal a disparity between models' calibration performance and their calibratability. Specifically, we found that although models trained with existing calibration methods are better calibrated, they are not as calibratable as regularly trained models; that is, it is harder to further calibrate these models with post-hoc calibration approaches. Taking Label Smoothing and Mixup as two illustrative cases, our recent work highlights some surprising phenomena concerning calibratability and offers potential avenues for addressing this issue.
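    Purely as an illustration of the concepts discussed above, and not part of the talk itself, the following minimal Python sketch shows one standard post-hoc calibration step (temperature scaling fitted on held-out logits) together with the Expected Calibration Error (ECE) metric. The helper names fit_temperature and expected_calibration_error are hypothetical and assume NumPy/SciPy are available.

    import numpy as np
    from scipy.optimize import minimize_scalar

    def softmax(logits, T=1.0):
        # Temperature-scaled softmax; T=1.0 recovers the ordinary softmax.
        z = logits / T
        z = z - z.max(axis=1, keepdims=True)  # numerical stability
        e = np.exp(z)
        return e / e.sum(axis=1, keepdims=True)

    def nll(T, logits, labels):
        # Negative log-likelihood of the true labels under temperature T.
        probs = softmax(logits, T)
        return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

    def fit_temperature(val_logits, val_labels):
        # Post-hoc calibration: find the single scalar T > 0 minimizing NLL
        # on a held-out validation set (hypothetical helper, standard recipe).
        res = minimize_scalar(nll, bounds=(0.05, 10.0),
                              args=(val_logits, val_labels), method="bounded")
        return res.x

    def expected_calibration_error(probs, labels, n_bins=15):
        # Standard ECE: bin predictions by confidence and compare, per bin,
        # average confidence against empirical accuracy.
        conf = probs.max(axis=1)
        pred = probs.argmax(axis=1)
        bins = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(bins[:-1], bins[1:]):
            mask = (conf > lo) & (conf <= hi)
            if mask.any():
                ece += mask.mean() * abs((pred[mask] == labels[mask]).mean() - conf[mask].mean())
        return ece

    In this sketch, a model's calibratability can be gauged roughly by comparing the ECE of softmax(test_logits) before and after applying the temperature fitted on validation data: the more the post-hoc step reduces ECE, the more calibratable the model.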

 

All faculty and students are warmly invited to attend!
