

Building interpretable machine learning models using sparse learning

Posted: 2023-06-09 15:23:00  Posted by: 唐振东

Time: 9:00–11:00 a.m., Tuesday, June 13, 2023

Speaker: Yijie Wang, Indiana University Bloomington (IUB)

Venue: Tencent Meeting (ID: 239-665-096)

Abstract: The ongoing surge in building interpretable machine learning models has drawn attention across several scientific communities. In this talk, I will discuss how sparse learning can be used to build interpretable machine learning models. First, I will introduce our novel framework for learning sparse models through Boolean relaxation, presenting both theoretical and empirical results showing that it outperforms state-of-the-art methods when the sample size is small. Then, I will describe how to build interpretable deep learning models with sparse learning, introducing our ParsVNN model, an interpretable visible neural network for predicting cancer-specific drug responses. Finally, I will discuss how sparse learning can be used to reconstruct cell-type-specific gene regulatory networks.
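For readers unfamiliar with sparse learning, the basic idea can be illustrated with the classic lasso: an L1 penalty drives the weights of irrelevant features exactly to zero, so the surviving features form an interpretable model. The sketch below (plain Python, solved by coordinate descent with soft-thresholding) is a generic illustration of sparsity-inducing regularization; it is not the speaker's Boolean-relaxation framework, and the data and parameter values are made up for the example.

```python
def soft_threshold(z, t):
    """Soft-thresholding operator: the proximal map of the L1 norm."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Minimize (1/2n)||y - Xw||^2 + lam*||w||_1 by coordinate descent."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Residuals excluding feature j's current contribution.
            r = [y[i] - sum(X[i][k] * w[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n)) / n
            z = sum(X[i][j] ** 2 for i in range(n)) / n
            w[j] = soft_threshold(rho, lam) / z
    return w

# Tiny synthetic example: y depends only on the first feature.
X = [[1.0, 0.5], [2.0, -0.3], [3.0, 0.1], [4.0, -0.2]]
y = [2.0, 4.0, 6.0, 8.0]  # y = 2 * x1; x2 is irrelevant noise
w = lasso_cd(X, y, lam=0.1)
# The L1 penalty sets the irrelevant weight w[1] exactly to 0,
# while w[0] stays close to the true coefficient 2.
```

The key property is that the soft-thresholding step produces exact zeros rather than merely small values, which is what makes the resulting model sparse and hence interpretable.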

Biography:

Yijie Wang is an assistant professor in the Computer Science Department at Indiana University Bloomington (IUB). His research bridges the computational, mathematical, and biological sciences, currently focusing on reverse engineering gene regulation, building interpretable machine learning models through sparse learning, and computational oncology (cancer type detection and prediction of cancer drug/treatment response). His research is supported by an R35 grant from the National Institutes of Health (NIH), and he is a recipient of the NIH MIRA award.

船舶電氣工程學(xué)院

June 9, 2023