Time: Tuesday, June 13, 2023, 9:00–11:00 a.m.
Speaker: Yijie Wang, Indiana University Bloomington (IUB)
Venue: Tencent Meeting (239-665-096)
Abstract: The ongoing surge in building interpretable machine learning models has drawn attention across several scientific communities. In this talk, I will discuss how sparse learning can be used to build interpretable machine learning models. First, I will introduce our novel framework for learning sparse models through Boolean relaxation, presenting both theoretical and empirical results that demonstrate it outperforms state-of-the-art methods when the sample size is small. Next, I will discuss how to build interpretable deep learning models using sparse learning, introducing our ParsVNN model, an interpretable visible neural network for predicting cancer-specific drug responses. Finally, I will describe how to use sparse learning to reconstruct cell-type-specific gene regulatory networks.
Biography:
Yijie Wang is an assistant professor in the Computer Science Department at Indiana University Bloomington (IUB). His research bridges the computational, mathematical, and biological sciences, currently focusing on reverse engineering gene regulation, building interpretable machine learning models via sparse learning, and computational oncology (cancer type detection and cancer drug/treatment response prediction). His research is supported by a National Institutes of Health (NIH) R35 grant, and he is a recipient of the NIH MIRA award.
College of Marine Electrical Engineering
June 9, 2023