Lee Lab of AI for bioMedical Sciences (AIMS)
The goal of the AIMS Lab, led by Prof. Su-In Lee, is to conceptually and fundamentally advance the way AI is integrated with biomedicine by addressing novel, forward-looking scientific questions enabled by advances in AI. For example, when the primary focus of AI applications in biomedicine was making accurate predictions with machine learning (ML) models, we focused instead on why a model makes a certain prediction, developing novel AI principles and techniques (e.g., SHAP) that improve the interpretability of ML models and apply to a broad spectrum of problems beyond biomedicine.
Modern black-box ML models such as deep neural networks have become standard tools in biomedical research, but their opacity and consequent lack of interpretability remain a bottleneck to the widespread adoption of AI in biomedicine and beyond. These models do not answer key questions: the molecular basis of complex phenotypes, design strategies for therapeutics, or the reasoning behind clinical decisions. This challenge gave rise to explainable AI (XAI), also known as interpretable ML.
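As a minimal illustration of the kind of answer XAI provides, the sketch below uses the open-source `shap` package to attribute a single model prediction to its input features. The dataset and model here are illustrative assumptions for demonstration only, not the lab's own pipeline.

```python
# Minimal sketch: per-feature attributions with SHAP.
# Assumes the open-source `shap` and `scikit-learn` packages;
# the dataset and model below are illustrative choices.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Train a black-box model on a public biomedical dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:1])  # one sample's attributions

# Rank the features that pushed this prediction up or down.
top = sorted(zip(X.columns, shap_values[0]),
             key=lambda t: abs(t[1]), reverse=True)[:5]
for name, value in top:
    print(f"{name}: {value:+.3f}")
```

Each printed value is that feature's Shapley attribution: its signed contribution to moving this sample's prediction away from the model's average output.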
The AIMS Lab's research spans a broad spectrum of problems, advancing: (A) AI/ML - developing XAI principles and techniques; (B) Biology - identifying the causes and treatments of challenging diseases such as cancer and Alzheimer's disease; and (C) Clinical medicine & healthcare - developing and auditing clinical AI models. See our latest publications.
We are a group of passionate researchers, including CSE Ph.D. students and MSTP (M.D./Ph.D.) students. Our lab is interdisciplinary, with backgrounds ranging from computer science, statistics, and mathematics to molecular & cell biology and clinical medicine.
Latest News
Ethan and Chris's paper on contrastiveVI was published in Nature Methods!
5/22 Hugh and Ian's paper on Shapley value computation algorithms was published in Nature Machine Intelligence.
5/1 Joe Janizek's explainable AI work on cancer pharmacogenomics, in collaboration with Prof. Kamila Naxerova (MGH/Harvard), was accepted to Nature Biomedical Engineering.
4/24 Ian's work on dynamic feature selection was accepted to ICML'23.
4/19 Joe's work on XAI for unsupervised models and Alzheimer's disease was published in Genome Biology.
4/12 Ian's work on spatial transcriptomics was published in Nature Communications.
1/20 Two papers were accepted to ICLR'23. Congratulations to Chris Lin, Hugh Chen, Chanwoo Kim, and Ian Covert!
1/11 The Annual CMB Symposium will be held.
Prof. Lee shares her views on future trends in digital health with GeekWire.
Wei Qiu's paper on explainable prediction of all-cause mortality was published in Communications Medicine.
Ian Covert and Prof. Lee created and co-taught a new course on explainable AI.
Joe and Alex's work on XAI-based model auditing was highlighted in Nature.
Hugh Chen's paper on explainable AI for a series of models was published in Nature Communications.