Interpretable Artificial Intelligence for Medicine and Science (AIMS)
Paul G. Allen School of Computer Science & Engineering
University of Washington
Our goal is to fundamentally advance how AI is integrated with the biomedical sciences by addressing novel, forward-looking questions, both by harnessing what AI makes possible and by advancing foundational AI itself. For example, our recent research has developed methods to explain AI-driven predictions and inferences across a broad spectrum of problems, from basic biological sciences to disease biology to bedside applications. AI models such as deep neural networks are transforming the biomedical sciences; however, their black-box nature is a well-known bottleneck impeding the widespread adoption of AI in biomedicine. For example, these models do not answer key questions in biology and medicine, such as which relationships are causal or what a hidden variable means biologically.
When the primary focus of AI applications in biology and medicine was accurately predicting a patient's outcome or an individual's phenotype (e.g., predicting the response to a certain chemotherapy based on the patient's gene expression profile), we uniquely focused on why a certain prediction was made, which can help medical professionals make diagnoses or decide on appropriate clinical actions, or point to the molecular mechanisms underlying an individual's phenotype. This line of work has led to highly cited publications: (1) the cover article of Nature Biomedical Engineering, Oct 2018 (cited 174 times over 2 years), (2) the cover article of Nature Machine Intelligence, Jan 2020 (cited 196 times over 1 year; two preprint versions cited 289 and 51 times), and (3) an article in Nature Communications, Jan 2018 (cited 75 times over 3 years; recommended by F1000). Our foundational AI research on interpreting AI predictions led to (4) a paper on the SHAP framework, selected for oral presentation (top 1%) at NeurIPS (Neural Information Processing Systems) in December 2017 (cited 2,048 times over 3 years); SHAP is broadly used by scientists in medicine, biology, finance, computer science, and beyond (ODSC Open Data Science Award '19).
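For readers new to SHAP, the sketch below shows the typical pattern for using the open-source shap package to attribute a tree model's predictions to its input features; the random-forest model and synthetic data are illustrative placeholders only, not from our studies.

import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

# Toy data standing in for, e.g., a gene expression matrix (samples x features).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=200)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # shape: (200 samples, 5 features)

# Local accuracy: per-feature attributions plus the expected value
# recover each model prediction.
print(shap_values[0].sum() + explainer.expected_value)
print(model.predict(X[:1])[0])

Each row of shap_values distributes one prediction among the input features, which is what enables the kind of feature-level explanations described above.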
Highlights of our research include (i) finding therapeutic targets for Alzheimer’s disease (press articles), (ii) treating cancer based on a patient’s own molecular profile (Nature Communications 2018; selected by F1000), (iii) core ML work on explainable AI (NeurIPS Dec 2017, selected for a full oral presentation (top 1%) and cited 233 times as of June 2019; and Nature Machine Intelligence, cover article), (iv) preventing complications during surgery (Nature BME 2018, cover article), (v) predicting kidney diseases (Nature Machine Intelligence, cover article), (vi) enabling pre-hospital predictions for trauma patients, and improving our understanding of (vii) pan-cancer biology, (viii) the human genome, and (ix) gene regulatory networks. The Lee Lab collaborates with biomedical researchers at the UW School of Medicine, the Allen Institutes, Harvard Medical School, and elsewhere.
Computational biology & medicine (bioinformatics) - precision medicine, network biology, & genetics
Machine learning - explainable AI, interpretable ML, feature selection, & probabilistic graphical models
3/23/2021: Ethan Weinberger receives the NSF Graduate Research Fellowship (GRF).
6/16/2020: Gabe Erion receives an F30 fellowship from the NIH.
3/27/2020: Will Chen receives the Mary Gates Research Scholarship.
1/17/2020: Scott's TreeExplainer paper is published as the cover article of the January issue of Nature Machine Intelligence.
Allen School News - Seeing the forest for the trees: UW team advances explainable AI for popular machine learning models used to predict human disease and mortality risks.
11/20/2019: Gabe won the Madrona Prize (1st place) at the 2019 Allen School Annual Research Day.
"CoAI: Cost-Aware Artificial Intelligence for Health Care"
Safiye and Scott won this prize in 2018 and 2017, respectively.
11/19/2019: Scott's SHAP paper (NeurIPS, Dec 2017) has been cited 500 times, less than 2 years after publication.
2/21/2019: Nao's AIControl paper is accepted for publication in Nucleic Acids Research (IF: 11.56). See Allen School News.
12/31/2018: Scott's SHAP paper (NeurIPS oral presentation) has been cited 100 times as of today, about one year after publication.
11/1/2018: Safiye's EMBARKER project on identifying therapeutic targets for Alzheimer's disease won the Madrona Prize (1st place) at the 2018 Allen School Annual Research Day.
GeekWire - From fighting Alzheimer’s to AR captions, UW computer science students show cutting-edge innovations
BusinessWire - Madrona Awards 2018 Madrona Prize to UW Project That Applies Machine Learning to Fighting Alzheimer’s Disease
Last year, Scott won this prize for his model interpretation work, SHAP (NIPS oral presentation), and for his Nature BME paper featured on the cover (see below).
10/10/2018: Scott's Prescience paper is published as the cover article of the October issue of Nature Biomedical Engineering.
Nature BME Editorial - Towards trustable machine learning
UW News - Prescience: Helping doctors predict the future
GeekWire - Univ. of Washington researchers unveil Prescience, an AI system that predicts problems during surgery
Allen School News - “Prescience” interpretable machine-learning system for predicting complications during surgery featured in Nature Biomedical Engineering
5/3/2018: Safiye Celik's and Su-In Lee's research is featured in GeekWire.
4/3/2018: Hugh Chen receives an NSF Graduate Research Fellowship (GRFP).
3/27/2018: Safiye Celik's MERGE paper (Nature Communications 2018) is recommended in F1000Prime as being of special significance in its field.
3/21/2018: Safiye Celik's research is featured in I Am CSE.
Su-In's lecture was selected as the best talk at CGWI 2018 (Computational Genomics Winter Institute): Interpretable Machine Learning for Precision Medicine.
9/4/2017: Scott Lundberg's SHAP paper is accepted to Neural Information Processing Systems (NIPS) 2017 for Full Oral Presentation.
8/7/2017: Gabriel Erion won the Best Poster Award: "Prediction and Prevention of Perioperative Adverse Events with Machine Learning Models", University of Washington MSTP (MD/PhD program) retreat, 2017.
2/7/2017: Scott Lundberg's ChromNet paper (Genome Biology 2016) is recommended in F1000Prime as being of special significance in its field.
11/20/2016: Scott Lundberg receives the Best Paper Award at the NIPS workshop "Interpretable Machine Learning for Complex Systems".
8/12/2016: Javad Hosseini's GRAB paper is accepted to Neural Information Processing Systems (NIPS) 2016.
7/15/2016: Nao Hiranuma's CloudControl paper is accepted to the ACM Conference on Bioinformatics, Computational Biology, and Health Informatics (ACM-BCB) 2016.
7/6/2016: Safiye Celik's INSPIRE paper is featured in Casey Greene's commentary "The future is unsupervised," Science Translational Medicine, July 2016.
6/10/2016: Safiye Celik's INSPIRE work is published in Genome Medicine, June 2016.
5/4/2016: Maxim Grechkin's DISCERN work is published in PLOS Computational Biology, May 2016.
2/8/2016: Javad Hosseini ranked #1 in the DiMSUM competition. See Hosseini, Smith, and Lee, NAACL Workshop SemEval 2016, Task 10.
Prof. Su-In Lee's lab seeks to develop interpretable machine learning techniques to learn from big data: (1) how the human genome and proteins work, (2) how to improve healthcare, and (3) how to treat challenging diseases such as cancer and Alzheimer's disease. Her research page lists her projects, including treating cancer based on a patient's own expression profile, finding therapeutic targets for Alzheimer's disease, predicting kidney disease, preventing complications during surgery, enabling pre-hospital predictions for trauma patients, analyzing medical images, and improving our understanding of pan-cancer biology and genome biology.