Lee Lab of Explainable AI for Biological and Medical Sciences (AIMS)

The AIMS lab, led by Prof. Su-In Lee, aims to conceptually and fundamentally advance how AI/ML can be integrated with the biomedical sciences by addressing novel, forward-looking questions, enabled by advancing foundational AI/ML or by applying advanced AI/ML methods. For example, when the primary focus of AI applications in biology and medicine was on accurately predicting a patient's outcome or an individual's phenotype (e.g., predicting the response to a certain chemotherapy from the patient's gene expression profile), we uniquely focused on why a particular prediction was made, which can help medical professionals make diagnoses or decide on appropriate clinical actions, or point to the molecular mechanisms underlying an individual's phenotype.

AI models, such as deep neural networks, are transforming the biomedical sciences; however, their black-box nature is a well-known bottleneck impeding the widespread adoption of AI in biomedicine and beyond. On their own, these models do not provide the mechanistic explanations or causal relationships needed for biological understanding, therapeutics, or clinical decisions.

The AIMS lab’s recent research focuses on a broad spectrum of problems, including developing explainable AI (a.k.a. interpretable ML) techniques, identifying the cause and treatment of challenging diseases such as cancer and Alzheimer’s disease, and developing and auditing clinical AI models. See our latest publications.
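As a brief illustration of the kind of per-prediction explanation described above, the sketch below uses the open-source shap package's TreeExplainer to attribute a tree-based model's predictions to individual input features. The data, model choice, and feature setup here are purely illustrative assumptions, not code or data from the lab.

    # Minimal, hypothetical sketch of per-prediction explanation with SHAP.
    # The data and model below are synthetic placeholders.
    import numpy as np
    import shap
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))   # e.g., 5 clinical features for 200 patients (synthetic)
    y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.1, size=200)  # synthetic outcome

    model = GradientBoostingRegressor().fit(X, y)

    # TreeExplainer computes SHAP values for tree ensembles, attributing each
    # prediction to the individual input features.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Feature attributions for the first sample: an answer to *why* the model
    # made that particular prediction, not just *what* it predicted.
    print(shap_values[0])

Unlike a raw prediction score, such attributions can be inspected by clinicians or linked back to the underlying molecular or clinical features.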

This year, we created a new course on explainable AI for our professional master's program.

News

  • 3/23/2021: Ethan Weinberger receives the NSF Graduate Research Fellowship (GRF).

  • 6/16/2020: Gabe Erion receives the F30 fellowship from NIH.

  • 3/27/2020: Will Chen receives the Mary Gates Research Scholarship.

  • 1/17/2020: Scott's TreeExplainer is published as a cover article of the January issue of Nature Machine Intelligence.

    • Allen School News - Seeing the forest for the trees: UW team advances explainable AI for popular machine learning models used to predict human disease and mortality risks.

  • 11/20/2019: Gabe won the Madrona Prize (1st place) at the 2019 Allen School Annual Research Day.

    • "CoAI: Cost-Aware Artificial Intelligence for Health Care"

    • Safiye and Scott won this prize in 2018 and 2017, respectively.

  • 11/19/2019: Scott's SHAP paper (NeurIPS, Dec 2017) has been cited 500 times, less than two years after publication.

  • 2/21/2019: Nao's AIControl paper was accepted for publication in Nucleic Acids Research (IF: 11.56). See Allen School News.

  • 12/31/2018: Scott's SHAP paper (NeurIPS oral presentation) has been cited 100 times as of today, about one year after it was published.

  • 11/1/2018: Safiye's EMBARKER project on identifying therapeutic targets for Alzheimer's disease won the Madrona Prize (1st place) at the 2018 Allen School Annual Research Day.

    • GeekWire - From fighting Alzheimer’s to AR captions, UW computer science students show cutting-edge innovations

    • BusinessWire - Madrona Awards 2018 Madrona Prize to UW Project That Applies Machine Learning to Fighting Alzheimer’s Disease

    • Last year, Scott won this prize for his model interpretation work, SHAP (NIPS oral presentation), and for his Nature BME paper featured on the cover (see below).

  • 10/10/2018: Scott's Prescience paper is published as a cover article of the October issue of Nature Biomedical Engineering.

    • Nature BME Editorial - Towards trustable machine learning

    • UW News - Prescience: Helping doctors predict the future

    • GeekWire - Univ. of Washington researchers unveil Prescience, an AI system that predicts problems during surgery

    • Allen School News - “Prescience” interpretable machine-learning system for predicting complications during surgery featured in Nature Biomedical Engineering