ChemInformatics Model Explorer (CIME): Exploratory Analysis of Chemical Model Explanations

CIME Teaser


The introduction of machine learning to small molecule research – an inherently multidisciplinary field in which chemists and data scientists combine their expertise and collaborate – has been vital to making screening processes more efficient. In recent years, numerous models that predict pharmacokinetic properties or bioactivity have been published, and these are used on a daily basis by chemists to make decisions and prioritize ideas. The emerging field of explainable artificial intelligence is opening up new possibilities for understanding the reasoning that underlies a model. In small molecule research, this means relating the contributions of a compound's substructures to its predicted properties, which in turn allows the areas of the compound that have the greatest influence on the outcome to be identified. However, no interactive visualization tool exists that facilitates such interdisciplinary collaboration toward the interpretability of machine learning models for small molecules. To fill this gap, we present CIME (ChemInformatics Model Explorer), an interactive web-based system that allows users to inspect chemical data sets, visualize model explanations, compare interpretability techniques, and explore subgroups of compounds. The tool is model-agnostic and can be run on a server or a workstation.


Christina Humer, Henry Heberle, Floriane Montanari, Thomas Wolf, Florian Huber, Ryan Henderson, Julian Heinrich, Marc Streit
ChemInformatics Model Explorer (CIME): Exploratory Analysis of Chemical Model Explanations
ChemRxiv, doi:10.26434/chemrxiv-2021-crpd0, 2021.


@article{humer2021cime,
    title = {ChemInformatics Model Explorer (CIME): Exploratory Analysis of Chemical Model Explanations},
    author = {Christina Humer and Henry Heberle and Floriane Montanari and Thomas Wolf and Florian Huber and Ryan Henderson and Julian Heinrich and Marc Streit},
    journal = {ChemRxiv},
    doi = {10.26434/chemrxiv-2021-crpd0},
    url = {},
    year = {2021}
}


This work was supported by the JKU Visual Data Science Lab and Bayer AG (HRB 48248). We thank Michael Koch for participating in the initiation of the project and for follow-up discussions; Michael Pühringer for reading the final version of the article; and Moritz Heckmann for technical support.


This work was funded by Bayer AG. HH, RH, FM, and JH acknowledge funding from the Bayer AG Life Science Collaboration Project (“Explainable AI”).