23 Jul 2022

ICML 2022

Authors: Adam Foster, Arpi Vezer, Craig A. Glastonbury, Paidi Creed, Sam Abujudeh, Aaron Sim

Abstract

Learning meaningful representations of data that can address challenges such as batch effect correction and counterfactual inference is a central problem in many domains including computational biology. Adopting a Conditional VAE framework, we show that marginal independence between the representation and a condition variable plays a key role in both of these challenges. We propose the Contrastive Mixture of Posteriors (CoMP) method that uses a novel misalignment penalty defined in terms of mixtures of the variational posteriors to enforce this independence in latent space. We show that CoMP has attractive theoretical properties compared to previous approaches, and we prove counterfactual identifiability of CoMP under additional assumptions. We demonstrate state-of-the-art performance on a set of challenging tasks including aligning human tumour samples with cancer cell lines, predicting transcriptome-level perturbation responses, and batch correction on single-cell RNA sequencing data. We also find parallels to fair representation learning and demonstrate that CoMP is competitive on a common task in the field.
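The penalty the abstract describes can be sketched as follows: for each latent sample, compare the mixture of variational posteriors from the same condition against the mixture over the whole batch, and penalise the gap. This is a minimal NumPy illustration assuming diagonal-Gaussian posteriors; the function names and this simple batch-level estimator are illustrative, not the authors' exact implementation.

```python
import numpy as np

def gaussian_log_density(z, mu, log_var):
    """Log density of diagonal Gaussians N(mu, exp(log_var)) at point z.

    z: (D,); mu, log_var: (N, D). Returns (N,) log densities,
    one per posterior in the batch."""
    return -0.5 * np.sum(
        log_var + np.log(2 * np.pi) + (z - mu) ** 2 / np.exp(log_var),
        axis=1,
    )

def comp_penalty(z, mu, log_var, cond):
    """Illustrative mixture-of-posteriors misalignment penalty.

    For each sample z_i ~ q(z | x_i), contrast the mixture of
    posteriors from the *same* condition with the mixture over the
    whole batch:

        penalty = mean_i [ log q_mix(z_i | same cond)
                           - log q_mix(z_i | all) ]

    Driving this toward zero encourages the latent representation to
    be marginally independent of the condition variable."""
    n = len(cond)
    total = 0.0
    for i in range(n):
        log_q = gaussian_log_density(z[i], mu, log_var)  # log q(z_i | x_j)
        same = cond == cond[i]
        # log-mean-exp over same-condition posteriors vs. all posteriors
        log_mix_same = np.logaddexp.reduce(log_q[same]) - np.log(same.sum())
        log_mix_all = np.logaddexp.reduce(log_q) - np.log(n)
        total += log_mix_same - log_mix_all
    return total / n
```

Note that when every sample in the batch shares one condition, the two mixtures coincide and the penalty is exactly zero, which matches the intuition that there is nothing to disentangle.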

