The paper introduces Causal Autoencoding and Treatment Conditioning (CAETC), a new method for counterfactual estimation over time that addresses time-dependent confounding bias. CAETC uses adversarial representation learning within an autoencoding architecture to learn a partially invertible, treatment-invariant representation. The method is model-agnostic, and experiments on synthetic, semi-synthetic, and real-world data show that it significantly improves counterfactual estimation over existing methods.
Achieve more accurate counterfactual estimation over time by learning treatment-invariant representations with a model-agnostic autoencoding framework.
Counterfactual estimation over time is important in various applications, such as personalized medicine. However, time-dependent confounding bias in observational data remains a significant obstacle to accurate and efficient estimation. We introduce causal autoencoding and treatment conditioning (CAETC), a novel method for this problem. Built on adversarial representation learning, our method leverages an autoencoding architecture to learn a partially invertible and treatment-invariant representation, where the outcome prediction task is cast as applying a treatment-specific conditioning on the representation. Our design is independent of the underlying sequence model and can be applied to existing architectures such as long short-term memory (LSTM) networks or temporal convolutional networks (TCNs). We conduct extensive experiments on synthetic, semi-synthetic, and real-world data to demonstrate that CAETC yields significant improvements in counterfactual estimation over existing methods.
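To make the abstract's architecture concrete, below is a minimal NumPy sketch of the loss structure such a method might use: an encoder producing a representation, a decoder enforcing partial invertibility, an adversarial treatment classifier whose loss the encoder maximizes (to encourage treatment invariance), and a treatment-specific outcome head (the "conditioning"). All dimensions, the linear stand-ins for the sequence model (an LSTM or TCN in the paper), and the weight `lambda_adv` are hypothetical illustration choices, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: covariate summary x, representation r, binary treatment
d_x, d_r, n_treat, batch = 8, 4, 2, 16

# Toy observational batch: covariate history summary X, treatment A, outcome Y
X = rng.normal(size=(batch, d_x))
A = rng.integers(0, n_treat, size=batch)
Y = rng.normal(size=batch)

# Linear stand-ins for the sequence encoder/decoder (a real system would use an LSTM/TCN)
W_enc = rng.normal(size=(d_x, d_r)) * 0.1
W_dec = rng.normal(size=(d_r, d_x)) * 0.1
W_adv = rng.normal(size=(d_r, n_treat)) * 0.1   # adversary: predict treatment from r
W_out = rng.normal(size=(n_treat, d_r)) * 0.1   # one outcome head per treatment (conditioning)

R = X @ W_enc                                   # representation
X_hat = R @ W_dec                               # reconstruction (partial invertibility)
logits = R @ W_adv                              # adversarial treatment prediction
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
Y_hat = np.einsum('bd,bd->b', R, W_out[A])      # treatment-specific conditioning

loss_rec = np.mean((X - X_hat) ** 2)                                  # autoencoding term
loss_adv = -np.mean(np.log(probs[np.arange(batch), A] + 1e-9))        # adversary's loss
loss_out = np.mean((Y - Y_hat) ** 2)                                  # outcome prediction

# The encoder minimizes outcome + reconstruction losses while maximizing the
# adversary's loss (e.g., via a gradient-reversal layer), pushing R toward
# treatment invariance; lambda_adv is a hypothetical trade-off weight.
lambda_adv = 1.0
loss_encoder = loss_out + loss_rec - lambda_adv * loss_adv
```

In an actual training loop the adversary's parameters would be updated to minimize `loss_adv` while the encoder's updates reverse that gradient, which is what lets the representation discard treatment-predictive information without losing what the decoder needs to reconstruct the input.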