This paper investigates the problem of generating rigorous, logically-sound explanations for tree ensemble predictions, focusing on random forests and boosted trees. The authors formalize the notion of rigorous explanations and develop methods to compute them, ensuring the explanations accurately reflect the underlying predictor's behavior. The work provides a foundation for building trust in tree ensembles by offering verifiable justifications for their predictions.
Trust in tree ensembles hinges on rigorous explanations, and this paper delivers a method to generate them.
Tree ensembles (TEs) find a multitude of practical applications and represent one of the most general and accurate classes of machine learning models. Although TEs are often fairly concise in representation, their operation remains inscrutable to human decision makers. One solution for building trust in the operation of TEs is to automatically compute explanations for the predictions made. Evidently, explanations can only achieve trust if they are rigorous, that is, if they truly reflect properties of the underlying predictor they explain. This paper investigates the computation of rigorously-defined, logically-sound explanations for the concrete case of two well-known families of tree ensembles, namely random forests and boosted trees.
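The notion of a rigorous explanation can be made concrete with a toy sketch: an explanation (a subset of fixed feature values) is rigorous only if every completion of the remaining features yields the same prediction. The ensemble below is a hypothetical hand-built majority vote of three decision stumps, and the brute-force check is for illustration only; the paper's actual methods rely on logical encodings and automated reasoning rather than enumeration.

```python
from itertools import product

# Hypothetical toy ensemble: three decision stumps over binary
# features x[0], x[1], x[2], combined by majority vote.
def stump0(x): return 1 if x[0] == 1 else 0
def stump1(x): return 1 if x[1] == 1 else 0
def stump2(x): return 1 if x[0] == 1 or x[2] == 1 else 0

def predict(x):
    # Majority vote over the three stumps.
    votes = stump0(x) + stump1(x) + stump2(x)
    return 1 if votes >= 2 else 0

def is_rigorous_explanation(instance, fixed):
    """Return True iff fixing the features indexed by `fixed` guarantees
    the prediction made for `instance`, for every assignment to the
    free features (checked by brute force over the binary domain)."""
    target = predict(instance)
    free = [i for i in range(len(instance)) if i not in fixed]
    for values in product([0, 1], repeat=len(free)):
        candidate = list(instance)
        for i, v in zip(free, values):
            candidate[i] = v
        if predict(candidate) != target:
            return False  # a counterexample exists: not rigorous
    return True

instance = [1, 1, 0]  # predicted class 1
print(is_rigorous_explanation(instance, {0, 1}))  # True: x0, x1 suffice
print(is_rigorous_explanation(instance, {2}))     # False: x2 alone does not
```

Note that the second check fails because setting the free features x0 and x1 to 0 flips the majority vote, which is exactly the kind of counterexample a rigorous explanation must rule out.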