Pairwise Difference Learning
- Karim Belaid, LMU Munich, Porsche AG
Pairwise difference learning (PDL) has recently been introduced by Wetzel et al. as a new meta-learning technique for regression. Instead of learning a mapping from instances to outcomes in the standard way, the key idea is to learn a function that takes two instances as input and predicts the difference between their respective outcomes. Given such a function, a prediction for a query instance is derived from every training example (each serving as an anchor) and the results are averaged. This presentation focuses on the classification version of PDL, proposing a meta-learning technique that induces a classifier by solving a suitably defined (binary) classification problem on a paired version of the original training data. It will also discuss an enhancement to PDL through anchor weighting, which adjusts the influence of each anchor point based on the reliability and precision of its predictions, thereby improving the robustness and accuracy of the method. We analyze the performance of the PDL classifier in a large-scale empirical study, finding that it outperforms state-of-the-art methods in terms of prediction performance. Finally, we provide an easy-to-use and publicly available implementation of PDL in a Python package.
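To make the core idea concrete, the following is a minimal sketch of the regression variant described above, using a scikit-learn-style base learner. The class name and structure are illustrative assumptions, not the API of the authors' package: a base model is trained on pairs of instances with the outcome difference as target, and a query is predicted by pairing it with every training anchor, predicting the difference, adding the anchor's outcome, and averaging.

```python
# Illustrative sketch of pairwise difference learning (PDL) for regression.
# The class name and interface are hypothetical, not the published package's API.
import numpy as np
from sklearn.ensemble import RandomForestRegressor


class PairwiseDifferenceRegressor:
    def __init__(self, base_learner=None):
        # Any regressor with fit/predict can serve as the base learner.
        self.base_learner = base_learner or RandomForestRegressor()

    def fit(self, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.X_anchor_, self.y_anchor_ = X, y
        n = len(X)
        # Paired training set: concatenated features of both instances,
        # target = difference of their outcomes (first minus second).
        i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        pairs = np.hstack([X[i.ravel()], X[j.ravel()]])
        diffs = y[i.ravel()] - y[j.ravel()]
        self.base_learner.fit(pairs, diffs)
        return self

    def predict(self, X_query):
        X_query = np.asarray(X_query)
        preds = []
        for x in X_query:
            # Pair the query with every anchor, predict the outcome
            # difference, add the anchor's known outcome, and average.
            pairs = np.hstack(
                [np.tile(x, (len(self.X_anchor_), 1)), self.X_anchor_]
            )
            preds.append(np.mean(self.base_learner.predict(pairs) + self.y_anchor_))
        return np.array(preds)
```

The classification version discussed in the talk follows the same pairing scheme, but the paired problem becomes a binary one (e.g., "do these two instances share the same class?"), and anchor weighting replaces the plain average with a weighted aggregation over anchors.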