TY - JOUR
T1 - Hybrid generative-discriminative training of Gaussian mixture models
AU - Roth, Wolfgang
AU - Peharz, Robert
AU - Tschiatschek, Sebastian
AU - Pernkopf, Franz
PY - 2018/9/1
Y1 - 2018/9/1
N2 - Recent work has shown substantial performance improvements of discriminative probabilistic models over their generative counterparts. However, since discriminative models do not capture the input distribution of the data, their use in missing data scenarios is limited. To utilize the advantages of both paradigms, we present an approach to train Gaussian mixture models (GMMs) in a hybrid generative-discriminative way. This is accomplished by optimizing an objective that trades off between a generative likelihood term and either a discriminative conditional likelihood term or a large margin term using stochastic optimization. Our model substantially improves the performance of classical maximum likelihood optimized GMMs while at the same time allowing for both a consistent treatment of missing features by marginalization, and the use of additional unlabeled data in a semi-supervised setting. For the covariance matrices, we employ a diagonal plus low-rank matrix structure to model important correlations while keeping the number of parameters small. We show that a non-diagonal matrix structure is crucial to achieve good performance and that the proposed structure can be utilized to considerably reduce classification time in case of missing features. The capabilities of our model are demonstrated in extensive experiments on real-world data.
KW - Gaussian mixture model
KW - Hybrid generative-discriminative learning
KW - Large margin learning
KW - Missing features
KW - Semi-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=85049300938&partnerID=8YFLogxK
U2 - 10.1016/j.patrec.2018.06.014
DO - 10.1016/j.patrec.2018.06.014
M3 - Article
AN - SCOPUS:85049300938
SN - 0167-8655
VL - 112
SP - 131
EP - 137
JO - Pattern Recognition Letters
JF - Pattern Recognition Letters
ER -