Learning Linear Bayesian Networks with Latent Variables

Citation:

A. Anandkumar, D. Hsu, A. Javanmard, and S. M. Kakade. Learning Linear Bayesian Networks with Latent Variables. Algorithmica, special issue on Machine Learning, 2014. Preliminary version in ICML and as an arXiv report, 2013.

Abstract:

Unsupervised estimation of latent variable models is a fundamental problem central to numerous applications of machine learning and statistics. This work presents a principled approach for estimating broad classes of such models, including probabilistic topic models and latent linear Bayesian networks, using only second-order observed moments. The sufficient conditions for identifiability of these models are primarily based on weak expansion constraints on the topic-word matrix, for topic models, and on the directed acyclic graph, for Bayesian networks. Because no assumptions are made on the distribution among the latent variables, the approach can handle arbitrary correlations among the topics or latent factors. In addition, a tractable learning method via ℓ1 optimization is proposed and studied in numerical experiments.
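The abstract refers to a tractable learning method based on ℓ1 optimization. As a rough illustration only, the sketch below shows a generic basis-pursuit (ℓ1-minimization) program in Python with cvxpy; it is not the paper's estimator, and the toy quantities B, y, and x_true are hypothetical data chosen solely to show the style of convex program involved.

```python
# Illustrative sketch only: a generic basis-pursuit l1 program, not the
# paper's algorithm. Assumes numpy and cvxpy are available.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

# Hypothetical toy setup: y is a sparse combination of the columns of B.
n_obs, n_latent = 30, 60
B = rng.standard_normal((n_obs, n_latent))
x_true = np.zeros(n_latent)
x_true[rng.choice(n_latent, size=5, replace=False)] = rng.standard_normal(5)
y = B @ x_true

# l1 minimization: among all x consistent with the observations,
# pick the one with smallest l1 norm (a convex surrogate for sparsity).
x = cp.Variable(n_latent)
prob = cp.Problem(cp.Minimize(cp.norm1(x)), [B @ x == y])
prob.solve()

print("recovery error:", np.linalg.norm(x.value - x_true))
```

The example recovers a sparse vector from underdetermined linear measurements; the paper's setting instead applies ℓ1-based recovery under expansion conditions on the latent structure, so the constraints differ, but the convex-programming flavor is the same.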

