Publications by Year: 2016

Q. Huang, R. Ge, S. M. Kakade, and M. Dahleh, Minimal Realization Problems for Hidden Markov Models. IEEE Transactions on Signal Processing: ArXiv Report, 2016.
Consider a stationary discrete random process with alphabet size d, assumed to be the output process of an unknown stationary Hidden Markov Model (HMM). Given the joint probabilities of finite-length strings of the process, we are interested in finding a finite-state generative model that describes the entire process. In particular, we focus on two classes of models: HMMs and quasi-HMMs, the latter being a strictly larger class of models containing HMMs. In the main theorem, we show that if the random process is generated by an HMM of order less than or equal to k, and its transition and observation probability matrices are in general position (that is, for almost every point of the parameter space), then both the minimal quasi-HMM realization and the minimal HMM realization can be computed efficiently from the joint probabilities of all strings of length N, for $N > 4\lceil \log_d(k) \rceil + 1$. In this paper, we also aim to compare and connect two lines of literature: the realization theory of HMMs, and recent developments in learning latent variable models with tensor decomposition techniques.
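As an illustration of the realization-theory side of this work, the sketch below builds a Hankel-style matrix of string probabilities for a toy HMM; its numerical rank recovers the minimal (quasi-)realization order. This is a minimal sketch of the classical idea the paper connects to, not the paper's algorithm; the toy model, helper names, and rank tolerance are illustrative assumptions.

```python
import numpy as np
from itertools import product

def hankel_of_strings(p, d, N):
    """Hankel-style matrix H[u, v] = p(u + v) over all prefixes u and suffixes v
    of length N // 2.  Its rank lower-bounds the minimal quasi-HMM
    (observable-operator) order -- the classical realization-theory quantity."""
    half = N // 2
    prefixes = list(product(range(d), repeat=half))
    suffixes = list(product(range(d), repeat=half))
    H = np.zeros((len(prefixes), len(suffixes)))
    for i, u in enumerate(prefixes):
        for j, v in enumerate(suffixes):
            H[i, j] = p(u + v)
    return H

# Toy 2-state, 2-symbol HMM (illustrative parameters only).
T = np.array([[0.9, 0.1], [0.2, 0.8]])   # transition matrix
O = np.array([[0.7, 0.3], [0.1, 0.9]])   # observation matrix
pi = np.array([2/3, 1/3])                 # stationary distribution of T

def p(s):
    # Forward recursion: P(x_1..x_m = s) for the toy HMM above.
    w = pi.copy()
    for sym in s[:-1]:
        w = T.T @ (w * O[:, sym])
    return float(np.sum(w * O[:, s[-1]]))

H = hankel_of_strings(p, d=2, N=4)
print(np.linalg.matrix_rank(H, tol=1e-10))   # numerical rank; should match the true order (2)
```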
M. R. Khan and J. Blumenstock, “Predictors without Borders: Behavioral Modeling of Product Adoption in Three Developing Countries,” KDD '16: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 145–154, 2016.
Billions of people around the world live without access to banks or other formal financial institutions. In the past several years, many mobile operators have launched "Mobile Money" platforms that deliver basic financial services over the mobile phone network. While many believe that these services can improve the lives of the poor, in many countries adoption of Mobile Money still remains anemic. In this paper, we develop a predictive model of Mobile Money adoption that uses billions of mobile phone communications records to understand the behavioral determinants of adoption. We describe a novel approach to feature engineering that uses a Deterministic Finite Automaton to construct thousands of behavioral metrics of phone use from a concise set of recursive rules. These features provide the foundation for a predictive model that is tested on mobile phone operators' logs from Ghana, Pakistan, and Zambia, three very different developing-country contexts. The results highlight the key correlates of Mobile Money use in each country, as well as the potential for such methods to predict and drive adoption. More generally, our analysis provides insight into the extent to which homogenized supervised learning methods can generalize across geographic contexts. We find that, without careful tuning, a model that performs very well in one country frequently does not generalize to another.
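The following sketch is a hypothetical illustration of recursive-rule feature engineering over call records, in the spirit of the DFA-based approach described above; the record schema, rule names, and aggregations are invented for the example and are not the authors' actual automaton.

```python
from dataclasses import dataclass
from typing import Callable, Iterable, List

@dataclass
class Call:
    direction: str   # "in" or "out"
    kind: str        # "call" or "sms"
    duration: float  # seconds (0 for sms)
    weekend: bool

Filter = Callable[[Call], bool]
Aggregate = Callable[[Iterable[Call]], float]

def make_feature(filters: List[Filter], aggregate: Aggregate) -> Callable[[List[Call]], float]:
    """Compose a chain of filters (the states a record must pass through)
    with a terminal aggregation; each accepted path through the rule set
    yields one behavioral metric."""
    def feature(records: List[Call]) -> float:
        selected = [r for r in records if all(f(r) for f in filters)]
        return aggregate(selected)
    return feature

# A few of the many metrics such a rule set can generate (hypothetical names):
count = lambda rs: float(len(list(rs)))
total_duration = lambda rs: sum(r.duration for r in rs)

features = {
    "n_outgoing_calls": make_feature([lambda r: r.direction == "out",
                                      lambda r: r.kind == "call"], count),
    "weekend_call_seconds": make_feature([lambda r: r.kind == "call",
                                          lambda r: r.weekend], total_duration),
}

records = [Call("out", "call", 120.0, False), Call("in", "sms", 0.0, True)]
print({name: f(records) for name, f in features.items()})
```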
P. Jain, C. Jin, S. M. Kakade, P. Netrapalli, and A. Sidford, Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm. COLT: ArXiv Report, 2016.
This work provides improved guarantees for streaming principal component analysis (PCA). Given $A_1, \ldots, A_n\in \mathbb{R}^{d\times d}$ sampled independently from distributions satisfying $\mathbb{E}[A_i] = \Sigma$ for $\Sigma \succeq \mathbf{0}$, this work provides an $O(d)$-space linear-time single-pass streaming algorithm for estimating the top eigenvector of $\Sigma$. The algorithm nearly matches (and in certain cases improves upon) the accuracy obtained by the standard batch method that computes the top eigenvector of the empirical covariance $\frac{1}{n} \sum_{i \in [n]} A_i$ as analyzed by the matrix Bernstein inequality. Moreover, to achieve constant accuracy, our algorithm improves upon the best previously known sample complexities of streaming algorithms by either a multiplicative factor of $O(d)$ or $1/\mathrm{gap}$, where $\mathrm{gap}$ is the relative distance between the top two eigenvalues of $\Sigma$. These results are achieved through a novel analysis of the classic Oja's algorithm, one of the oldest and most popular algorithms for streaming PCA. In particular, this work shows that simply picking a random initial point $w_0$ and applying the update rule $w_{i + 1} = w_i + \eta_i A_i w_i$ suffices to accurately estimate the top eigenvector, with a suitable choice of $\eta_i$. We believe our result sheds light on how to efficiently perform streaming PCA both in theory and in practice, and we hope that our analysis may serve as the basis for analyzing many variants and extensions of streaming PCA.
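A minimal sketch of the Oja update described above, assuming a simple $\eta_i \propto 1/\sqrt{i}$ step-size schedule rather than the schedule from the paper's analysis; the per-step renormalization only keeps the iterate numerically bounded and does not change its direction.

```python
import numpy as np

def oja_top_eigenvector(stream, d, eta=None, seed=0):
    """Streaming estimate of the top eigenvector of E[A_i] via Oja's update
    w_{i+1} = w_i + eta_i * A_i @ w_i.  Renormalization is for numerical
    stability only; the step-size schedule is an illustrative choice."""
    rng = np.random.default_rng(seed)
    w = rng.standard_normal(d)
    w /= np.linalg.norm(w)
    for i, A in enumerate(stream, start=1):
        eta_i = eta if eta is not None else 1.0 / np.sqrt(i)
        w = w + eta_i * (A @ w)
        w /= np.linalg.norm(w)
    return w

# Toy usage: samples A_i = x x^T with E[A_i] = Sigma.
d, n = 20, 5000
rng = np.random.default_rng(1)
U = np.linalg.qr(rng.standard_normal((d, d)))[0]
Sigma = U @ np.diag(np.linspace(1.0, 0.1, d)) @ U.T
samples = (np.outer(x, x) for x in rng.multivariate_normal(np.zeros(d), Sigma, size=n))
w = oja_top_eigenvector(samples, d)
print(abs(w @ U[:, 0]))   # alignment with the true top eigenvector (should approach 1)
```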
R. Ge, C. Jin, S. M. Kakade, P. Netrapalli, and A. Sidford, Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis. ICML: ArXiv Report, 2016.
This paper considers the problem of canonical correlation analysis (CCA) (Hotelling, 1936) and, more broadly, the generalized eigenvector problem for a pair of symmetric matrices. These are two fundamental problems in data analysis and scientific computing with numerous applications in machine learning and statistics (Shi and Malik, 2000; Hardoon et al., 2004; Witten et al., 2009). We provide simple iterative algorithms, with improved runtimes, for solving these problems that are globally linearly convergent with moderate dependencies on the condition numbers and eigenvalue gaps of the matrices involved. We obtain our results by reducing CCA to the top-k generalized eigenvector problem. We solve this problem through a general framework that simply requires black box access to an approximate linear system solver. Instantiating this framework with accelerated gradient descent, we obtain a running time of $O\!\left(\frac{zk\sqrt{\kappa}}{\rho}\log(1/\epsilon)\log(k\kappa/\rho)\right)$, where $z$ is the total number of nonzero entries, $\kappa$ is the condition number, and $\rho$ is the relative eigenvalue gap of the appropriate matrices. Our algorithm is linear in the input size and the number of components $k$ up to a $\log(k)$ factor. This is essential for handling large-scale matrices that appear in practice. To the best of our knowledge this is the first such algorithm with global linear convergence. We hope that our results prompt further research and ultimately improve the practical running time for performing these important data analysis procedures on large data sets.
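For context, the sketch below writes CCA as the top-$k$ generalized eigenvector problem the paper reduces to, and solves it with a dense direct solver from SciPy; the paper's contribution is an iterative method needing only black-box access to an approximate linear-system solver, which this sketch does not implement. The regularization constant and toy data are assumptions for the example.

```python
import numpy as np
from scipy.linalg import eigh

def cca_via_generalized_eig(X, Y, k, reg=1e-6):
    """CCA as a symmetric generalized eigenvector problem A v = rho * B v with
    A = [[0, Sxy], [Syx, 0]] and B = block-diag(Sxx, Syy), solved directly."""
    n = X.shape[0]
    Sxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / n
    dx, dy = X.shape[1], Y.shape[1]
    A = np.zeros((dx + dy, dx + dy))
    A[:dx, dx:] = Sxy
    A[dx:, :dx] = Sxy.T
    B = np.zeros_like(A)
    B[:dx, :dx] = Sxx
    B[dx:, dx:] = Syy
    vals, vecs = eigh(A, B)                      # eigenvalues in ascending order
    top = vecs[:, ::-1][:, :k]                   # top-k generalized eigenvectors
    return top[:dx], top[dx:], vals[::-1][:k]    # canonical directions and correlations

# Toy usage with two correlated views of a shared latent signal.
rng = np.random.default_rng(0)
Z = rng.standard_normal((500, 2))
X = Z @ rng.standard_normal((2, 5)) + 0.1 * rng.standard_normal((500, 5))
Y = Z @ rng.standard_normal((2, 4)) + 0.1 * rng.standard_normal((500, 4))
u, v, corr = cca_via_generalized_eig(X, Y, k=2)
print(corr)   # leading canonical correlations
```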
D. Garber, et al., Faster Eigenvector Computation via Shift-and-Invert Preconditioning. ICML: ArXiv Report, 2016.
We give faster algorithms and improved sample complexities for estimating the top eigenvector of a matrix $\Sigma$, i.e., computing a unit vector $x$ such that $x^T \Sigma x \ge (1-\epsilon)\lambda_1(\Sigma)$. Offline Eigenvector Estimation: Given an explicit $A \in \mathbb{R}^{n \times d}$ with $\Sigma = A^T A$, we show how to compute an $\epsilon$-approximate top eigenvector in time $\tilde{O}\!\left(\left[\mathrm{nnz}(A) + \frac{d \cdot \mathrm{sr}(A)}{\mathrm{gap}^2}\right] \cdot \log(1/\epsilon)\right)$ and $\tilde{O}\!\left(\left[\frac{\mathrm{nnz}(A)^{3/4}(d \cdot \mathrm{sr}(A))^{1/4}}{\sqrt{\mathrm{gap}}}\right] \cdot \log(1/\epsilon)\right)$. Here $\mathrm{nnz}(A)$ is the number of nonzeros in $A$, $\mathrm{sr}(A)$ is the stable rank, and $\mathrm{gap}$ is the relative eigengap. By separating the gap dependence from the $\mathrm{nnz}(A)$ term, our first runtime improves upon the classical power and Lanczos methods. It also improves prior work using fast subspace embeddings [AC09, CW13] and stochastic optimization [Sha15c], giving significantly better dependencies on $\mathrm{sr}(A)$ and $\epsilon$. Our second running time improves these further when $\mathrm{nnz}(A) \le \frac{d \cdot \mathrm{sr}(A)}{\mathrm{gap}^2}$. Online Eigenvector Estimation: Given a distribution $D$ with covariance matrix $\Sigma$ and a vector $x_0$ which is an $O(\mathrm{gap})$-approximate top eigenvector for $\Sigma$, we show how to refine it to an $\epsilon$-approximation using $O\!\left(\frac{\mathrm{var}(D)}{\mathrm{gap} \cdot \epsilon}\right)$ samples from $D$. Here $\mathrm{var}(D)$ is a natural notion of variance. Combining our algorithm with previous work to initialize $x_0$, we obtain improved sample complexity and runtime results under a variety of assumptions on $D$. We achieve our results using a general framework that we believe is of independent interest. We give a robust analysis of the classic method of shift-and-invert preconditioning to reduce eigenvector computation to approximately solving a sequence of linear systems. We then apply fast stochastic variance-reduced gradient (SVRG) based system solvers to achieve our claims.
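A minimal sketch of shift-and-invert preconditioning, assuming the shift is already known to sit just above $\lambda_1(\Sigma)$; the paper shows how to find such a shift and replaces the exact linear solve below with fast SVRG-based approximate solvers.

```python
import numpy as np

def shift_invert_top_eigvec(Sigma, shift, iters=20, seed=0):
    """With shift > lambda_1(Sigma), the matrix (shift*I - Sigma)^{-1} has a much
    larger relative eigengap, so a few power iterations on it converge quickly.
    Each iteration is a linear-system solve in (shift*I - Sigma); a dense direct
    solve stands in for the paper's approximate solvers."""
    d = Sigma.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    M = shift * np.eye(d) - Sigma
    for _ in range(iters):
        x = np.linalg.solve(M, x)     # power iteration on (shift*I - Sigma)^{-1}
        x /= np.linalg.norm(x)
    return x

# Toy usage: here the shift is chosen using the true lambda_1 for simplicity.
rng = np.random.default_rng(2)
Q = np.linalg.qr(rng.standard_normal((50, 50)))[0]
eigs = np.sort(rng.uniform(0.0, 1.0, 50))[::-1]
Sigma = Q @ np.diag(eigs) @ Q.T
x = shift_invert_top_eigvec(Sigma, shift=eigs[0] * 1.01)
print(x @ Sigma @ x / eigs[0])   # Rayleigh quotient ratio, should be near 1
```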
C. Jin, S. M. Kakade, and P. Netrapalli, Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent. NIPS: ArXiv Report, 2016.
Matrix completion, where we wish to recover a low-rank matrix by observing a few of its entries, is a widely studied problem with wide applications in both theory and practice. Most provable algorithms for this problem so far have been restricted to the offline setting, where they provide an estimate of the unknown matrix using all observations simultaneously. However, in many applications the online version, where we observe one entry at a time and dynamically update our estimate, is more appealing. While existing algorithms are efficient for the offline setting, they can be highly inefficient in the online setting. In this paper, we propose the first provable, efficient online algorithm for matrix completion. Our algorithm starts from an initial estimate of the matrix and then performs non-convex stochastic gradient descent (SGD). After every observation, it performs a fast update involving only one row of each of two tall matrices, giving near-linear total runtime. Our algorithm can naturally be used in the offline setting as well, where its sample complexity and runtime are competitive with state-of-the-art algorithms. Our proofs introduce a general framework for showing that SGD updates tend to stay away from saddle surfaces, which may be of broader interest for proving tight rates for other non-convex problems.
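A hedged sketch of the row-wise SGD update described above: each observed entry touches only one row of each tall factor. The random initialization, constant step size, and multi-pass toy stream are illustrative simplifications, not the paper's initialization or analysis.

```python
import numpy as np

def online_matrix_completion(observations, n, m, k, eta=0.05, seed=0):
    """Maintain tall factors U (n x k) and V (m x k) so that M ~ U @ V.T; after
    each observed entry (i, j, value), update only row i of U and row j of V."""
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((n, k)) / np.sqrt(k)
    V = rng.standard_normal((m, k)) / np.sqrt(k)
    for i, j, value in observations:
        residual = U[i] @ V[j] - value          # prediction error on this entry
        grad_u = residual * V[j]                # gradient w.r.t. row U[i]
        grad_v = residual * U[i]                # gradient w.r.t. row V[j]
        U[i] -= eta * grad_u
        V[j] -= eta * grad_v
    return U, V

# Toy usage: rank-2 ground truth, entries revealed one at a time
# (re-streamed for several passes in this small demo).
rng = np.random.default_rng(1)
n, m, k = 100, 80, 2
M = rng.standard_normal((n, k)) @ rng.standard_normal((k, m))
obs = [(i, j, M[i, j]) for i in range(n) for j in range(m) if rng.random() < 0.3]
rng.shuffle(obs)
U, V = online_matrix_completion(obs * 20, n, m, k)
err = np.linalg.norm(U @ V.T - M) / np.linalg.norm(M)
print(f"relative reconstruction error: {err:.3f}")
```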