Faster Eigenvector Computation via Shift-and-Invert Preconditioning

Abstract:

We give faster algorithms and improved sample complexities for estimating the top eigenvector of a matrix Σ -- i.e. computing a unit vector x such that xᵀΣx ≥ (1−ϵ)λ₁(Σ):

Offline Eigenvector Estimation: Given an explicit A ∈ ℝ^(n×d) with Σ = AᵀA, we show how to compute an ϵ-approximate top eigenvector in time Õ([nnz(A) + d·sr(A)/gap²]·log(1/ϵ)) and Õ([nnz(A)^(3/4)·(d·sr(A))^(1/4)/√gap]·log(1/ϵ)). Here nnz(A) is the number of nonzeros in A, sr(A) is the stable rank, and gap is the relative eigengap. By separating the gap dependence from the nnz(A) term, our first runtime improves upon the classical power and Lanczos methods. It also improves prior work using fast subspace embeddings [AC09, CW13] and stochastic optimization [Sha15c], giving significantly better dependencies on sr(A) and ϵ. Our second running time improves these further when nnz(A) ≤ d·sr(A)/gap².

Online Eigenvector Estimation: Given a distribution D with covariance matrix Σ and a vector x₀ which is an O(gap)-approximate top eigenvector for Σ, we show how to refine to an ϵ-approximation using O(var(D)/(gap·ϵ)) samples from D. Here var(D) is a natural notion of variance. Combining our algorithm with previous work to initialize x₀, we obtain improved sample complexity and runtime results under a variety of assumptions on D.

We achieve our results using a general framework that we believe is of independent interest. We give a robust analysis of the classic method of shift-and-invert preconditioning to reduce eigenvector computation to approximately solving a sequence of linear systems. We then apply fast stochastic variance reduced gradient (SVRG) based system solvers to achieve our claims.
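To make the framework in the final paragraph concrete, here is a minimal sketch (not the paper's implementation) of shift-and-invert preconditioning in Python/NumPy: power iteration is run on (λ̂I − Σ)⁻¹, where each inverse application is an approximate linear-system solve. The function name shift_invert_power is hypothetical, a plain conjugate-gradient solver stands in for the SVRG-based solvers the paper analyzes, and the shift estimate λ̂ (assumed slightly larger than λ₁(Σ)) is taken as given rather than estimated as in the paper.

```python
# Minimal sketch of shift-and-invert preconditioning for top-eigenvector
# estimation (illustration only; not the paper's implementation).
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def shift_invert_power(A, shift, eps=1e-6, max_iters=100, seed=0):
    """Estimate the top eigenvector of Sigma = A^T A.

    Runs power iteration on (shift*I - Sigma)^{-1}. Each inverse
    application is only an *approximate* linear-system solve; plain
    conjugate gradient stands in here for the SVRG-based solvers the
    paper analyzes. `shift` is assumed slightly larger than
    lambda_1(Sigma), so the shifted system is positive definite.
    """
    n, d = A.shape
    # Matrix-vector product with shift*I - A^T A, never forming Sigma.
    M = LinearOperator((d, d), matvec=lambda v: shift * v - A.T @ (A @ v))

    rng = np.random.default_rng(seed)
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    for _ in range(max_iters):
        # One preconditioned power step: solve (shift*I - Sigma) y = x.
        y, _ = cg(M, x, maxiter=200)
        y /= np.linalg.norm(y)
        done = 1.0 - abs(x @ y) < eps  # direction has stabilized
        x = y
        if done:
            break
    return x
```

The point of the shift is that when λ̂ exceeds λ₁ by roughly a gap-sized amount, (λ̂I − Σ)⁻¹ has a constant relative eigengap, so only O(log(1/ϵ)) outer power steps are needed; the paper's contribution is showing that each inner system solve can be performed cheaply and only approximately while the overall iteration remains robust.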
