Optimization Methods for Scale Invariant Problems in Machine Learning

While optimization has received much attention in the machine learning community, most work considers unconstrained supervised learning models such as neural networks and support vector machines. In this dissertation, we introduce a new class of optimization problems, called scale invariant problems, that includes interesting unsupervised learning models such as PCA, ICA, GMM, and KL-NMF. We develop scalable optimization algorithms for scale invariant problems and provide convergence guarantees. The first half of the thesis develops deterministic optimization algorithms: we present an iterative optimization algorithm for L1-norm kernel PCA and generalize it to solve general scale invariant problems. In the second half, we study stochastic optimization methods: we present two stochastic PCA algorithms and develop a stochastic generalization of power iteration for scale invariant problems with finite-sum objective functions. Numerical experiments on various scale invariant problems show that the proposed algorithms not only scale better than state-of-the-art algorithms but also produce robust solutions of excellent quality.
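
As background for the power-iteration-based methods the abstract refers to, the following is a minimal sketch of classical power iteration for computing the top principal component of a data matrix; it is not the dissertation's generalized algorithm, and the names (top_principal_component, X, num_iters) are illustrative assumptions. The per-step renormalization is what makes the iterates insensitive to the scale of the objective.

    import numpy as np

    def top_principal_component(X, num_iters=100, seed=0):
        """Classical power iteration on the sample covariance of X (n samples x d features)."""
        rng = np.random.default_rng(seed)
        Xc = X - X.mean(axis=0)          # center the data
        C = Xc.T @ Xc / Xc.shape[0]      # sample covariance matrix (d x d)
        v = rng.standard_normal(C.shape[0])
        v /= np.linalg.norm(v)           # random unit-norm starting vector
        for _ in range(num_iters):
            v = C @ v                    # multiply by the covariance matrix
            v /= np.linalg.norm(v)       # renormalize to the unit sphere
        return v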
