Fast Implementation of ℓ1 Regularized Learning Algorithms Using Gradient Descent Methods,

Y. Cai, Y. Sun, Y. Cheng, J. Li, and S. Goodison,

in Proc. of the 10th SIAM International Conference on Data Mining (SDM), pp. 862-871, 2010.

(oral presentation; acceptance rate: 82/351 ≈ 23%)

 

With the advent of high-throughput technologies, ℓ1 regularized learning algorithms have recently attracted much attention. Dozens of algorithms have been proposed for their fast implementation, using various advanced optimization techniques. In this paper, we demonstrate that ℓ1 regularized learning problems can be solved easily using gradient-descent techniques. The basic idea is to transform a convex optimization problem with a non-differentiable objective function into an unconstrained non-convex problem on which gradient descent is guaranteed to reach a globally optimal solution. We present a detailed implementation of the algorithm, using ℓ1 regularized logistic regression as a particular application. We conduct large-scale experiments comparing the new approach with other state-of-the-art algorithms on eight medium- and large-scale problems, and demonstrate that our algorithm, though simple, performs significantly better than the other advanced algorithms in terms of computational efficiency and memory usage. In one of our simulation studies, the newly proposed algorithm solves a problem with more than two million features in 82 seconds.
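To make the transformation concrete, the sketch below applies one common smooth reparameterization to ℓ1 regularized logistic regression: writing w = u.^2 - v.^2 turns the non-differentiable penalty λ‖w‖₁ into the differentiable λΣ(u_j² + v_j²), after which plain gradient descent applies. This is a loose illustration of the idea stated in the abstract, not necessarily the paper's exact formulation (see the manuscript and Matlab code below for that); the function name l1_logreg_gd and the parameters eta (step size) and T (iteration count) are illustrative. Save the function as l1_logreg_gd.m:

```matlab
function w = l1_logreg_gd(X, y, lambda, eta, T)
% Sketch: l1-regularized logistic regression via the smooth splitting
% w = u.^2 - v.^2, so lambda*||w||_1 becomes lambda*sum(u.^2 + v.^2)
% and the objective is differentiable everywhere.
% X: n-by-d design matrix; y: n-by-1 labels in {-1,+1};
% lambda: l1 penalty weight; eta: step size; T: number of iterations.
[n, d] = size(X);
% Initialize away from u = v = 0, which is a stationary point
% of the reparameterized objective (all gradients vanish there).
u = 0.5 * ones(d, 1);
v = 0.5 * ones(d, 1);
for t = 1:T
    w = u.^2 - v.^2;
    s = 1 ./ (1 + exp(y .* (X * w)));     % sigmoid(-y .* (X*w))
    gw = -(X' * (y .* s)) / n;            % gradient of logistic loss w.r.t. w
    gu =  2 * u .* gw + 2 * lambda * u;   % chain rule through w = u.^2 - v.^2
    gv = -2 * v .* gw + 2 * lambda * v;   % plus gradient of the smooth penalty
    u = u - eta * gu;
    v = v - eta * gv;
end
w = u.^2 - v.^2;
end
```

A toy call on synthetic data with a sparse ground truth (all numbers here are arbitrary):

```matlab
rng(0); n = 200; d = 50;
X = randn(n, d);
wtrue = [3; -2; zeros(d - 2, 1)];          % only two informative features
y = sign(X * wtrue + 0.1 * randn(n, 1));
w = l1_logreg_gd(X, y, 0.1, 0.5, 2000);    % most entries of w shrink toward 0
```

At a minimizer, at least one of u_j, v_j is driven to zero for each coordinate, so λΣ(u_j² + v_j²) coincides with λ‖w‖₁ and the smooth surrogate recovers the original ℓ1 penalty.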

 

o   Manuscript

o   Matlab Code

 
