NESVM: a Fast Gradient Method for Support Vector Machines


dc.contributor.author Zhou, Tianyi en_US
dc.contributor.author Tao, Dacheng en_US
dc.contributor.author Wu, Xindong en_US
dc.contributor.editor Webb, Geoffrey I.; Liu, Bing; Zhang, Chengqi; Gunopulos, Dimitrios; Wu, Xindong en_US
dc.date.accessioned 2012-02-02T11:07:33Z
dc.date.available 2012-02-02T11:07:33Z
dc.date.issued 2010 en_US
dc.identifier 2010001724 en_US
dc.identifier.citation Zhou, Tianyi, Tao, Dacheng & Wu, Xindong 2010, 'NESVM: a Fast Gradient Method for Support Vector Machines', Proceedings of the IEEE International Conference on Data Mining (ICDM 2010), IEEE, USA, pp. 679-688. en_US
dc.identifier.isbn 978-0-7695-4256-0 en_US
dc.identifier.other E1 en_US
dc.identifier.uri http://hdl.handle.net/10453/16196
dc.description.abstract Support vector machines (SVMs) are invaluable tools for many practical applications in artificial intelligence, e.g., classification and event recognition. However, popular SVM solvers are not sufficiently efficient for applications with a large number of samples and a large number of features. In this paper we therefore present NESVM, a fast gradient SVM solver that can optimize various SVM models, e.g., the classical SVM, the linear programming SVM and the least squares SVM. Compared against SVM-Perf \cite{SVM_Perf}\cite{PerfML} (whose convergence rate in solving the dual SVM is upper bounded by $\mathcal O(1/\sqrt{k})$, where $k$ is the number of iterations) and Pegasos \cite{Pegasos} (an online solver that converges at rate $\mathcal O(1/k)$ on the primal SVM), NESVM achieves the optimal convergence rate of $\mathcal O(1/k^{2})$ with linear per-iteration time complexity. In particular, NESVM smooths the non-differentiable hinge loss and $\ell_1$-norm in the primal SVM, and then adopts the optimal gradient method, which requires no line search, to solve the optimization. In each iteration, the current gradient and historical gradients are combined to determine the descent direction, while the Lipschitz constant determines the step size; only two matrix-vector multiplications are required per iteration (see the sketch after this record). Therefore, NESVM is more efficient than existing SVM solvers. In addition, NESVM supports both linear and nonlinear kernels. We also propose ``homotopy NESVM'', which accelerates NESVM by dynamically decreasing the smoothing parameter via the continuation method. Our experiments on census income categorization, indoor/outdoor scene classification, event recognition and scene recognition demonstrate the efficiency and effectiveness of NESVM. The MATLAB code of NESVM will be available on our website for further assessment. en_US
dc.language English en_US
dc.publisher IEEE en_US
dc.relation.isbasedon http://dx.doi.org/10.1109/ICDM.2010.135 en_US
dc.title NESVM: a Fast Gradient Method for Support Vector Machines en_US
dc.parent IEEE International Conference on Data Mining en_US
dc.publocation USA en_US
dc.identifier.startpage 679 en_US
dc.identifier.endpage 688 en_US
dc.cauo.name FEIT.Faculty of Engineering & Information Technology en_US
dc.conference Verified OK en_US
dc.for 080109 en_US
dc.personcode 11201340 en_US
dc.personcode 111502 en_US
dc.personcode 100507 en_US
dc.percentage 100 en_US
dc.classification.name Pattern Recognition and Data Mining en_US
dc.classification.type FOR-08 en_US
dc.custom IEEE International Conference on Data Mining en_US
dc.date.activity 20101213 en_US
dc.location.activity Sydney, Australia en_US
dc.description.keywords Support vector machines, smoothing, hinge loss, $\ell_1$-norm, Nesterov's method, continuation method en_US
dc.staffid 100507 en_US
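
The following is a minimal Python sketch of the kind of smoothed accelerated iteration the abstract describes: a Nesterov-style smoothing of the hinge loss combined with accelerated gradient steps at a fixed 1/L step size and no line search. It is a hypothetical re-implementation under standard assumptions, not the authors' MATLAB release; the exact smoothing form, the Lipschitz bound and the names smoothed_hinge and nesvm_sketch are illustrative choices. It covers the linear-kernel binary classifier only.

import numpy as np

def smoothed_hinge(w, X, y, mu):
    # Margins z_i = 1 - y_i <w, x_i>; first matrix-vector product.
    z = 1.0 - y * (X @ w)
    # Closed-form maximizer of the smoothed max (one standard smoothing;
    # the paper's constants may differ).
    u = np.clip(z / mu, 0.0, 1.0)
    loss = np.where(z >= mu, z - 0.5 * mu, 0.5 * z ** 2 / mu)
    loss = np.where(z <= 0.0, 0.0, loss)
    # Gradient with respect to w; second matrix-vector product.
    grad = -(X.T @ (u * y)) / len(y)
    return loss.mean(), grad

def nesvm_sketch(X, y, lam=1e-2, mu=1e-2, iters=500):
    n, d = X.shape
    # Lipschitz bound for the smoothed objective (assumption:
    # spectral-norm estimate of the data matrix).
    L = lam + np.linalg.norm(X, 2) ** 2 / (n * mu)
    w = np.zeros(d)
    v = w.copy()
    t = 1.0
    for _ in range(iters):
        _, g = smoothed_hinge(v, X, y, mu)
        # Fixed step 1/L from the lookahead point; no line search.
        w_next = v - (lam * v + g) / L
        t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        # Momentum: combine the current iterate with the previous one,
        # i.e., the current gradient step plus accumulated history.
        v = w_next + ((t - 1.0) / t_next) * (w_next - w)
        w, t = w_next, t_next
    return w

With rows of X as samples and y in {-1, +1}, w = nesvm_sketch(X, y) returns a linear decision vector and predictions are sign(X @ w). Note the per-iteration cost is dominated by the two matrix-vector products X @ w and X.T @ (u * y), matching the cost claimed in the abstract.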

