What algorithm do SVMs use to minimize their objective function?
Problem
Support Vector Machines turn linear classification tasks in machine learning into optimization problems.
$$ \text{minimize } J(\theta,\theta_0) = \frac{1}{n} \sum_{i=1}^{n} \text{HingeLoss}(\theta,\theta_0) + \frac{\lambda}{2} \|\theta\|^2 $$
My question is: which linear-programming algorithm runs in the background to minimize the objective function $J$? Is it Simplex?
Solution
The SVM problem (and other related problems) is not a linear program at all: it can be described as the minimization (or, in the dual formulation, maximization) of a quadratic function, i.e. a quadratic program, so Simplex does not apply.
This can be solved easily with the gradient descent algorithm; however, I recommend the SMO algorithm, since it solves the dual of the SVM problem directly and can also be used for kernelized SVMs.
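To make the gradient-descent route concrete, here is a minimal sketch of subgradient descent on the primal objective $J$ from the question (subgradient because the hinge loss is not differentiable at the margin). The function names, learning rate, and toy data below are illustrative assumptions, not part of the original question:

```python
import numpy as np

def svm_objective(theta, theta0, X, y, lam):
    # J = average hinge loss + (lambda/2) * ||theta||^2
    margins = y * (X @ theta + theta0)
    hinge = np.maximum(0.0, 1.0 - margins).mean()
    return hinge + 0.5 * lam * theta @ theta

def subgradient_descent(X, y, lam=0.1, lr=0.01, epochs=500):
    n, d = X.shape
    theta, theta0 = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ theta + theta0)
        active = margins < 1.0  # points contributing nonzero hinge loss
        # Subgradient of the averaged hinge loss plus the regularizer
        g_theta = -(y[active, None] * X[active]).sum(axis=0) / n + lam * theta
        g_theta0 = -y[active].sum() / n
        theta -= lr * g_theta
        theta0 -= lr * g_theta0
    return theta, theta0

# Toy separable data: two points per class
X = np.array([[2., 2.], [1., 3.], [-2., -2.], [-1., -3.]])
y = np.array([1., 1., -1., -1.])
theta, theta0 = subgradient_descent(X, y)
preds = np.sign(X @ theta + theta0)
```

This is the simple-but-slow option the answer mentions; SMO instead optimizes the dual two Lagrange multipliers at a time, which is what makes it practical for kernelized SVMs.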
Context
StackExchange Computer Science Q#135786, answer score: 6