**Performance**

“In most of these cases, SVM generalization performance (i.e. error rates on test sets) either matches or is significantly better than that of competing methods.”

Burges (1998)

“The time complexity of training SVMs scales approximately between quadratic and cubic in the number of training data points [22].”

Cao (2003)

“Practical experience with such methods is rapidly improving, but estimation can be slow since it involves solving a complicated optimization problem that can require *O*(*n*^{2}) and *O*(*n*^{3}) time to solve.”

Hand, Mannila and Smyth (2001)
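The practical consequence of "between quadratic and cubic" scaling can be made concrete with a little arithmetic: if training cost grows as *n*^p with 2 ≤ p ≤ 3, then doubling the training set multiplies training time by 2^p. A minimal sketch (the `slowdown` helper is purely illustrative):

```python
def slowdown(data_growth_factor: float, p: float) -> float:
    """Multiplicative increase in training time when the data grows
    by `data_growth_factor` and cost scales as n**p."""
    return data_growth_factor ** p

# Doubling the data: 4x slower for quadratic, 8x slower for cubic scaling.
print(slowdown(2.0, 2))  # 4.0
print(slowdown(2.0, 3))  # 8.0
```

This is why the quotes above treat SVM training as expensive on large datasets even though per-example costs look modest.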

“At the time of this writing, empirical evidence suggests that it performs well in many real learning problems.”

Hastie, Tibshirani and Friedman (2001)

“The problem: if we have *n* data points, we need *O*(*n*^{2}) memory just to write down the matrix *D*. If *n* = 20000, and it takes 4 bytes to represent an entry of *D*, we would need 1.6 Gigabytes to store the *D* matrix.”

Rifkin
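Rifkin's 1.6 GB figure is easy to verify: an *n* × *n* matrix of 4-byte entries needs 4*n*² bytes. A back-of-the-envelope check (the `matrix_bytes` helper is illustrative, not from the source):

```python
def matrix_bytes(n: int, bytes_per_entry: int = 4) -> int:
    """Memory needed to store a dense n x n matrix."""
    return n * n * bytes_per_entry

# n = 20000, float32 entries: 20000^2 * 4 bytes = 1.6e9 bytes.
print(matrix_bytes(20_000) / 1e9)  # 1.6 (gigabytes)
```

The quadratic memory cost, not just the training time, is what forces decomposition methods such as SMO or chunking that avoid materializing the full kernel matrix.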

“Computing a single kernel product *K*^{ij} requires *O*(*n*) time, where *n* is the input dimensionality.”

Rifkin
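The O(*n*) cost per kernel product (where *n* here is the input dimensionality, not the number of training points) follows directly from the fact that standard kernels make a single pass over the feature vector. A minimal sketch using the Gaussian (RBF) kernel as an example:

```python
import math

def rbf_kernel(x, y, gamma: float = 1.0) -> float:
    """Gaussian kernel K(x, y) = exp(-gamma * ||x - y||^2).
    One pass over the input dimensions: O(n) time in len(x)."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

# Identical inputs give the kernel's maximum value of 1.
print(rbf_kernel([1.0, 2.0], [1.0, 2.0]))  # 1.0
```

Combined with the previous quote, this gives the familiar O(*n*²*d*) cost for building a full kernel matrix over *n* points of dimension *d*.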

## Bibliography

- Cao, L., 2003. Support vector machines experts for time series forecasting. *Neurocomputing*.
