Abstract
The Support Vector Machine (SVM) is a learning methodology based on Vapnik-Chervonenkis (VC) theory. SVM has recently attracted growing research interest because of its ability to learn classification and regression tasks from high-dimensional data. The SVM formulation relies on a kernel representation, and the existing algorithm leaves the choice of kernel type and kernel parameters to the user. This paper describes an important extension of the SVM method: the Multi-resolution SVM (M-SVM), in which several kernels of different scales are used simultaneously to approximate the target function. The proposed M-SVM approach enables 'automatic' selection of the 'optimal' kernel width, which usually results in better prediction accuracy of SVM models.
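The core idea of using several kernels of different scales simultaneously can be illustrated with a short sketch. The paper's exact M-SVM formulation is not given in this abstract, so the example below simply assumes the multi-scale kernel is a sum of RBF kernels of different widths (a sum of positive semi-definite kernels is itself a valid kernel, so the SVM optimization stays convex); the `make_multiscale_kernel` helper, the chosen widths, and the toy dataset are illustrative assumptions, not from the source.

```python
# Minimal sketch of a multi-scale SVM kernel, assuming the multi-resolution
# kernel is a sum of RBF kernels at several widths. This is NOT the paper's
# exact M-SVM algorithm; widths and helper names are illustrative.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def make_multiscale_kernel(widths):
    """Return a kernel callable that sums RBF kernels at several scales."""
    # Convert each RBF width sigma to scikit-learn's gamma = 1 / (2 sigma^2).
    gammas = [1.0 / (2.0 * w ** 2) for w in widths]
    def kernel(X, Y):
        # Sum of PSD kernels is PSD, so this is a valid SVM kernel.
        return sum(rbf_kernel(X, Y, gamma=g) for g in gammas)
    return kernel

# Toy binary classification data; any dataset would do here.
X, y = make_moons(n_samples=400, noise=0.25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit an SVM with kernels at three scales used simultaneously.
clf = SVC(kernel=make_multiscale_kernel(widths=[0.1, 0.5, 2.0]), C=1.0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

Because all scales enter the Gram matrix at once, the fitted coefficients effectively weight the contribution of each resolution, which is one simple way to sidestep picking a single kernel width by hand.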
| Original language | English (US) |
|---|---|
| Title of host publication | Proceedings of the International Joint Conference on Neural Networks |
| Publisher | IEEE |
| Pages | 1065-1070 |
| Number of pages | 6 |
| Volume | 2 |
| State | Published - Dec 1 1999 |
| Event | International Joint Conference on Neural Networks (IJCNN'99), Washington, DC, USA, Jul 10-16, 1999 |