"SVM:
spitfire_ch
hi,
Sorry for bombarding you with posts at the moment; I guess it's a symptom of the learning phase.
Anyway, when fiddling around with support vector machines, I often run into the problem that training keeps iterating forever (or at least for many hours) without getting any further. This only happens with certain kernel types; others complete the analysis within minutes. I think this is because the slow ones don't manage to converge. What is the right thing to do in such a situation? I can think of two adjustments (sketched in code after the list):
- Using a higher convergence epsilon
- Decreasing max iterations
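For illustration, here is roughly how those two knobs look in scikit-learn terms (an assumed stand-in, not the RapidMiner operator itself; the data and values below are made up). The tol parameter plays the role of the convergence epsilon, and max_iter caps the iterations:

    # Minimal sketch: a looser tolerance lets the optimizer stop earlier,
    # and an explicit iteration cap guarantees that training terminates
    # (max_iter=-1 would mean "no limit" in scikit-learn).
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, n_features=10, random_state=42)

    model = SVC(kernel="rbf", tol=1e-2, max_iter=10_000)
    model.fit(X, y)
    print(model.score(X, y))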
Maybe this would also be an idea to implement: a "stop subprocess" button in addition to the "stop everything" button. It would only exit the current operation (e.g. the validation of the current kernel type) and move on, and the best parameters from the operations that were not cancelled would then be chosen.
Thanks for your advice
Hanspeter
Answers
Your two adjustments are correct. Sometimes it can also help to increase the kernel cache (if available). However, I would recommend starting with a low number of iterations. Begin with a really low number (like 500 or 1000) and check whether anything was learned at all. Then increase the number once or twice to see if the results change much. If they don't, the SVM is probably not able to learn at all with this kernel function and these settings.
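A rough sketch of that loop, again in scikit-learn terms as an assumed stand-in (synthetic data, made-up iteration caps):

    # Train with a small iteration cap first, then raise it and check
    # whether the results still change.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=500, n_features=10, random_state=42)

    for max_iter in (500, 1000, 5000, 20000):
        model = SVC(kernel="rbf", max_iter=max_iter)
        score = cross_val_score(model, X, y, cv=5).mean()
        print(f"max_iter={max_iter}: accuracy={score:.3f}")
    # If the score barely moves between caps, more iterations will
    # probably not help with this kernel / these settings.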
You could log the information with a "Log" operator in persistence mode. That way, you keep the information you have gathered so far.
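In plain Python, the idea behind that persistent mode looks roughly like this (hypothetical file name and helper, just to illustrate the principle):

    # Append each intermediate result to disk immediately, so partial
    # results survive even if the run is aborted later.
    import csv

    def log_result(kernel, params, score, path="svm_log.csv"):
        # Opening in append mode and closing right away flushes every
        # row, so an interrupted process keeps everything logged so far.
        with open(path, "a", newline="") as f:
            csv.writer(f).writerow([kernel, params, score])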
In general, this would indeed be helpful, and some "anytime" operators (like the genetic feature selections or weightings) already offer such a function. Those operators have a parameter indicating whether a "Should Stop?" dialog is shown. We could think of a generic mechanism for this "stop anytime" feature on an operator level, but I doubt it would really be intuitive to offer it on a generic subprocess level as well; it would probably not even be possible in general.
Cheers,
Ingo
Thank you very much for your advice and insight. I will follow your suggested steps when trying to optimize an SVM learner. I am also very glad to hear about the Log operator; I wasn't aware of its capabilities. It does indeed sound very useful, as does the "stop" dialog feature on some operators. Thanks for pointing these features out!
Best regards
Hanspeter