
# Financial Time Series Forecasting Using Support Vector Machines

Financial time series forecasting using support vector machines, prepared by Mohamed DHAOUI, 3rd-year engineering student at Tunisia Polytechnic School.


### Slide transcript

1. Financial time series forecasting using support vector machines. Data Mining project, Tunisia Polytechnic School. Presented by Mohamed DHAOUI, 3rd-year engineering student (contact@Mohamed-dhaoui.com). Academic year: 2015-2016.
14. a weight parameter, which needs to be carefully set
28. Backpropagation, an abbreviation for "backward propagation of errors", is a common method of training artificial neural networks, used in conjunction with an optimisation method such as gradient descent. The method calculates the gradient of a loss function with respect to all the weights in the network and uses it to update those weights.
30. Algorithm:
  • initialize the network weights (randomly)
  • do: for each training example ex:
    prediction = neural-net-output(network, ex);
    actual = teacher-output(ex);
    compute the error (prediction − actual) at the output units;
    compute the updates for all weights;
    update the network weights
  • until all examples are classified correctly or another stopping criterion is satisfied
  • return the network
31. Weight updating
  • E = actual − ideal
  • δ_o = −E · f′(o)
  • δ_h = f′(h) · W_{h,o} · δ_o
  • Δw_t = −e · ∇E + α · Δw_{t−1}, with ∇E = −H · δ_o for the hidden-to-output weights W_{h,o}, where H is the hidden-layer activation, e is the learning rate, and α is the momentum
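The training loop and momentum update described on slides 30-31 can be sketched in plain NumPy. This is a minimal illustration assuming a one-hidden-layer sigmoid network trained on the OR function; the network size, data, bias handling, and hyper-parameter values are illustrative assumptions, not taken from the slides.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_or(epochs=5000, e=0.5, alpha=0.5, seed=0):
    """Batch backpropagation with momentum on the OR function.
    e is the learning rate and alpha the momentum, as on the slide."""
    rng = np.random.default_rng(seed)
    # Inputs with a constant 1 appended as a bias input (an assumption).
    X = np.array([[0, 0, 1], [0, 1, 1], [1, 0, 1], [1, 1, 1]], dtype=float)
    T = np.array([[0], [1], [1], [1]], dtype=float)   # "ideal" outputs

    W1 = rng.normal(0, 1, (3, 4))   # input -> hidden weights
    W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights W_{h,o}
    dW1 = np.zeros_like(W1)         # previous updates, for the momentum term
    dW2 = np.zeros_like(W2)

    for _ in range(epochs):
        H = sigmoid(X @ W1)               # hidden activations
        O = sigmoid(H @ W2)               # network outputs ("actual")
        E = O - T                         # error = actual - ideal
        d_o = -E * O * (1 - O)            # delta_o = -E * f'(o)
        d_h = H * (1 - H) * (d_o @ W2.T)  # delta_h = f'(h) * W_{h,o} * delta_o
        # Momentum update: dw_t = e * (activation . delta) + alpha * dw_{t-1}
        dW2 = e * H.T @ d_o + alpha * dW2
        dW1 = e * X.T @ d_h + alpha * dW1
        W2 += dW2
        W1 += dW1
    return O
```

After training, the outputs approach the targets; for simplicity this sketch stops after a fixed number of epochs rather than the slide's "all examples classified correctly" criterion.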
32. Weaknesses • Gradient descent with backpropagation is not guaranteed to find the global minimum. • There is no general rule for selecting the best learning rate and momentum. • It is a slow algorithm that requires significant computational resources.
33. SVM performance • Too small a value of C underfits the training data, while too large a value of C overfits it.
34. SVM performance • The best prediction performance on the holdout data is recorded when delta is 25 and C is 78.
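The delta being tuned alongside C can be read as the width of a Gaussian (RBF) kernel. A minimal sketch, assuming the common convention k(x, y) = exp(−‖x − y‖² / δ²); the function name and toy vectors are illustrative, and whether the slide's 25 denotes δ or δ² depends on the kernel convention used.

```python
import numpy as np

def gaussian_kernel(x, y, delta):
    """Gaussian (RBF) kernel: exp(-||x - y||^2 / delta^2).
    delta controls the kernel width; the slide reports 25 as best."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.exp(-np.sum((x - y) ** 2) / delta ** 2))
```

Identical points always score 1, and similarity decays with distance; a larger delta flattens the kernel and smooths the decision surface, which interacts with C's underfit/overfit trade-off from the previous slide.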
35. BP performance • The best prediction performance for the holdout data is produced when the number of hidden processing elements is 24 and the stopping criterion is 146 400 epochs. • The prediction performance on the holdout data is 54.7332% and that on the training data is 58.5217%.
36. Comparison • SVM outperforms BPN and CBR by 3.0981% and 5.852% on the holdout data, respectively. • On the training data, SVM has higher prediction accuracy than BPN by 6.2309%. • SVM performs better than CBR at the 5% statistical significance level. • SVM does not significantly outperform BP. • BP and CBR do not significantly outperform each other.