Finance

Support Vector Machines: Financial Applications

 

Listed in order of citations per year, highest at the top.

Last updated September 2006.

    • PANG, Bo, Lillian LEE and Shivakumar VAITHYANATHAN, 2002. Thumbs up? Sentiment Classification using Machine Learning Techniques, In: EMNLP ’02: Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing – Volume 10, pages 79–86. [Cited by 154] (36.66/year)
      Abstract: “We consider the problem of classifying documents not by topic, but by overall sentiment, e.g., determining whether a review is positive or negative. Using movie reviews as data, we find that standard machine learning techniques definitively outperform human-produced baselines. However, the three machine learning methods we employed (Naive Bayes, maximum entropy classification, and support vector machines) do not perform as well on sentiment classification as on traditional topic-based categorization. We conclude by examining factors that make the sentiment classification problem more challenging.” The authors found that, using film reviews as data, standard machine learning techniques definitively outperformed human-produced baselines. However, they also found that the three machine learning methods they employed (Naive Bayes, maximum entropy classification, and support vector machines) did not perform as well on sentiment classification as on traditional topic-based categorization.
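      The best results in the paper were obtained with an SVM over unigram presence features. As a hedged illustration only (the tiny review list and labels below are invented stand-ins, not the movie-review corpus used by the authors), a minimal bag-of-words sentiment classifier of this kind could be sketched in scikit-learn as follows:

        # Minimal sketch: binary unigram features feeding a linear SVM.
        # The reviews and labels are placeholders, not the paper's data.
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        reviews = ["a gripping, beautifully acted film",
                   "dull plot and wooden performances",
                   "one of the best films of the year",
                   "a tedious, overlong mess"]
        labels = [1, 0, 1, 0]  # 1 = positive review, 0 = negative review

        clf = make_pipeline(CountVectorizer(binary=True), LinearSVC())
        clf.fit(reviews, labels)
        print(clf.predict(["a surprisingly moving and well acted film"]))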
    • VAN GESTEL, Tony, et al., 2001. Financial Time Series Prediction Using Least Squares Support Vector Machines Within the Evidence Framework, IEEE Transactions on Neural Networks, Volume 12, Number 4, July 2001, Pages 809-821. [Cited by 77] (14.82/year)
      Abstract: “The Bayesian evidence framework is applied in this paper to least squares support vector machine (LS-SVM) regression in order to infer nonlinear models for predicting a financial time series and the related volatility. On the first level of inference, a statistical framework is related to the LS-SVM formulation which allows one to include the time-varying volatility of the market by an appropriate choice of several hyper-parameters. The hyper-parameters of the model are inferred on the second level of inference. The inferred hyper-parameters, related to the volatility, are used to construct a volatility model within the evidence framework. Model comparison is performed on the third level of inference in order to automatically tune the parameters of the kernel function and to select the relevant inputs. The LS-SVM formulation allows one to derive analytic expressions in the feature space and practical expressions are obtained in the dual space replacing the inner product by the related kernel function using Mercer’s theorem. The one step ahead prediction performances obtained on the prediction of the weekly 90-day T-bill rate and the daily DAX30 closing prices show that significant out of sample sign predictions can be made with respect to the Pesaran-Timmerman test statistic”applied the Bayesian evidence framework to least squares support vector machine (LS-SVM) regression to predict the weekly 90-day T-bill rate and the daily DAX30 closing prices.
    • TAY, Francis E. H. and Lijuan CAO, 2001. Application of support vector machines in financial time series forecasting, Omega: The International Journal of Management Science, Volume 29, Issue 4, August 2001, Pages 309-317. [Cited by 67] (12.89/year)
      Abstract: “This paper deals with the application of a novel neural network technique, support vector machine (SVM), in financial time series forecasting. The objective of this paper is to examine the feasibility of SVM in financial time series forecasting by comparing it with a multi-layer back-propagation (BP) neural network. Five real futures contracts that are collated from the Chicago Mercantile Market are used as the data sets. The experiment shows that SVM outperforms the BP neural network based on the criteria of normalized mean square error (NMSE), mean absolute error (MAE), directional symmetry (DS) and weighted directional symmetry (WDS). Since there is no structured way to choose the free parameters of SVMs, the variability in performance with respect to the free parameters is investigated in this study. Analysis of the experimental results proved that it is advantageous to apply SVMs to forecast financial time series.”found that an SVM outperformed a multi-layer back-propagation (BP) neural network on five real futures contracts from the Chicago Mercantile Market.
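      As a hedged sketch of this kind of comparison (synthetic data in place of the Chicago Mercantile futures, scikit-learn's SVR and MLPRegressor standing in for the SVM and the back-propagation network, and two of the paper's criteria, NMSE and directional symmetry, as scores):

        # Sketch only: compare SVR with a small back-propagation network on lagged
        # returns of a synthetic price series, scoring NMSE and directional symmetry.
        import numpy as np
        from sklearn.svm import SVR
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        prices = 100.0 * np.exp(np.cumsum(rng.normal(0, 0.01, 600)))
        returns = np.diff(prices) / prices[:-1]

        lags = 5
        X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
        y = returns[lags:]
        split = int(0.8 * len(y))
        X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

        def nmse(y_true, y_pred):
            return np.mean((y_true - y_pred) ** 2) / np.var(y_true)

        def directional_symmetry(y_true, y_pred):
            return np.mean(np.sign(y_true) == np.sign(y_pred))

        models = [("SVR", SVR(kernel="rbf", C=10.0, epsilon=1e-4)),
                  ("BP net", MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0))]
        for name, model in models:
            pred = model.fit(X_tr, y_tr).predict(X_te)
            print(name, "NMSE=%.3f" % nmse(y_te, pred), "DS=%.2f" % directional_symmetry(y_te, pred))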
    • TAY, Francis E. H. and L. J. CAO, 2002. Modified support vector machines in financial time series forecasting, Neurocomputing, Volume 48, Issues 1-4, October 2002, Pages 847-861. [Cited by 54] (12.86/year)
      Abstract: “This paper proposes a modified version of support vector machines, called C-ascending support vector machine, to model non-stationary financial time series. The C-ascending support vector machines are obtained by a simple modification of the regularized risk function in support vector machines, whereby the recent ε-insensitive errors are penalized more heavily than the distant ε-insensitive errors. This procedure is based on the prior knowledge that in the non-stationary financial time series the dependency between input variables and output variable gradually changes over the time, specifically, the recent past data could provide more important information than the distant past data. In the experiment, C-ascending support vector machines are tested using three real futures collected from the Chicago Mercantile Market. It is shown that the C-ascending support vector machines with the actually ordered sample data consistently forecast better than the standard support vector machines, with the worst performance when the reversely ordered sample data are used. Furthermore, the C-ascending support vector machines use fewer support vectors than those of the standard support vector machines, resulting in a sparser representation of solution.” The authors developed “C-ascending” support vector machines, which penalise recent ε-insensitive errors more heavily than distant ε-insensitive errors, and found that they forecast better than standard SVMs on three real futures collected from the Chicago Mercantile Market.
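      The modification amounts to making recent training errors more expensive than distant ones. A hedged sketch of that idea using scikit-learn's per-sample weights (the logistic-shaped ascending schedule below is an assumption for illustration, not necessarily the exact function used in the paper):

        # Sketch only: weight recent observations more heavily, approximating a
        # "C-ascending" SVM with per-sample weights on synthetic lagged features.
        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(1)
        X = rng.normal(size=(300, 5))                  # stand-in lagged inputs, oldest first
        y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=300)

        n = len(y)
        a = 5.0                                        # controls how quickly the weights rise
        idx = np.arange(1, n + 1)
        weights = 2.0 / (1.0 + np.exp(a - 2.0 * a * idx / n))  # ascends towards 2 for recent points

        svr = SVR(kernel="rbf", C=10.0, epsilon=0.01)
        svr.fit(X, y, sample_weight=weights)           # recent ε-insensitive errors cost more
        print("support vectors used:", len(svr.support_))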
    • HUANG, Zan, et al., 2004. Credit rating analysis with support vector machines and neural networks: a market comparative study, Decision Support Systems, Volume 37, Issue 4 (September 2004), Pages 543-558. [Cited by 21] (9.55/year)
      Abstract: “Corporate credit rating analysis has attracted lots of research interests in the literature. Recent studies have shown that Artificial Intelligence (AI) methods achieved better performance than traditional statistical methods. This article introduces a relatively new machine learning technique, support vector machines (SVM), to the problem in attempt to provide a model with better explanatory power. We used backpropagation neural network (BNN) as a benchmark and obtained prediction accuracy around 80% for both BNN and SVM methods for the United States and Taiwan markets. However, only slight improvement of SVM was observed. Another direction of the research is to improve the interpretability of the AI-based models. We applied recent research results in neural network model interpretation and obtained relative importance of the input financial variables from the neural network models. Based on these results, we conducted a market comparative analysis on the differences of determining factors in the United States and Taiwan markets.”applied backpropagation neural networks and SVMs to corporate credit rating prediction for the United States and Taiwan markets and found that the results were comparable (both were superior to logistic regression), with the SVM slightly better.
    • CAO, Lijuan, 2003. Support vector machines experts for time series forecasting, Neurocomputing, Volume 51, April 2003, Pages 321-339. [Cited by 29] (9.08/year)
      Abstract: “This paper proposes using the support vector machines (SVMs) experts for time series forecasting. The generalized SVMs experts have a two-stage neural network architecture. In the first stage, self-organizing feature map (SOM) is used as a clustering algorithm to partition the whole input space into several disjointed regions. A tree-structured architecture is adopted in the partition to avoid the problem of predetermining the number of partitioned regions. Then, in the second stage, multiple SVMs, also called SVM experts, that best fit partitioned regions are constructed by finding the most appropriate kernel function and the optimal free parameters of SVMs. The sunspot data, Santa Fe data sets A, C and D, and the two building data sets are evaluated in the experiment. The simulation shows that the SVMs experts achieve significant improvement in the generalization performance in comparison with the single SVMs models. In addition, the SVMs experts also converge faster and use fewer support vectors.” The authors showed that their method of “SVM experts” achieved a significant improvement over single SVM models when applied to the Santa Fe data set C (high-frequency exchange rates between the Swiss franc and the US dollar).
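      A hedged sketch of the two-stage idea, with k-means standing in for the tree-structured SOM used in the paper and synthetic data in place of the benchmark series:

        # Sketch only: partition the input space with a clustering step, then fit
        # one SVR "expert" per region and route new points to the matching expert.
        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.svm import SVR

        rng = np.random.default_rng(2)
        X = rng.uniform(-3, 3, size=(500, 2))
        y = np.sin(X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.05 * rng.normal(size=500)

        partition = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)   # SOM stand-in
        experts = {k: SVR(kernel="rbf", C=10.0).fit(X[partition.labels_ == k],
                                                    y[partition.labels_ == k])
                   for k in range(4)}

        def predict(X_new):
            regions = partition.predict(X_new)
            return np.array([experts[r].predict(x.reshape(1, -1))[0]
                             for r, x in zip(regions, X_new)])

        print(predict(np.array([[0.5, -1.0], [2.0, 2.0]])))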
    • KIM, Kyoung-jae, 2003. Financial time series forecasting using support vector machines, Neurocomputing, Volume 55, Issues 1-2 (September 2003), Pages 307-319. [Cited by 28] (8.76/year)
      Abstract: “Support vector machines (SVMs) are promising methods for the prediction of financial time-series because they use a risk function consisting of the empirical error and a regularized term which is derived from the structural risk minimization principle. This study applies SVM to predicting the stock price index. In addition, this study examines the feasibility of applying SVM in financial forecasting by comparing it with back-propagation neural networks and case-based reasoning. The experimental results show that SVM provides a promising alternative to stock market prediction.”found that SVMs outperformed back-propagation neural networks and case-based reasoning when used to forecast the daily Korea composite stock price index (KOSPI).
    • SHIN, Kyung-Shik, Taik Soo LEE and Hyun-jung KIM, 2005. An application of support vector machines in bankruptcy prediction model, Expert Systems with Applications, Volume 28, Issue 1, January 2005, Pages 127-135. [Cited by 8] (6.67/year)
      Abstract: “This study investigates the efficacy of applying support vector machines (SVM) to bankruptcy prediction problem. Although it is a well-known fact that the back-propagation neural network (BPN) performs well in pattern recognition tasks, the method has some limitations in that it is an art to find an appropriate model structure and optimal solution. Furthermore, loading as many of the training set as possible into the network is needed to search the weights of the network. On the other hand, since SVM captures geometric characteristics of feature space without deriving weights of networks from the training data, it is capable of extracting the optimal solution with the small training set size. In this study, we show that the proposed classifier of SVM approach outperforms BPN to the problem of corporate bankruptcy prediction.
      The results demonstrate that the accuracy and generalization performance of SVM is better than that of BPN as the training set size gets smaller. We also examine the effect of the variability in performance with respect to various values of parameters in SVM. In addition, we investigate and summarize the several superior points of the SVM algorithm compared with BPN.” The authors demonstrated that SVMs perform better than back-propagation neural networks when applied to corporate bankruptcy prediction.
    • CAO, L. J. and Francis E. H. TAY, 2003. Support Vector Machine With Adaptive Parameters in Financial Time Series Forecasting, IEEE Transactions on Neural Networks, Volume 14, Issue 6, November 2003, Pages 1506-1518. [Cited by 20] (6.25/year)
      Abstract: “A novel type of learning machine called support vector machine (SVM) has been receiving increasing interest in areas ranging from its original application in pattern recognition to other applications such as regression estimation due to its remarkable generalization performance. This paper deals with the application of SVM in financial time series forecasting. The feasibility of applying SVM in financial forecasting is first examined by comparing it with the multilayer back-propagation (BP) neural network and the regularized radial basis function (RBF) neural network. The variability in performance of SVM with respect to the free parameters is investigated experimentally. Adaptive parameters are then proposed by incorporating the nonstationarity of financial time series into SVM. Five real futures contracts collated from the Chicago Mercantile Market are used as the data sets. The simulation shows that among the three methods, SVM outperforms the BP neural network in financial forecasting, and there are comparable generalization performance between SVM and the regularized RBF neural network. Furthermore, the free parameters of SVM have a great effect on the generalization performance. SVM with adaptive parameters can both achieve higher generalization performance and use fewer support vectors than the standard SVM in financial forecasting.”used an SVM, a multilayer back-propagation (BP) neural network and a regularized radial basis function (RBF) neural network to predict five real futures contracts collated from the Chicago Mercantile Market. Results showed that the SVM and the regularized RBF neural network were comparable and both outperformed the BP neural network.
    • CAO, Lijuan and Francis E. H. TAY, 2001. Financial Forecasting Using Support Vector Machines. Neural Computing & Applications, Volume 10, Number 2 (May 2001), Pages 184-192. [Cited by 26] (5.00/year)
      Abstract: “The use of Support Vector Machines (SVMs) is studied in financial forecasting by comparing it with a multi-layer perceptron trained by the Back Propagation (BP) algorithm. SVMs forecast better than BP based on the criteria of Normalised Mean Square Error (NMSE), Mean Absolute Error (MAE), Directional Symmetry (DS), Correct Up (CP) trend and Correct Down (CD) trend. S&P 500 daily price index is used as the data set. Since there is no structured way to choose the free parameters of SVMs, the generalisation error with respect to the free parameters of SVMs is investigated in this experiment. As illustrated in the experiment, they have little impact on the solution. Analysis of the experimental results demonstrates that it is advantageous to apply SVMs to forecast the financial time series.”found that SVMs forecast the S&P 500 daily price index better than a multi-layer perceptron trained by the Back Propagation (BP) algorithm.
    • MIN, Jae H. and Young-Chan LEE, 2005. Bankruptcy prediction using support vector machine with optimal choice of kernel function parameters, Expert Systems with Applications, Volume 28, Issue 4, May 2005, Pages 603-614. [Cited by 6] (5.00/year)
      Abstract: “Bankruptcy prediction has drawn a lot of research interests in previous literature, and recent studies have shown that machine learning techniques achieved better performance than traditional statistical ones. This paper applies support vector machines (SVMs) to the bankruptcy prediction problem in an attempt to suggest a new model with better explanatory power and stability. To serve this purpose, we use a grid-search technique using 5-fold cross-validation to find out the optimal parameter values of kernel function of SVM. In addition, to evaluate the prediction accuracy of SVM, we compare its performance with those of multiple discriminant analysis (MDA), logistic regression analysis (Logit), and three-layer fully connected back-propagation neural networks (BPNs). The experiment results show that SVM outperforms the other methods.”found that, when applied to bankruptcy prediction, SVMs outperformed multiple discriminant analysis (MDA), logistic regression analysis (Logit) and three-layer fully connected back-propagation neural networks (BPNs).
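      A hedged sketch of the parameter-selection step described here (a grid search over C and the RBF kernel width with 5-fold cross-validation); the six “financial ratio” features and the bankrupt/non-bankrupt labels below are invented stand-ins:

        # Sketch only: choose RBF-SVM parameters by grid search with 5-fold CV.
        import numpy as np
        from sklearn.model_selection import GridSearchCV
        from sklearn.pipeline import Pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(3)
        X = rng.normal(size=(400, 6))                                  # stand-in financial ratios
        y = (X[:, 0] - 0.5 * X[:, 1] + 0.2 * rng.normal(size=400) > 0).astype(int)

        pipe = Pipeline([("scale", StandardScaler()), ("svm", SVC(kernel="rbf"))])
        grid = {"svm__C": [0.1, 1, 10, 100], "svm__gamma": [0.01, 0.1, 1]}
        search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy").fit(X, y)
        print(search.best_params_, "cross-validated accuracy %.3f" % search.best_score_)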
    • ABRAHAM, Ajith, Ninan Sajith PHILIP and P. SARATCHANDRAN, 2003. Modeling chaotic behavior of stock indices using intelligent paradigms, Neural, Parallel & Scientific Computations, Volume 11, pages 143-160. [Cited by 10] (4.55/year)
      Abstract: “The use of intelligent systems for stock market predictions has been widely established. In this paper, we investigate how the seemingly chaotic behavior of stock markets could be well represented using several connectionist paradigms and soft computing techniques. To demonstrate the different techniques, we considered Nasdaq-100 index of Nasdaq Stock MarketSM and the S&P CNX NIFTY stock index. We analyzed 7 year’s Nasdaq 100 main index values and 4 year’s NIFTY index values. This paper investigates the development of a reliable and efficient technique to model the seemingly chaotic behavior of stock markets. We considered an artificial neural network trained using Levenberg-Marquardt algorithm, Support Vector Machine (SVM), Takagi-Sugeno neurofuzzy model and a Difference Boosting Neural Network (DBNN). This paper briefly explains how the different connectionist paradigms could be formulated using different learning methods and then investigates whether they can provide the required level of performance, which are sufficiently good and robust so as to provide a reliable forecast model for stock market indices. Experiment results reveal that all the connectionist paradigms considered could represent the stock indices behavior very accurately.” The authors applied four different techniques, an artificial neural network trained using the Levenberg-Marquardt algorithm, a support vector machine, a difference boosting neural network and a Takagi-Sugeno fuzzy inference system learned using a neural network algorithm (neuro-fuzzy model), to the prediction of the Nasdaq-100 index of the Nasdaq Stock Market and the S&P CNX NIFTY stock index. No one technique was clearly superior, but absurdly, they attempted to predict the absolute value of the indices, rather than use log returns.
    • YANG, Haiqin, Laiwan CHAN and Irwin KING, 2002. Support Vector Machine Regression for Volatile Stock Market Prediction. In: Intelligent Data Engineering and Automated Learning: IDEAL 2002, edited by Hujun Yin, et al., pages 391–396, Springer. [Cited by 19] (4.52/year)
      Abstract: “Recently, Support Vector Regression (SVR) has been introduced to solve regression and prediction problems. In this paper, we apply SVR to financial prediction tasks. In particular, the financial data are usually noisy and the associated risk is time-varying. Therefore, our SVR model is an extension of the standard SVR which incorporates margins adaptation. By varying the margins of the SVR, we could reflect the change in volatility of the financial data. Furthermore, we have analyzed the effect of asymmetrical margins so as to allow for the reduction of the downside risk. Our experimental results show that the use of standard deviation to calculate a variable margin gives a good predictive result in the prediction of Hang Seng Index.” The authors tried varying the margins in SVM regression in order to reflect the change in volatility of financial data and also analyzed the effect of asymmetrical margins so as to allow for the reduction of the downside risk. The former approach produced the lowest total error when predicting the daily closing price of Hong Kong’s Hang Seng Index (HSI).
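      Off-the-shelf SVR implementations use a single fixed, symmetric margin, so the per-point, asymmetric margins studied here would need a custom solver. A hedged approximation of the underlying idea is simply to set the tube width from a recent volatility estimate (synthetic returns below; the scaling constant is a free choice):

        # Sketch only: widen the ε-tube when recent volatility is higher.
        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(4)
        returns = rng.normal(0, 0.01, 500) * (1 + np.linspace(0, 2, 500))   # volatility drifts upwards

        lags = 4
        X = np.column_stack([returns[i:len(returns) - lags + i] for i in range(lags)])
        y = returns[lags:]

        recent_vol = np.std(returns[-60:])                          # rolling-window volatility estimate
        svr = SVR(kernel="rbf", C=10.0, epsilon=0.5 * recent_vol)   # margin scales with volatility
        svr.fit(X, y)
        print("epsilon = %.4f, support vectors = %d" % (0.5 * recent_vol, len(svr.support_)))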
    • HUANG, W., Y. NAKAMORI and S.Y. WANG, 2005. Forecasting stock market movement direction with support vector machine, Computers & Operations Research, Volume 32, Issue 10, Pages 2513-2522. (October 2005) [Cited by 5] (4.18/year)
      Abstract: “Support vector machine (SVM) is a very specific type of learning algorithms characterized by the capacity control of the decision function, the use of the kernel functions and the sparsity of the solution. In this paper, we investigate the predictability of financial movement direction with SVM by forecasting the weekly movement direction of NIKKEI 225 index. To evaluate the forecasting ability of SVM, we compare its performance with those of Linear Discriminant Analysis, Quadratic Discriminant Analysis and Elman Backpropagation Neural Networks. The experiment results show that SVM outperforms the other classification methods. Further, we propose a combining model by integrating SVM with the other classification methods. The combining model performs best among all the forecasting methods.”compared the ability of SVMs, Linear Discriminant Analysis, Quadratic Discriminant Analysis and Elman Backpropagation Neural Networks to forecast the weekly movement direction of the NIKKEI 225 index and found that the SVM outperformed all of the other classification methods. Better still was a weighted combination of the models.
    • TRAFALIS, Theodore B. and Huseyin INCE, 2000. Support Vector Machine for Regression and Applications to Financial Forecasting. In: IJCNN 2000: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks: Volume 6 edited by Shun-Ichi Amari, et al., page 6348, IEEE Computer Society. [Cited by 19] (3.06/year)
      Abstract: “The main purpose of this paper is to compare the support vector machine (SVM) developed by Vapnik with other techniques such as Backpropagation and Radial Basis Function (RBF) Networks for financial forecasting applications. The theory of the SVM algorithm is based on statistical learning theory. Training of SVMs leads to a quadratic programming (QP) problem. Preliminary computational results for stock price prediction are also presented.”compared SVMs with Backpropagation and Radial Basis Function (RBF) Networks by predicting IBM, Yahoo and America Online daily stock prices. Oddly, using the SVM for regression they forwent a validation set, set epsilon to zero, fixed C and repeated the experiment for various fixed settings of the kernel parameter, sigma, giving rise to several results.
    • CAO, Lijuan and Qingming GU, 2002. Dynamic support vector machines for non-stationary time series forecasting, Intelligent Data Analysis, Volume 6, Number 1, Pages 67-83. [Cited by 12] (2.86/year)
      Abstract: “This paper proposes a modified version of support vector machines (SVMs), called dynamic support vector machines (DSVMs), to model non-stationary time series. The DSVMs are obtained by incorporating the problem domain knowledge — non-stationarity of time series into SVMs. Unlike the standard SVMs which use fixed values of the regularization constant and the tube size in all the training data points, the DSVMs use an exponentially increasing regularization constant and an exponentially decreasing tube size to deal with structural changes in the data. The dynamic regularization constant and tube size are based on the prior knowledge that in the non-stationary time series recent data points could provide more important information than distant data points. In the experiment, the DSVMs are evaluated using both simulated and real data sets. The simulation shows that the DSVMs generalize better than the standard SVMs in forecasting non-stationary time series. Another advantage of this modification is that the DSVMs use fewer support vectors, resulting in a sparser representation of the solution.”incorporate the prior knowledge that financial time series are nonstationary into their “dynamic support vector machines (DSVMs)” and use an exponentially increasing regularization constant and an exponentially decreasing tube size to deal with structural changes in the data on the assumption that recent data points could provide more important information than distant data points. They conclude that DSVMs generalize better than standard SVMs in forecasting non-stationary time series, whilst they also use fewer support vectors, resulting in a sparser representation of the solution.
    • TAY, Francis E. H. and L. J. CAO, 2002. ε-Descending Support Vector Machines for Financial Time Series Forecasting, Neural Processing Letters 15(2): 179-195. [Cited by 11] (2.62/year)
      Abstract: “This paper proposes a modified version of support vector machines (SVMs), called ε-descending support vector machines (ε-DSVMs), to model non-stationary financial time series. The ε-DSVMs are obtained by incorporating the problem domain knowledge – non-stationarity of financial time series into SVMs. Unlike the standard SVMs which use a constant tube in all the training data points, the ε-DSVMs use an adaptive tube to deal with the structure changes in the data. The experiment shows that the ε-DSVMs generalize better than the standard SVMs in forecasting non-stationary financial time series. Another advantage of this modification is that the ε-DSVMs converge to fewer support vectors, resulting in a sparser representation of the solution.” The authors incorporated the problem domain knowledge of non-stationarity of financial time series into SVMs by using an adaptive tube in their so-called “ε-descending support vector machines” (ε-DSVMs). Experiments showed that ε-DSVMs generalise better than standard SVMs in forecasting non-stationary financial time series and also converge to fewer support vectors, resulting in a sparser representation of the solution.
    • DEBNATH, Sandip and C. Lee GILES, 2005. A Learning Based Model for Headline Extraction of News Articles to Find Explanatory Sentences for Events, In: K-CAP ’05: Proceedings of the 3rd international conference on Knowledge capture, Pages 189–190. [Cited by 2] (1.67/year)
      Abstract: “Metadata information plays a crucial role in augmenting document organising efficiency and archivability. News metadata includes DateLine, ByLine, HeadLine and many others. We found that HeadLine information is useful for guessing the theme of the news article. Particularly for financial news articles, we found that HeadLine can thus be specially helpful to locate explanatory sentences for any major events such as significant changes in stock prices. In this paper we explore a support vector based learning approach to automatically extract the HeadLine metadata. We find that the classification accuracy of finding the HeadLines improves if DateLines are identified first. We then used the extracted HeadLines to initiate a pattern matching of keywords to find the sentences responsible for story theme. Using this theme and a simple language model it is possible to locate any explanatory sentences for any significant price change.” The authors devised a novel approach to extracting HeadLine metadata from news articles using SVMs and used the extracted HeadLines to find story themes, yielding sentence-based explanations for stock price changes.
    • Van GESTEL, Tony, et al., 2003. A support vector machine approach to credit scoring, Bank en Financiewezen, Volume 2, March, Pages 73-82. [Cited by 5] (1.56/year)
      Abstract: “Driven by the need to allocate capital in a profitable way and by the recently suggested Basel II regulations, financial institutions are being more and more obliged to build credit scoring models assessing the risk of default of their clients. Many techniques have been suggested to tackle this problem. Support Vector Machines (SVMs) is a promising new technique that has recently emanated from different domains such as applied statistics, neural networks and machine learning. In this paper, we experiment with least squares support vector machines (LS-SVMs), a recently modified version of SVMs, and report significantly better results when contrasted with the classical techniques.”compared four methodologies, Ordinary Least Squares (OLS), Ordinal Logistic Regression (OLR), the Multilayer Perceptron (MLP) and least squares support vector machines (LS-SVMs) when applied to credit scoring. The SVM methodology yielded significantly and consistently better results than the classical linear rating methods.
    • FAN, Alan and Marimuthu PALANISWAMI, 2000. Selecting Bankruptcy Predictors Using a Support Vector Machine Approach, IJCNN 2000: Proceedings of the IEEE-INNS-ENNS International Joint Conference on Neural Networks, Volume 6, edited by Shun-Ichi Amari et al., page 6354. [Cited by 9] (1.45/year)
      Abstract: “Conventional Neural Network approach has been found useful in predicting corporate distress from financial statements. In this paper, we have adopted a Support Vector Machine approach to the problem. A new way of selecting bankruptcy predictors is shown, using the Euclidean distance based criterion calculated within the SVM kernel. A comparative study is provided using three classical corporate distress models and an alternative model based on the SVM approach.”use SVMs to select bankruptcy predictors, and provide a comparative study.
    • TAY, Francis Eng Hock and Li Juan CAO, 2001. Improved financial time series forecasting by combining Support Vector Machines with self-organizing feature map, Intelligent Data Analysis, Volume 5, Number 4, Pages 339-354. [Cited by 7] (1.35/year)
      Abstract: “A two-stage neural network architecture constructed by combining Support Vector Machines (SVMs) with self-organizing feature map (SOM) is proposed for financial time series forecasting. In the first stage, SOM is used as a clustering algorithm to partition the whole input space into several disjoint regions. A tree-structured architecture is adopted in the partition to avoid the problem of predetermining the number of partitioned regions. Then, in the second stage, multiple SVMs, also called SVM experts, that best fit each partitioned region are constructed by finding the most appropriate kernel function and the optimal learning parameters of SVMs. The Santa Fe exchange rate and five real futures contracts are used in the experiment. It is shown that the proposed method achieves both significantly higher prediction performance and faster convergence speed in comparison with a single SVM model.”combined SVMs with a self-organizing feature map (SOM) and tested the model on the Santa Fe exchange rate and five real futures contracts. They showed that their proposed method achieves both significantly higher prediction performance and faster convergence speed in comparison with a single SVM model.
    • SANSOM, D. C., T. DOWNS and T. K. SAHA, 2003. Evaluation of support vector machine based forecasting tool in electricity price forecasting for Australian national electricity market participants, Journal of Electrical & Electronics Engineering, Australia, Vol 22, No. 3, Pages 227-234. [Cited by 5] (1.19/year)
      Abstract: “In this paper we present an analysis of the results of a study into wholesale (spot) electricity price forecasting utilising Neural Networks (NNs) and Support Vector Machines (SVM). Frequent regulatory changes in electricity markets and the quickly evolving market participant pricing (bidding) strategies cause efficient retraining to be crucial in maintaining the accuracy of electricity price forecasting models. The efficiency of NN and SVM retraining for price forecasting was evaluated using Australian National Electricity Market (NEM), New South Wales regional data over the period from September 1998 to December 1998. The analysis of the results showed that SVMs with one unique solution, produce more consistent forecasting accuracies and so require less time to optimally train than NNs which can result in a solution at any of a large number of local minima. The SVM and NN forecasting accuracies were found to be very similar.” The authors evaluated the use of Neural Networks (NNs) and Support Vector Machines (SVMs) for wholesale (spot) electricity price forecasting. The SVM required less time to train optimally than the NN, whilst the SVM and NN forecasting accuracies were found to be very similar.
    • ABRAHAM, Ajith and Andy AUYEUNG, 2003. Integrating Ensemble of Intelligent Systems for Modeling Stock Indices, In: Proceedings of 7th International Work Conference on Artificial and Natural Neural Networks, Part II, Lecture Notes in Computer Science, Volume 2687, Jose Mira and Jose R. Alverez (Eds.), Springer Verlag, Germany, pp. 774-781, 2003. [Cited by 3] (0.94/year)
      Abstract: “The use of intelligent systems for stock market predictions has been widely established. In this paper, we investigate how the seemingly chaotic behavior of stock markets could be well-represented using ensemble of intelligent paradigms. To demonstrate the proposed technique, we considered Nasdaq-100 index of Nasdaq Stock MarketSM and the S&P CNX NIFTY stock index. The intelligent paradigms considered were an artificial neural network trained using Levenberg-Marquardt algorithm, support vector machine, Takagi-Sugeno neuro-fuzzy model and a difference boosting neural network. The different paradigms were combined using two different ensemble approaches so as to optimize the performance by reducing the different error measures. The first approach is based on a direct error measure and the second method is based on an evolutionary algorithm to search the optimal linear combination of the different intelligent paradigms. Experimental results reveal that the ensemble techniques performed better than the individual methods and the direct ensemble approach seems to work well for the problem considered.” The authors considered an artificial neural network trained using the Levenberg-Marquardt algorithm, a support vector machine, a Takagi-Sugeno neuro-fuzzy model and a difference boosting neural network for predicting the NASDAQ-100 Index of The Nasdaq Stock Market and the S&P CNX NIFTY stock index. They concluded that an ensemble of the intelligent paradigms performed better than the individual methods.
    • YANG, Haiqin, et al., 2004. Financial Time Series Prediction Using Non-fixed and Asymmetrical Margin Setting with Momentum in Support Vector Regression. In: Neural Information Processing: Research and Development, edited by Jagath Chandana Rajapakse and Lipo Wang, Springer-Verlag. [Cited by 2] (0.91/year)
      Abstract: “Recently, Support Vector Regression (SVR) has been applied to financial time series prediction. The financial time series usually contains the characteristics of small sample size, high noise and non-stationary. Especially the volatility of the time series is time-varying and embeds some valuable information about the series. Previously, we had proposed to use the volatility in the data to adaptively change the width of the margin in SVR. We have noticed that up margin and down margin would not necessary be the same, and we also observed that their choice would affect the upside risk, downside risk and as well as the overall prediction performance. In this work, we introduce a novel approach to adopt the momentum in the asymmetrical margins setting. We applied and compared this method to predict the Hang Seng Index and Dow Jones Industrial Average.”used SVMs for regression with non-fixed and asymmetrical margin settings, this time with momentum, to predict the Hang Seng Index and Dow Jones Industrial Average.
    • PAI, Ping-Feng and Chih-Sheng LIN, 2005. A hybrid ARIMA and support vector machines model in stock price forecasting, Omega, Volume 33, Issue 6, December 2005, Pages 497-505. [Cited by 1] (0.84/year)
      Abstract: “Traditionally, the autoregressive integrated moving average (ARIMA) model has been one of the most widely used linear models in time series forecasting. However, the ARIMA model cannot easily capture the nonlinear patterns. Support vector machines (SVMs), a novel neural network technique, have been successfully applied in solving nonlinear regression estimation problems. Therefore, this investigation proposes a hybrid methodology that exploits the unique strength of the ARIMA model and the SVMs model in forecasting stock prices problems. Real data sets of stock prices were used to examine the forecasting accuracy of the proposed model. The results of computational tests are very promising.” The authors proposed a hybrid ARIMA and support vector machine model for stock price forecasting, and the results looked very promising.
    • ABRAHAM, Ajith, et al., 2002. Performance Analysis of Connectionist Paradigms for Modeling Chaotic Behavior of Stock Indices, In: Second international workshop on Intelligent systems design and application, edited by Ajith Abraham, et al., pages 181–186. [Cited by 3] (0.71/year)
      Abstract: “The use of intelligent systems for stock market predictions has been widely established. In this paper, we investigate how the seemingly chaotic behavior of stock markets could be well represented using several connectionist paradigms and soft computing techniques. To demonstrate the different techniques, we considered Nasdaq-100 index of Nasdaq Stock MarketTM and the S&P CNX NIFTY stock index. We analyzed 7 year’s Nasdaq 100 main index values and 4 year’s NIFTY index values. This paper investigates the development of a reliable and efficient technique to model the seemingly chaotic behavior of stock markets. We considered an artificial neural network trained using Levenberg-Marquardt algorithm, Support Vector Machine (SVM), Takagi-Sugeno neuro-fuzzy model and a Difference Boosting Neural Network (DBNN). This paper briefly explains how the different connectionist paradigms could be formulated using different learning methods and then investigates whether they can provide the required level of performance, which are sufficiently good and robust so as to provide a reliable forecast model for stock market indices. Experiment results reveal that all the connectionist paradigms considered could represent the stock indices behavior very accurately.”analysed the performance of an artificial neural network trained using Levenberg-Marquardt algorithm, Support Vector Machine (SVM), Takagi-Sugeno neuro-fuzzy model and a Difference Boosting Neural Network (DBNN) when predicting the NASDAQ-100 Index of The Nasdaq Stock Market and the S&P CNX NIFTY stock index.
    • YANG, Haiqin, I. KING and Laiwan CHAN, 2002. Non-fixed and asymmetrical margin approach to stock market prediction using Support Vector Regression. In: ICONIP ’02. Proceedings of the 9th International Conference on Neural Information Processing. Volume 3, edited by Lipo Wang, et al., pages 1398–1402. [Cited by 3] (0.71/year)
      Abstract: “Recently, support vector regression (SVR) has been applied to financial time series prediction. Typical characteristics of financial time series are non-stationary and noisy in nature. The volatility, usually time-varying, of the time series is therefore some valuable information about the series. Previously, we had proposed to use the volatility to adaptively change the width of the margin of SVR. We have noticed that upside margin and downside margin do not necessary be the same, and we have observed that their choice would affect the upside risk, downside risk and as well as the overall prediction result. In this paper, we introduce a novel approach to adapt the asymmetrical margins using momentum. We applied and compared this method to predict the Hang Seng Index and Dow Jones Industrial Average.”used SVM regression with a non-fixed and asymmetrical margin, this time adapting the asymmetrical margins using momentum, and applied it to predicting the Hang Seng Index and the Dow Jones Industrial Average.
    • GAVRISHCHAKA, Valeriy V. and Supriya B. GANGULI, 2003. Volatility forecasting from multiscale and high-dimensional market data, Neurocomputing, Volume 55, Issues 1-2 (September 2003), Pages 285-305. [Cited by 2] (0.63/year)
      Abstract: “Advantages and limitations of the existing volatility models for forecasting foreign-exchange and stock market volatility from multiscale and high-dimensional data have been identified. Support vector machines (SVM) have been proposed as a complimentary volatility model that is capable of effectively extracting information from multiscale and high-dimensional market data. SVM-based models can handle both long memory and multiscale effects of inhomogeneous markets without restrictive assumptions and approximations required by other models. Preliminary results with foreign-exchange data suggest that SVM can effectively work with high-dimensional inputs to account for volatility long-memory and multiscale effects. Advantages of the SVM-based models are expected to be of the utmost importance in the emerging field of high-frequency finance and in multivariate models for portfolio risk management.”used SVMs for forecasting the volatility of foreign-exchange data. Their preliminary benchmark tests indicated that SVMs can perform significantly better than or comparable to both naive and GARCH(1,1) models.
    • PÉREZ-CRUZ, Fernando, Julio A. AFONSO-RODRÍGUEZ and Javier GINER, 2003. Estimating GARCH models using support vector machines, Quantitative Finance, Volume 3, Number 3 (June 2003), Pages 163-172. [Cited by 2] (0.63/year)
      Abstract: “Support vector machines (SVMs) are a new nonparametric tool for regression estimation. We will use this tool to estimate the parameters of a GARCH model for predicting the conditional volatility of stock market returns. GARCH models are usually estimated using maximum likelihood (ML) procedures, assuming that the data are normally distributed. In this paper, we will show that GARCH models can be estimated using SVMs and that such estimates have a higher predicting ability than those obtained via common ML methods.”used SVMs for regression to estimate the parameters of a GARCH model for predicting the conditional volatility of stock market returns and showed that such estimates have a higher predicting ability than those obtained via common maximum likelihood (ML) methods.
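      A hedged, simplified sketch in this spirit (not the authors’ actual estimation procedure): simulate a GARCH(1,1) series, then fit an SVR that maps the lagged squared return and a variance proxy to the next squared return.

        # Sketch only: GARCH(1,1)-style volatility regression fitted with SVR.
        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(5)
        w, a, b, n = 1e-5, 0.08, 0.9, 1000
        r = np.zeros(n)
        sigma2 = np.full(n, w / (1 - a - b))
        for t in range(1, n):              # simulate sigma2_t = w + a*r_{t-1}^2 + b*sigma2_{t-1}
            sigma2[t] = w + a * r[t - 1] ** 2 + b * sigma2[t - 1]
            r[t] = np.sqrt(sigma2[t]) * rng.normal()

        ewma = np.zeros(n)                 # EWMA of squared returns as a variance proxy
        ewma[0] = r[0] ** 2
        for t in range(1, n):
            ewma[t] = 0.94 * ewma[t - 1] + 0.06 * r[t - 1] ** 2

        X = np.column_stack([r[:-1] ** 2, ewma[:-1]])
        y = r[1:] ** 2                     # noisy target for the conditional variance
        svr = SVR(kernel="rbf", C=1.0, epsilon=1e-6).fit(X, y)
        print("variance forecast for the last point: %.2e" % svr.predict(X[-1:])[0])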
    • Van GESTEL, T., et al., 2003. Bankruptcy prediction with least squares support vector machine classifiers. In: 2003 IEEE International Conference on Computational Intelligence for Financial Engineering: Proceedings, pages 1-8. [Cited by 2] (0.63/year)
      Abstract: “Classification algorithms like linear discriminant analysis and logistic regression are popular linear techniques for modelling and predicting corporate distress. These techniques aim at finding an optimal linear combination of explanatory input variables, such as, e.g., solvency and liquidity ratios, in order to analyse, model and predict corporate default risk. Recently, performant kernel based nonlinear classification techniques, like support vector machines, least squares support vector machines and kernel fisher discriminant analysis, have been developed. Basically, these methods map the inputs first in a nonlinear way to a high dimensional kernel-induced feature space, in which a linear classifier is constructed in the second step. Practical expressions are obtained in the so-called dual space by application of Mercer’s theorem. In this paper, we explain the relations between linear and nonlinear kernel based classification and illustrate their performance on predicting bankruptcy of mid-cap firms in Belgium and the Netherlands.”used least squares support vector machine classifiers for predicting bankruptcy of mid-cap firms in Belgium and the Netherlands.
    • CAO, L. J. and W. K. CHONG, 2002. Feature extraction in support vector machine: a comparison of PCA, XPCA and ICA, ICONIP ’02: Proceedings of the 9th International Conference on Neural Information Processing, Volume 2, edited by Lipo Wang, et al., pages 1001-1005. [Cited by 2] (0.48/year)
      Abstract: “Recently, support vector machine (SVM) has become a popular tool in time series forecasting. In developing a successful SVM forecaster, feature extraction is the first important step. This paper proposes the applications of principal component analysis (PCA), kernel principal component analysis (KPCA) and independent component analysis (ICA) to SVM for feature extraction. PCA linearly transforms the original inputs into uncorrelated features. KPCA is a nonlinear PCA developed by using the kernel method. In ICA, the original inputs are linearly transformed into statistically independent features. By examining the sunspot data and one real futures contract, the experiment shows that SVM by feature extraction using PCA, KPCA or ICA can perform better than that without feature extraction. Furthermore, there is better generalization performance in KPCA and ICA feature extraction than PCA feature extraction.”considered the application of principal component analysis (PCA), kernel principal component analysis (KPCA) and independent component analysis (ICA) to SVMs for feature extraction. By examining the sunspot data and one real futures contract, they showed that SVM by feature extraction using PCA, KPCA or ICA can perform better than that without feature extraction. Furthermore, they found that there is better generalization performance in KPCA and ICA feature extraction than PCA feature extraction.
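      A hedged sketch of such a comparison, with synthetic inputs in place of the sunspot and futures series, feeding the same SVR from raw inputs and from PCA, kernel PCA and ICA features:

        # Sketch only: compare feature-extraction front ends for an SVR forecaster.
        import numpy as np
        from sklearn.decomposition import PCA, KernelPCA, FastICA
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import SVR

        rng = np.random.default_rng(6)
        X = rng.normal(size=(300, 10))
        y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=300)

        candidates = {
            "raw":  SVR(kernel="rbf"),
            "PCA":  make_pipeline(PCA(n_components=5), SVR(kernel="rbf")),
            "KPCA": make_pipeline(KernelPCA(n_components=5, kernel="rbf"), SVR(kernel="rbf")),
            "ICA":  make_pipeline(FastICA(n_components=5, max_iter=1000, random_state=0), SVR(kernel="rbf")),
        }
        for name, model in candidates.items():
            mse = -cross_val_score(model, X, y, cv=5, scoring="neg_mean_squared_error").mean()
            print(name, "cross-validated MSE: %.3f" % mse)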
    • CAO, L. J. and Francis E. H. TAY, 2000. Feature Selection for Support Vector Machines in Financial Time Series Forecasting. In: Intelligent Data Engineering and Automated Learning – IDEAL 2000: Data Mining, Financial Engineering, and Intelligent Agents, edited by Kwong Sak Leung, Lai-Wan Chan and Helen Meng, pages 268-273. [Cited by 3] (0.48/year)
      Abstract: “This paper deals with the application of saliency analysis to Support Vector Machines (SVMs) for feature selection. The importance of feature is ranked by evaluating the sensitivity of the network output to the feature input in terms of the partial derivative. A systematic approach to remove irrelevant features based on the sensitivity is developed. Five futures contracts are examined in the experiment. Based on the simulation results, it is shown that saliency analysis is effective in SVMs for identifying important features.” The authors dealt with the application of saliency analysis to feature selection for SVMs. Five futures contracts were examined and they concluded that saliency analysis is effective in SVMs for identifying important features.
    • ZHOU, Dianmin, Feng GAO and Xiaohong GUAN, 2004. Application of accurate online support vector regression in energy price forecast, WCICA 2004: Fifth World Congress on Intelligent Control and Automation, Volume 2, pages 1838-1842. [Cited by 1] (0.45/year)
      Abstract: “Energy price is the most important indicator in electricity markets and its characteristics are related to the market mechanism and the change versus the behaviors of market participants. It is necessary to build a real-time price forecasting model with adaptive capability. In this paper, an accurate online support vector regression (AOSVR) method is applied to update the price forecasting model. Numerical testing results show that the method is effective in forecasting the prices of the electric-power markets.” The authors applied accurate online support vector regression (AOSVR) to forecasting prices in electric-power markets; the results showed that it was effective.
    • FAN, A. and M. PALANISWAMI, 2001. Stock selection using support vector machines, IJCNN’01: International Joint Conference on Neural Networks, Volume 3, Pages 1793-1798. [Cited by 2] (0.38/year)
      Abstract: “We used the support vector machines (SVM) in a classification approach to `beat the market’. Given the fundamental accounting and price information of stocks trading on the Australian Stock Exchange, we attempt to use SVM to identify stocks that are likely to outperform the market by having exceptional returns. The equally weighted portfolio formed by the stocks selected by SVM has a total return of 208% over a five years period, significantly outperformed the benchmark of 71%. We also give a new perspective with a class sensitivity tradeoff, whereby the output of SVM is interpreted as a probability measure and ranked, such that the stocks selected can be fixed to the top 25%”used SVMs for classification for stock selection on the Australian Stock Exchange and significantly outperformed the benchmark.
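      A hedged sketch of the ranking step (features and labels below are invented): train an SVM classifier on fundamental inputs, rank the investable universe by the decision value and keep the top 25%.

        # Sketch only: rank stocks by SVM decision values and select the top quartile.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(7)
        X_train = rng.normal(size=(800, 8))            # stand-in fundamental / price features
        y_train = (X_train[:, 0] + 0.5 * X_train[:, 3] + rng.normal(size=800) > 0).astype(int)
        X_universe = rng.normal(size=(200, 8))         # current investable universe

        model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        model.fit(X_train, y_train)

        scores = model.decision_function(X_universe)   # higher = more likely to outperform
        top = np.argsort(scores)[-len(scores) // 4:]   # top 25% of the universe
        print("selected stock indices:", sorted(top.tolist()))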
    • Van GESTEL, Tony, et al., 2000. Volatility Tube Support Vector Machines, Neural Network World, vol. 10, number 1, pp. 287-297. [Cited by 2] (0.32/year)
      Abstract: “In Support Vector Machines (SVM’s), a non-linear model is estimated based on solving a Quadratic Programming (QP) problem. The quadratic cost function consists of a maximum likelihood cost term with constant variance and a regularization term. By specifying a difference inclusion on the noise variance model, the maximum likelihood term is adopted for the case of heteroskedastic noise, which arises in financial time series. The resulting Volatility Tube SVM’s are applied on the 1-day ahead prediction of the DAX30 stock index. The influence of today’s closing prices of the New York Stock Exchange on the prediction of tomorrow’s DAX30 closing price is analyzed.”developed the Volatility Tube SVM and applied it to 1-day ahead prediction of the DAX30 stock index, and significant positive out-of-sample results were obtained.
    • CAO, Li Juan, Kok Seng CHUA and Lim Kian GUAN, 2003. Combining KPCA with support vector machine for time series forecasting. In: 2003 IEEE International Conference on Computational Intelligence for Financial Engineering, pages 325-329. [Cited by 1] (0.31/year)
      Abstract: “Recently, support vector machine (SVM) has become a popular tool in time series forecasting. In developing a successful SVM forecaster, the first important step is feature extraction. This paper applies kernel principal component analysis (KPCA) to SVM for feature extraction. KPCA is a nonlinear PCA developed by using the kernel method. It firstly transforms the original inputs into a high dimensional feature space and then calculates PCA in the high dimensional feature space. By examining the sunspot data and one real futures contract, the experiment shows that SVM by feature extraction using KPCA performs much better than that without feature extraction. In comparison with PCA, there is also superior performance in KPCA.” The authors applied kernel principal component analysis (KPCA) to SVM for feature extraction. They examined sunspot data and one real futures contract, and found that such feature extraction enhanced performance and that KPCA was superior to PCA.
    • YANG, Haiqin, 2003. Margin Variations in Support Vector Regression for the Stock Market Prediction, Degree of Master of Philosophy Thesis, Department of Computer Science & Engineering, The Chinese University of Hong Kong, June 2003. [Cited by 1] (0.31/year)
      Abstract: “Support Vector Regression (SVR) has been applied successfully to financial time series prediction recently. In SVR, the ε-insensitive loss function is usually used to measure the empirical risk. The margin in this loss function is fixed and symmetrical. Typically, researchers have used methods such as crossvalidation or random selection to select a suitable ε for that particular data set. In addition, financial time series are usually embedded with noise and the associated risk varies with time. Using a fixed and symmetrical margin may have more risk inducing bad results and may lack the ability to capture the information of stock market promptly.
      In order to improve the prediction accuracy and to consider reducing the downside risk, we extend the standard SVR by varying the margin. By varying the width of the margin, we can reflect the change of volatility in the financial data; by controlling the symmetry of margins, we are able to reduce the downside risk. Therefore, we focus on the study of setting the width of the margin and also the study of its symmetry property.
      For setting the width of margin, the Momentum (also including asymmetrical margin control) and Generalized Autoregressive Conditional Heteroskedasticity (GARCH) models are considered. Experiments are performed on two indices: Hang Seng Index (HSI) and Dow Jones Industrial Average (DJIA) for the Momentum method and three indices: Nikkei225, DJIA and FTSE100, for GARCH models, respectively. The experimental results indicate that these methods improve the predictive performance comparing with the standard SVR and benchmark model. On the study of the symmetry property, we give a sufficient condition to prove that the predicted value is monotone decreasing to the increase of the up margin. Therefore, we can reduce the predictive downside risk, or keep it zero, by increasing the up margin. An algorithm is also proposed to test the validity of this condition, such that we may know the changing trend of predictive downside risk by only running this algorithm on the training data set without performing actual prediction procedure. Experimental results also validate our analysis.” The author employs SVMs for regression, varying the width of the margin to reflect changes in volatility and controlling the symmetry of the margins to reduce the downside risk. Results were positive.
    • CALVO, Rafael A. and Ken WILLIAMS, 2002. Automatic Categorization of Announcements on the Australian Stock Exchange. [Cited by 1] (0.24/year)
      Abstract: “This paper compares the performance of several machine learning algorithms for the automatic categorization of corporate announcements in the Australian Stock Exchange (ASX) Signal G data stream. The article also describes some of the applications that the categorization of corporate announcements may enable. We have performed tests on two categorization tasks: market sensitivity, which indicates whether an announcement will have an impact on the market, and report type, which classifies each announcement into one of the report categories defined by the ASX. We have tried Neural Networks, a Naïve Bayes classifier, and Support Vector Machines and achieved good results.” The authors compared the performance of neural networks, a naïve Bayes classifier, and SVMs for the automatic categorization of corporate announcements in the Australian Stock Exchange (ASX) Signal G data stream. The results were all good, but with the SVM underperforming the other two models.
    • AHMED, A.H.M.T., 2000. Forecasting of foreign exchange rate time series using support vector regression. 3rd year project. Computer Science Department, University of Manchester. [Cited by 1] (0.16/year)
      The author used support vector regression for forecasting a foreign exchange rate time series.
    • GUESDE, Bazile, 2000. Predicting foreign exchange rates with support vector regression machines. MSc thesis. Computer Science Department, University of Manchester. [Cited by 1] (0.16/year)
      Abstract: “This thesis investigates how Support Vector Regression can be applied to forecasting foreign exchange rates. At first we introduce the reader to this non linear kernel based regression and demonstrate how it can be used for time series prediction. Then we define a predictive framework and apply it to the Canadian exchange rates. But the non-stationarity in the data, which we here define as a drift in the map of the dynamics, forces us to present and use the typical learning processes for catching different dynamics. Our implementation of these solutions include Clusters of Volatility and competing experts. Finally those experts are used in a financial vote trading system and substantial profits are achieved. Through out the thesis we hope the reader will be intrigued by the results of our analysis and be encouraged in other dircetions for further research.”used SVMs for regression to predict the Canadian exchange rate, wisely recognised the problem of nonstationarity, dealt with it using experts and claimed that substantial profits were achieved.
    • BAO, Yu-Kun, et al., 2005. Forecasting Stock Composite Index by Fuzzy Support Vector Machines Regression, Proceedings of 2005 International Conference on Machine Learning and Cybernetics, Volume 6, pages 3535-3540. [not cited] (0/year)
      Abstract: “Financial time series forecasting methods such as exponential smoothing are commonly used for prediction on stock composition index (SCI) and have made great contribution in practice, but efforts on looking for superior forecasting method are still made by practitioners and academia. This paper deals with the application of a novel neural network technique, fuzzy support vector machines regression (FSVMR), in SCI forecasting. The objective of this paper is not only to examine the feasibility of FSVMR in SCI forecasting but presents our efforts on improving the accuracy of FSVMR in terms of data pre-processing, kernel function selection and parameters selection. A data set from Shanghai Stock Exchange is used for the experiment to test the validity of FSVMR. The experiment shows FSVMR a better method in SCI forecasting.”used fuzzy support vector machines regression (FSVMR) to forecast a data set from the Shanghai Stock Exchange with positive results.
    • CHEN, Kuan-Yu and Chia-Hui HO, 2005. An Improved Support Vector Regression Modeling for Taiwan Stock Exchange Market Weighted Index Forecasting, ICNN&B ’05: International Conference on Neural Networks and Brain, 2005, Volume 3 [not cited] (0/year)
      Abstract: “This study applies a novel neural network technique, Support Vector Regression (SVR), to Taiwan Stock Exchange Market Weighted Index (TAIEX) forecasting. To build an effective SVR model, SVR’s parameters must be set carefully. This study proposes a novel approach, known as GA-SVR, which searches for SVR’s optimal parameters using real value genetic algorithms. The experimental results demonstrate that SVR outperforms the ANN and RW models based on the Normalized Mean Square Error (NMSE), Mean Square Error (MSE) and Mean Absolute Percentage Error (MAPE). Moreover, in order to test the importance and understand the features of SVR model, this study examines the effects of the number of input node.”used support vector regression, with parameters tuned by a real-value genetic algorithm (GA-SVR), to forecast the Taiwan Stock Exchange Market Weighted Index (TAIEX); the SVR outperformed the ANN and random walk (RW) models. A hyper-parameter search sketch follows this entry.
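      The paper tunes the SVR with a real-value genetic algorithm; the sketch below substitutes a plain grid search over the same kind of hyper-parameters (C, gamma, epsilon) on a synthetic index series, purely to illustrate the tuning step. The data and parameter ranges are assumptions, not the paper's setup.

```python
# Hedged sketch: hyper-parameter search for SVR index forecasting.
# The paper uses a real-valued GA; grid search stands in for it here.
import numpy as np
from sklearn.model_selection import GridSearchCV, TimeSeriesSplit
from sklearn.svm import SVR

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 300)) + 100.0  # synthetic "index" series

# Use the previous 5 closes to predict the next close.
lags = 5
X = np.column_stack([prices[i:len(prices) - lags + i] for i in range(lags)])
y = prices[lags:]

search = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "gamma": [0.01, 0.1, 1.0], "epsilon": [0.01, 0.1]},
    cv=TimeSeriesSplit(n_splits=3),          # respect temporal ordering
    scoring="neg_mean_squared_error",
)
search.fit(X, y)
print(search.best_params_, -search.best_score_)
```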
    • CHEN, Wun-Hwa and Jen-Ying SHIH, 2006. A study of Taiwan’s issuer credit rating systems using support vector machines, Expert Systems with Applications, Volume 30, Issue 3, April 2006, Pages 427-435. [not cited] (0/year)
      Abstract: “By providing credit risk information, credit rating systems benefit most participants in financial markets, including issuers, investors, market regulators and intermediaries. In this paper, we propose an automatic classification model for issuer credit ratings, a type of fundamental credit rating information, by applying the support vector machine (SVM) method. This is a novel classification algorithm that is famous for dealing with high dimension classifications. We also use three new variables: stock market information, financial support by the government, and financial support by major shareholders to enhance the effectiveness of the classification. Previous research has seldom considered these variables. The data period of the input variables used in this study covers three years, while most previous research has only considered one year. We compare our SVM model with the back propagation neural network (BP), a well-known credit rating classification method. Our experiment results show that the SVM classification model performs better than the BP model. The accuracy rate (84.62%) is also higher than previous research.”used an SVM to classify Taiwan’s issuer credit ratings and found that it performed better than the back propagation neural network (BP) model.
    • CHEN, Wun-Hua, Jen-Ying SHIH and Soushan WU, 2006. Comparison of support-vector machines and back propagation neural networks in forecasting the six major Asian stock markets, International Journal of Electronic Finance, Volume, Issue 1, pages 49-67. [not cited] (0/year)
      Abstract: “Recently, applying the novel data mining techniques for financial time-series forecasting has received much research attention. However, most researches are for the US and European markets, with only a few for Asian markets. This research applies Support-Vector Machines (SVMs) and Back Propagation (BP) neural networks for six Asian stock markets and our experimental results showed the superiority of both models, compared to the early researches.”compared SVMs and back propagation (BP) neural networks in forecasting the six major Asian stock markets. Both models performed better than the benchmark AR(1) model on the deviation measurement criteria, whilst the SVMs performed better than the BP model in four out of six markets.
    • GAVRISHCHAKA, Valeriy V. and Supriya BANERJEE, 2006. Support Vector Machine as an Efficient Framework for Stock Market Volatility Forecasting, Computational Management Science, Volume 3, Number 2 (April 2006), Pages 147-160. [not cited] (0/year)
      Abstract: “Advantages and limitations of the existing models for practical forecasting of stock market volatility have been identified. Support vector machine (SVM) have been proposed as a complimentary volatility model that is capable to extract information from multiscale and high-dimensional market data. Presented results for SP500 index suggest that SVM can efficiently work with high-dimensional inputs to account for volatility long-memory and multiscale effects and is often superior to the main-stream volatility models. SVM-based framework for volatility forecasting is expected to be important in the development of the novel strategies for volatility trading, advanced risk management systems, and other applications dealing with multi-scale and high-dimensional market data.”used an SVM-based framework to forecast S&P 500 volatility and found it often superior to mainstream volatility models. A minimal volatility-forecasting sketch follows this entry.
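      The sketch below illustrates the general idea of regressing a volatility proxy on its own recent history with an SVR. The synthetic returns, the absolute-return proxy, the window length and the SVR parameters are all assumptions for illustration, not the paper's specification.

```python
# Hedged sketch: SVR-based volatility forecasting on synthetic data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
returns = rng.normal(0, 0.01, 1000)   # stand-in for daily index returns
vol = np.abs(returns)                 # crude daily volatility proxy

# Predict tomorrow's volatility proxy from the last 10 days of it.
window = 10
X = np.column_stack([vol[i:len(vol) - window + i] for i in range(window)])
y = vol[window:]

split = 800                           # simple train / out-of-sample split
model = SVR(kernel="rbf", C=10.0, epsilon=0.001)
model.fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("out-of-sample MAE:", np.mean(np.abs(pred - y[split:])))
```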
    • HOVSEPIAN, K. and P. ANSELMO, 2005. Heuristic Solutions to Technical Issues Associated with Clustered Volatility Prediction using Support Vector Machines, ICNN&B’05: International Conference on Neural Networks and Brain, 2005, Volume 3, Pages 1656-1660. [not cited] (0/year)
      Abstract: “We outline technological issues and our findings for the problem of prediction of relative volatility bursts in dynamic time-series utilizing support vector classifiers (SVC). The core approach used for prediction has been applied successfully to detection of relative volatility clusters. In applying it to prediction, the main issue is the selection of the SVC training/testing set. We describe three selection schemes and experimentally compare their performances in order to propose a method for training the SVC for the prediction problem. In addition to performing cross-validation experiments, we propose an improved variation to sliding window experiments utilizing the output from SVC’s decision function. Together with these experiments, we show that accurate and robust prediction of volatile bursts can be achieved with our approach.”used SVMs for classification to predict relative volatility clusters and achieved accurate and robust results.
    • INCE, H. and T.B. TRAFALIS, 2004. Kernel principal component analysis and support vector machines for stock price prediction, Proceedings of the 2004 IEEE International Joint Conference on Neural Networks, Volume 3, pages 2053-2058. [not cited] (0/year)
      Abstract: “Financial time series are complex, non-stationary and deterministically chaotic. Technical indicators are used with principal component analysis (PCA) in order to identify the most influential inputs in the context of the forecasting model. Neural networks (NN) and support vector regression (SVR) are used with different inputs. Our assumption is that the future value of a stock price depends on the financial indicators although there is no parametric model to explain this relationship. This relationship comes from technical analysis. Comparison shows that SVR and MLP networks require different inputs. The MLP networks outperform the SVR technique.”used technical indicators with (kernel) principal component analysis to identify the most influential inputs and found that MLP neural networks outperformed support vector regression when applied to stock price prediction. An indicator-plus-kernel-PCA sketch follows this entry.
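      A rough sketch of the "reduce the indicator space, then regress" idea: a few toy technical indicators are passed through kernel PCA and fed to an SVR. The indicators, synthetic prices and parameter values are illustrative assumptions, not the authors' configuration.

```python
# Hedged sketch: technical indicators -> kernel PCA -> SVR.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
close = np.cumsum(rng.normal(0, 1, 500)) + 50.0   # synthetic closing prices

def sma(x, n):
    # Simple moving average, same length as x (leading values left unsmoothed).
    out = np.convolve(x, np.ones(n) / n, mode="full")[: len(x)]
    out[: n - 1] = x[: n - 1]
    return out

# A few toy technical indicators as candidate inputs.
features = np.column_stack([
    close - sma(close, 5),           # distance from short moving average
    close - sma(close, 20),          # distance from long moving average
    np.r_[0.0, np.diff(close)],      # one-day momentum
])
target = np.r_[np.diff(close), 0.0]  # next-day price change (last value unused)

X, y = features[:-1], target[:-1]
model = make_pipeline(StandardScaler(), KernelPCA(n_components=2, kernel="rbf"), SVR())
model.fit(X[:400], y[:400])
print(model.predict(X[400:405]))
```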
    • KAMRUZZAMAN, Joarder, Ruhul A SARKER and Iftekhar AHMAD, 2003. SVM Based Models for Predicting Foreign Currency Exchange Rates, Proceedings of the Third IEEE International Conference on Data Mining (ICDM’03), Pages 557-560. [not cited] (0/year)
      Abstract: “Support vector machine (SVM) has appeared as a powerful tool for forecasting forex market and demonstrated better performance over other methods, e.g., neural network or ARIMA based model. SVM-based forecasting model necessitates the selection of appropriate kernel function and values of free parameters: regularization parameter and ε-insensitive loss function. In this paper, we investigate the effect of different kernel functions, namely, linear, polynomial, radial basis and spline on prediction error measured by several widely used performance metrics. The effect of regularization parameter is also studied. The prediction of six different foreign currency exchange rates against Australian dollar has been performed and analyzed. Some interesting results are presented.”investigated the effect of different kernel functions and the regularization parameter when using SVMs to predict six different foreign currency exchange rates against the Australian dollar. A kernel-comparison sketch follows this entry.
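      The sketch below compares SVR kernels on a synthetic exchange-rate series in the spirit of the study. The spline kernel used in the paper is not available in scikit-learn, so only the linear, polynomial and RBF kernels appear; the data and parameter values are assumptions.

```python
# Hedged sketch: comparing SVR kernels for exchange-rate prediction.
import numpy as np
from sklearn.metrics import mean_absolute_error
from sklearn.svm import SVR

rng = np.random.default_rng(3)
rate = 0.75 + np.cumsum(rng.normal(0, 0.002, 400))  # stand-in for an AUD cross rate

# Use the previous 4 observations to predict the next one.
lags = 4
X = np.column_stack([rate[i:len(rate) - lags + i] for i in range(lags)])
y = rate[lags:]
X_train, X_test, y_train, y_test = X[:300], X[300:], y[:300], y[300:]

for kernel in ["linear", "poly", "rbf"]:
    model = SVR(kernel=kernel, C=100.0, epsilon=0.0005)
    model.fit(X_train, y_train)
    mae = mean_absolute_error(y_test, model.predict(X_test))
    print(f"{kernel:>6}: MAE = {mae:.5f}")
```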
    • MARTENS, David, et al., 2006. Comprehensible Credit Scoring Models using Rule Extraction from Support Vector Machines, European Journal of Operational Research, Accepted for publication. [not cited] (0/year)
      Investigated comprehensible credit scoring models using rule extraction from SVMs.
    • NALBANTOV, Georgi, Rob BAUER and Ida SPRINKHUIZEN-KUYPER, 2006. Equity Style Timing Using Support Vector Regressions, to appear in Applied Financial Economics. [not cited] (0/year)
      Abstract: “The disappointing performance of value and small cap strategies shows that style consistency may not provide the long-term benefits often assumed in the literature. In this study we examine whether the short-term variation in the U.S. size and value premium is predictable. We document style-timing strategies based on technical and (macro-)economic predictors using a recently developed artificial intelligence tool called Support Vector Regressions (SVR). SVR are known for their ability to tackle the standard problem of overfitting, especially in multivariate settings. Our findings indicate that both premiums are predictable under fair levels of transaction costs and various forecasting horizons.”used support vector regression for timing the U.S. size and value premiums, finding both predictable under fair levels of transaction costs and various forecasting horizons.
    • ONGSRITRAKUL, P. and N. SOONTHORNPHISAJ, 2003. Apply decision tree and support vector regression to predict the gold price, Proceedings of the International Joint Conference on Neural Networks, 2003, Volume 4, Pages 2488-2492. [not cited] (0/year)
      Abstract: “Recently, support vector regression (SVR) was proposed to resolve time series prediction and regression problems. In this paper, we demonstrate the use of SVR techniques for predicting the cost of gold by using factors that have an effect on gold to estimate its price. We apply a decision tree algorithm for the feature selection task and then perform the regression process using forecasted indexes. Our experimental results show that the combination of the decision tree and SVR leads to a better performance.”applied a decision tree algorithm for feature selection and then used support vector regression to predict the gold price; the combination led to better performance. A two-stage sketch follows this entry.
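      A sketch of the two-stage approach: a decision tree ranks candidate drivers by feature importance, and an SVR is then fitted on the top-ranked ones. The synthetic "gold price" and driver series, and the choice of keeping two features, are purely illustrative.

```python
# Hedged sketch: decision-tree feature selection followed by SVR.
import numpy as np
from sklearn.svm import SVR
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(4)
n = 500
drivers = rng.normal(size=(n, 6))   # e.g. USD index, oil, rates, ... (synthetic)
gold = 2.0 * drivers[:, 0] - 1.5 * drivers[:, 3] + rng.normal(0, 0.3, n)

# Stage 1: decision tree ranks the candidate inputs by importance.
tree = DecisionTreeRegressor(max_depth=4, random_state=0).fit(drivers, gold)
top = np.argsort(tree.feature_importances_)[::-1][:2]
print("selected feature indices:", top)

# Stage 2: SVR fitted on the selected inputs only.
model = SVR(kernel="rbf", C=10.0).fit(drivers[:400, top], gold[:400])
print("test predictions:", model.predict(drivers[400:405, top]))
```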
    • Van GESTEL, Tony, et al., 2005. Linear and non-linear credit scoring by combining logistic regression and support vector machines, Journal of Credit Risk, Vol. 1, No. 4, Fall 2005, Pages 31-60. [not cited] (0/year)
      Abstract: “The Basel II capital accord encourages banks to develop internal rating models that are financially intuitive, easily interpretable and optimally predictive for default. Standard linear logistic models are very easily readable but have limited model flexibility. Advanced neural network and support vector machine models (SVMs) are less straightforward to interpret but can capture more complex multivariate non-linear relations. A gradual approach that balances the interpretability and predictability requirements is applied here to rate banks. First, a linear model is estimated; it is then improved by identifying univariate non-linear ratio transformations that emphasize distressed conditions; and finally SVMs are added to capture remaining multivariate non-linear relations.”combined logistic regression and SVMs to build linear and non-linear credit scoring models that balance interpretability with predictive power.
    • YANG, Haiqin, et al., 2004. Outliers Treatment in Support Vector Regression for Financial Time Series Prediction, Neural Information Processing: 11th International Conference, ICONIP 2004, Calcutta, India, November 2004, Proceedings [not cited] (0/year)
      Abstract: “Recently, the Support Vector Regression (SVR) has been applied in the financial time series prediction. The financial data are usually highly noisy and contain outliers. Detecting outliers and deflating their influence are important but hard problems. In this paper, we propose a novel “two-phase” SVR training algorithm to detect outliers and reduce their negative impact. Our experimental results on three indices: Hang Seng Index, NASDAQ, and FTSE 100 index show that the proposed “two-phase” algorithm has improvement on the prediction.”proposed a novel two-phase SVR training procedure to detect outliers and deflate their influence. The method was tested on the Hang Seng Index, NASDAQ and FTSE 100 index with positive results; however, it is not clear why the significance of outliers (such as market crashes) should be understated. A simple two-phase robust-fitting sketch follows this entry.
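      The sketch below shows a generic two-phase robust SVR fit in the same spirit: train once, flag points with unusually large residuals, and retrain with those points down-weighted. The synthetic data, residual threshold and weights are arbitrary illustrative choices, not the paper's algorithm.

```python
# Hedged sketch: two-phase robust SVR fit via residual-based down-weighting.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)
X = np.linspace(0, 10, 200).reshape(-1, 1)
y = np.sin(X).ravel() + rng.normal(0, 0.1, 200)
y[::25] += rng.choice([-2.0, 2.0], size=len(y[::25]))   # inject a few outliers

# Phase 1: ordinary SVR fit, then measure absolute residuals.
svr = SVR(kernel="rbf", C=10.0, epsilon=0.05)
residuals = np.abs(y - svr.fit(X, y).predict(X))

# Phase 2: down-weight points whose residual exceeds 3x the median residual.
weights = np.where(residuals > 3 * np.median(residuals), 0.1, 1.0)
robust = SVR(kernel="rbf", C=10.0, epsilon=0.05).fit(X, y, sample_weight=weights)
print("flagged outliers:", int((weights < 1).sum()))
```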
    • YU, Lean, Shouyang WANG and Kin Keung LAI, 2005. Mining Stock Market Tendency Using GA-Based Support Vector Machines, Internet and Network Economics: First International Workshop, WINE 2005, Hong Kong, China, December 15-17, 2005, Proceedings (Lecture Notes in Computer Science) edited by Xiaotie Deng and Yinyu Ye, pages 336-345. [not cited] (0/year)
      Abstract: “In this study, a hybrid intelligent data mining methodology, genetic algorithm based support vector machine (GASVM) model, is proposed to explore stock market tendency. In this hybrid data mining approach, GA is used for variable selection in order to reduce the model complexity of SVM and improve the speed of SVM, and then the SVM is used to identify stock market movement direction based on the historical data. To evaluate the forecasting ability of GASVM, we compare its performance with that of conventional methods (e.g., statistical models and time series models) and neural network models. The empirical results reveal that GASVM outperforms other forecasting models, implying that the proposed approach is a promising alternative to stock market tendency exploration.”applied a random walk (RW) model, an autoregressive integrated moving average (ARIMA) model, an individual back-propagation neural network (BPNN) model, an individual SVM model and a genetic algorithm-based SVM (GASVM) to the task of predicting the direction of change in the daily S&P 500 stock price index and found that their proposed GASVM model performed the best.
    • HARLAND, Zac, 2002. Using Support Vector Machines to Trade Aluminium on the LME, Proceedings of the Ninth International Conference, Forecasting Financial Markets: Advances For Exchange Rates, Interest Rates and Asset Management, edited by C. Dunis and M. Dempster. [not listed]
      Abstract: “This paper describes and evaluates the use of support vector regression to trade the three month Aluminium futures contract on the London Metal Exchange, over the period June 1987 to November 1999. The Support Vector Machine is a machine learning method for classification and regression and is fast replacing neural networks as the tool of choice for prediction and pattern recognition tasks, primarily due to their ability to generalise well on unseen data. The algorithm is founded on ideas derived from statistical learning theory and can be understood intuitively within a geometric framework. In this paper we use support vector regression to develop a number of trading submodels that when combined, result in a final model that exhibits above-average returns on out of sample data, thus providing some evidence that the aluminium futures price is less than efficient. Whether these inefficiencies will continue into the future is unknown.”used an ensemble of SVMs for regression to trade the three month Aluminium futures contract on the London Metal Exchange with positive results.
    • Van GESTEL, T., et al., 2005. Credit rating systems by combining linear ordinal logistic regression and fixed-size least squares support vector machines, Workshop on Machine Learning in Finance, NIPS 2005 Conference, Whistler (British Columbia, Canada), Dec. 9. [not listed]
      Developed credit rating systems by combining linear ordinal logistic regression and fixed-size least squares SVMs.