Literature Translation: Short-Term Chaotic Time Series Prediction Using Least-Squares Support Vector Regression
Least-Squares Time Series Prediction

Introduction
Time series prediction is a widely used forecasting technique with important applications in many fields, and the least-squares method is one of the most common approaches to it. This section reviews least-squares time series prediction: its principle, its range of application, its advantages and disadvantages, and a worked example.

Principle
The least-squares method fits data by minimizing the sum of squared errors. In time series prediction we want to predict future values from past observations. Least squares does this by building a (linear) regression model and choosing the coefficients that minimize the sum of squared differences between the observed values and the fitted values.

Scope of Application
Least squares is widely used in time series prediction, especially in economics, finance, engineering, and statistics. Typical targets include stock prices, product demand, sales volume, and traffic flow.

Advantages
1. Intuitive: it is based on a linear regression model and is easy to understand and interpret.
2. Simple: the computation is straightforward and easy to implement.
3. General: it can be applied to many kinds of time series, both stationary and trending.

Disadvantages
1. Sensitive to outliers: a few extreme observations can pull the fitted line and produce large forecast errors.
2. Limited to linear structure: an ordinary least-squares model only captures linear relationships; strongly nonlinear series cannot be predicted accurately without extending the model.
3. Distributional assumptions: classical least-squares inference assumes roughly normally distributed errors; when the residuals depart strongly from this assumption, interval estimates and forecasts can be misleading.

A Worked Example
Step 1: Collect the data. Suppose we want to forecast a city's monthly sales for the next year; we first gather the historical monthly sales figures.
Step 2: Preprocess the data. Before modelling, remove or correct outliers, handle missing values, and smooth the series if necessary.
Step 3: Fit the model. We then fit a linear regression model by least squares; Python's NumPy library is sufficient for this. Using the past sales data we estimate a linear trend model and extrapolate it to forecast future sales. A minimal NumPy sketch of this step follows.
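As a concrete illustration of Step 3, here is a minimal sketch of a least-squares trend fit with NumPy. The monthly sales figures are made-up placeholder values, and the simple linear-trend model is only one possible choice; seasonal terms or other regressors could be added as extra columns of the design matrix in the same way.

```python
import numpy as np

# Hypothetical monthly sales figures (24 months of history); placeholder values.
sales = np.array([112, 118, 121, 130, 128, 135, 141, 139, 150, 148, 155, 160,
                  158, 165, 171, 169, 176, 182, 180, 188, 193, 191, 199, 205],
                 dtype=float)

t = np.arange(len(sales))                      # time index 0, 1, ..., 23

# Design matrix for a linear trend: columns [t, 1].
A = np.column_stack([t, np.ones_like(t, dtype=float)])

# Least-squares fit: minimizes ||A @ [slope, intercept] - sales||^2.
(slope, intercept), *_ = np.linalg.lstsq(A, sales, rcond=None)

# Extrapolate the fitted trend 12 months ahead.
t_future = np.arange(len(sales), len(sales) + 12)
forecast = slope * t_future + intercept

print(f"fitted trend: sales ~ {slope:.2f} * t + {intercept:.2f}")
print("12-month forecast:", np.round(forecast, 1))
```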
A Chaotic Time Series Prediction Algorithm Based on Wavelet Support Vector Machines
Authors: 陈超; 蔡乐才; 刘才铭
Journal: 《计算机仿真》 (Computer Simulation), 2012, 29(5), pp. 223-226 (4 pages)
Affiliations: 四川理工学院计算机学院, 四川自贡 643000; 乐山师范学院计算机科学学院, 四川乐山 614000
Language: Chinese; CLC classification: TP274
Abstract: This paper studies the accuracy of chaotic time series prediction. A chaotic time series is highly nonlinear, chaotic, and time-varying; linear prediction methods can hardly capture its dynamics, so their accuracy is low. To improve prediction accuracy, a prediction algorithm based on a wavelet support vector machine is proposed. First, phase space reconstruction is applied to the chaotic time series to capture the evolution of its trajectory; then a wavelet support vector machine is used to model the nonlinear, chaotic dynamics and produce forecasts; finally, the Lorenz chaotic time series is used for simulation experiments. The results show that, compared with traditional forecasting methods, the wavelet support vector machine improves prediction accuracy and reduces prediction error, providing an effective approach to accurate prediction of chaotic time series.
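The phase-space reconstruction step mentioned in this abstract is usually carried out with a time-delay embedding in the spirit of Takens' theorem. The sketch below illustrates that general technique only; it is not the authors' code, and the delay `tau` and embedding dimension `m` are illustrative values that would normally be chosen with methods such as mutual information and false nearest neighbours.

```python
import numpy as np

def delay_embed(series, m, tau):
    """Build the delay-embedding matrix X and one-step-ahead targets y.

    Each row of X is [x(t), x(t - tau), ..., x(t - (m-1)*tau)] and the
    corresponding target is x(t + 1).
    """
    series = np.asarray(series, dtype=float)
    start = (m - 1) * tau
    rows, targets = [], []
    for t in range(start, len(series) - 1):
        rows.append(series[t - np.arange(m) * tau])
        targets.append(series[t + 1])
    return np.array(rows), np.array(targets)

# Example: embed a toy scalar series (a noisy sine as a stand-in for a chaotic signal).
x = np.sin(0.1 * np.arange(500)) + 0.01 * np.random.randn(500)
X, y = delay_embed(x, m=3, tau=8)      # illustrative m and tau
print(X.shape, y.shape)                # (483, 3) (483,) for this example
```

The resulting (X, y) pairs are what a regression model such as a wavelet SVM or LS-SVM would be trained on.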
Variable-Parameter Chaotic Time Series Prediction Based on an Online LS-SVM Algorithm
Authors: 肖支才; 王杰; 王永生
Journal: 《航空计算技术》, 2010, 40(3), pp. 29-33, 37 (6 pages)
Affiliations: 海军航空工程学院控制工程系 / 训练部 / 兵器科学与技术系, 山东烟台 264001
Language: Chinese; CLC classification: TP183
Abstract: This paper studies the use of least-squares support vector machines (LS-SVM) to predict chaotic time series with slowly varying parameters. Variable-parameter chaotic systems are well suited to describing complex chaotic phenomena in practice, but because the slow drift of the parameters keeps changing the system dynamics, modelling approaches based on the Takens embedding theorem are difficult to apply, and the prediction task becomes a small-sample learning problem. The LS-SVM is a support vector machine that uses a quadratic loss function and equality constraints, retaining the advantages of the standard SVM while greatly reducing the computational cost. An online recursive LS-SVM algorithm with a forgetting mechanism is proposed, and higher-order terms of the historical data are introduced to predict the variable-parameter chaotic series. Prediction results on typical variable-parameter chaotic time series show that the method achieves high accuracy and can quickly track the evolving series. (A simplified sliding-window stand-in for this kind of online scheme is sketched after this record.)
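The abstract above describes an online recursive LS-SVM with a forgetting mechanism; that recursive update is not reproduced here. The sketch below is a much simpler sliding-window stand-in that conveys the same idea of discounting old data: at each step the LS-SVM is refit on only the most recent `window` samples and then used to predict one step ahead. The toy series, the RBF kernel, the window length, and the hyperparameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """Solve the standard LS-SVM regression dual system for (b, alpha)."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:], X            # bias b, dual weights alpha, training inputs

def lssvm_predict(model, Xnew, sigma):
    b, alpha, Xtr = model
    return rbf_kernel(Xnew, Xtr, sigma) @ alpha + b

# Sliding-window online loop: refit on the most recent `window` samples only,
# so older data is forgotten, then predict one step ahead.
rng = np.random.default_rng(0)
x = np.sin(0.07 * np.arange(800)) + 0.02 * rng.standard_normal(800)   # toy series

p, window, gamma, sigma = 5, 120, 100.0, 1.0     # illustrative, not from the paper
preds, truth = [], []
for t in range(window + p, len(x)):
    idx = np.arange(t - window, t)               # most recent `window` targets
    X = np.stack([x[i - p:i] for i in idx])      # lagged input vectors
    y = x[idx]
    model = lssvm_fit(X, y, gamma, sigma)
    preds.append(lssvm_predict(model, x[t - p:t][None, :], sigma)[0])
    truth.append(x[t])

print("one-step-ahead MSE over the online run:",
      np.mean((np.array(preds) - np.array(truth)) ** 2))
```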
Short-Term Load Forecasting Based on Least-Squares Support Vector Machines
Authors: 姜妍; 兰森; 孙艳学
Journal: 《黑龙江电力》, 2012, 34(5), pp. 349-352 (4 pages)
Affiliations: 东北电力大学电气工程学院, 吉林吉林 132012; 黑龙江省电力有限公司哈尔滨检修分公司, 黑龙江哈尔滨 150090; 吉林省集安市云峰发电厂, 吉林集安 134200
Language: Chinese; CLC classification: TM715.1
Abstract: To address the shortcomings of current artificial-intelligence approaches to short-term load forecasting, a least-squares support vector machine (LS-SVM) forecasting method is proposed, in which an LS-SVM regression model is built. To improve accuracy, similar days are selected as training samples using grey relational projection. In addition, because the standard particle swarm optimization algorithm tends to get trapped in local optima, an adaptive-mutation particle swarm optimizer is used to select the LS-SVM parameters, which improves forecasting accuracy and avoids choosing the model parameters blindly. Simulation results show the method to be effective and feasible. (A generic particle-swarm hyperparameter search is sketched after this record.)
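Several of the records above tune the LS-SVM hyperparameters (the regularization constant and the kernel width) with particle swarm optimization. The sketch below is a generic, minimal PSO loop over a 2-dimensional search space, not the adaptive-mutation variant from the paper; the objective is a stand-in for a real cross-validation error, and the swarm size, inertia, and acceleration coefficients are common textbook-style defaults rather than values from any of the cited works.

```python
import numpy as np

def pso_minimize(objective, bounds, n_particles=20, n_iter=60, seed=0):
    """Minimal particle swarm optimization over a box-constrained space."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T       # bounds: [(lo, hi), ...]
    dim = len(bounds)
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([objective(p) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()
    g_val = pbest_val.min()

    w, c1, c2 = 0.7, 1.5, 1.5                      # textbook-style defaults
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([objective(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        if vals.min() < g_val:
            g, g_val = pos[np.argmin(vals)].copy(), vals.min()
    return g, g_val

# Stand-in objective: a fake validation error as a function of
# (log10 gamma, log10 sigma); a real setup would run LS-SVM cross-validation here.
def fake_cv_error(theta):
    log_gamma, log_sigma = theta
    return (log_gamma - 2.0) ** 2 + (log_sigma - 0.0) ** 2 + 0.1

best, err = pso_minimize(fake_cv_error, bounds=[(-2, 5), (-2, 2)])
print("best (log10 gamma, log10 sigma):", best, "objective:", err)
```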
Recursive Prediction of Chaotic Time Series Based on the LS-SVM Algorithm
Authors: 王永生; 欧阳中辉; 王建国; 王昌金
Journal: 《计算机工程与应用》 (Computer Engineering and Applications), 2009, 45(26), pp. 114-117 (4 pages)
Affiliations: 海军航空工程学院兵器科学与技术系, 山东烟台 264001 (all authors)
Language: Chinese; CLC classification: TP391
Abstract: This paper studies the use of least-squares support vector machines (LS-SVM) to predict chaotic time series with slowly varying parameters. The support vector machine is derived from the structural risk minimization principle; the LS-SVM is a variant that uses a quadratic loss function and equality constraints, retaining the advantages of the SVM while greatly reducing the computational cost. Predicting a variable-parameter chaotic time series is a typical small-sample learning problem: because the slow drift of the parameters keeps changing the system dynamics, a global model is hard to apply, and the prediction must be carried out online in real time. To track and predict such series quickly, a simplified online recursive LS-SVM algorithm is studied. Prediction experiments on typical variable-parameter chaotic time series demonstrate the effectiveness of the method.
English Original

Short Term Chaotic Time Series Prediction using Symmetric LS-SVM Regression

Abstract: In this article, we illustrate the effect of imposing symmetry as prior knowledge in the modelling stage, within the context of chaotic time series prediction. It is illustrated that using Least-Squares Support Vector Machines with symmetry constraints improves the simulation performance for time series generated from the Lorenz attractor and from multi-scroll attractors. Not only are accurate forecasts obtained, but the forecast horizon over which these predictions hold is also expanded.

1. Introduction

In applied nonlinear time series analysis, it is common practice to estimate a nonlinear black-box model in order to produce accurate forecasts from a set of observations. Usually a time series model is estimated from the data available up to time t, and its final assessment is based on its simulation performance from t + 1 onwards. Because of the nature of time series generated by chaotic systems, where the series not only shows nonlinear behavior but also drastic regime changes due to local instability of the attractors, this is a very challenging task. For this reason, chaotic time series have been used as benchmarks in several time series competitions.

The modelling of chaotic time series can be improved by exploiting some of their properties. If the true underlying system is symmetric, this information can be imposed on the model as prior knowledge, in which case it is possible to obtain better forecasts than with a general model. In this article, short-term predictions for chaotic time series are generated using Least-Squares Support Vector Machine (LS-SVM) regression. We show that LS-SVM with symmetry constraints can produce accurate predictions. Not only are accurate forecasts obtained, but the forecast horizon over which they hold is also expanded, compared with the unconstrained LS-SVM formulation.

This paper is structured as follows. Section 2 describes the LS-SVM technique for regression and how symmetry can be imposed in a straightforward way. Section 3 describes the applications, for the x-coordinate of the Lorenz attractor and for data generated by a nonlinear transformation of multi-scroll attractors.

2. LS-SVM with Symmetry Constraints

Least-Squares Support Vector Machines (LS-SVM) is a powerful nonlinear black-box regression method, which builds a linear model in the so-called feature space, where the inputs have been transformed by means of a (possibly infinite-dimensional) nonlinear mapping. The problem is converted to the dual space by means of Mercer's theorem and the use of a positive definite kernel, without computing the mapping explicitly. The LS-SVM formulation solves a linear system in the dual space under a least-squares cost function; the sparseness property can be obtained by, e.g., sequentially pruning the support value spectrum or via a fixed-size subset selection approach. The LS-SVM training procedure involves the selection of a kernel parameter and of the regularization parameter of the cost function, which can be done, e.g., by cross-validation, Bayesian techniques, or otherwise. The inclusion of a symmetry constraint (odd or even) on the nonlinearity within the LS-SVM regression framework can be formulated as follows.
Given a sample of N points $\{x_k, y_k\}_{k=1}^{N}$, with input vectors $x_k \in \mathbb{R}^p$ and output values $y_k \in \mathbb{R}$, the goal is to estimate a model of the form

$$y = w^T \varphi(x) + b + e, \qquad (1)$$

where $\varphi(\cdot): \mathbb{R}^p \rightarrow \mathbb{R}^{n_h}$ is the mapping to a high-dimensional (and possibly infinite-dimensional) feature space, and the residuals $e$ are assumed to be i.i.d. with zero mean and constant (finite) variance. The following optimization problem with a regularized cost function is formulated:

$$\begin{aligned} \min_{w,b,e}\; & \frac{1}{2} w^T w + \gamma \frac{1}{2} \sum_{k=1}^{N} e_k^2 \\ \text{s.t. } & y_k = w^T \varphi(x_k) + b + e_k, \quad k = 1, \ldots, N, \\ & w^T \varphi(x_k) = a\, w^T \varphi(-x_k), \quad k = 1, \ldots, N, \end{aligned} \qquad (2)$$

where $a$ is a given constant which can take the value -1 or 1. The first restriction is the standard model formulation in the LS-SVM framework. The second restriction is a shorthand for the cases where we want to impose the nonlinear function $w^T \varphi(x_k)$ to be even (a = 1) or odd (a = -1). The solution is formalized in the KKT lemma.

3. Application to Chaotic Time Series

In this section, the effects of imposing symmetry on the LS-SVM are presented for two cases of chaotic time series. In each example, an RBF kernel is used and the parameters $\sigma$ and $\gamma$ are found by 10-fold cross-validation over the corresponding training sample. The results obtained with the standard LS-SVM are compared to those obtained with the symmetry-constrained LS-SVM (S-LS-SVM) from (2). The examples are defined in such a way that there are not enough training datapoints in every region of the relevant space; thus, it is very difficult for a black-box model to "learn" the symmetry just from the available information. The examples are compared in terms of the performance on the training sample (cross-validation mean squared error, MSE-CV) and the generalization performance (out-of-sample MSE, MSE-OUT). For each case, a Nonlinear AutoRegressive (NAR) black-box model is formulated:

$$y(t) = g\big(y(t-1), y(t-2), \ldots, y(t-p)\big) + e(t),$$

where $g$ is to be identified by LS-SVM and S-LS-SVM. The order $p$ is selected as an extra parameter during the cross-validation process. After each model is estimated, it is used in simulation mode, where the future predictions are computed with the estimated model using past predictions:

$$\hat{y}(t) = g\big(\hat{y}(t-1), \hat{y}(t-2), \ldots, \hat{y}(t-p)\big).$$

3.1. Lorenz attractor

This example is taken from [1]. The x-coordinate of the Lorenz attractor is used as an example of a time series generated by a dynamical system. A sample of 1000 datapoints is used for training, which corresponds to an unbalanced sample over the evolution of the system, shown in Figure 1 as a time-delay embedding. Figure 2 (top) shows the training sequence (thick line) and the future evolution of the series (test zone). Figure 2 (bottom) shows the simulations obtained from both models in the test zone. Results are presented in Table 1. Clearly the S-LS-SVM can simulate the system for the next 500 timesteps, far beyond the 100 points that can be simulated by the LS-SVM.

3.2. Multi-scroll attractors

This dataset was used for the K.U.Leuven Time Series Prediction Competition. The series was generated by

$$\dot{x} = h(x), \qquad y = W \tanh(V x),$$

where $h$ is the multi-scroll equation, $x$ is the 3-dimensional coordinate vector, and $W, V$ are the interconnection matrices of the nonlinear function (a 3-unit multilayer perceptron, MLP). This MLP function hides the underlying structure of the attractor. A training set of 2,000 points was available for model estimation, shown in Figure 3, and the goal was to predict the next 200 points out of sample.
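To make the formulation above concrete, here is a minimal sketch (not the authors' code) of an RBF-kernel LS-SVM trained on delay vectors of a toy series and then run in simulation mode, feeding its own predictions back as inputs. For the symmetry constraint, the sketch uses the simple device of symmetrizing the kernel, $K_s(x,z) = \tfrac{1}{2}\big(K(x,z) + a\,K(x,-z)\big)$, which for an RBF kernel makes the nonlinear part of the model exactly even ($a = 1$) or odd ($a = -1$); the paper instead derives the constrained solution from the KKT conditions, so treat this only as an illustration of the idea. The toy data, the model order $p$, and the hyperparameters are illustrative assumptions, and no cross-validation is performed.

```python
import numpy as np

def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def sym_kernel(A, B, sigma, a=0):
    """RBF kernel; if a is +1/-1, symmetrize it so the model's nonlinear part
    is even/odd in its input (a = 0 gives the plain, unconstrained kernel)."""
    K = rbf(A, B, sigma)
    return K if a == 0 else 0.5 * (K + a * rbf(A, -B, sigma))

def lssvm_fit(X, y, gamma, sigma, a=0):
    """Solve the LS-SVM dual linear system [[0, 1^T], [1, K + I/gamma]]."""
    n = len(y)
    K = sym_kernel(X, X, sigma, a)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:], M[1:, 0] = 1.0, 1.0
    M[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    return {"b": sol[0], "alpha": sol[1:], "X": X, "sigma": sigma, "a": a}

def lssvm_predict(m, Xnew):
    return sym_kernel(Xnew, m["X"], m["sigma"], m["a"]) @ m["alpha"] + m["b"]

def free_run(m, seed_window, n_steps, p):
    """Simulation mode: iterate the NAR model on its own past predictions."""
    hist = list(seed_window[-p:])
    out = []
    for _ in range(n_steps):
        xin = np.array(hist[-p:])[None, :]
        yhat = float(lssvm_predict(m, xin)[0])
        out.append(yhat)
        hist.append(yhat)
    return np.array(out)

# Toy series with odd symmetry as a stand-in for the Lorenz x-coordinate.
rng = np.random.default_rng(1)
t = np.arange(1200)
series = np.sin(0.05 * t) * np.cos(0.013 * t) + 0.01 * rng.standard_normal(1200)

p, gamma, sigma = 6, 200.0, 1.0                     # illustrative choices
X = np.stack([series[i - p:i] for i in range(p, 1000)])
y = series[p:1000]

model = lssvm_fit(X, y, gamma, sigma, a=-1)         # a = -1: impose odd symmetry
sim = free_run(model, series[:1000], n_steps=200, p=p)
print("200-step free-run MSE (toy data):", np.mean((sim - series[1000:1200]) ** 2))
```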
The winner of the competition followed a complete methodology involving local modelling, specialized many-steps-ahead cross-validation for parameter tuning, and the exploitation of the symmetry properties of the series (obtained by flipping the series around the time axis). Following the winner's approach, both LS-SVM and S-LS-SVM are trained using 10-step-ahead cross-validation for hyperparameter selection. To illustrate the difference between the two models, the out-of-sample MSE is computed considering only the first n simulation points, for n = 20, 50, 100, 200. It is important to emphasize that both models are trained using exactly the same methodology for order and hyperparameter selection; the only difference is the symmetry constraint in the S-LS-SVM case. Results are reported in Table 2. The simulations from both models are shown in Figure 4.

Figure 1: The training (left) and test (right) series from the x-coordinate of the Lorenz attractor.
Figure 2: (Top) The series from the x-coordinate of the Lorenz attractor, part of which is used for training (thick line). (Bottom) Simulations with LS-SVM (dashed line) and S-LS-SVM (thick line) compared to the actual values (thin line).
Figure 3: The training sample (thick line) and future evolution (thin line) of the series from the K.U.Leuven Time Series Competition.
Figure 4: Simulations with LS-SVM (dashed line) and S-LS-SVM (thick line) compared to the actual values (thin line) for the next 200 points of the K.U.Leuven data.
Table 1: Performance of LS-SVM and S-LS-SVM on the Lorenz data.
Table 2: Performance of LS-SVM and S-LS-SVM on the K.U.Leuven data.

4. Conclusions

For the task of chaotic time series prediction, we have illustrated how to use LS-SVM regression with symmetry constraints to improve simulation performance for series generated by the Lorenz attractor and by multi-scroll attractors. By adding symmetry constraints to the LS-SVM formulation, it is possible to embed the information about symmetry at the kernel level. This translates not only into better predictions for a given time horizon, but also into a larger forecast horizon over which the model can track the time series into the future.

Chinese Translation (beginning)

Short-Term Chaotic Time Series Prediction Based on Least-Squares Support Vector Regression
Abstract: This paper illustrates the effect of imposing symmetry as prior knowledge at the modelling stage, within the context of chaotic time series prediction.