Flexible Neural Trees Ensemble for Stock Index Modeling
Yuehui Chen (1), Ju Yang (1), Bo Yang (1) and Ajith Abraham (2)

(1) School of Information Science and Engineering, Jinan University, Jinan 250022, P.R. China. Email: yhchen@
(2) School of Computer Science and Engineering, Chung-Ang University, Seoul, Republic of Korea. Email: ajith.abraham@

Abstract. The use of intelligent systems for stock market prediction has been widely established. In this paper, we investigate how the seemingly chaotic behavior of stock markets can be well represented using the Flexible Neural Tree (FNT) ensemble technique. To this end, we considered the Nasdaq-100 index of the Nasdaq Stock Market and the S&P CNX NIFTY stock index. We analyzed 7 years of Nasdaq-100 main index values and 4 years of NIFTY index values. This paper investigates the development of novel, reliable and efficient techniques to model the seemingly chaotic behavior of stock markets. The structure and parameters of the FNT are optimized using Genetic Programming (GP) and Particle Swarm Optimization (PSO) algorithms, respectively. A good ensemble model is formulated by Locally Weighted Polynomial Regression (LWPR). This paper investigates whether the proposed method can provide the required level of performance, which is sufficiently good and robust to provide a reliable forecast model for stock market indices. Experimental results show that the model considered can represent the behavior of the stock indices very accurately.

1 Introduction

Prediction of stocks is generally believed to be a very difficult task: the market behaves like a random walk process and is time varying. The obvious complexity of the problem paves the way for the importance of intelligent prediction paradigms. During the last decade, stock and futures traders have come to rely upon various types of intelligent systems to make trading decisions [1][2]. Several intelligent systems have in recent years been developed for modelling expertise, decision support and complicated automation tasks [3][4]. In this paper, we analyze the seemingly chaotic behavior of two well-known stock indices, namely the Nasdaq-100 index of Nasdaq [5] and the S&P CNX NIFTY stock index [6].

The Nasdaq-100 index reflects Nasdaq's largest companies across major industry groups, including computer hardware and software, telecommunications, retail/wholesale trade and biotechnology [5]. The Nasdaq-100 index is a modified capitalization-weighted index, which is designed to limit domination of the index by a few large stocks while generally retaining the capitalization ranking of companies. Through an investment in the Nasdaq-100 index tracking stock, investors can participate in the collective performance of many of the Nasdaq stocks that are often in the news or have become household names. Similarly, S&P CNX NIFTY is a well-diversified 50-stock index accounting for 25 sectors of the economy [6]. It is used for a variety of purposes such as benchmarking fund portfolios, index-based derivatives and index funds. The CNX indices are computed using the market-capitalization-weighted method, wherein the level of the index reflects the total market value of all the stocks in the index relative to a particular base period. The method also takes into account constituent changes in the index and, importantly, corporate actions such as stock splits, rights issues, etc., without affecting the index value.

Our research investigates the performance of the FNT [9][10] ensemble for modelling the Nasdaq-100 and the NIFTY stock market indices. The hierarchical structure of the FNT is evolved using GP with specific instructions, and the parameters of the FNT model are optimized by the PSO algorithm [7]. We analyzed the Nasdaq-100 index values from 11 January 1995 to 11 January 2002 [5] and the NIFTY index values from 01 January 1998 to 03 December 2001 [6]. For both indices, we divided the entire data into almost two equal parts. No special rules were used to select the training set other than ensuring a reasonable representation of the parameter space of the problem domain [2].
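As a concrete illustration of this data setup (not part of the paper), the minimal sketch below prepares one index series for one-day-ahead forecasting. The synthetic stand-in data, the choice of the closing value as the prediction target, and all variable names are assumptions made only for illustration; the paper itself only states that the opening, closing and maximum values of a given day are used to predict the index value of the following trading day.

```python
import numpy as np

# Illustrative stand-in for the daily index data: one row per trading day with
# columns [opening, closing, maximum], roughly scaled to [0, 1]. In practice
# these values would come from the Nasdaq-100 / NIFTY historical series.
rng = np.random.default_rng(0)
close = np.cumsum(rng.normal(0, 0.01, 1000)) * 0.1 + 0.5
data = np.column_stack([
    close + rng.normal(0, 0.002, 1000),          # opening (synthetic)
    close,                                        # closing (synthetic)
    close + np.abs(rng.normal(0, 0.004, 1000)),   # maximum (synthetic)
])

# Inputs: opening, closing and maximum values of day t.
# Target: the index value of day t + 1 (using the closing value is an assumption).
X, y = data[:-1, :], data[1:, 1]

# Split the series into two almost equal parts: first half for training,
# second half for testing, with no special selection rules.
split = len(X) // 2
X_train, y_train = X[:split], y[:split]
X_test, y_test = X[split:], y[split:]
print(X_train.shape, X_test.shape)
```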
2 The Flexible Neural Tree Model

The function set F and terminal instruction set T used for generating an FNT model are described as

$S = F \cup T = \{+_2, +_3, \ldots, +_N\} \cup \{x_1, \ldots, x_n\}$,

where $+_i$ ($i = 2, 3, \ldots, N$) denote non-leaf nodes' instructions taking $i$ arguments, and $x_1, x_2, \ldots, x_n$ are leaf nodes' instructions taking no arguments. The output of a non-leaf node is calculated as a flexible neuron model (see Fig. 1). From this point of view, the instruction $+_i$ is also called a flexible neuron operator with $i$ inputs. In the creation process of the neural tree, if a non-terminal instruction, i.e., $+_i$ ($i = 2, 3, 4, \ldots, N$), is selected, $i$ real values are randomly generated and used to represent the connection strengths between the node $+_i$ and its children. In addition, two adjustable parameters $a_i$ and $b_i$ are randomly created as flexible activation function parameters. For developing the FNT, the flexible activation function

$f(a_i, b_i, x) = e^{-\left(\frac{x - a_i}{b_i}\right)^2}$

is used. The total excitation of node $+_n$ is

$net_n = \sum_{j=1}^{n} w_j \, x_j$,

where $x_j$ ($j = 1, 2, \ldots, n$) are the inputs to node $+_n$. The output of node $+_n$ is then calculated by

$out_n = f(a_n, b_n, net_n) = e^{-\left(\frac{net_n - a_n}{b_n}\right)^2}$.

The overall output of the flexible neural tree can be computed recursively, from left to right, by a depth-first method.

Fig. 1. A flexible neuron operator (left), and a typical representation of the FNT with function instruction set F = {+2, +3, +4, +5, +6} and terminal instruction set T = {x1, x2, x3} (right).
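The following minimal sketch shows one way the flexible neuron operator and the depth-first evaluation of an FNT could be coded. The class name, its fields and the example tree are illustrative assumptions, not the authors' implementation; only the Gaussian activation and the recursive evaluation follow the description above.

```python
import numpy as np

class FNTNode:
    """A node of a flexible neural tree (illustrative sketch).

    Non-leaf nodes hold a flexible neuron operator +_i with connection
    weights w_1..w_i and activation parameters (a, b); leaf nodes hold
    the index of an input variable.
    """
    def __init__(self, children=None, weights=None, a=0.0, b=1.0, var_index=None):
        self.children = children or []   # child subtrees of a +_i node
        self.weights = weights           # connection strengths w_1..w_i
        self.a = a                       # flexible activation parameter a_i
        self.b = b                       # flexible activation parameter b_i
        self.var_index = var_index       # input index for leaf nodes

    def evaluate(self, x):
        """Depth-first, recursive evaluation of the subtree for input vector x."""
        if self.var_index is not None:                  # leaf node: raw input
            return x[self.var_index]
        # total excitation net_n = sum_j w_j * out_j over the children
        child_outputs = np.array([c.evaluate(x) for c in self.children])
        net = float(np.dot(self.weights, child_outputs))
        # flexible (Gaussian) activation f(a, b, net) = exp(-((net - a) / b)^2)
        return float(np.exp(-((net - self.a) / self.b) ** 2))

# Example: a +_2 root combining inputs x0 and x1 (all numbers are arbitrary).
root = FNTNode(children=[FNTNode(var_index=0), FNTNode(var_index=1)],
               weights=[0.4, 0.6], a=0.1, b=0.8)
print(root.evaluate(np.array([0.35, 0.42])))
```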
Tree Structure Optimization. Finding an optimal or near-optimal neural tree is formulated as a product of evolution. In this study, the crossover and selection operators used are the same as those of standard GP. A number of neural tree mutation operators are developed as follows:

(1) Changing one terminal node: randomly select one terminal node in the neural tree and replace it with another terminal node;
(2) Changing all the terminal nodes: select each and every terminal node in the neural tree and replace it with another terminal node;
(3) Growing: select a random leaf in the hidden layer of the neural tree and replace it with a newly generated subtree;
(4) Pruning: randomly select a function node in the neural tree and replace it with a terminal node.

Parameter Optimization with PSO. Particle Swarm Optimization (PSO) conducts searches using a population of particles, which correspond to individuals in an evolutionary algorithm (EA). A population of particles is randomly generated initially. Each particle represents a potential solution and has a position represented by a position vector $x_i$. A swarm of particles moves through the problem space, with the moving velocity of each particle represented by a velocity vector $v_i$. At each time step, a function $f_i$ representing a quality measure is calculated using $x_i$ as input. Each particle keeps track of its own best position, associated with the best fitness it has achieved so far, in a vector $p_i$. Furthermore, the best position among all the particles obtained so far in the population is kept track of as $p_g$. In addition to this global version, another version of PSO keeps track of the best position among all the topological neighbors of a particle. At each time step $t$, by using the individual best position $p_i$ and the global best position $p_g(t)$, a new velocity for particle $i$ is updated by

$v_i(t+1) = v_i(t) + c_1 \phi_1 \left( p_i(t) - x_i(t) \right) + c_2 \phi_2 \left( p_g(t) - x_i(t) \right)$   (1)

where $c_1$ and $c_2$ are positive constants and $\phi_1$ and $\phi_2$ are uniformly distributed random numbers in [0, 1]. The term $v_i$ is limited to the range $\pm v_{max}$. If the velocity violates this limit, it is set to its proper limit. Changing the velocity in this way enables particle $i$ to search around its individual best position $p_i$ and the global best position $p_g$. Based on the updated velocities, each particle changes its position according to the following equation:

$x_i(t+1) = x_i(t) + v_i(t+1).$   (2)
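A compact sketch of one PSO iteration implementing Eqs. (1) and (2) is given below. The swarm size, the constants c1 and c2, the velocity limit and the toy cost function are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, p_best, p_best_fit, g_best, fitness, c1=2.0, c2=2.0, v_max=0.5):
    """One PSO iteration over a swarm of parameter vectors (Eqs. (1)-(2)).

    x, v           : positions and velocities, shape (n_particles, dim)
    p_best, g_best : personal best positions and the global best position
    fitness        : maps a parameter vector to a cost (lower is better)
    """
    n, dim = x.shape
    phi1 = rng.random((n, dim))
    phi2 = rng.random((n, dim))
    # Eq. (1): velocity update toward personal and global best positions
    v = v + c1 * phi1 * (p_best - x) + c2 * phi2 * (g_best - x)
    v = np.clip(v, -v_max, v_max)          # limit the velocity to +/- v_max
    # Eq. (2): position update
    x = x + v
    # Update personal and global bests
    fit = np.array([fitness(p) for p in x])
    improved = fit < p_best_fit
    p_best[improved] = x[improved]
    p_best_fit[improved] = fit[improved]
    g_best = p_best[np.argmin(p_best_fit)]
    return x, v, p_best, p_best_fit, g_best

# Toy usage: drive a 6-dimensional parameter vector toward a dummy quadratic minimum.
dim, n = 6, 20
x = rng.uniform(-1, 1, (n, dim)); v = np.zeros((n, dim))
cost = lambda p: float(np.sum((p - 0.3) ** 2))
p_best, p_fit = x.copy(), np.array([cost(p) for p in x])
g_best = p_best[np.argmin(p_fit)]
for _ in range(50):
    x, v, p_best, p_fit, g_best = pso_step(x, v, p_best, p_fit, g_best, cost)
print(g_best.round(2))
```

In the FNT learning procedure described next, each particle would encode the weights and the flexible activation parameters of the best tree found by the structure search, and the fitness would be the modelling error of that parameterized tree.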
Procedure of the general learning algorithm. The general learning procedure for constructing the FNT model can be described as follows.

1) Create an initial population randomly (FNT trees and their corresponding parameters);
2) Structure optimization is achieved by the neural tree variation operators described in Section 2;
3) If a better structure is found, go to step 4); otherwise go to step 2);
4) Parameter optimization is achieved by the PSO algorithm described in Section 2. In this stage, the architecture of the FNT model is fixed, and it is the best tree developed by the end of the run of the structure search. The parameters (weights and flexible activation function parameters) encoded in the best tree formulate a particle;
5) If the maximum number of local search iterations is reached, or no better parameter vector is found for a significantly long time, go to step 6); otherwise go to step 4);
6) If a satisfactory solution is found, the algorithm is stopped; otherwise go to step 2).

3 The FNT Ensemble

The Basic Ensemble Method. A simple approach to combining network outputs is to simply average them together. The basic ensemble method (BEM) output is defined as

$f_{BEM} = \frac{1}{n} \sum_{i=1}^{n} f_i(x)$   (3)

This approach by itself can lead to improved performance, but it does not take into account the fact that some FNTs may be more accurate than others. It has the advantage that it is easy to understand and implement, and it can be shown not to increase the expected error.

The Generalized Ensemble Method. A generalization of the BEM is to find weights for each output that minimize the error of the ensemble. The generalized ensemble method (GEM) is defined as

$f_{GEM} = \sum_{i=1}^{n} \alpha_i f_i(x)$   (4)

where the $\alpha_i$ are chosen to minimize the root mean square error between the FNT outputs and the desired values. For comparison purposes, the optimal weights of the ensemble predictor are optimized by using the PSO algorithm.

The LWPR Method. To investigate a more efficient ensemble method, an LWPR approximation approach is employed in this work [11]. In this framework, the final output of the FNT ensemble is approximated by a local polynomial model, i.e.,

$f_{LWPR} = \sum_{i=1}^{M} \beta_i t_i(x)$   (5)

where $t_i$ is a function that produces the $i$th term in the polynomial. For example, with two inputs and a quadratic local model we would have $t_1(x) = 1$, $t_2(x) = x_1$, $t_3(x) = x_2$, $t_4(x) = x_1^2$, $t_5(x) = x_1 x_2$, $t_6(x) = x_2^2$. Equation (5) can be written more compactly as

$f_{LWPR} = \beta^T t(x)$   (6)

where $t(x)$ is the vector of polynomial terms of the input $x$ and $\beta$ is the vector of weight terms. The weight of the $i$th data point is computed as a decaying function of the Euclidean distance between $x_i$ and $x_{query}$. $\beta$ is chosen to minimize the weighted fitting criterion

$\sum_{i=1}^{N} \omega_i^2 \left( y_i - \beta^T t(x_i) \right)^2$   (7)

where $y_i$ is the desired output for the $i$th data point and $\omega_i$ is a Gaussian weight function with kernel width $K$:

$\omega_i = \exp\left( -\mathrm{Distance}^2(x_i, x_{query}) / 2K^2 \right).$   (8)
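The sketch below illustrates the BEM and GEM combination rules of Eqs. (3) and (4) on synthetic model outputs. Note that the paper tunes the GEM weights with PSO; here an ordinary least-squares solve is used as a simpler stand-in, and all data in the example are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)

def bem_combine(preds):
    """Basic ensemble method (Eq. (3)): plain average of the FNT outputs.
    preds has shape (n_models, n_samples)."""
    return preds.mean(axis=0)

def gem_weights(preds, targets):
    """Generalized ensemble method (Eq. (4)): weights alpha_i minimizing the
    squared error between the combined output and the desired values.
    The paper optimizes these weights with PSO; a least-squares solve is used
    here purely for illustration."""
    alpha, *_ = np.linalg.lstsq(preds.T, targets, rcond=None)
    return alpha

# Toy usage with synthetic predictions from three hypothetical FNT models.
targets = np.linspace(0.2, 0.8, 50)
preds = np.vstack([targets + rng.normal(0, s, 50) for s in (0.02, 0.05, 0.08)])
f_bem = bem_combine(preds)
alpha = gem_weights(preds, targets)
f_gem = alpha @ preds
rmse = lambda y_hat: float(np.sqrt(np.mean((y_hat - targets) ** 2)))
print(rmse(f_bem), rmse(f_gem))
```

The LWPR combiner goes one step further: instead of a single fixed weight vector, it refits the polynomial weights for each query point, with training points near the query receiving larger Gaussian weights per Eq. (8).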
4 Experiments

We considered 7 years of stock data for the Nasdaq-100 index and 4 years for the NIFTY index. Our target is to develop efficient forecast models that can predict the index value of the following trading day based on the opening, closing and maximum values of the same on a given day. The prediction performance of the different ensemble paradigms was assessed by quantifying the predictions obtained on an independent data set. The Root Mean Squared Error (RMSE), Maximum Absolute Percentage Error (MAP), Mean Absolute Percentage Error (MAPE) and Correlation Coefficient (CC) were used to study the performance of the trained forecasting models on the test data. MAP is defined as

$MAP = \max_i \left( \frac{|P_{actual,i} - P_{predicted,i}|}{P_{predicted,i}} \times 100 \right)$   (9)

where $P_{actual,i}$ is the actual index value on day $i$ and $P_{predicted,i}$ is the forecast value of the index on that day. Similarly, MAPE is given as

$MAPE = \frac{1}{N} \sum_{i=1}^{N} \frac{|P_{actual,i} - P_{predicted,i}|}{P_{predicted,i}} \times 100$   (10)

where $N$ represents the total number of days.

We used the instruction set I = {+2, +3, ..., +6, x0, x1, x2} for modeling the Nasdaq-100 index and the instruction set I = {+2, +3, ..., +8, x0, x1, x2, x3, x4} for modeling the NIFTY index. We trained 10 FNT models for predicting the Nasdaq-100 index and the NIFTY index, respectively, and then the three ensemble methods discussed in Section 3 were employed to predict both indices.

Table 1. Empirical comparison of RMSE results for four learning methods

            Best-FNT   BEM       GEM       LWPR
Nasdaq-100  0.01854    0.01824   0.01635   4.41e-05
NIFTY       0.01315    0.01258   0.01222   1.96e-07

Table 2. Statistical analysis of four learning methods (test data)

Nasdaq-100                 Best-FNT   BEM       GEM       LWPR
Correlation coefficient    0.997542   0.997610  0.997757  0.999999
MAP                        98.1298    98.3320   97.3347   0.4709
MAPE                       6.1090     6.3370    5.7830    0.0040

NIFTY                      Best-FNT   BEM       GEM       LWPR
Correlation coefficient    0.996908   0.997001  0.997109  0.999999
MAP                        28.0064    34.3687   26.8188   7.65e-04
MAPE                       3.2049     2.9303    2.6570    1.92e-05

Table 1 summarizes the test results achieved for the two stock indices using the four different approaches. Performance analysis of the trained forecasting models for the test data is shown in Table 2. Figures 2 and 3 depict the test results for the one-day-ahead prediction of the Nasdaq-100 index and the NIFTY index, respectively.

Fig. 2. Test results showing the performance of the different methods for modeling the Nasdaq-100 index (model outputs and desired values over time).

Fig. 3. Test results showing the performance of the different methods for modeling the NIFTY index (model outputs and desired values over time).

5 Conclusions

In this paper, we have demonstrated how the chaotic behavior of stock indices can be well represented by the FNT ensemble learning paradigm. Empirical results on the two data sets using FNT ensemble models clearly reveal the efficiency of the proposed techniques. In terms of RMSE values, for both the Nasdaq-100 index and the NIFTY index, LWPR performed marginally better than the other models. For both indices (test data), LWPR also has the highest correlation coefficient and the lowest MAPE and MAP values. A low MAP value is a crucial indicator for evaluating the stability of a market under unforeseen fluctuations. In the present example, the predictability assures us that a decrease in trade is only a temporary cyclic variation that is perfectly under control.

Our research was to predict the share price for the following trading day based on the opening, closing and maximum values of the same on a given day. Our experimental results indicate that the most prominent parameters that affect share prices are their immediate opening and closing values. The fluctuations in the share market are chaotic in the sense that they heavily depend on the values of their immediately preceding fluctuations. Long-term trends exist, but they are slow variations, and this information is useful for long-term investment strategies. Our study focuses on the short term, on floor trades, in which the risk is higher. However, the results of our study show that even in the seemingly random fluctuations there is an underlying deterministic feature that is directly enciphered in the opening, closing and maximum values of the index on any day, making predictability possible. Empirical results also show that LWPR is a distinguished candidate for the FNT ensemble or neural network ensembles.

References

1. Abraham A., Nath B. and Mahanti P.K.: Hybrid Intelligent Systems for Stock Market Analysis, Computational Science, Springer-Verlag Germany, Vassil N Alexandrov et al. (Editors), USA, (2001) 337-345.
2. Abraham A., Philip N.S. and Saratchandran P.: Modeling Chaotic Behavior of Stock Indices Using Intelligent Paradigms, International Journal of Neural, Parallel and Scientific Computations, USA, Volume 11, Issue (1,2), (2003) 143-160.
3. Leigh W., Modani N., Purvis R. and Roberts T.: Stock market trading rule discovery using technical charting heuristics, Expert Systems with Applications 23(2), (2002) 155-159.
4. Leigh W., Purvis R. and Ragusa J.M.: Forecasting the NYSE composite index with technical analysis, pattern recognizer, neural network, and genetic algorithm: a case study in romantic decision support, Decision Support Systems 32(4), (2002) 361-377.
5. Nasdaq Stock Market SM:
6. National Stock Exchange of India Limited:
7. Kennedy J. and Eberhart R.C.: Particle Swarm Optimization. In Proc. of IEEE International Conference on Neural Networks, (1995) 1942-1948.
8. Maqsood I., Khan M.R. and Abraham A.: Neural Network Ensemble Method for Weather Forecasting, Neural Computing and Applications, Springer Verlag London Ltd., 13(2), (2004) 112-122.
9. Chen Y., Yang B. and Dong J.: Nonlinear System Modeling via Optimal Design of Neural Trees, International Journal of Neural Systems, 14(2), (2004) 125-137.
10. Chen Y., Yang B., Dong J. and Abraham A.: Time-series forecasting using flexible neural tree model, Information Sciences, 2005. (In press)
11. Moore A., Schneider J. and Deng K.: Efficient Locally Weighted Polynomial Regression Predictions, Proceedings of the Fourteenth International Conference on Machine Learning, (1997) 236-244.