"Artificial Intelligence," Chapter 8: Robot Planning
Chapter 1 Test
1. ( ) is known as the "father of artificial intelligence."
   A: Arthur Samuel  B: John von Neumann  C: John McCarthy  D: Donald Hebb
   Answer: C
2. From March 9 to 15, 2016, Google's AlphaGo defeated world champion Lee Sedol in a Go match by a score of ( ).
   A: 4:1  B: 4:2  C: 5:0  D: 3:2
   Answer: A
3. ( ), developed by Professor Joseph Weizenbaum, enabled a computer to converse with people through text.
   A: ELIZA  B: Google Allo  C: Microsoft XiaoIce  D: Apple Siri
   Answer: A
4. In 1986, Ross Quinlan proposed the concept of the ( ), another highlight of mainstream machine learning.
   A: perceptron  B: decision tree  C: BP  D: random forest
   Answer: B
5. The term "artificial intelligence" was first proposed in ( ).
   A: 1946  B: 1916  C: 1956  D: 1960
   Answer: C
6. The driving forces behind the development of artificial intelligence include ( ).
   A: Deep coupling of capital and technology, propelling the rapid rise of industry applications
   B: Outstanding results in deep learning research, driving continued optimization of algorithmic models
   C: Accelerating evolution of data-processing technology, bringing a large increase in computing power
   D: The trend toward interconnection of people, machines, and things, producing explosive growth in data volume
   Answer: ABCD
7. The key technologies of the AI industry chain are divided into which three core layers? ( )
   A: technology layer  B: foundation layer  C: middle layer  D: application layer
   Answer: ABD
8. Claude Shannon proposed replacing decimal with binary arithmetic and divided the computer into five major components.
   A: True  B: False
   Answer: B
9. An expert system is an intelligent computer program system containing a large amount of expert-level knowledge and experience in some domain; it can use the knowledge and problem-solving methods of human experts to handle problems in that domain. In short, an expert system is a computer program system that simulates human experts in solving domain problems.
   A: True  B: False
   Answer: A
Chapter 2 Test
1. Machine learning is the ( ) of artificial intelligence.
   A: foundation  B: basis  C: core  D: all of the above
   Answer: C
2. Object detection means recognizing and ( ) a target.
   A: annotating  B: localizing  C: detecting  D: learning
   Answer: B
3. The core of deep learning is ( ).
"Artificial Intelligence" Course Syllabus
Course code: H0404X
Course name: Artificial Intelligence
Intended majors: Computer Science and Technology and related majors
Course type: undergraduate disciplinary foundation course (degree course)
Lecturer: Prof. Cai Zixing, Institute of Intelligent Systems and Intelligent Software, School of Information Science and Engineering, Central South University
Total hours: 40 (36 hours of lectures, 4 hours of laboratory)
Credits: 2
Prerequisites: Discrete Mathematics, Data Structures
I. Teaching aims and requirements
Through this course, students gain an initial understanding of the development, basic principles, and application areas of artificial intelligence, acquire a working command of its main techniques and applications, develop an interest in the field, and build their capacity for knowledge and technological innovation.
Artificial intelligence concerns the design and analysis of autonomous intelligent systems. It relates to software systems, physical machines, sensors, and actuators, and is often introduced through examples such as robots or autonomous aerial vehicles. An intelligent system must perceive its environment, interact with other agents and with humans, and act on the environment to accomplish assigned tasks. Research topics in artificial intelligence include computer vision, planning and action, multi-agent systems, speech recognition, natural language understanding, expert systems, and machine learning. These topics rest on general and special-purpose knowledge representation and reasoning mechanisms, on problem-solving and search algorithms, and on computational intelligence techniques. In addition, artificial intelligence provides a set of tools for problems that are difficult or impossible to solve by other means: heuristic search and planning algorithms, formalisms for knowledge representation and reasoning, machine learning techniques, methods for speech and language understanding, computer vision, and robotics. After the course, students should be able to recognize when a given problem calls for a suitable artificial intelligence method and to choose an appropriate way to implement it.
II. Course overview
The main topics of the course are as follows:
1. A survey of artificial intelligence and intelligent systems, listing the research and application areas of the field.
2. The knowledge representation methods and search and inference techniques of traditional artificial intelligence, including the state-space approach, problem reduction, predicate logic, semantic networks, blind search, heuristic search, rule-based deduction algorithms, and production systems.
3. Advanced knowledge reasoning, covering non-monotonic reasoning, temporal reasoning, and various methods of uncertain reasoning.
4. Newer research areas of artificial intelligence, with an initial account of the fundamentals of computational intelligence: neural computation, fuzzy computation, evolutionary computation, and artificial life.
"Approaching Artificial Intelligence": Instructional Design
Textbook analysis: This lesson comes from Chapter 8, "Approaching Artificial Intelligence," of the ninth-grade information technology textbook. Artificial intelligence is now everywhere in daily life and profoundly shapes it. Ever-cheaper computation and the massive data scientists once only dreamed of have driven the rapid advance of machine learning, and especially the wide application of deep learning: pattern recognition, autonomous driving, machine translation, and intelligent robots are ubiquitous. Building on the earlier units on robotics and the Internet of Things, this lesson aims to let students learn the origin and history of artificial intelligence, experience the convenience it brings to our lives, and learn to regard it appropriately.
Learner analysis: The students are ninth graders whose acquisition of knowledge is moving from perceptual toward rational understanding; they already have some capacity for inquiry and a strong desire to explore new knowledge. In daily study and life most of them have encountered applications of artificial intelligence and are curious about the technology. However, their understanding of it remains at the level of everyday impressions, and they lack a systematic grasp of its origin and concepts.
Design rationale: This is a lesson on the fundamentals of artificial intelligence, taught mainly through lecture, hands-on experience, and discussion. To spark students' interest and break with conventional teaching, the teacher leads the class through three "exhibition halls": an Energy Recharge Hall, an Intelligent Experience Hall, and an All-Class Debate Hall. The Intelligent Experience Hall offers three activities: machine translation (Chinese-English), a human-versus-machine Gomoku ("five-in-a-row") match, and human-machine dialogue (chatting with Siri). Through these experiences, students enrich their understanding of artificial intelligence and discover the AI technologies around them in a relaxed, enjoyable atmosphere; they feel its wonder and the convenience it brings to daily life, and begin to form the aspiration to contribute to science and technology. The debate then guides students to form a proper attitude toward artificial intelligence and to consider how humans and AI should coexist.
Teaching objectives:
Knowledge and skills: (1) understand the history and concept of artificial intelligence; (2) understand its applications in everyday life.
Process and methods: The teacher leads the class through the three halls (Energy Recharge Hall, Intelligent Experience Hall, All-Class Debate Hall). In the Intelligent Experience Hall, the three activities (Chinese-English machine translation, the human-versus-machine Gomoku match, and chatting with Siri) let students discover the AI technologies around them through direct experience and enrich their understanding of artificial intelligence.
Artificial Intelligence, Chapter 8: Artificial Neural Networks
Key Laboratory of Bionic Engineering, Ministry of Education, Jilin University

8.1 Basic Concepts and Composition of Neural Networks
Broadly speaking, "neural network" covers two things: biological neural networks and artificial neural networks. A biological neural network is the complex network of nerves formed by an animal's central and peripheral nervous systems. It governs the various activities of the animal's body, and its most important part is the cerebral nervous system. An artificial neural network (ANN) simulates the structure and function of the human brain's nervous system: it is a man-made network system built from a large number of hardware or software processing units interconnected in massive parallel.

8.1.1 Structure and Function of Biological Neurons
1. The structure of biological neurons
Biological neurons, usually called nerve cells, are the basic units of the nervous system. A neuron consists of three main parts: the cell body, the axon, and the dendrites (figure: biological neuron structure).

2. Functional properties of neurons
From the standpoint of biological cybernetics, the neuron, as the basic unit of control and information processing, has the following properties:
(1) Spatio-temporal integration. A neuron integrates over time the signals arriving at the same synapse at different moments, and integrates over space the signals arriving from different synapses; together, these give it spatio-temporal integration of its input information.
(2) Dynamic polarization. Although neurons differ markedly in shape and function, in most of them information flows in a predictable direction.
(3) Excitatory and inhibitory states. A neuron has two normal working states: the excited state and the inhibited state.
(4) Structural plasticity. The characteristics of synaptic transmission are variable: transmission can strengthen or weaken as the pattern of nerve impulses changes, so the connections between neurons are flexible. This is called structural plasticity.
(5) Pulse-potential signal conversion. The synaptic interface converts between pulse and potential signals.
(6) Synaptic delay and refractory period. Synaptic transmission involves a delay and a refractory period: a certain interval is required between two successive inputs, during which the neuron cannot be excited and transmits no information.
(7) Learning, forgetting, and fatigue. Because of structural plasticity, synaptic transmission can strengthen, attenuate, or saturate, so nerve cells show corresponding learning, forgetting, and fatigue (saturation) effects.

8.1.2 Composition and Structure of Artificial Neural Networks
1. Artificial neural networks
An artificial neural network (ANN) consists of a large number of extensively interconnected processing units and simulates the structure and function of the brain's nervous system.
These processing units are called artificial neurons. An ANN can be viewed as a directed graph whose nodes are artificial neurons joined by weighted directed arcs. The artificial neurons simulate biological neurons; the directed arcs simulate the axon-synapse-dendrite connections; and the weight of an arc represents the strength of interaction between the two interconnected neurons.

The figure shows a schematic ANN built from interconnected artificial neurons. Each circle represents a neuron's cell body; x represents the external inputs; and w represents the connection strengths between neurons and their inputs, called connection weights. For an individual artificial neuron in an ANN, each input arriving from other neurons is multiplied by its weight, and the products are summed; the sum is then compared with a threshold. When the sum exceeds the threshold, the output is 1; otherwise it is 0.

2. The working process of artificial neurons
Each neuron in an ANN can accept a set of input signals from other neurons in the system. Each input carries one weight, which plays the role of a synapse's "synaptic strength," and the weighted sum of all inputs determines the neuron's activation state. Suppose the signal coming from neuron i is x_i, its connection weight to this neuron is w_i (i = 0, 1, ..., n-1), and the neuron's internal threshold is θ. Then the activation value of the neuron is

  σ = Σ_{i=0}^{n-1} w_i x_i

and its output is

  y = f(Σ_{i=0}^{n-1} w_i x_i − θ)

where f, called the excitation (activation) function, determines the neuron's output: the output is 1 or 0 according to whether the weighted sum of the inputs is greater or less than the internal threshold θ.
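The threshold neuron described above can be sketched in a few lines of code; the function name and the example weights are illustrative choices, not from the original text.

```python
# A minimal sketch of the artificial neuron: multiply each input x_i by
# its weight w_i, sum the products, and compare the sum with the
# internal threshold theta.

def neuron_output(x, w, theta):
    """Threshold (step) neuron: 1 if the weighted input sum exceeds
    the threshold theta, otherwise 0."""
    activation = sum(wi * xi for wi, xi in zip(w, x))
    return 1 if activation > theta else 0

# Two inputs with unit weights and threshold 1.5 realise a logical AND:
print(neuron_output([1, 1], [1.0, 1.0], 1.5))  # -> 1
print(neuron_output([1, 0], [1.0, 1.0], 1.5))  # -> 0
```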
The excitation function is generally nonlinear. The commonly used nonlinear excitation functions are the threshold type, the piecewise linear type, the Sigmoid ("S") type, and the hyperbolic tangent type (figure: (a) threshold type, (b) piecewise linear, (c) Sigmoid type, (d) hyperbolic tangent type).

The threshold-type function, also called the step function, relates the activation value σ to the output f(σ). In this two-valued model a neuron outputs 1 or 0, representing the excitatory and inhibitory states of the neuron; at any moment the neuron's state is determined by the excitation function f. When the activation value σ > 0, the neuron is activated and excited, and its state f(σ) is 1; when σ < 0, the neuron is not activated, enters the inhibited state, and its state f(σ) is 0.

The piecewise linear function can be regarded as one of the simplest nonlinear functions. It limits the function's values to a certain range: within that range the input and output satisfy a linear relationship, and the output grows until it reaches its extreme value, after which it no longer increases. This maximum is called saturation.

The S-type (Sigmoid) function is a nonlinear function with a maximum output value, whose output varies continuously within a certain range; neurons with this activation function also saturate.

The hyperbolic tangent function is in fact a special S-type function whose saturation values are -1 and 1.

3. The structure of artificial neural networks
In an ANN, different patterns of connection between neurons form different connection models of the network. Common connection models are:
- feedforward networks;
- networks with feedback from the output layer to the input layer;
- networks with interconnections within a layer;
- fully interconnected networks, in which any two neurons in the network may be connected.
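The four excitation functions listed above can be sketched as follows, using the conventions of the text (threshold output 0/1, tanh saturating at -1 and 1); the saturation limit of the piecewise linear function is an illustrative assumption.

```python
import math

# Sketches of the four common excitation (activation) functions.

def step(s):
    """Threshold (step) type: 1 for s > 0, else 0."""
    return 1.0 if s > 0 else 0.0

def piecewise_linear(s, limit=1.0):
    """Linear inside [-limit, limit], saturating at the extremes."""
    return max(-limit, min(limit, s))

def sigmoid(s):
    """S (Sigmoid) type: continuous output in (0, 1), saturating."""
    return 1.0 / (1.0 + math.exp(-s))

def tanh(s):
    """Hyperbolic tangent type: a special S-type with saturation at -1 and 1."""
    return math.tanh(s)

print(step(0.5), piecewise_linear(5.0), sigmoid(0.0), tanh(0.0))
```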
4. Classification and main characteristics of artificial neural networks
Neural network models can be classified from different angles:
- by network performance, into continuous versus discrete networks, and into deterministic versus stochastic networks;
- by topology, into networks with feedback and networks without feedback;
- by learning method, into supervised ("with teacher") and unsupervised ("without teacher") learning networks;
- by the nature of the synaptic connections, into first-order linearly associated networks and higher-order nonlinearly associated networks.

Artificial neural networks have the following main characteristics:
(1) they can simulate human image (intuitive) thinking fairly well;
(2) they have large-scale parallel processing capability;
(3) they have strong learning ability;
(4) they have strong fault tolerance and associative ability;
(5) they are large-scale, self-organizing, adaptive nonlinear dynamical systems.

8.2 The Perceptron Model and Its Learning Algorithm
8.2.1 The perceptron model
The perceptron is one of the earliest artificial neural network models to be designed and implemented. In 1957 the American scholar Rosenblatt proposed the perceptron, a neural network model with the ability to learn, which moved neural network research from pure theory toward engineering realization.

The perceptron model, also called the single-layer perceptron, consists of an input part and an output layer; the output layer is its computing layer. In the single-layer perceptron model, the neurons of the input part are connected to each neuron of the output layer. When the input part passes the input data to the neurons connected to it, the output layer weights the input data and, through a threshold function, produces a set of output data. The figure shows a single-layer perceptron model with input data x_1, x_2, ..., x_n, adjustable connection weights ω_ij, and output data y_1, ..., y_m. The inputs and outputs satisfy the following excitation function:

  y_j = 1 if Σ_{i=1}^{n} ω_ij x_i − θ_j ≥ 0, and y_j = 0 otherwise,  j = 1, 2, 3, ..., m.

8.2.2 The learning algorithm of the single-layer perceptron model
Rosenblatt proposed a learning algorithm for the connection-weight parameters of the perceptron model. First, the connection weights and thresholds are initialized to small nonzero random numbers; then an input with n connection weights is fed into the network; after the weighted processing, if the output differs from the desired output, the connection weights are adjusted automatically according to a rule; this is repeated many times, until the difference between the actual and desired outputs meets the requirement.

The concrete learning algorithm of the single-layer perceptron (considering only one output) is as follows. Let x_i(t) be the input to the perceptron at time t (i = 1, 2, ..., n), ω_i(t) the corresponding connection weight, y(t) the actual output, and d(t) the desired output; the output takes the value 1 or 0.

(1) Initialize the connection weights and threshold. Give each ω_i(t) (i = 1, 2, ..., n) and θ a small nonzero random number as its initial value, where ω_i(0) is the connection weight of input i at t = 0 and θ is the threshold of the output node.
(2) Input a training example X = (x_1(t), x_2(t), ..., x_n(t)) with expected output d(t).
(3) Compute the actual output of the network:
  y(t) = f(Σ_{i=1}^{n} ω_i(t) x_i(t) − θ)
(4) Compute the difference between the actual and expected outputs:
  DEL = d(t) − y(t)
If |DEL| < ε (ε a small positive number), network training ends; otherwise go to step (5).
(5) Adjust the connection weights:
  ω_i(t+1) = ω_i(t) + η[d(t) − y(t)] x_i(t),  i = 1, 2, ..., n
where 0 < η ≤ 1 is a gain factor used to control the speed of change, also known as the gain or learning rate.
(6) Return to step (2).

The algorithm above shows that learning the network's connection weights is an iterative process: steps (2) through (6) repeat until the error between the network's actual output and the desired output meets the requirement. The resulting weights ω_i (i = 1, 2, ..., n) are the network connection parameters learned from the training data.

8.2.3 The multilayer perceptron
Adding one or more layers of processing units between the input and output layers of the single-layer perceptron yields a perceptron of two or more layers. The new layers are called hidden layers, and the processing units in them are called hidden units. A hidden unit acts as a feature detector: it extracts the effective feature information contained in the input patterns, so that the patterns processed by the output units become linearly separable. The figure shows the architecture of a two-layer perceptron: the connection weights between the input part and the hidden-layer units are fixed random values and are not adjustable;
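The six-step learning algorithm of section 8.2.2 can be sketched as runnable code. The AND training set, the learning rate eta = 0.5, and the epoch limit below are illustrative assumptions, not values from the text.

```python
import random

# A sketch of the single-layer perceptron learning algorithm (one output).

def train_perceptron(samples, eta=0.5, epochs=100):
    n = len(samples[0][0])
    random.seed(0)
    w = [random.uniform(-0.1, 0.1) for _ in range(n)]   # step (1): small random weights
    theta = random.uniform(-0.1, 0.1)                   # ... and threshold
    for _ in range(epochs):
        errors = 0
        for x, d in samples:                            # step (2): input a training example
            y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0  # step (3)
            delta = d - y                               # step (4): output error
            if delta != 0:
                errors += 1
                for i in range(n):                      # step (5): adjust the weights
                    w[i] += eta * delta * x[i]
                theta -= eta * delta                    # the threshold learns like a bias
        if errors == 0:                                 # all samples correct: training ends
            break
    return w, theta

AND = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, theta = train_perceptron(AND)
predict = lambda x: 1 if sum(wi * xi for wi, xi in zip(w, x)) > theta else 0
print([predict(x) for x, _ in AND])  # -> [0, 0, 0, 1]
```

AND is linearly separable, so by the perceptron convergence theorem the loop terminates; XOR, discussed next in the text, would never converge with a single layer.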
only the connection weights between the hidden-layer units and the output layer are adjustable.

The multilayer perceptron overcomes many weaknesses of the single-layer perceptron: some problems that a single-layer perceptron cannot solve become solvable in a multilayer perceptron. For example, a two-layer perceptron can solve the XOR logic problem, as shown in the figure. Assume the threshold of every neuron is 0.5 and the connection weights between neurons are as given in the figure. With input units x10 and x20, the hidden-layer units compute

  x11 = f(1·x10 + (−1)·x20 − 0.5)
  x21 = f(1·x20 + (−1)·x10 − 0.5)

and the output unit computes

  y = f(1·x11 + 1·x21 − 0.5)

where f is the step function. The corresponding identification region is shown in the figure: because each hidden-layer neuron can identify a half-plane of its own, and the output-layer neuron performs a logical operation on the hidden-layer outputs, the output can identify the region formed from the half-planes identified by the hidden layer (for a logical "AND," a convex polygon formed by their intersection).

8.3 The Back-Propagation Model and Its Learning Algorithm
8.3.1 The back-propagation model and its network structure
The back-propagation model, also called the B-P (Back-Propagation) model, is a back-propagation learning algorithm used in forward multilayer neural networks; it was proposed by Rumelhart (D. Rumelhart) and McClelland in 1985. The network structure of the B-P algorithm is a forward multilayer network: the network contains not only input and output nodes but also one or more layers of hidden nodes, as shown in the figure.
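The XOR network above, with the weights and thresholds given in the text, can be checked directly:

```python
# The two-layer perceptron for XOR: each hidden unit detects one
# "x_i AND NOT x_j" half-plane, and the output unit combines them.
# All thresholds are 0.5, as in the text.

def step(s):
    return 1 if s > 0 else 0

def xor_net(x1, x2):
    h1 = step(1 * x1 + (-1) * x2 - 0.5)   # fires only for x1=1, x2=0
    h2 = step(1 * x2 + (-1) * x1 - 0.5)   # fires only for x2=1, x1=0
    return step(1 * h1 + 1 * h2 - 0.5)    # output unit

print([xor_net(a, b) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])  # -> [0, 1, 1, 0]
```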
When information is fed into the network, it passes first from the input layer to the hidden-layer nodes, where the characteristic (activation) function is applied; it then spreads to the next hidden layer, and so on layer by layer, until it is finally output at the output-node layer. The excitation function of every layer must be differentiable; the S-type (Sigmoid) function is generally used.

8.3.2 Learning algorithms for back-propagation networks
The purpose of the B-P algorithm is to adjust the network's connection weights so that, after adjustment, the network can produce the desired output for any input. The learning process consists of forward propagation and back-propagation. Forward propagation is the forward computation of the network: for a given input, the network computes its output. Back-propagation transfers the error backwards layer by layer and modifies the connection weights between neurons, so that after the input information is processed the network's output meets the desired error requirement.

Let O_i be the output of node i in the network, I_j the input of node j, and W_ij the connection weight from node i to node j; let y_k and y_k′ be, respectively, the actual and expected outputs of node k of the B-P network's output layer. For node j, if node i is in the layer above it and is connected to j, then

  I_j = Σ_i W_ij O_i,  O_j = f(I_j)      (1)

where f is the node's activation function. The error e between the actual and expected outputs is defined as

  e = ½ Σ_k (y_k′ − y_k)²      (2)

The δ value of each node is

  δ_k = −∂e/∂I_k = (y_k′ − y_k) f′(I_k)   when k is an output-layer node,
  δ_j = −∂e/∂I_j = f′(I_j) Σ_k δ_k W_jk   when j is a hidden-layer node,      (3)

and the correction formula for the connection weights is

  ΔW_jk = β δ_k O_j      (4)

where β is a proportionality factor (the learning rate).

Flow chart of the B-P learning algorithm:
Start → set initial values for the connection weights and thresholds → select a set of training examples, each consisting of two parts, the input information and the desired output → select a sample from the training set and feed its input information into the network → forward propagation: compute the outputs of the neurons of each hidden layer and of the output layer → compute the error e between the actual and expected outputs by formula (2) → if e does not meet the requirement: compute the δ value of each output-layer node by the first half of formula (3) and adjust the connection weights of the output-layer neurons by formula (4); then back-propagation: compute, layer by layer, the δ values of each hidden layer's neurons by the second half of formula (3) and adjust each neuron's connection weights by formula (4); repeat until e meets the requirement → if all samples in the training set have been processed, end; otherwise select the next sample.
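The forward pass, the δ computation of formula (3), and the weight correction of formula (4) can be sketched together for a network with one hidden layer of sigmoid units. Thresholds/biases are omitted for brevity, and all names (W_ih, W_ho, beta) and the training data are illustrative assumptions.

```python
import math, random

# A compact sketch of one B-P learning step for a 1-hidden-layer network.

def sigmoid(s):
    return 1.0 / (1.0 + math.exp(-s))

def forward(x, W_ih, W_ho):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W_ih]
    output = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in W_ho]
    return hidden, output

def train_step(x, target, W_ih, W_ho, beta=0.5):
    hidden, output = forward(x, W_ih, W_ho)
    # output-layer deltas: (y' - y) f'(I), with f'(I) = y(1 - y) for the sigmoid
    d_out = [(t - y) * y * (1 - y) for t, y in zip(target, output)]
    # hidden-layer deltas: f'(I_j) * sum_k delta_k W_jk (back-propagated)
    d_hid = [h * (1 - h) * sum(d * W_ho[k][j] for k, d in enumerate(d_out))
             for j, h in enumerate(hidden)]
    # weight corrections: delta_W_jk = beta * delta_k * O_j
    for k in range(len(W_ho)):
        for j in range(len(hidden)):
            W_ho[k][j] += beta * d_out[k] * hidden[j]
    for j in range(len(W_ih)):
        for i in range(len(x)):
            W_ih[j][i] += beta * d_hid[j] * x[i]
    return sum((t - y) ** 2 for t, y in zip(target, output)) / 2

random.seed(0)
W_ih = [[random.uniform(-0.5, 0.5) for _ in range(2)] for _ in range(3)]
W_ho = [[random.uniform(-0.5, 0.5) for _ in range(3)]]
errors = [train_step([1.0, 0.0], [1.0], W_ih, W_ho) for _ in range(200)]
print(errors[0], errors[-1])  # the error shrinks as the step is repeated
```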
The B-P algorithm has many advantages: a solid theoretical foundation, a rigorous derivation, clear physical meaning, and good generality. It is one of the better algorithms for training forward multilayer networks. It also has some shortcomings, mainly:
(1) The learning algorithm converges slowly, usually requiring thousands of iterations, and network performance worsens as the dimensionality of the training samples increases.
(2) There is no theoretical guidance for choosing the number of nodes in the network.
(3) Mathematically, the B-P algorithm is a steepest-descent gradient method and may run into the problem of local minima: at a local minimum the error appears, on the surface, to meet the requirement, but the solution found is not necessarily the true solution of the problem. In this sense the B-P algorithm is incomplete.

8.3.3 A worked example of back-propagation
The most important step of the B-P learning algorithm, when training a multilayer network, is computing the correction ΔW_jk of the connection weights between neurons. To compute ΔW_jk, the backward computation of the δ values is essential. The following example illustrates the computation of δ in back-propagation.

Example. Consider the simple feedforward network in the figure, with input nodes 1 and 2 (inputs x1, x2), hidden node 3, and output nodes 4 and 5 (outputs y1, y2); the connection weights are W13, W23, W34, and W35. Using the B-P algorithm to determine the weights, the computation is as follows.

(1) Forward computation:
  I3 = W13·x1 + W23·x2,  O3 = f(I3)
  I4 = W34·O3,  y1 = f(I4)
  I5 = W35·O3,  y2 = f(I5)
  e = ½[(y1′ − y1)² + (y2′ − y2)²]

(2) Backward computation:
  δ4 = −∂e/∂I4 = (y1′ − y1) f′(I4)
  δ5 = −∂e/∂I5 = (y2′ − y2) f′(I5)
  δ3 = (δ4·W34 + δ5·W35) f′(I3)

The weight corrections then follow from ΔW_jk = β δ_k O_j.

8.4 The Hopfield Model and Its Learning Algorithm
The feedforward networks described so far lack dynamic processing capability, and hence their computational power is limited.
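The worked example's forward and backward passes can be checked numerically; the input, weight, desired-output, and β values below are made up for illustration, with f taken as the sigmoid so that f′(I) = f(I)(1 − f(I)).

```python
import math

# Numeric check of the five-node example: inputs x1, x2 (nodes 1, 2),
# hidden node 3, outputs y1, y2 (nodes 4, 5).

def f(s):
    return 1.0 / (1.0 + math.exp(-s))

x1, x2 = 1.0, 0.0                          # illustrative inputs
W13, W23, W34, W35 = 0.5, -0.4, 0.8, 0.3   # illustrative weights
y1_d, y2_d = 1.0, 0.0                      # desired outputs y1', y2'

# (1) forward computation
I3 = W13 * x1 + W23 * x2
O3 = f(I3)
I4, I5 = W34 * O3, W35 * O3
y1, y2 = f(I4), f(I5)

# (2) backward computation, following the formulas in the text
d4 = (y1_d - y1) * y1 * (1 - y1)             # delta_4 = (y1' - y1) f'(I4)
d5 = (y2_d - y2) * y2 * (1 - y2)             # delta_5 = (y2' - y2) f'(I5)
d3 = (d4 * W34 + d5 * W35) * O3 * (1 - O3)   # delta_3

beta = 0.5                                   # learning rate
dW34, dW35 = beta * d4 * O3, beta * d5 * O3  # delta_W = beta * delta * O
dW13, dW23 = beta * d3 * x1, beta * d3 * x2

print(d3, d4, d5)
```

Note that since x2 = 0 here, the correction dW23 is zero: a weight only changes when its source node carries a signal.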
Autonomous Decision-Making and Behavior Planning for AI Robots
As artificial intelligence (AI) technology continues to develop, AI robots are showing great potential and promise in many fields. One important aspect is their capacity for autonomous decision-making and behavior planning. This article examines the applications and challenges of AI robots in both areas and looks ahead to future directions.
1. The autonomous decision-making capability of AI robots
Autonomous decision-making means that a robot can make decisions on its own, based on the environment and the task requirements, and take the corresponding actions. At the core of this capability are machine learning and reasoning: by learning from and analyzing large amounts of data, an AI robot can recognize patterns, predict trends, and make decisions on that basis. In practice, this capability is widely applied in autonomous driving, smart homes, healthcare, and other fields. A self-driving car, for example, makes decisions from road conditions, traffic rules, and real-time data to keep driving safe and efficient; a smart-home system adjusts the indoor environment to the user's preferences and habits to provide a comfortable living experience.
Autonomous decision-making by AI robots nevertheless faces some challenges. The first is the quality and quantity of data: machine learning and reasoning require large amounts of high-quality data to train models and algorithms, and obtaining such data is not easy. The second is the transparency and explainability of the decision process: an AI robot's decision process is usually a black box, and it is difficult to explain to humans the reasons and evidence behind its decisions.
2. The behavior-planning capability of AI robots
Behavior planning means that a robot can formulate and execute a plan of action based on its autonomous decisions. This capability rests on the robot's perception and understanding of the environment, the task, and its own abilities: through perception and understanding, the robot can generate feasible action sequences that satisfy the task goals and constraints, and select the best behavioral strategy among them. Behavior planning is widely applied in industrial production, logistics and delivery, service robotics, and other fields. In industrial production, for example, an AI robot can plan a manipulator arm's motion trajectory and the material-transfer paths according to the production line's configuration and requirements, achieving highly efficient production. In logistics and delivery, an AI robot can plan the shortest delivery route from the positions of the goods and their destinations, while avoiding obstacles and traffic congestion.
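The route planning described above can be sketched in its simplest form as shortest-path search on a grid with obstacles. Breadth-first search is used here because it is the simplest shortest-path method on an unweighted grid; real delivery robots typically use heuristic planners such as A* or D*. The grid below is an illustrative example.

```python
from collections import deque

# Shortest path on a grid, avoiding obstacles marked '#'.

def shortest_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal,
    or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []
            while cell is not None:            # walk back to the start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] != '#' and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                queue.append((nr, nc))
    return None

grid = ["....",
        ".##.",
        "...."]
print(shortest_path(grid, (0, 0), (2, 3)))
```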
Behavior planning by AI robots also faces some challenges. The first is the complexity and uncertainty of the environment: real-world environments are often dynamic and uncertain, and AI robots must be able to handle and adapt to all kinds of changing situations.