Mathematics: Foreign-Language Literature Translation (English Literature on Matrices)
Matrix Analysis: Unlocking the Power of Linear Algebra

In the realm of mathematics, matrix analysis stands as a towering edifice, bridging the gap between abstract concepts and practical applications. At its core, matrix analysis is the study of matrices, rectangular arrays of numbers or symbols, and of their properties, operations, and transformations. This branch of mathematics has its roots in linear algebra and has evolved into a crucial tool in fields including physics, engineering, computer science, and economics.

Matrices are ubiquitous in modern science and technology. They serve as compact representations of systems of linear equations, allowing us to manipulate and solve them efficiently. Matrix analysis provides a robust framework for understanding the behavior of these systems, enabling us to predict their outcomes and design optimal solutions.

One of the fundamental operations in matrix analysis is matrix multiplication. This operation not only extends the algebraic structure of matrices but also underlies many complex computations in various fields. Matrix multiplication finds applications in image processing, where it is used to perform transformations such as rotation, scaling, and translation on images. In computer graphics, matrices are employed to represent 3D objects and their movements in space.

Another cornerstone of matrix analysis is matrix inversion. The inverse of a matrix plays a pivotal role in solving systems of linear equations and inverting linear transformations. It also finds applications in statistical analysis, where it is used to compute the covariance matrix of a dataset or to estimate the parameters of a linear regression model.

Eigenvalues and eigenvectors are yet another vital concept in matrix analysis. They provide insight into the inherent properties of matrices and the behavior of linear transformations: eigenvalues represent the scaling factors of the eigenvectors under the transformation, revealing information about stability, periodicity, and other dynamical properties of the system. These concepts are crucial in areas such as quantum mechanics, control systems, and network analysis.

Matrix analysis also deals with matrix decompositions, which express a matrix as a product of simpler matrices. Decompositions such as the LU decomposition, the Cholesky decomposition, and the eigenvalue decomposition provide efficient methods for solving linear systems, computing matrix inverses, and performing other matrix operations.

In conclusion, matrix analysis stands as a powerful tool in the arsenal of mathematicians and scientists. It unlocks the potential of linear algebra, enabling us to understand and manipulate complex systems with ease. From physics to engineering, from computer science to economics, matrix analysis continues to play a pivotal role in advancing our understanding of the world and shaping the future of technology.
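The operations surveyed above (multiplication, inversion, eigenvalues) can be made concrete in a few lines. The sketch below is plain Python written for this note rather than taken from any library, and it handles only the 2x2 case:

```python
import math

def matmul(A, B):
    """Row-by-column product of two conformable matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def inv2(A):
    """Inverse of a 2x2 matrix via the adjugate / determinant formula."""
    (a, b), (c, d) = A
    det = a * d - b * c
    if det == 0:
        raise ValueError("matrix is singular")
    return [[d / det, -b / det], [-c / det, a / det]]

def eig2(A):
    """Eigenvalues of a 2x2 matrix: roots of t^2 - trace*t + det = 0."""
    (a, b), (c, d) = A
    tr, det = a + d, a * d - b * c
    disc = math.sqrt(tr * tr - 4 * det)  # real for a symmetric matrix
    return ((tr + disc) / 2, (tr - disc) / 2)

A = [[2.0, 1.0], [1.0, 2.0]]
identity_check = matmul(A, inv2(A))   # should recover the identity matrix
eigenvalues = eig2(A)                 # scaling factors along the eigenvectors
```

For this A the eigenvalues 3 and 1 are the scaling factors along the eigenvectors (1, 1) and (1, -1), and A multiplied by its inverse recovers the identity, which is exactly the property that makes inversion useful for solving linear systems.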
Main Reference Formats (Chinese and Foreign)

English references come first and Chinese references second, each list ordered alphabetically.

Common reference types and their identifying codes: monograph [M], collected papers [C], journal article [J], newspaper article [N], dissertation [D], report [R], compilation [G], standard [S], patent [P], database [DB], computer program [CP], electronic bulletin [EB/OL], magnetic tape [MT], disk [DK], CD-ROM [CD], online [OL].

The main formats are:

1) Monograph: [no.] Author(s). Title [type code]. Place of publication: Publisher, year.
Examples:
Krashen, S.T. Input Hypothesis: Issues and Implications [M]. London: Longman, 1985.
文秋芳. 英语学习策略论 [M]. 上海: 上海外语教学出版社, 1995.

2) Collected papers: [no.] Author(s). Title [type code]. Editor(s) of the source collection. Collection title. Place of publication: Publisher, year.
Examples:
Calder, Alex. My Katherine Mansfield [A]. Robinson, Roger (Ed.). Katherine Mansfield—in from the Margin [C]. Baton Rouge: Louisiana State Press, 1994.
冯雪峰. 谈士节兼论周作人 [A]. 孙郁, 黄乔生. 国难声中回望周作人 [C]. 洛阳: 河南大学出版社, 2004.

3) Journal article: [no.] Author(s). Title [type code]. Journal name, year, volume(issue): page range.
Examples:
Dickinson, L. Autonomy and Motivation: a Literature Review [J]. System, 1995, 23(2): 165-174.
朱永生. 搭配的语义基础和搭配研究的实际意义 [J]. 外国语, 1996(1): 14-18.

4) Electronic document: [no.] Author(s). Title [type code]. Source or URL, date of publication or update / date of access (optional).
Example:
王明亮. 关于中国学术期刊网标准化数据库系统工程的进展 [EB/OL]. /pub/wml.txt/980810-2.html, 1998-08-16/1998-10-04.

Note: in English references, the titles of monographs, collections, and journals must be italicized.
Graduation Project (Thesis) Foreign Literature Translation. Department: Mechanical Engineering. Major: Mechanical Engineering and Automation. Name: ___. Student ID: ___. Source of original (give in the foreign language): Control and Robotics (CRB) Technical Report. Attachments: 1. translation of the foreign material; 2. original text.
Attachment 1: Translation of the Foreign Material

Navigation and Control of a Wheeled Mobile Robot

Abstract: This paper applies several navigation-function-based methods to the development of different controllers, so that an open-loop system (e.g., a wheeled mobile robot) can be controlled to perform tasks in the presence of known obstacles.

The first approach is a control method based on path planning in three-dimensional coordinates. A controller built on a navigation function generates a path from the initial position to the goal position through the free configuration space, and a displacement controller drives the mobile robot along the planned path and stops it at the goal position.

The second approach is a control method based on path planning in two-dimensional coordinates. A navigation function is constructed in the planar coordinate system, and the controller designed from this navigation function yields an asymptotically convergent closed-loop system. Simulation results are used to illustrate the performance of the second control method.

1. Introduction

Many researchers have proposed algorithms to solve the problem of controlling robot motion in environments cluttered with obstacles. For collision-free path generation and classical path-planning algorithms, Chapter 1, Section 9 of [19] provides a comprehensive summary. Since Khatib's seminal work [13], it has been clear that one of the mainstream approaches to controlling a robot to perform tasks among known obstacles remains the construction and application of potential functions. In short, a potential function encodes the robot's workspace, the obstacle locations, and the goal as a potential field. A comprehensive survey of potential functions is given in [19]. One problem with applying potential functions is that local minima can occur, leaving the robot unable to reach the goal position. Several researchers have proposed remedies for the local-minimum problem (e.g., [2], [3], [5], [14], [25]). In particular, Koditschek [16] resolves it for holonomic systems by constructing a navigation function: a special potential function with a precise mathematical structure that guarantees a unique minimum. Influenced by these earlier results for standard (holonomic) systems, a growing body of research has focused on extending potential-function methods to the more challenging nonholonomic systems (e.g., wheeled robots).
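The potential-field idea described in this introduction can be illustrated with a toy gradient-descent simulation. The sketch below is plain Python; the quadratic attractive well, the inverse-distance repulsive term, and all gains are illustrative choices rather than values from the cited papers, and a plain potential field of this kind still admits the local minima that navigation functions are designed to rule out:

```python
import math

GOAL = (5.0, 0.0)
OBST = (2.5, 0.5)                 # one known obstacle (illustrative placement)
K_ATT, ETA, D0 = 1.0, 0.05, 1.0   # attraction gain, repulsion gain, cutoff radius

def grad_potential(q):
    """Gradient of U(q) = 0.5*K_ATT*||q-GOAL||^2 + 0.5*ETA*(1/d - 1/D0)^2 (d < D0)."""
    gx = K_ATT * (q[0] - GOAL[0])
    gy = K_ATT * (q[1] - GOAL[1])
    dx, dy = q[0] - OBST[0], q[1] - OBST[1]
    d = math.hypot(dx, dy)
    if d < D0:                    # repulsion is active only near the obstacle
        c = -ETA * (1.0 / d - 1.0 / D0) / (d ** 3)
        gx += c * dx
        gy += c * dy
    return gx, gy

q = (0.0, 0.0)                    # initial robot position
for _ in range(1000):             # steepest descent on the potential field
    g = grad_potential(q)
    q = (q[0] - 0.05 * g[0], q[1] - 0.05 * g[1])

dist_to_goal = math.hypot(q[0] - GOAL[0], q[1] - GOAL[1])
```

Here the obstacle sits slightly off the straight line to the goal, so the descent deflects around it and converges; placing the obstacle exactly on that line can trap this naive field in a local minimum, which is precisely the failure mode [16] addresses.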
Recommended Foreign-Language Textbooks for Advanced Mathematics

When studying advanced mathematics, choosing a high-quality textbook matters greatly for learning effectiveness and understanding. Besides Chinese textbooks, foreign-language textbooks are an excellent option: they offer different perspectives and methods, broaden students' mathematical thinking, and strengthen their command of the language. Some recommendations:

1. "Calculus" by Michael Spivak. A classic text in advanced mathematics, suited to students with some mathematical background. Spivak's proofs are rigorous and concise, his explanations of concepts clear, and his derivations detailed, giving a solid grasp of the basic concepts and methods of calculus. The book also offers ample practice through its problems and exercises.

2. "Advanced Engineering Mathematics" by Erwin Kreyszig. Aimed at engineering and science students, it covers mathematical analysis, linear algebra, probability theory, and more. The material is both deep and comprehensive, and by tying mathematical concepts to engineering problems it makes the methods easier to understand and apply.

3. "Linear Algebra and Its Applications" by Gilbert Strang. Strang's text is accessible and intuitive, introducing the concepts and techniques of linear algebra with many applied examples and exercises that connect the abstract theory to real problems. Graphics and geometric interpretation make the material easier to grasp.

4. "Introduction to Probability" by Joseph K. Blitzstein and Jessica Hwang. A beginner-friendly probability text, introducing the basic concepts and techniques of probability theory in an approachable, readable style.
Requirements for the Foreign-Literature Translation of the Graduation Thesis

1. One or two foreign-language documents may be translated, but the English text must total no fewer than 20,000 characters.

2. The translated documents should come mainly from academic journals, conference papers, relevant monographs, and other related material, and must relate to the thesis (project) topic. A footnote on the first page of the Chinese translation must give the original author and source, and the original text must be attached after the translation.

3. Basic format of the Chinese translation:
   1. Title: size-3 (三号) boldface (黑体), centered.
   2. Body: size-small-4 (小四) Song typeface (宋体); line spacing fixed at 20 pt, standard character spacing. Margins: left 3 cm, right 2.5 cm, top and bottom 2.5 cm each; A4 paper throughout.

4. Basic format of the English text:
   1. Title: size-3 Times New Roman, bold, centered.
   2. Body: size-small-4 Times New Roman; line spacing fixed at 20 pt, standard character spacing. Margins: left 3 cm, right 2.5 cm, top and bottom 2.5 cm each; A4 paper throughout.
   3. Footnotes: size-5 Times New Roman, in the order author, title, source.

5. The cover is produced centrally by the university. (Note: the "translation title" on the cover is the title of the Chinese translation; Chinese on the cover is set in size-small-4 Song, English in size-small-4 Times New Roman.)

6. Binding: staple evenly on the left side with two staples, in the order cover, original text, translation.

7. Do not alter the table styles.
Dalian Polytechnic University, School of Art and Information Engineering
Graduation Project (Thesis) Foreign Literature
Original title:
Translation title:
Department:
Major and class:
Student name:
Supervisor:

Dalian Polytechnic University, School of Art and Information Engineering
Graduation Project (Thesis) Proposal Report
Topic:
Department:
Major and class:
Student name:
Supervisor:
Date of proposal report: (year / month / day)
An Integrated Fault-Tolerant Scheme for DC Speed Drives
D. U. Campos-Delgado, Member, IEEE, S. Martínez-Martínez, and K. Zhou, Fellow, IEEE
Translator: 张进. Supervisor: 曾孟雄. Unit: School of Mechanical and Material Engineering.

Abstract: This paper presents a fault-tolerant control (FTC) scheme with fault compensation. Fault detection and compensation are integrated, and a robust algorithm is derived that accounts for uncertainty in the system model. In the active fault-tolerant scheme, the GIMC control structure [23] is used as the feedback configuration. The parameterization of the fault-tolerant scheme is obtained using robust control theory. Taking into account the uncertainty and disturbances in the mathematical model of the system, a detection filter is designed for fault isolation. Finally, the fault-compensation mechanism includes an estimate of the fault signal; once a fault is detected, applying this estimate to the system improves the performance of the closed loop. To illustrate these ideas, the speed regulation of a DC motor is chosen as a case study, and experimental results are reported.

Index Terms: DC motor, fault-tolerant control, H-infinity design, robust control.

I. Introduction

Many industrial applications involve the operation of expensive equipment and the participation of human operators. In these scenarios, some degree of safety must be provided for the automated process. The operator must therefore be given indicators of the faults that may arise in the process, so that appropriate action can be taken. For some types of faults, it is possible to design the nominal control system to cope with them, that is, to tolerate those faults or to retain some performance under faulty conditions (the passive approach) [1]-[3]. In practice, however, this strategy tends to be conservative, because the controller must be designed with the worst case in mind. One way to obtain such fault-tolerant controllers is through H-infinity robust design techniques [4]-[6]. Another approach to fault-tolerant control (FTC) relies on detecting fault occurrences during operation, in order to apply a suitable compensation mechanism in the feedback system (the active approach) [7]. In this scheme, a fault condition must first be detected; next, an algorithm is needed to determine which type of fault has occurred (fault isolation). Based on the fault-isolation module, an external compensation signal is added to the nominal control input, or the controller parameters are updated [8], [9].
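The detect-then-compensate logic of the active approach can be sketched at toy scale. Below, a hypothetical first-order DC-motor speed model runs alongside a Luenberger-style observer; a 50% actuator-effectiveness fault injected mid-run drives the output-estimate residual past a threshold, which is the detection event. All numbers (model coefficients, observer gain, threshold) are illustrative choices, not values from the paper:

```python
# Toy residual-based fault detection for a first-order motor model:
# plant x' = a*x + k*u_eff, observer xh' = a*xh + k*u + L*(y - xh).
A_COEF, K_GAIN = -2.0, 1.0      # hypothetical plant coefficients
L_GAIN = 5.0                    # observer gain (a design choice)
DT, STEPS, FAULT_AT = 0.01, 400, 100
THRESHOLD = 0.02                # residual level that flags a fault

x, xh = 0.0, 0.0
residuals, detected_at = [], None
for n in range(STEPS):
    u = 1.0                                   # constant speed command
    u_eff = 0.5 * u if n >= FAULT_AT else u   # actuator loses 50% effectiveness
    y = x                                     # measured output (noise-free here)
    r = y - xh                                # residual: output minus estimate
    residuals.append(abs(r))
    if detected_at is None and abs(r) > THRESHOLD:
        detected_at = n
    x += DT * (A_COEF * x + K_GAIN * u_eff)   # true plant sees the fault
    xh += DT * (A_COEF * xh + K_GAIN * u + L_GAIN * r)  # observer assumes none
```

Before the fault the observer tracks the plant exactly, so the residual stays at zero; after the fault the residual settles near k * 0.5 / (L - a) (about 0.07 here) and crosses the threshold within a few samples. A full FTC scheme such as the paper's would follow detection with isolation and a compensating input.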
Common Meanings of "Matrix" in English

In English, the word "matrix" has several common meanings. First, it can denote the mathematical object: a rectangular array of numbers arranged in rows and columns, widely used in fields such as linear algebra and computer graphics. Second, "matrix" can describe a complex, dense environment or structure, as in "social matrix" or "political matrix"; this usage denotes a complex system woven from many interrelated factors. Third, in biology "matrix" refers to a substrate, for example the extracellular or intracellular matrix, which plays an important role in the structure and function of cells. Finally, "matrix" has an informal use referring to the virtual-reality world of the film The Matrix, often invoked to describe virtual-world-like concepts or technologies. In sum, "matrix" carries several common meanings in English, spanning mathematics, science, society, and culture.
Graduation Project Foreign Literature Translation
Topic: CCD stereo vision measurement system theory
School: School of Electrical Engineering. Major and class: Automation. Student name: ___. Student ID: ___. Supervisor: ___. Supervisor's title: ___. Period: 2011-2-26 to 2011-3-14. Location: ___.
Attachments: 1. translation of the foreign material; 2. original text.

Attachment 1: Translation of the Foreign Material

Theoretical Study of a CCD Binocular Stereo Vision Measurement System

Abstract: A mathematical model of a CCD binocular stereo vision measurement system is established from the geometry of image formation. With the aim of improving measurement accuracy, the relationships among the system's structural parameters, the image-recognition error, and the system's measurement accuracy are analyzed and discussed in depth theoretically, and the conclusions are verified by experiment. The results provide strong practical guidance for building such a measurement system.

Keywords: stereo vision; CCD; measurement accuracy; image recognition; system measurement

Introduction

Binocular stereo vision measurement is an important branch of computer vision and has long been one of its research focuses. Because it approximates the human visual system, offers relatively high measurement accuracy and speed, and has the advantages of a simple structure and ease of use, it is widely applied in industrial inspection, object recognition, workpiece positioning, robot self-guidance, and many other fields. In recent years many researchers have studied it extensively [1-4]. Much of that work concentrates on the mathematical model of the vision measurement system, calibration methods [5-7], and feature-point matching algorithms [8-9], while the structural parameters of the system (the distance between the two CCDs, the angle between their optical axes, etc.) have received little attention. Reference [10] studied the structural parameters of stereo vision theoretically, but from the standpoint of depth perception when viewing an object, examining the relationships among the CCD-to-object distance, the distance between the two CCDs, and the viewing distance, without touching on how the structural parameters affect measurement accuracy. Practice shows, however, that the structural-parameter settings are crucial to the system's measurement accuracy in real applications. Moreover, the stereo measurement principle shows that image-recognition error is another important factor that directly affects measurement accuracy. With these considerations, the influence of the structural-parameter settings and of the image-recognition error on measurement accuracy is analyzed and studied in depth theoretically. Combined with the influence of the structural parameters on camera-calibration accuracy, a design scheme for building a binocular stereo vision measurement system in practice is given.
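The dependence of accuracy on the structural parameters discussed here can be made concrete with the standard parallel-axes stereo model, in which depth is Z = f * B / d for focal length f (in pixels), baseline B, and disparity d; a first-order error analysis then gives |dZ| = Z^2 * e / (f * B) for a matching error of e pixels. The sketch below uses illustrative numbers only (this simplified model ignores the optical-axis angle treated in the paper):

```python
def depth_from_disparity(f_px, baseline_m, disparity_px):
    """Parallel-axes stereo model: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

def depth_error(f_px, baseline_m, z_m, match_err_px):
    """First-order sensitivity: |dZ| = Z^2 * e / (f * B) for matching error e."""
    return (z_m ** 2) * match_err_px / (f_px * baseline_m)

F_PX = 1000.0                                       # focal length in pixels (illustrative)
z_narrow = depth_from_disparity(F_PX, 0.2, 100.0)   # 2.0 m at a 0.2 m baseline
err_narrow = depth_error(F_PX, 0.2, z_narrow, 1.0)  # depth error per pixel of mismatch
err_wide = depth_error(F_PX, 0.4, z_narrow, 1.0)    # doubling the baseline halves it
```

The quadratic growth of the error with Z is why the structural parameters must be matched to the working distance: at Z = 4 m the same one-pixel matching error already costs about 8 cm at the 0.2 m baseline, and this coupling of baseline (a structural parameter) with image-recognition error is exactly the trade-off the paper analyzes.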
Foreign Literature

EXTREME VALUES OF FUNCTIONS OF SEVERAL REAL VARIABLES

1. Stationary Points

Definition 1.1 Let $D \subseteq R^n$ and $f: D \to R$. The point $a \in D$ is said to be:
(1) a local maximum if $f(x) \le f(a)$ for all points $x$ sufficiently close to $a$;
(2) a local minimum if $f(x) \ge f(a)$ for all points $x$ sufficiently close to $a$;
(3) a global (or absolute) maximum if $f(x) \le f(a)$ for all points $x \in D$;
(4) a global (or absolute) minimum if $f(x) \ge f(a)$ for all points $x \in D$;
(5) a local or global extremum if it is a local or global maximum or minimum.

Definition 1.2 Let $D \subseteq R^n$ and $f: D \to R$. The point $a \in D$ is said to be a critical or stationary point if $\nabla f(a) = 0$, and a singular point if $\nabla f$ does not exist at $a$.

Fact 1.3 Let $D \subseteq R^n$ and $f: D \to R$. If $f$ has a local or global extremum at the point $a \in D$, then $a$ must be either (1) a critical point of $f$, or (2) a singular point of $f$, or (3) a boundary point of $D$.

Fact 1.4 If $f$ is a continuous function on a closed bounded set, then $f$ is bounded and attains its bounds.

Definition 1.5 A critical point $a$ which is neither a local maximum nor minimum is called a saddle point.

Fact 1.6 A critical point $a$ is a saddle point if and only if there are arbitrarily small values of $h$ for which $f(a+h) - f(a)$ takes both positive and negative values.

Definition 1.7 If $f: R^2 \to R$ is a function of two variables such that all second-order partial derivatives exist at the point $(a,b)$, then the Hessian matrix of $f$ at $(a,b)$ is
$$H = \begin{pmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{pmatrix},$$
where the derivatives are evaluated at $(a,b)$. If $f: R^3 \to R$ is a function of three variables such that all second-order partial derivatives exist at the point $(a,b,c)$, then the Hessian of $f$ at $(a,b,c)$ is
$$H = \begin{pmatrix} f_{xx} & f_{xy} & f_{xz} \\ f_{yx} & f_{yy} & f_{yz} \\ f_{zx} & f_{zy} & f_{zz} \end{pmatrix},$$
where the derivatives are evaluated at $(a,b,c)$.

Definition 1.8 Let $A$ be an $n \times n$ matrix and, for each $1 \le r \le n$, let $A_r$ be the $r \times r$ matrix formed from the first $r$ rows and $r$ columns of $A$. The determinants $\det(A_r)$, $1 \le r \le n$, are called the leading minors of $A$.

Theorem 1.9 (The Leading Minor Test). Suppose that $f: R^2 \to R$ is a sufficiently smooth function of two variables with a critical point at $(a,b)$, and let $H$ be the Hessian of $f$ at $(a,b)$. If $\det(H) \ne 0$, then $(a,b)$ is:
(1) a local maximum if $\det(H_1) = f_{xx} < 0$ and $\det(H) = f_{xx}f_{yy} - f_{xy}^2 > 0$;
(2) a local minimum if $\det(H_1) = f_{xx} > 0$ and $\det(H) = f_{xx}f_{yy} - f_{xy}^2 > 0$;
(3) a saddle point if neither of the above hold;
where the partial derivatives are evaluated at $(a,b)$.

Suppose that $f: R^3 \to R$ is a sufficiently smooth function of three variables with a critical point at $(a,b,c)$ and Hessian $H$ at $(a,b,c)$. If $\det(H) \ne 0$, then $(a,b,c)$ is:
(1) a local maximum if $\det(H_1) < 0$, $\det(H_2) > 0$ and $\det(H_3) < 0$;
(2) a local minimum if $\det(H_1) > 0$, $\det(H_2) > 0$ and $\det(H_3) > 0$;
(3) a saddle point if neither of the above hold;
where the partial derivatives are evaluated at $(a,b,c)$. In each case, if $\det(H) = 0$, the test is inconclusive: the point can be either a local extremum or a saddle point.

Example. Find and classify the stationary points of the following functions:
(1) $f(x,y,z) = x^4 + x^2 y + y^2 + z^2 + xz + 1$;
(2) $f(x,y) = y^2 + y(x+1)^2 + (x+1)^4$.

Solution. (1) $\nabla f = (4x^3 + 2xy + z)\,\mathbf{i} + (x^2 + 2y)\,\mathbf{j} + (2z + x)\,\mathbf{k}$. Critical points occur when $\nabla f = 0$, i.e. when
(1) $4x^3 + 2xy + z = 0$; (2) $x^2 + 2y = 0$; (3) $2z + x = 0$.
Using equations (2) and (3) to eliminate $y$ and $z$ from (1), we see that $4x^3 - x^3 - x/2 = 3x^3 - x/2 = 0$, i.e. $x(6x^2 - 1) = 0$, giving $x = 0$, $x = \sqrt6/6$ and $x = -\sqrt6/6$. Hence we have three stationary points: $(0,0,0)$, $(\sqrt6/6,\,-1/12,\,-\sqrt6/12)$ and $(-\sqrt6/6,\,-1/12,\,\sqrt6/12)$.

Since $f_{xx} = 12x^2 + 2y$, $f_{xy} = 2x$, $f_{xz} = 1$, $f_{yy} = 2$, $f_{yz} = 0$ and $f_{zz} = 2$, the Hessian matrix is
$$H = \begin{pmatrix} 12x^2 + 2y & 2x & 1 \\ 2x & 2 & 0 \\ 1 & 0 & 2 \end{pmatrix}.$$
At $(\sqrt6/6, -1/12, -\sqrt6/12)$,
$$H = \begin{pmatrix} 11/6 & \sqrt6/3 & 1 \\ \sqrt6/3 & 2 & 0 \\ 1 & 0 & 2 \end{pmatrix},$$
which has leading minors $11/6 > 0$, $\det(H_2) = 11/3 - 2/3 = 3 > 0$, and $\det(H) = 22/3 - 4/3 - 2 = 4 > 0$. By the Leading Minor Test, $(\sqrt6/6, -1/12, -\sqrt6/12)$ is a local minimum. At $(-\sqrt6/6, -1/12, \sqrt6/12)$ the off-diagonal entries $\pm 2x$ enter only through their squares, so the leading minors are again $11/6 > 0$, $3 > 0$ and $4 > 0$, and this point is also a local minimum.

At $(0,0,0)$ the Hessian is
$$H = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 2 & 0 \\ 1 & 0 & 2 \end{pmatrix}.$$
Here $\det(H) = -2 \ne 0$, but the first leading minor is 0, so neither condition of the test holds and $(0,0,0)$ is a saddle point. An alternative argument is as follows. Consider
$$D = f(h,k,l) - f(0,0,0) = h^4 + h^2 k + k^2 + l^2 + hl$$
for arbitrarily small values of $h, k, l$. For very small $h, k, l$, the cubic and higher-order terms are negligible in comparison to the quadratic terms, so $D \approx k^2 + l^2 + hl$. If $h, k, l$ are all positive, $D > 0$. However, if $k = 0$, $h < 0$ and $0 < l < -h$, then $D = l(l + h) < 0$. Hence, close to $(0,0,0)$, $f$ both increases and decreases, so $(0,0,0)$ is a saddle point.

(2) $\nabla f = \bigl(2y(x+1) + 4(x+1)^3\bigr)\,\mathbf{i} + \bigl(2y + (x+1)^2\bigr)\,\mathbf{j}$. The only stationary point is $(-1, 0)$. We classify it without the Leading Minor Test: since $f_{xx} = 2y + 12(x+1)^2 = 0$, $f_{xy} = 2(x+1) = 0$ and $f_{yy} = 2$ at $(-1,0)$, the Hessian has determinant 0 there, so the test is not applicable. Let
$$D = f(-1+h,\,k) - f(-1,\,0) = k^2 + kh^2 + h^4.$$
Completing the square, $D = \bigl(k + \tfrac{1}{2}h^2\bigr)^2 + \tfrac{3}{4}h^4$. So for any arbitrarily small values of $h$ and $k$ that are not both 0, $D > 0$, and $f$ has a local minimum at $(-1, 0)$. (Note that $D > 0$ means $f$ increases away from the point, so it is a minimum, not a maximum.)

2. Constrained Extrema and Lagrange Multipliers

Definition 2.1 Let $f$ and $g$ be functions of $n$ variables. An extreme value of $f(x)$ subject to the condition $g(x) = 0$ is called a constrained extreme value, and $g(x) = 0$ is called the constraint.

Definition 2.2 If $f: R^n \to R$ is a function of $n$ variables, the Lagrangian function of $f$ subject to the constraint $g(x_1,\dots,x_n) = 0$ is the function of $n+1$ variables
$$L(x_1,\dots,x_n,\lambda) = f(x_1,\dots,x_n) + \lambda\, g(x_1,\dots,x_n),$$
where $\lambda$ is known as the Lagrange multiplier. The Lagrangian function of $f$ subject to the $k$ constraints $g_i(x_1,\dots,x_n) = 0$, $1 \le i \le k$, is the function with $k$ Lagrange multipliers $\lambda_i$, $1 \le i \le k$:
$$L(x_1,\dots,x_n,\lambda_1,\dots,\lambda_k) = f(x_1,\dots,x_n) + \sum_{i=1}^{k} \lambda_i\, g_i(x_1,\dots,x_n).$$

Theorem 2.3 Let $f: R^2 \to R$ and let $P = (x_0, y_0)$ be a point on the curve $C$, with equation $g(x,y) = 0$, at which $f$ restricted to $C$ has a local extremum. Suppose that both $f$ and $g$ have continuous partial derivatives near $P$, that $P$ is not an end point of $C$, and that $\nabla g(x_0, y_0) \ne 0$. Then there is some $\lambda$ such that $(x_0, y_0, \lambda)$ is a critical point of the Lagrangian function $L(x,y,\lambda) = f(x,y) + \lambda g(x,y)$.

Proof. Sketch only. Since $P$ is not an end point and $\nabla g \ne 0$, $C$ has a tangent at $P$ with normal $\nabla g$. If $\nabla f$ is not parallel to $\nabla g$ at $P$, then it has a non-zero projection along this tangent at $P$. But then $f$ increases and decreases away from $P$ along $C$, so $P$ is not an extremum. Hence $\nabla f$ and $\nabla g$ are parallel, so there is some $\lambda$ such that $\nabla f = -\lambda \nabla g$, and the result follows.

Example. Find the rectangular box with the largest volume that fits inside the ellipsoid $x^2/a^2 + y^2/b^2 + z^2/c^2 = 1$, given that its sides are parallel to the axes.

Solution. Clearly the box has the greatest volume when each of its corners touches the ellipsoid. Let the corner in the positive octant be $(x, y, z)$; then the box has corners $(\pm x, \pm y, \pm z)$ and its volume is $V = 8xyz$. We want to maximize $V$ given that $x^2/a^2 + y^2/b^2 + z^2/c^2 - 1 = 0$. (Note that since the constraint surface is bounded, a maximum/minimum does exist.) The Lagrangian is
$$L(x,y,z,\lambda) = 8xyz + \lambda\Bigl(\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} - 1\Bigr),$$
and this has critical points when $\nabla L = 0$, i.e. when
$$L_x = 8yz + \frac{2\lambda x}{a^2} = 0,\quad L_y = 8zx + \frac{2\lambda y}{b^2} = 0,\quad L_z = 8xy + \frac{2\lambda z}{c^2} = 0,\quad L_\lambda = \frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} - 1 = 0.$$
(Note that $L_\lambda = 0$ will always be the constraint equation.) As we want to maximize $V$, we may assume $xyz \ne 0$, so that $x, y, z \ne 0$. Hence, eliminating $\lambda$, we get
$$\lambda = -\frac{4a^2 yz}{x} = -\frac{4b^2 zx}{y} = -\frac{4c^2 xy}{z},$$
so that $y^2/b^2 = x^2/a^2$ and $z^2/c^2 = y^2/b^2$. But then
$$1 = \frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = \frac{3x^2}{a^2},$$
so $x = a/\sqrt3$, which implies $y = b/\sqrt3$ and $z = c/\sqrt3$ (all positive by assumption). So $L$ has only one stationary point, $(a/\sqrt3,\, b/\sqrt3,\, c/\sqrt3,\, \lambda)$ (for some value of $\lambda$, which we could work out if we wanted to). Since it is the only stationary point, it must be the required maximum, and the maximum volume is
$$V = \frac{8abc}{3\sqrt3} = \frac{8\sqrt3\,abc}{9}.$$
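The worked example in part (1) can be checked numerically. The short script below (plain Python, written for this note) evaluates the Hessian of f(x,y,z) = x^4 + x^2*y + y^2 + z^2 + x*z + 1 at the stationary points found above and computes the leading minors used in Theorem 1.9:

```python
import math

def hessian(x, y, z):
    """Hessian of f(x,y,z) = x**4 + x**2*y + y**2 + z**2 + x*z + 1."""
    return [[12 * x * x + 2 * y, 2 * x, 1.0],
            [2 * x,              2.0,   0.0],
            [1.0,                0.0,   2.0]]

def det(m):
    """Determinant of a 1x1, 2x2 or 3x3 matrix by cofactor expansion."""
    n = len(m)
    if n == 1:
        return m[0][0]
    if n == 2:
        return m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(3))

def leading_minors(H):
    """Determinants of the upper-left 1x1, 2x2 and 3x3 submatrices."""
    return [det([row[:r] for row in H[:r]]) for r in (1, 2, 3)]

s6 = math.sqrt(6.0)
minors_min = leading_minors(hessian(s6 / 6, -1.0 / 12, -s6 / 12))
minors_origin = leading_minors(hessian(0.0, 0.0, 0.0))
```

At (sqrt(6)/6, -1/12, -sqrt(6)/12) the minors come out as 11/6, 3 and 4, all positive, confirming the local minimum; at the origin the first minor is 0 with det(H) = -2, so the test's conditions fail and the point is the saddle identified in the text.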
Data Analysis: Foreign Literature with Translation

Document 1: "The Application of Data Analysis in Business Decision-making." This document examines the importance and applications of data analysis in business decision-making. It finds that data analysis yields accurate business intelligence, helping firms better understand market trends and consumer demand. By analyzing large volumes of data, firms can uncover hidden patterns and correlations and thereby craft more competitive product and service strategies. Data analysis also provides decision support, helping firms make sound decisions in uncertain environments. It has therefore become one of the key ingredients of modern business success.

Document 2: "The Application of Machine Learning in Data Analysis." This document examines the application of machine learning to data analysis. It finds that machine learning helps firms analyze large amounts of data more efficiently and extract valuable information from it. Machine-learning algorithms can learn and improve automatically, helping firms discover patterns and trends in their data. Through machine learning, firms can predict market demand more accurately, optimize business processes, and make more strategic decisions. Its application in data analysis is accordingly gaining attention and adoption among firms.

Document 3: "The Application of Data Visualization in Data Analysis." This document examines the importance and applications of data visualization in data analysis. It finds that visualization presents complex data relationships and trends more intuitively, helping firms better understand their data and discover the patterns and regularities within it. Data visualization also supports data interaction and shared decision-making, improving the efficiency and accuracy of decisions. It therefore plays a very important role in data analysis.

Translated titles: Document 1, "The Application of Data Analysis in Business Decision-making"; Document 2, "The Application of Machine Learning in Data Analysis"; Document 3, "The Application of Data Visualization in Data Analysis". Translated abstract: these documents study the application of data analysis in business decision-making, together with the roles of machine learning and data visualization in data analysis.
Nonlinear Equations

Assume that you have a guess $U^{(n)}$ of the solution. If $U^{(n)}$ is close enough to the exact solution, an improved approximation $U^{(n+1)}$ is obtained by solving the linearized problem
$$\frac{\partial\rho(U^{(n)})}{\partial U}\,\bigl(U^{(n+1)} - U^{(n)}\bigr) = -\alpha\,\rho\bigl(U^{(n)}\bigr),$$
where $\rho(U) = K(U)U - F(U)$ is the residual and $\alpha$ is a positive damping parameter. It is not necessary that $\rho(U) = 0$ have a solution even when the underlying continuous problem has one. In this case, the Gauss-Newton iteration tends toward the minimizer of the residual, i.e., the solution of $\min_U |\rho(U)|$. It is well known that for sufficiently small $\alpha$,
$$\bigl|\rho(U^{(n)} + \alpha p_n)\bigr| < \bigl|\rho(U^{(n)})\bigr|,$$
and $p_n = -\bigl(\partial\rho(U^{(n)})/\partial U\bigr)^{-1}\rho(U^{(n)})$ is called a descent direction for $|\rho(U)|$, where $|\cdot|$ is the $l_2$-norm. The iteration is $U^{(n+1)} = U^{(n)} + \alpha p_n$, where $\alpha \le 1$ is chosen as large as possible such that the step has a reasonable descent.

The Gauss-Newton method is local, and convergence is assured only when $U^{(0)}$ is close enough to the solution. In general, the first guess may be outside the region of convergence. To improve convergence from bad initial guesses, a damping strategy is implemented for choosing $\alpha$: the Armijo-Goldstein line search. It chooses the largest damping coefficient $\alpha$ out of the sequence $1, 1/2, 1/4, \dots$ such that the following inequality holds:
$$\bigl|\rho(U^{(n)})\bigr| - \bigl|\rho(U^{(n)} + \alpha p_n)\bigr| \ge \frac{\alpha}{2}\bigl|\rho(U^{(n)})\bigr|,$$
which guarantees a reduction of the residual norm by at least $1 - \alpha/2$. Note that each step of the line-search algorithm requires an evaluation of the residual $\rho(U^{(n)} + \alpha p_n)$. An important point of this strategy is that when $U^{(n)}$ approaches the solution, then $\alpha \to 1$ and thus the convergence rate increases. If there is a solution to $\rho(U) = 0$, the scheme ultimately recovers the quadratic convergence rate of the standard Newton iteration.

Closely related to the above problem is the choice of the initial guess $U^{(0)}$. By default, the solver sets $U^{(0)} = 0$, then assembles the FEM matrices $K$ and $F$ and computes $U^{(1)} = K^{-1}F$. The damped Gauss-Newton iteration is then started with $U^{(1)}$, which should be a better guess than $U^{(0)}$. If the boundary conditions do not depend on the solution $u$, then $U^{(1)}$ satisfies them even if $U^{(0)}$ does not. Furthermore, if the equation is linear, then $U^{(1)}$ is the exact FEM solution and the solver does not enter the Gauss-Newton loop.

There are situations where $U^{(0)} = 0$ makes no sense or convergence is impossible. In some situations you may already have a good approximation, and the nonlinear solver can be started with it, avoiding the slow-convergence regime. This idea is used in the adaptive mesh generator: it computes a solution on a mesh, evaluates the error, and may refine certain triangles. The interpolant of that solution is a very good starting guess for the solution on the refined mesh.

In general the exact Jacobian $J_n = \partial\rho(U^{(n)})/\partial U$ is not available. Approximating $J_n$ by finite differences is expensive but feasible: the $i$th column of $J_n$ can be approximated by
$$\frac{\rho(U^{(n)} + \varepsilon\phi_i) - \rho(U^{(n)})}{\varepsilon},$$
which implies assembling the FEM matrices for the triangles containing grid point $i$. A very simple approximation to $J_n$, which gives a fixed-point iteration, is also possible, as follows: for a given $U^{(n)}$, compute the FEM matrices $K$ and $F$ and set $U^{(n+1)} = K^{-1}F$. This is equivalent to approximating the Jacobian with the stiffness matrix: indeed, since $\rho(U) = KU - F$, putting $J_n = K$ yields $U^{(n+1)} = U^{(n)} - K^{-1}\rho(U^{(n)}) = K^{-1}F$. In many cases the convergence rate is slow, but the cost of each iteration is cheap.

The nonlinear solver implemented in the PDE Toolbox also provides a compromise between the two extremes. To compute the derivative of the mapping $U \to KU$, proceed as follows (the $a$ term has been omitted for clarity, but appears again in the final result below):
$$\frac{\partial (KU)_i}{\partial U_j} = \frac{\partial}{\partial U_j}\sum_l \Bigl(\int c(u)\,\nabla\phi_l\cdot\nabla\phi_i\,dx\Bigr)U_l = \int c(u)\,\nabla\phi_j\cdot\nabla\phi_i\,dx + \sum_l\Bigl(\int \frac{\partial c}{\partial u}\,\phi_j\,\nabla\phi_l\cdot\nabla\phi_i\,dx\Bigr)U_l.$$
The first integral term is nothing more than $K_{i,j}$. The second term is "lumped", i.e., replaced by a diagonal matrix that contains the row sums. Since $\sum_j \phi_j = 1$, the second term is approximated by
$$\delta_{i,j}\sum_l\Bigl(\int \frac{\partial c}{\partial u}\,\nabla\phi_l\cdot\nabla\phi_i\,dx\Bigr)U_l,$$
which is the $i$th component of $K(c')U$, where $K(c')$ is the stiffness matrix associated with the coefficient $\partial c/\partial u$ rather than $c$. The same reasoning can be applied to the derivative of the mapping $U \to MU$. Finally, note that the derivative of the mapping $U \to -F$ is exactly the mass matrix associated with the coefficient $-\partial f/\partial u$. Thus the Jacobian of $\rho(U)$ is approximated by
$$J = K(c + c'u) + M(a + a'u - f'),$$
where the differentiation is with respect to $u$; $K$ and $M$ designate stiffness and mass matrices, and their arguments designate the coefficients with respect to which they are assembled. At each Gauss-Newton iteration, the nonlinear solver assembles the matrices corresponding to these coefficients and then produces the approximate Jacobian. The differentiations of the coefficients are done numerically.

In the general setting of elliptic systems, the boundary conditions are appended to the stiffness matrix to form the full linear system, and the coefficients of the system and right-hand side may depend on the solution $u$. The "lumped" approach approximates the derivative mapping of the residual in the same way. The nonlinearities of the boundary conditions and the dependencies of the coefficients on the derivatives of $u$ are not properly linearized by this scheme. When such nonlinearities are strong, the scheme reduces to the fixed-point iteration and may converge slowly or not at all. When the boundary conditions are linear, they do not affect the convergence properties of the iteration schemes: in the Neumann case they are invisible ($H$ is an empty matrix), and in the Dirichlet case they merely state that the residual is zero on the corresponding boundary points.

Adaptive Mesh Refinement

The toolbox has a function for global, uniform mesh refinement. It divides each triangle into four similar triangles by creating new corners at the midsides, adjusting for curved boundaries. You can assess the accuracy of the numerical solution by comparing results from a sequence of successively refined meshes. If the solution is smooth enough, more accurate results may be obtained by extrapolation.

The solutions of the toolbox equation often have geometric features like localized strong gradients. An example of engineering importance in elasticity is the stress concentration occurring at reentrant corners, such as the MATLAB favorite, the L-shaped membrane. It is then more economical to refine the mesh selectively, i.e., only where it is needed. When the selection is based on estimates of errors in the computed solutions (a posteriori estimates), we speak of adaptive mesh refinement. See adaptmesh for an example of the computational savings, where global refinement needs more than 6000 elements to compete with an adaptively refined mesh of 500 elements.

The adaptive refinement generates a sequence of solutions on successively finer meshes, at each stage selecting and refining those elements that are judged to contribute most to the error. The process is terminated when the maximum number of elements is exceeded or when each triangle contributes less than a preset tolerance. You need to provide an initial mesh and choose parameters for the selection and termination criteria. The initial mesh can be produced by the initmesh function. The three components of the algorithm are the error indicator function, which computes an estimate of the element error contribution; the mesh refiner, which selects and subdivides elements; and the termination criteria.

The Error Indicator Function

The adaption is a feedback process. As such, it is easily applied to a larger range of problems than those for which its design was tailored. You want estimates, selection criteria, etc., to be optimal in the sense of giving the most accurate solution at fixed cost, or the lowest computational effort for a given accuracy. Such results have been proved only for model problems, but generally the equidistribution heuristic has been found to be near-optimal: element sizes should be chosen such that each element contributes the same amount to the error. The theory of adaptive schemes makes use of a priori bounds for solutions in terms of the source function $f$. For nonelliptic problems such a bound may not exist, while the refinement scheme is still well defined and has been found to work well.

The error indicator function used in the toolbox is an element-wise estimate of the contribution, based on the work of C. Johnson et al. For Poisson's equation $-\Delta u = f$, the following error estimate for the FEM solution $u_h$ holds in the $L_2$-norm $\|\cdot\|$:
$$\|\nabla(u - u_h)\| \le \alpha\,\|hf\| + \beta\,D_h(u_h),$$
where $h = h(x)$ is the local mesh size and
$$D_h(v) = \Bigl(\sum_{\tau \in E_i} h_\tau^2\,\bigl[\partial v/\partial n_\tau\bigr]^2\Bigr)^{1/2}.$$
The bracketed quantity is the jump in normal derivative of $v$ across the edge $\tau$, $h_\tau$ is the length of edge $\tau$, and the sum runs over $E_i$, the set of all interior edges of the triangulation. This bound is turned into an element-wise error indicator function $E(K)$ for element $K$ by summing the contributions from its edges. The final form for the toolbox equation becomes
$$E(K) = \alpha\,\|h(f - au_h)\|_K + \beta\Bigl(\tfrac{1}{2}\sum_{\tau \in \partial K} h_\tau^2\,\bigl[\mathbf{n}_\tau\cdot c\nabla u_h\bigr]^2\Bigr)^{1/2},$$
where $\mathbf{n}_\tau$ is the unit normal of edge $\tau$ and the bracketed term is the jump in flux across the element edge. The $L_2$ norm is computed over the element $K$. This error indicator is computed by the pdejmps function.

The Mesh Refiner

The PDE Toolbox is geared to elliptic problems. For reasons of accuracy and ill-conditioning, they require the elements not to deviate too much from being equilateral. Thus, even at essentially one-dimensional solution features, such as boundary layers, the refinement technique must guarantee reasonably shaped triangles. When an element is refined, new nodes appear on its midsides, and if the neighbor triangle is not refined in a similar way, it is said to have hanging nodes. The final triangulation must have no hanging nodes; they are removed by splitting neighbor triangles. To avoid further deterioration of triangle quality in successive generations, the "longest edge bisection" scheme of Rosenberg-Stenger [8] is used, in which the longest side of a triangle is always split whenever any of the sides have hanging nodes. This guarantees that no angle is ever smaller than half the smallest angle of the original triangulation.

Two selection criteria can be used. One, pdeadworst, refines all elements with a value of the error indicator larger than half the worst of any element. The other, pdeadgsc, refines all elements with an indicator value exceeding a user-defined dimensionless tolerance. The comparison with the tolerance is properly scaled with respect to domain and solution size, etc.

The Termination Criteria

For smooth solutions, error equidistribution can be achieved by the pdeadgsc selection if the maximum number of elements is large enough. The pdeadworst adaption only terminates when the maximum number of elements has been exceeded. This mode is natural when the solution exhibits singularities: the error indicator of the elements next to the singularity may never vanish, regardless of element size.
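The damped Gauss-Newton iteration with the Armijo-Goldstein line search described above can be sketched on a tiny square system. The code below is plain Python with a finite-difference Jacobian, as in the text; the 2x2 system rho(U) = (u1^2 + u2^2 - 4, u1*u2 - 1) is an illustrative stand-in for the assembled FEM residual, not something from the toolbox:

```python
import math

def rho(u):
    """Toy residual standing in for rho(U) = K(U)U - F(U)."""
    return [u[0] ** 2 + u[1] ** 2 - 4.0, u[0] * u[1] - 1.0]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def jacobian_fd(u, eps=1e-7):
    """Finite-difference Jacobian: column i from perturbing component i."""
    r0, J = rho(u), [[0.0, 0.0], [0.0, 0.0]]
    for i in range(2):
        up = list(u)
        up[i] += eps
        ri = rho(up)
        for k in range(2):
            J[k][i] = (ri[k] - r0[k]) / eps
    return J

def solve2(J, b):
    """Solve the 2x2 system J p = b by Cramer's rule."""
    d = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return [(b[0] * J[1][1] - b[1] * J[0][1]) / d,
            (J[0][0] * b[1] - J[1][0] * b[0]) / d]

u = [2.0, 0.5]                                   # initial guess U(0)
for _ in range(30):
    r = rho(u)
    if norm(r) < 1e-12:
        break
    p = solve2(jacobian_fd(u), [-r[0], -r[1]])   # descent direction p_n
    alpha = 1.0                                  # try 1, 1/2, 1/4, ... (Armijo-Goldstein)
    while alpha > 2 ** -10:
        trial = [u[0] + alpha * p[0], u[1] + alpha * p[1]]
        if norm(rho(trial)) <= (1 - alpha / 2) * norm(r):
            break                                # sufficient residual reduction
        alpha /= 2
    u = [u[0] + alpha * p[0], u[1] + alpha * p[1]]

final_residual = norm(rho(u))
```

Near the solution the full step alpha = 1 is always accepted, recovering Newton's quadratic convergence, which is exactly the behavior the text describes for the undamped regime.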