Conjugate Gradient Methods for Toeplitz Systems
Complete the two exercises below and submit screenshots.

1. QM/MM calculation of the SW1 defect formation energy for a carbon nanotube

Purpose: Introduces how to use the QMERA module in Materials Studio. Special attention is paid to preparing the system and which type of embedding scheme to use.
Modules: Materials Visualizer, QMERA
Time:
Prerequisites: None

The Stone-Wales (SW) defect is a common defect on carbon nanotubes that is thought to have important implications for their mechanical properties (see Andzelm et al., 2006). The 90° rotation of two carbon atoms around the midpoint of the C-C bond transforms four hexagons into two pentagons and two heptagons. This substructure is known as the Stone-Wales defect. In this tutorial you will calculate the formation energy of a nonchiral SW defect (SW1).

The following steps will be covered here:
- Getting started
- QM region definition
- QMERA calculation
- Analysis of results

Note: In order to ensure that you can follow this tutorial exactly as intended, you should use the

1. Getting started

Begin by starting Materials Studio and creating a new project.

Open the New Project dialog, enter Stone-Wales as the project name, and click the OK button.

The new project is created with Stone-Wales listed in the Project Explorer.

2. Structure preparation

The first thing you need to do is prepare the structure of the single-walled nanotube (SWNT).

Select Build | Build Nanostructure | Single-Wall Nanotube from the menu bar. Change the N and M indices to 8 and 0 respectively.

This corresponds to a nanotube of 6.26 Å diameter.

Uncheck the Periodic nanotube box and change the number of Repeat units to 7; this gives a nanotube length of 29.82 Å. Select Both ends from the Hydrogen termination dropdown list. Click the Build button and close the dialog.

Now you have to create the defect in the middle of the nanotube.

Right-click in the 3D Viewer and select Display Style from the shortcut menu to open the Display Style dialog. Click the Stick radio button and close the dialog.

Press the LEFT arrow key twice to rotate the nanotube so that you can see its full length horizontally. The Z axis should be pointing to the left and the Y axis should be pointing up on the axis orientation display, see Figure 1.

Select two carbon atoms which are near the center of the nanotube wall and which are connected by a horizontal bond, and then select the remainder of the benzene rings at each end of the bond.

Figure 1. SWNT with two central carbon atoms and their pendant benzene rings selected.

Click on the arrow for 3D Viewer Recenter on the toolbar and select View Onto from the dropdown list. Click anywhere in the 3D Viewer to deselect everything and reselect the central two carbon atoms.

Figure 2. SWNT viewed from above, with two central carbon atoms selected.

Select the Movement tools from the toolbar, change the Angle to 90.0, and click the Move Around Z button. Close the dialog.

This creates the defect by rotating the two carbon atoms 90° around the screen Z axis.

To view the appropriate connectivity, select Build | Bonds from the menu bar to open the Bond Calculation dialog. Uncheck Calculate bond type and set the Convert representation to option to Resonant. Click the Calculate button and close the dialog.

Rename the SWNT.xsd document to SW1.xsd.

Figure 3. SW1 defect (highlighted here in blue) on an SWNT.

3. QM region definition

The next step is to define the QM region that you want to use in the simulation.
It is necessary to include full rings in the calculation to avoid possible clashes between hydrogen link atoms, and to leave enough space between the defect and the boundary QM-MM atoms. In this case you will include the defect plus a crown of full rings around it in the QM region (see Figure 4).

With the two carbon atoms central to the defect still selected, choose Edit | Atom Selection from the menu bar to open the Selection dialog. Select Connected from the Select by Property dropdown list and choose the Add to the existing selection radio button. Click the Select button four times and close the dialog. Hold down the SHIFT key and select the four carbons needed to complete the crown of six-membered rings.

Select QMERA | Calculation from the Modules toolbar to open the QMERA Calculation dialog. Click the Add button to add the selected atoms to the QuantumAtoms set.

Click anywhere in the 3D Viewer; the atoms in the set will be highlighted in purple.

Figure 4. SW1 defect with the QuantumAtoms set defined.

If you want to visualize the hydrogen link atoms to be sure that there are no problems related to their position, you can use the View button on the QMERA Calculation dialog.

On the Setup tab of the QMERA Calculation dialog click the View button. A new window will open; double-click on the LinkAtoms label. Check that the position of the hydrogen link atoms makes sense and close the window. Click the No button on the dialog asking whether to save the document.

4. QMERA calculation

You are now ready to run the QMERA calculation. In this case the polarization effects are negligible and the charges of all atoms will be left as zero, which is compatible with the Dreiding forcefield. It is also sufficient to choose a mechanical embedding approach for the QM/MM calculation.

There are two different models available for mechanical embedding: QM-Pot and additive. You will use the QM-Pot model, which uses a subtractive expression to calculate the total energy. Forcefield parameters are therefore required for all atoms of the system.

On the Setup tab of the QMERA Calculation dialog select Geometry Optimization as the Task and ensure that the Quality of the calculation is set to Medium.

In order to complete the tutorial more quickly, you could use the Coarse quality setting.

Click the More... button associated with the task to open the QMERA Geometry Optimization dialog. Select HDLC as the Method and close the dialog.

The HDLC minimizer combines the use of highly decoupled delocalized internal coordinates with the linear scaling BFGS update of the Hessian (L-BFGS). This usually achieves faster convergence than normal BFGS or conjugate gradient methods for covalent systems of this size.

Click the More... button for the QM server to open the QMERA DMol3 Parameters dialog. Select GGA and PBE for the functional and close the dialog.

GGA functionals provide a good description of the electronic subsystem, and the PBE exchange-correlation functional has previously been identified as efficient for QM/MM calculations on nanotubes; see Andzelm et al., 2006 for similar calculations.

Click the More... button for the MM server to open the QMERA GULP Parameters dialog. Ensure that Dreiding is selected as the Forcefield and Use current is selected for Charges, then close the dialog. Select the Options tab of the QMERA Calculation dialog, ensure that Mechanical is selected as the Embedding scheme and Model is set to QM-Pot. Click the Run button.

Depending on your hardware, this calculation may take several hours to complete.
If you wish to examine and analyze the results directly, the output files are provided in the Examples/Projects/QMERA/Stone-Wales Files/Documents/ directory in the SW1 QMERA GeomOpt and SWNT QMERA GeomOpt folders.

5. Analysis of results

The results of the calculation will be returned in a new folder called SW1 QMERA GeomOpt.

Open the SW1.xsd file in the SW1 QMERA GeomOpt folder to see the optimized geometry.

The final energy for this structure can be found in the SW1.csout file; the QM/MM Energy heading reports the corresponding energy in a.u., which is Hartree in this case.

Double-click on SW1.csout to open the energy file, press the CTRL + F keys and enter Energy (subtractive) into the Find dialog.

The end of the file is displayed. Scroll up a little and examine the QM/MM Energy.

To examine the relationship between the energy and the structure you can compare the energy chart with the trajectory. You will need to analyze the results to obtain the trajectory and chart documents, even if you already have some charts with intermediate updates.

Select Modules | QMERA | Analysis from the menu bar to open the QMERA Analysis dialog. Select Energy evolution and click the View button. Close the dialog.

The energy evolution either creates or opens two chart documents, called SW1 Energies.xcd and SW1 Convergence.xcd.

Make SW1.xtd the active document and, on the animation toolbar, click the Play button.

As the animation proceeds, the seven-membered rings widen.

Stop the animation and open SW1 Energies.xcd.

Click on a point on the graph near the beginning of the optimization.

The 3D Viewer displays the structure at the corresponding step in the calculation. In this way you can examine the structure at specific energies during the calculation.

To obtain the formation energy for the SW1 defect you need to perform a QMERA calculation with the same settings for the defect-free nanotube. To do this you should use a QM region of four fused C6 rings and a surrounding crown. The resultant QM region will be similar to the one shown in Figure 4, except that the central C-C bond of the QM region in Figure 4 will be horizontal rather than vertical. The output files for this calculation are provided in the Examples/Projects/QMERA/Stone-Wales Files/Documents/SWNT QMERA GeomOpt/ folder.

Once you have both calculations you can calculate the formation energy of the SW1 defect as the difference in QM/MM Energy, converting from atomic units to eV according to: 1 a.u. (Hartree) = 27.2113845 eV. You should obtain a value of around 2.1 eV.
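The bookkeeping for this last step is simple enough to script. The sketch below is illustrative only: the two energy values are placeholders, not results from the tutorial, and must be replaced by the QM/MM Energy values read from the two .csout files.

    # Formation energy of the SW1 defect from the two QMERA runs.
    # Placeholder energies: substitute the "QM/MM Energy" values (in a.u.)
    # from SW1.csout and from the defect-free SWNT run.
    hartree_to_ev <- 27.2113845
    e_sw1  <- -1234.500   # placeholder, a.u.
    e_swnt <- -1234.577   # placeholder, a.u.
    formation_ev <- (e_sw1 - e_swnt) * hartree_to_ev
    formation_ev   # should come out around 2.1 eV with the real values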
This is the end of this tutorial.

Reference
Andzelm, J., Govind, N., Maiti, A., Chem. Phys. Lett., 2006, 421, 58-62.

2. QM/MM geometry optimization of a Ru(H)2(diphosphine)(diamine) complex

Purpose: Introduces how to use the QMERA module in Materials Studio, with special attention paid to which type of embedding scheme to use.
Modules: Materials Visualizer, QMERA
Time:
Prerequisites: None

The preparation of enantiomerically pure alcohols is of high importance in drug design. A breakthrough in this field was the discovery, by Noyori and co-workers, of highly efficient ruthenium catalysts for the enantioselective hydrogenation of ketones (R. Noyori, Angew. Chem., Int. Ed., 2002, 41, 2008). Among the best catalysts for carbonyl hydrogenation are octahedral complexes where Ru(II) is coordinated by a chiral diphosphine and a chiral diamine.

Figure 1. Conversion of a ketone to a chiral secondary alcohol.

In this tutorial you will use the QMERA module in Materials Studio to optimize the structure of a Ru(H)2(diphosphine)(diamine) complex. You will use DMol3 to describe the QM region and the Dreiding forcefield to describe the MM region.

The following steps will be covered here:
- Getting started
- Structure and QM/MM setup
- QMERA calculation

Note: In order to ensure that you can follow this tutorial exactly as intended, you should use the

1. Getting started

Begin by starting Materials Studio and creating a new project.

Open the New Project dialog, enter Ru_complex as the project name, and click the OK button.

The new project is created with Ru_complex listed in the Project Explorer.

2. Structure and QM/MM setup

The structure you will use is shown below:

Figure 2. Ru(II) complex used as an asymmetric hydrogenation catalyst for ketones.

Select File | Import... from the menu bar and browse to Examples\Projects\QMERA\Ru_complex Files\Documents\Ru_start.xsd. Click the Open button.

Once you have the structure of the complex you can prepare the QMERA calculation. For this system you will include the polarization of the QM region due to the MM region. To this end, you will include the MM point charges in the SCF part of the QM calculation. This type of approach is called electrostatic embedding; it does not require forcefield parameters for the QM region, for either atom types or charges, because an additive expression is used to calculate the total energy of the system.

You need to define the QM region first. The atoms to include in the QM region are shown in Figure 3. The QM region includes the Ru center, the two hydrides (H), the two P atoms, and the H2NCHCHNH2 diamine backbone.

Figure 3. Ru(II) complex with the QM region indicated using stick representation.

Use the selection tool to select the QM region indicated above. Select the QMERA module from the Modules toolbar and choose Calculation to open the QMERA Calculation dialog. Click the Add button to add the selected atoms to the QuantumAtoms set.

Click anywhere in the 3D Viewer; the atoms in the set will be highlighted in purple.

If you want to visualize the hydrogen link atoms to be sure that there are no problems related to their position, you can use the View button in the QMERA Calculation dialog.

On the Setup tab of the QMERA Calculation dialog click the View button. A new window will open with the LinkAtoms selected. Check that the position of the hydrogen link atoms makes sense and close the window. Click the No button on the dialog which asks if you want to save this document.

You need to set up and modify the ligand charges. In electronic embedding methods, the basic requirement for the choice of charges is that the net charge of the MM atoms must be an integer. In this case this is achieved by using the QEq method to calculate separately the charges of each ligand bound to the QM region, under the constraint that the net charge must be zero.

Select Modify | Charges from the menu bar to open the Charges dialog. On the Calculate tab choose QEq as the Method. Select one of the MM ligand residues (for example a phenyl ring) and click the Calculate button.

The ligand charges have now been determined. Repeat this procedure for all the other MM ligands.

Close the Charges dialog.

Note that the atoms in the QM region do not need to have charges assigned. The prepared structure can also be imported from Examples\Projects\QMERA\Ru_complex Files\Documents\Ru_complex.xsd.

You can now run the QMERA calculation.
3. QMERA calculation

On the Setup tab of the QMERA Calculation dialog, select Geometry Optimization as the Task and Medium for the Quality of the calculation. Click the More... button for the Task to open the QMERA Geometry Optimization dialog. Select HDLC as the Method and close the dialog.

Click the More... button for the QM server to open the QMERA DMol3 Parameters dialog. Select GGA and PBE for the Functional and close the dialog.

This Ru(II) complex has a zero net QM charge: the two hydride ligands act as electron donors (2 × -1) to compensate for the metal's 2+ charge, and no other QM atoms contribute charges (all other ligands coordinate the Ru center through dative bonding). So the DMol3 charge can remain at a value of zero for this system.

For the MM server click the More... button to open the QMERA GULP Parameters dialog. Select Dreiding as the Forcefield and Use current for the Charges, then close the dialog.

On the QMERA Calculation dialog, click on the Options tab and select Electronic as the Embedding scheme and Disperse boundary charge as the Model. Click the Run button.

Depending on your hardware, this calculation may take several hours to complete. If you wish to examine and analyze the results directly, the output files are provided in the Examples\Projects\QMERA\Ru_complex Files\Documents\ directory in the Ru_complex QMERA GeomOpt folder.

After performing the calculation for the Ru(II) complex you can proceed to include the substrate (ketone) in the calculation. The ketone will belong to the QM region, and as a consequence you do not need charges or atom types for that structure. You can draw the ketone in the same document as the Ru(II) complex and add it to the QM region using the Add button on the QMERA Calculation dialog.

The prepared structure for the complex and substrate can be found at Examples\Projects\QMERA\Ru_complex Files\Documents\Ru_complex+ketone_2.xsd. If you wish to examine and analyze the results of the QM/MM calculation on the ketone system directly, the output files from the QMERA run can be found in the Examples\Projects\QMERA\Ru_complex Files\Documents\Ru_complex+ketone_2 QMERA GeomOpt folder.
Qiu Shi Competition (S.T. Yau College Student Mathematics Contest) syllabus for Computational and Applied Mathematics

Computational Mathematics

Interpolation and approximation: polynomial interpolation and least squares approximation; trigonometric interpolation and approximation, fast Fourier transform; approximation by rational functions; splines.

Nonlinear equation solvers: convergence of iterative methods (bisection, secant method, Newton's method, other iterative methods) for both scalar equations and systems; finding roots of polynomials.

Linear systems and eigenvalue problems: direct solvers (Gauss elimination, LU decomposition, pivoting, operation count, banded matrices, round-off error accumulation); iterative solvers (Jacobi, Gauss-Seidel, successive over-relaxation, conjugate gradient method, multigrid method, Krylov methods); numerical solutions for eigenvalues and eigenvectors.

Numerical solutions of ordinary differential equations: one-step methods (Taylor series method and Runge-Kutta method); stability, accuracy and convergence; absolute stability, long-time behavior; multi-step methods.

Numerical solutions of partial differential equations: finite difference method; stability, accuracy and convergence, Lax equivalence theorem; finite element method, boundary value problems.

References:
[1] C. de Boor and S.D. Conte, Elementary Numerical Analysis: An Algorithmic Approach, McGraw-Hill, 2000.
[2] G.H. Golub and C.F. van Loan, Matrix Computations, third edition, Johns Hopkins University Press, 1996.
[3] E. Hairer, S.P. Nørsett and G. Wanner, Solving Ordinary Differential Equations, Springer, 1993.
[4] B. Gustafsson, H.-O. Kreiss and J. Oliger, Time Dependent Problems and Difference Methods, John Wiley & Sons, 1995.
[5] G. Strang and G. Fix, An Analysis of the Finite Element Method, second edition, Wellesley-Cambridge Press, 2008.

Applied Mathematics

ODEs with constant coefficients; nonlinear ODEs: critical points, phase space and stability analysis; Hamiltonian, gradient, and conservative ODEs.

Calculus of variations: Euler-Lagrange equations; boundary conditions, parametric formulation; optimal control and Hamiltonian, Pontryagin maximum principle.

First order partial differential equations (PDEs) and method of characteristics; heat, wave, and Laplace's equations; separation of variables and eigenfunction expansions; stationary phase method; homogenization method for elliptic and linear hyperbolic PDEs; homogenization and front propagation of Hamilton-Jacobi equations; geometric optics for dispersive wave equations.

References:
W.D. Boyce and R.C. DiPrima, Elementary Differential Equations, Wiley, 2009.
F.Y.M. Wan, Introduction to Calculus of Variations and Its Applications, Chapman & Hall, 1995.
G. Whitham, Linear and Nonlinear Waves, John Wiley & Sons, 1974.
J. Keener, Principles of Applied Mathematics, Addison-Wesley, 1988.
A. Bensoussan, P.-L. Lions, G. Papanicolaou, Asymptotic Analysis for Periodic Structures, North-Holland Publishing Co., 1978.
V. Jikov, S. Kozlov, O. Oleinik, Homogenization of Differential Operators and Integral Functionals, Springer, 1994.
J. Xin, An Introduction to Fronts in Random Media, Surveys and Tutorials in the Applied Mathematical Sciences, No. 5, Springer, 2009.
Materials Studio Case 1: Structure optimization of a self-assembled monolayer on the Au(111) surface

Purpose: Use the Materials Studio (MS) software to optimize the structure of a self-assembled monolayer on a gold surface.
Module: Minimizer

Principles of structure optimization in MS Discover

The potential energy of a molecule is generally the sum of bonded terms (bond length, bond angle, dihedral angle, torsion angle, etc.) and non-bonded interaction terms (electrostatics, van der Waals, etc.); the total potential energy is the sum of all such contributions:

    total potential energy = van der Waals non-bonded energy + bond stretching energy + bond angle bending energy + dihedral torsion energy + out-of-plane vibration energy + Coulomb electrostatic energy + …

Except for some simple molecules, most potential energies are combinations of several complicated forms of potential terms.
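As a toy illustration of this decomposition (the functional forms and parameter values below are schematic assumptions, much simpler than the terms a real forcefield such as Dreiding uses):

    # Schematic force-field energy: one bond, one angle, one non-bonded pair.
    # All parameters (k_b, r0, k_a, th0, eps, sig, q1, q2) are illustrative.
    e_bond  <- function(r, k_b = 300, r0 = 1.53)  k_b * (r - r0)^2          # stretching
    e_angle <- function(th, k_a = 60, th0 = 1.91) k_a * (th - th0)^2        # bending
    e_vdw   <- function(r, eps = 0.05, sig = 3.4) 4*eps*((sig/r)^12 - (sig/r)^6)
    e_coul  <- function(r, q1 = 0.1, q2 = -0.1)   332.06 * q1 * q2 / r      # kcal/mol
    e_total <- function(r_bond, theta, r_pair)
      e_bond(r_bond) + e_angle(theta) + e_vdw(r_pair) + e_coul(r_pair)
    e_total(1.55, 1.95, 3.8)   # total potential of one tiny configuration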
The potential energy is a function of the atomic coordinates of the molecule; the energies obtained at different atomic coordinates constitute the potential energy surface (PES).

The lower the potential energy, the more stable the conformation and the greater its probability of occurring in the system; conversely, the higher the potential energy, the less stable the conformation and the smaller its probability of occurring.

A potential energy surface usually contains many minima; the point corresponding to the lowest energy is called the global energy minimum and corresponds to the most stable conformation of the molecule.

The process of finding the lowest minimum on the potential energy surface is called energy minimization; the structure it yields is the optimized structure. The energy minimization process is thus also a structure optimization process.

When optimizing a structure with a minimization algorithm, one should avoid getting trapped in a local minimum, that is, avoid obtaining only a relatively stable conformation near one particular starting conformation, and instead strive for the global minimum, i.e., global optimization.

Molecular mechanics minimization algorithms can perform energy optimization quickly, but their limitation is that they easily fall into local wells, so what they find is often a local minimum; to seek the global minimum one must resort to systematic search methods or molecular dynamics.
The Discover module of Materials Studio provides four energy minimization algorithms. 1) Steepest Descent: a classic method that minimizes a multivariate nonlinear objective function by iterative differentiation; at each step a displacement is added to the coordinates in the direction opposite to the energy gradient. Because the negative gradient of the energy function is the direction in which the objective function decreases most steeply, the method is called steepest descent. (A minimal numerical sketch follows below.)
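The sketch below illustrates the idea, not Discover's implementation; the one-dimensional double-well potential and fixed step size are assumptions chosen to make the local-minimum problem visible:

    # Steepest descent on a double-well potential E(x) = x^4 - 2x^2 + 0.2x.
    # The minimizer slides downhill into whichever well it starts in,
    # illustrating why steepest descent finds local, not global, minima.
    E    <- function(x) x^4 - 2*x^2 + 0.2*x
    dEdx <- function(x) 4*x^3 - 4*x + 0.2
    descend <- function(x, step = 0.01, tol = 1e-8, maxit = 10000) {
      for (i in seq_len(maxit)) {
        g <- dEdx(x)
        if (abs(g) < tol) break
        x <- x - step * g          # displace opposite to the energy gradient
      }
      x
    }
    descend( 1.5)   # -> about  0.97 (right-hand well, only a local minimum)
    descend(-1.5)   # -> about -1.02 (left-hand well, the global minimum here)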
An Introduction to the Conjugate Gradient Method Without the Agonizing Pain

Jonathan Richard Shewchuk
March 7, 1994
CMU-CS-94-125

School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213

Abstract

The Conjugate Gradient Method is the most prominent iterative method for solving sparse systems of linear equations. Unfortunately, many textbook treatments of the topic are written so that even their own authors would be mystified, if they bothered to read their own writing. For this reason, an understanding of the method has been reserved for the elite brilliant few who have painstakingly decoded the mumblings of their forebears. Nevertheless, the Conjugate Gradient Method is a composite of simple, elegant ideas that almost anyone can understand. Of course, a reader as intelligent as yourself will learn them almost effortlessly.

The idea of quadratic forms is introduced and used to derive the methods of Steepest Descent, Conjugate Directions, and Conjugate Gradients. Eigenvectors are explained and used to examine the convergence of the Jacobi Method, Steepest Descent, and Conjugate Gradients. Other topics include preconditioning and the nonlinear Conjugate Gradient Method. I have taken pains to make this article easy to read. Sixty-two illustrations are provided. Dense prose is avoided. Concepts are explained in several different ways. Most equations are coupled with an intuitive interpretation.

Supported in part by the Natural Sciences and Engineering Research Council of Canada under a 1967 Science and Engineering Scholarship and by the National Science Foundation under Grant ASC-9318163. The views and conclusions contained in this document are those of the author and should not be interpreted as representing the official policies, either express or implied, of NSERC, NSF, or the U.S. Government.

Keywords: conjugate gradient method, preconditioning, convergence analysis, agonizing pain

Contents

1. Introduction
2. Notation
3. The Quadratic Form
4. The Method of Steepest Descent
5. Thinking with Eigenvectors and Eigenvalues
   5.1. Eigen do it if I try
   5.2. Jacobi iterations
   5.3. A Concrete Example
6. Convergence Analysis of Steepest Descent
   6.1. Instant Results
   6.2. General Convergence
7. The Method of Conjugate Directions
   7.1. Conjugacy
   7.2. Gram-Schmidt Conjugation
   7.3. Properties of the Residual
8. The Method of Conjugate Gradients
9. Convergence Analysis of Conjugate Gradients
   9.1. Optimality of the Error Term
   9.2. Chebyshev Polynomials
10. Complexity
11. Starting and Stopping
    11.1. Starting
    11.2. Stopping
12. Preconditioning
13. Conjugate Gradients on the Normal Equations
14. The Nonlinear Conjugate Gradient Method
    14.1. Outline of the Nonlinear Conjugate Gradient Method
    14.2. General Line Search
    14.3. Preconditioning
A. Notes
B. Canned Algorithms
   B1. Steepest Descent
   B2. Conjugate Gradients
   B3. Preconditioned Conjugate Gradients
   B4. Nonlinear Conjugate Gradients with Newton-Raphson and Fletcher-Reeves
   B5. Preconditioned Nonlinear Conjugate Gradients with Secant and Polak-Ribière
C. Ugly Proofs
   C1. The Solution to Ax = b Minimizes the Quadratic Form
   C2. A Symmetric Matrix Has n Orthogonal Eigenvectors
   C3. Convergence of Steepest Descent
   C4. Optimality of Chebyshev Polynomials

List of Figures

1. Sample two-dimensional linear system and its solution.
2. Graph of a quadratic form.
3. Contours of a quadratic form.
4. Gradient of a quadratic form.
5. Quadratic forms for positive-definite, negative-definite, singular, and indefinite matrices (four illustrations).
6. The method of Steepest Descent (four illustrations).
7. On the search line, f is minimized where the gradient is orthogonal to the search line.
8. Convergence of Steepest Descent.
9. Converging eigenvector.
10. Diverging eigenvector.
11. A vector can be expressed as a linear combination of eigenvectors.
12. The eigenvectors are directed along the axes of the paraboloid defined by the quadratic form.
13. Convergence of the Jacobi Method (six illustrations).
14. Steepest Descent converges to the exact solution on the first iteration if the error term is an eigenvector.
15. Steepest Descent converges to the exact solution on the first iteration if the eigenvalues are all equal.
16. The energy norm.
17. Convergence of Steepest Descent as a function of the slope and the condition number.
18. These four examples represent points near the corresponding four corners of the graph (four illustrations).
19. Starting points that give the worst convergence for Steepest Descent.
20. Convergence of Steepest Descent worsens as the condition number of the matrix increases.
21. Method of Orthogonal Directions.
22. A-orthogonal vectors (two illustrations).
23. The method of Conjugate Directions converges in n steps (two illustrations).
24. Gram-Schmidt conjugation.
25. The method of Conjugate Directions using the axial unit vectors, also known as Gaussian elimination.
26. The search directions span the same subspace as the vectors from which they were constructed.
27. Conjugate Gradient search directions span the same subspace as the residuals.
28. The method of Conjugate Gradients.
29. CG minimizes the energy norm of the error at each step.
30. The convergence of CG depends on how close a polynomial can be to zero on each eigenvalue (four illustrations).
31. Chebyshev polynomials of degree 2, 5, 10, and 49.
32. Optimal polynomial of degree 2.
33. Convergence of Conjugate Gradients as a function of condition number.
34. Number of iterations of Steepest Descent required to match one iteration of CG.
35. Contour lines of the quadratic form of the diagonally preconditioned sample problem.
36. Convergence of the nonlinear Conjugate Gradient Method (four illustrations).
37. Nonlinear CG can be more effective with periodic restarts.
38. The Newton-Raphson method.
39. The Secant method.
40. The preconditioned nonlinear Conjugate Gradient Method.

About this Article

An electronic copy of this article is available by anonymous FTP (IP address 128.2.222.79) under the filename 1994/CMU-CS-94-125.ps. A PostScript file containing full-page copies of the figures herein, suitable for transparencies, is available electronically on request from the author (jrs@). Most of the illustrations were created using Mathematica.

© 1994 by Jonathan Richard Shewchuk. This article may be freely duplicated and distributed so long as no consideration is received in return, and this copyright notice remains intact.

This guide was created to help students learn Conjugate Gradient Methods as easily as possible. Please mail me (jrs@) comments, corrections, and any intuitions I might have missed; some of these will be incorporated into a second edition. I am particularly interested in hearing about use of this guide for classroom teaching.

For those who wish to learn more about iterative methods, I recommend William L. Briggs' "A Multigrid Tutorial" [2], one of the best-written mathematical books I have read.

Special thanks to Omar Ghattas, who taught me much of what I know about numerical methods, and provided me with extensive comments on the first draft of this article. Thanks also to David O'Hallaron, James Stichnoth, and Daniel Tunkelang for their comments.
To help you skip chapters, here's a dependence graph of the sections:

This article is dedicated to every mathematician who uses figures as abundantly as I have herein.

1. Introduction

When I decided to learn the Conjugate Gradient Method (henceforth, CG), I read four different descriptions, which I shall politely not identify. I understood none of them. By the end of the last, I swore in my rage that if ever I unlocked the secrets of CG, I should guard them as jealously as my intellectual ancestors. Foolishly, I wrote this article instead.

CG is the most popular iterative method for solving large systems of linear equations. CG is effective for systems of the form

    Ax = b                                                       (1)

where x is an unknown vector, b is a known vector, and A is a known, square, symmetric, positive-definite (or positive-indefinite) matrix. (Don't worry if you've forgotten what "positive-definite" means; we shall review it.) These systems arise in many important settings, such as finite difference and finite element methods for solving partial differential equations, structural analysis, circuit analysis, and math homework.

Iterative methods like CG are suited for use with sparse matrices. If A is dense, your best course of action is probably to factor A and solve the equation by backsubstitution. The time spent factoring a dense A is roughly equivalent to the time spent solving the system iteratively; and once A is factored, the system can be backsolved quickly for multiple values of b. Compare this dense matrix with a sparse matrix of larger size that fills the same amount of memory. The triangular factors of a sparse A usually have many more nonzero elements than A itself. Factoring may be impossible due to limited memory, and will be time-consuming as well; even the backsolving step may be slower than iterative solution. On the other hand, most iterative methods are memory-efficient and run quickly with sparse matrices.

I assume that you have taken a first course in linear algebra, and that you have a solid understanding of matrix multiplication and linear independence, although you probably don't remember what those eigenthingies were all about. From this foundation, I shall build the edifice of CG as clearly as I can.

2. Notation

Before we begin, a few definitions and notes on notation are in order. With a few exceptions, I shall use capital letters to denote matrices, lower case letters to denote vectors, and Greek letters to denote scalars. A is an n x n matrix, and x and b are vectors, that is, n x 1 matrices. Equation 1, written out fully, looks like this:

    [ A11 A12 ... A1n ] [ x1 ]   [ b1 ]
    [ A21 A22 ... A2n ] [ x2 ]   [ b2 ]
    [ ...         ... ] [ .. ] = [ .. ]
    [ An1 An2 ... Ann ] [ xn ]   [ bn ]

The inner product of two vectors is written x^T y, and represents the scalar sum over i of x_i y_i. Note that x^T y = y^T x. If x and y are orthogonal, then x^T y = 0. In general, expressions that reduce to 1 x 1 matrices, such as x^T y and x^T A x, are treated as scalar values.

Figure 1: Sample two-dimensional linear system. The solution lies at the intersection of the lines.

A matrix A is positive-definite if, for every nonzero vector x,

    x^T A x > 0                                                  (2)

This may mean little to you, but don't feel bad; it's not a very intuitive idea, and it's hard to imagine how a matrix that is positive-definite might look differently from one that isn't. We will get a feeling for what positive-definiteness is about when we see how it affects the shape of quadratic forms.

Finally, don't forget the important basic identities (AB)^T = B^T A^T and (AB)^{-1} = B^{-1} A^{-1}.

3. The Quadratic Form

A quadratic form is simply a scalar, quadratic function of a vector with the form

    f(x) = (1/2) x^T A x - b^T x + c                             (3)

where A is a matrix, x and b are vectors, and c is a scalar constant. I shall show shortly that if A is symmetric and positive-definite, f(x) is minimized by the solution to Ax = b.
Throughout this paper, I will demonstrate ideas with the simple sample problem

    A = [ 3 2 ]    b = [  2 ]    c = 0                           (4)
        [ 2 6 ],       [ -8 ],

The system Ax = b is illustrated in Figure 1. In general, the solution x lies at the intersection point of n hyperplanes, each having dimension n - 1. The solution in this case is x = (2, -2)^T. The corresponding quadratic form f(x) appears in Figure 2. A contour plot of f(x) is illustrated in Figure 3. Because A is positive-definite, the surface defined by f(x) is shaped like a paraboloid bowl. (I'll have more to say about this in a moment.)

Figure 2: Graph of a quadratic form f(x). The minimum point of this surface is the solution to Ax = b.

Figure 3: Contours of the quadratic form. Each ellipsoidal curve has constant f(x).

The gradient of a quadratic form is defined to be

    f'(x) = [ df/dx_1, df/dx_2, ..., df/dx_n ]^T                 (5)

The gradient is a vector field that, for a given point x, points in the direction of greatest increase of f(x). Figure 4 illustrates the gradient vectors for Equation 3 with the constants given in Equation 4. At the bottom of the paraboloid bowl, the gradient is zero. One can minimize f(x) by setting f'(x) equal to zero.

Figure 4: Gradient f'(x) of the quadratic form. For every x, the gradient points in the direction of steepest increase of f(x), and is orthogonal to the contour lines.

With a little bit of tedious math, one can apply Equation 5 to Equation 3, and derive

    f'(x) = (1/2) A^T x + (1/2) A x - b                          (6)

If A is symmetric, this equation reduces to

    f'(x) = A x - b                                              (7)

Setting the gradient to zero, we obtain Equation 1, the linear system we wish to solve. Therefore, the solution to Ax = b is a critical point of f(x). If A is positive-definite as well as symmetric, then this solution is a minimum of f(x), so Ax = b can be solved by finding an x that minimizes f(x). (If A is not symmetric, then Equation 6 hints that CG will find a solution to the system (1/2)(A^T + A) x = b. Note that (1/2)(A^T + A) is symmetric.)

Figure 5: (a) Quadratic form for a positive-definite matrix. (b) For a negative-definite matrix. (c) For a singular (and positive-indefinite) matrix. A line that runs through the bottom of the valley is the set of solutions. (d) For an indefinite matrix. Because the solution is a saddle point, Steepest Descent and CG will not work. In three dimensions or higher, a singular matrix can also have a saddle.

Why do symmetric positive-definite matrices have this nice property? Consider the relationship between f at some arbitrary point p and at the solution point x = A^{-1} b. From Equation 3 one can show (Appendix C1) that if A is symmetric (be it positive-definite or not),

    f(p) = f(x) + (1/2) (p - x)^T A (p - x)                      (8)

If A is positive-definite as well, then by Property 2, the latter term is positive for all p not equal to x. It follows that x is a global minimum of f.

The fact that f(x) is a paraboloid is our best intuition of what it means for a matrix to be positive-definite. If A is not positive-definite, there are several other possibilities. A could be negative-definite, the result of negating a positive-definite matrix (see Figure 2, but hold it upside-down). A might be singular, in which case no solution is unique; the set of solutions is a line or hyperplane having a uniform value for f. If A is none of the above, then x is a saddle point, and techniques like Steepest Descent and CG will likely fail. Figure 5 demonstrates the possibilities. The value of c determines where the minimum point of the paraboloid lies, but does not affect the paraboloid's shape.

Why go to the trouble of converting the linear system into a tougher-looking problem? The methods under study, Steepest Descent and CG, were developed and are intuitively understood in terms of minimization problems like Figure 2, not in terms of intersecting hyperplanes such as Figure 1.
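To make the sample problem concrete, here is a small R check (added here, not part of the article) that the gradient of Equation 3 vanishes at the solution of Equation 4:

    # Quadratic form f(x) = 0.5 x'Ax - b'x + c and gradient f'(x) = Ax - b
    # for the sample problem of Equation 4 (c = 0).
    A <- matrix(c(3, 2, 2, 6), nrow = 2)
    b <- c(2, -8)
    f     <- function(x) 0.5 * sum(x * (A %*% x)) - sum(b * x)
    fgrad <- function(x) as.vector(A %*% x - b)
    x_star <- solve(A, b)   # direct solution of Ax = b: (2, -2)
    fgrad(x_star)           # ~ c(0, 0): the solution is a critical point of f
    f(x_star)               # the global minimum value of f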
4. The Method of Steepest Descent

In the method of Steepest Descent, we start at an arbitrary point x_(0) and slide down to the bottom of the paraboloid. We take a series of steps x_(1), x_(2), ... until we are satisfied that we are close enough to the solution x.

When we take a step, we choose the direction in which f decreases most quickly, which is the direction opposite f'(x_(i)). According to Equation 7, this direction is -f'(x_(i)) = b - A x_(i).

Allow me to introduce a few definitions, which you should memorize. The error e_(i) = x_(i) - x is a vector that indicates how far we are from the solution. The residual r_(i) = b - A x_(i) indicates how far we are from the correct value of b. It is easy to see that r_(i) = -A e_(i), and you should think of the residual as being the error transformed by A into the same space as b. More importantly, r_(i) = -f'(x_(i)), and you should also think of the residual as the direction of steepest descent. For nonlinear problems, discussed in Section 14, only the latter definition applies. So remember, whenever you read "residual", think "direction of steepest descent."

Suppose we start at x_(0) = (-2, -2)^T. Our first step, along the direction of steepest descent, will fall somewhere on the solid line in Figure 6(a). In other words, we will choose a point

    x_(1) = x_(0) + alpha r_(0)                                  (9)

The question is, how big a step should we take?

A line search is a procedure that chooses alpha to minimize f along a line. Figure 6(b) illustrates this task: we are restricted to choosing a point on the intersection of the vertical plane and the paraboloid. Figure 6(c) is the parabola defined by the intersection of these surfaces. What is the value of alpha at the base of the parabola?

alpha minimizes f when the directional derivative (d/d alpha) f(x_(1)) is equal to zero. By the chain rule, (d/d alpha) f(x_(1)) = f'(x_(1))^T (d/d alpha) x_(1) = f'(x_(1))^T r_(0). Setting this expression to zero, we find that alpha should be chosen so that r_(0) and f'(x_(1)) are orthogonal (see Figure 6(d)).

There is an intuitive reason why we should expect these vectors to be orthogonal at the minimum. Figure 7 shows the gradient vectors at various points along the search line. The slope of the parabola (Figure 6(c)) at any point is equal to the magnitude of the projection of the gradient onto the line (Figure 7, dotted arrows). These projections represent the rate of increase of f as one traverses the search line. f is minimized where the projection is zero, where the gradient is orthogonal to the search line.

To determine alpha, note that f'(x_(1)) = -r_(1), and we have

    r_(1)^T r_(0) = 0
    (b - A x_(1))^T r_(0) = 0
    (b - A (x_(0) + alpha r_(0)))^T r_(0) = 0
    (b - A x_(0))^T r_(0) = alpha (A r_(0))^T r_(0)
    r_(0)^T r_(0) = alpha r_(0)^T A r_(0)
    alpha = (r_(0)^T r_(0)) / (r_(0)^T A r_(0))

Figure 6: The method of Steepest Descent. (a) Starting at (-2, -2)^T, take a step in the direction of steepest descent of f. (b) Find the point on the intersection of these two surfaces that minimizes f. (c) This parabola is the intersection of surfaces. The bottommost point is our target. (d) The gradient at the bottommost point is orthogonal to the gradient of the previous step.

Figure 7: The gradient f' is shown at several locations along the search line (solid arrows). Each gradient's projection onto the line is also shown (dotted arrows). The gradient vectors represent the direction of steepest increase of f, and the projections represent the rate of increase as one traverses the search line. On the search line, f is minimized where the gradient is orthogonal to the search line.

Figure 8: Here, the method of Steepest Descent starts at (-2, -2)^T and converges at (2, -2)^T.

Putting it all together, the method of Steepest Descent is:

    r_(i) = b - A x_(i)                                          (10)
    alpha_(i) = (r_(i)^T r_(i)) / (r_(i)^T A r_(i))              (11)
    x_(i+1) = x_(i) + alpha_(i) r_(i)                            (12)

The example is run until it converges in Figure 8. Note that the gradient is always orthogonal to the gradient of the previous step.

The algorithm, as written above, requires two matrix-vector multiplications per iteration. In general, the computational cost of iterative algorithms is dominated by matrix-vector products; fortunately, one can be eliminated. By premultiplying both sides of Equation 12 by -A and adding b, we have

    r_(i+1) = r_(i) - alpha_(i) A r_(i)                          (13)

Although Equation 10 is needed to compute r_(0), Equation 13 can be used for every iteration thereafter. The product A r_(i), which occurs in both Equations 11 and 13, need only be computed once. The disadvantage of using this recurrence is that the sequence defined by Equation 13 is generated without any feedback from the value of x_(i), so that an accumulation of floating point roundoff error may cause x_(i) to converge to some point near x. This effect can be avoided by periodically using Equation 10 to recompute the correct residual.
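The loop below is a direct R transcription of Equations 10 through 13 (a sketch in the spirit of Appendix B1, not the author's code). It uses the cheap recurrence of Equation 13 and recomputes the exact residual every 50 iterations to flush accumulated roundoff, as the text suggests:

    # Steepest Descent for Ax = b, A symmetric positive-definite.
    steepest_descent <- function(A, b, x = rep(0, length(b)),
                                 tol = 1e-10, maxit = 10000) {
      r <- b - A %*% x                      # Equation 10: exact residual
      for (i in seq_len(maxit)) {
        Ar    <- A %*% r
        alpha <- sum(r * r) / sum(r * Ar)   # Equation 11: optimal step
        x     <- x + alpha * r              # Equation 12: take the step
        r <- if (i %% 50 == 0) b - A %*% x  # periodic correction (Eq. 10)
             else r - alpha * Ar            # Equation 13: recurrence
        if (sqrt(sum(r * r)) < tol) break
      }
      as.vector(x)
    }
    A <- matrix(c(3, 2, 2, 6), nrow = 2)
    b <- c(2, -8)
    steepest_descent(A, b)   # converges to (2, -2), as in Figure 8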
Before analyzing the convergence of Steepest Descent, I must digress to ensure that you have a solid understanding of eigenvectors.

5. Thinking with Eigenvectors and Eigenvalues

After my one course in linear algebra, I knew eigenvectors and eigenvalues like the back of my head. If your instructor was anything like mine, you recall solving problems involving eigendoohickeys, but you never really understood them. Unfortunately, without an intuitive grasp of them, CG won't make sense either. If you're already eigentalented, feel free to skip this section.

Eigenvectors are used primarily as an analysis tool; Steepest Descent and CG do not calculate the value of any eigenvectors as part of the algorithm.[1]

5.1. Eigen do it if I try

An eigenvector v of a matrix B is a nonzero vector that does not rotate when B is applied to it (except perhaps to point in precisely the opposite direction). v may change length or reverse its direction, but it won't turn sideways. In other words, there is some scalar constant lambda such that B v = lambda v. The value lambda is an eigenvalue of B. For any constant alpha, the vector alpha v is also an eigenvector with eigenvalue lambda, because B (alpha v) = alpha B v = lambda alpha v. In other words, if you scale an eigenvector, it's still an eigenvector.

Why should you care? Iterative methods often depend on applying B to a vector over and over again.
When B is repeatedly applied to an eigenvector v, one of two things can happen. If |lambda| < 1, then B^i v = lambda^i v will vanish as i approaches infinity (Figure 9). If |lambda| > 1, then B^i v will grow to infinity (Figure 10). Each time B is applied, the vector grows or shrinks according to the value of |lambda|.

Figure 9: v is an eigenvector of B with a corresponding eigenvalue of -0.5. As i increases, B^i v converges to zero.

Figure 10: Here, v has a corresponding eigenvalue of 2. As i increases, B^i v diverges to infinity.

[1] However, there are practical applications for eigenvectors. The eigenvectors of the stiffness matrix associated with a discretized structure of uniform density represent the natural modes of vibration of the structure being studied. For instance, the eigenvectors of the stiffness matrix associated with a one-dimensional uniformly-spaced mesh are sine waves, and to express vibrations as a linear combination of these eigenvectors is equivalent to performing a discrete Fourier transform.

If B is nonsingular, then there exists a set of n linearly independent eigenvectors of B, denoted v_1, v_2, ..., v_n. This set is not unique, because each eigenvector can be scaled by an arbitrary nonzero constant. Each eigenvector has a corresponding eigenvalue, denoted lambda_1, lambda_2, ..., lambda_n. These are uniquely defined for a given matrix. The eigenvalues may or may not be equal to each other; for instance, the eigenvalues of the identity matrix I are all one, and every nonzero vector is an eigenvector of I.

What if B is applied to a vector that is not an eigenvector? A very important skill in understanding linear algebra, the skill this section is written to teach, is to think of a vector as a sum of other vectors whose behavior is understood. Consider that the set of eigenvectors forms a basis (because a nonsingular B has n eigenvectors that are linearly independent). Any n-dimensional vector can be expressed as a linear combination of eigenvectors, and because matrix multiplication is distributive, one can examine the effect of B on each eigenvector separately.

In Figure 11, a vector x is illustrated as a sum of two eigenvectors v_1 and v_2. Applying B to x is equivalent to applying B to the eigenvectors, and summing the result. Upon repeated application, we have B^i x = B^i v_1 + B^i v_2 = lambda_1^i v_1 + lambda_2^i v_2. If the magnitudes of all the eigenvalues are smaller than one, B^i x will converge to zero, because the eigenvectors that compose x converge to zero when B is repeatedly applied. If one of the eigenvalues has magnitude greater than one, x will diverge to infinity. This is why numerical analysts attach importance to the spectral radius of a matrix:

    rho(B) = max |lambda_i|,   lambda_i an eigenvalue of B.

If we want x to converge to zero quickly, rho(B) should be less than one, and preferably as small as possible.

Figure 11: The vector x (solid arrow) can be expressed as a linear combination of eigenvectors (dashed arrows), whose associated eigenvalues are lambda_1 = 0.7 and lambda_2 = -2. The effect of repeatedly applying B to x is best understood by examining the effect of B on each eigenvector. When B is repeatedly applied, one eigenvector converges to zero while the other diverges; hence, B^i x also diverges.

Here's a useful fact: the eigenvalues of a positive-definite matrix are all positive. This fact can be proven from the definition of eigenvalue: B v = lambda v, so v^T B v = lambda v^T v. By the definition of positive-definite, the left-hand term is positive (for nonzero v). Hence, lambda must be positive also.

5.2. Jacobi iterations

Of course, a procedure that always converges to zero isn't going to help you attract friends. Consider a more useful procedure: the Jacobi Method for solving Ax = b.
zero;and,whose diagonal elements are zero,and whose off-diagonal elements are identical to those of.Thus,. We derive the Jacobi Method:11where11(14) Note that because is diagonal,it is easy to invert.This identity can be converted into an iterative method by forming the recurrence115 Given a starting vector0,this formula generates a sequence of vectors.Our hope is that each successive vector will be closer to the solution than the last.is called a stationary point of Equation15,because if ,then1will also equal.Now,this derivation may seem quite arbitrary to you,and you’re right.We could have formed any number of identities for instead of Equation14.In fact,simply by splitting differently—that is, by choosing a different and—we could have derived the Gauß-Seidel method,or the method of Successive Over-Relaxation(SOR).Our hope is that we have chosen a splitting for which has a small spectral radius.I chose the Jacobi splitting arbitrarily for simplicity.Suppose we start with some arbitrary vector0.For each iteration,we apply to this vector,then add to the result.What does each iteration do?Again,apply the principle of thinking of a vector as a sum of other,well-understood vectors.Express each iterate as the sum of the exact solution and the error term.Then,Equation15becomes1(by Equation14)1(16) Each iteration does not affect the“correct part”of(because is a stationary point);but each iteration does affect the error term.It is apparent from Equation16that if1,then the error term will converge to zero as approaches infinity.Hence,the initial vector0has no effect on the inevitable outcome!Of course,the choice of0does affect the number of iterations required to converge to within a given tolerance.However,its effect is less important than that of the spectral radius,which determines the speed of convergence.Suppose that is the eigenvector of with the largest eigenvalue (so that).If the initial error0,expressed as a linear combination of eigenvectors,includes a component in the direction of,this component will be the slowest to converge.The convergence of the Jacobi Method can be described as follows:where12is the Euclidean norm(length)of.(In fact,the inequality holds for any norm.)12Jonathan Richard Shewchuk12Figure12:The eigenvectors of are directed along the axes of the paraboloid defined by the quadratic form.Each eigenvector is labeled with its associated eigenvalue.Each eigenvalue is proportional to the steepness of the corresponding slope.The convergence of the Jacobi Method depends on,which depends on.Unfortunately,Jacobi does not converge for every,or even for every positive-definite.5.3.A Concrete ExampleTo demonstrate these ideas,I shall solve the system specified by Equation4.First,we need a method of finding eigenvalues and eigenvectors.By definition,for any eigenvector with eigenvalue,Eigenvectors are nonzero,so must be singular.Then,det0The determinant of is called the characteristic polynomial.It is an-degree polynomial in whose roots are the set of eigenvalues.The characteristic polynomial of(from Equation4)isdet3226291472and the eigenvalues are7and2.Tofind the eigenvector associated with7,42 21120 41220Convergence Analysis of Steepest Descent13 Any solution to this equation is an eigenvector;say,12.By the same method,wefind that21 is an eigenvector corresponding to the eigenvalue2.In Figure12,we see that these eigenvectors coincide with the axes of the familiar ellipsoid,and that a larger eigenvalue corresponds to a steeper slope.(Negative eigenvalues indicate that decreases 
along the axis,as in Figures5(b)and5(d).)Now,let’s see the Jacobi Method in ing the constants specified by Equation4,we have11301602201301628023132343The eigenvectors of are21with eigenvalue23,and21with eigenvalue23. These are graphed in Figure13(a);note that they do not coincide with the eigenvectors of,and are not related to the axes of the paraboloid.Figure13(b)shows the convergence of the Jacobi method.The mysterious path the algorithm takes can be understood by watching the eigenvector components of each successive error term(Figures13(c),(d), and(e)).Figure13(f)plots the eigenvector components as arrowheads.These are converging normally at the rate defined by their eigenvalues,as in Figure11.I hope that this section has convinced you that eigenvectors are useful tools,and not just bizarre torture devices inflicted upon you by your professors for the pleasure of watching you suffer(although the latter isa nice fringe benefit).6.Convergence Analysis of Steepest Descent6.1.Instant ResultsTo understand the convergence of Steepest Descent,let’sfirst consider the case where is an eigenvector with eigenvalue.Then,the residual is also an eigenvector.Equation12gives1Figure14demonstrates why it takes only one step to converge to the exact solution.The point lies on one of the axes of the ellipsoid,and so the residual points directly to the center of the ellipsoid.Choosing 1gives us instant convergence.For a more general analysis,we must express as a linear combination of eigenvectors,and we shall furthermore require these eigenvectors to be orthonormal.It is proven in Appendix C2that ifis nonsingular and symmetric,there exists a set of orthogonal eigenvectors of.As we can scale。
Package 'cPCG'                                                October 12, 2022

Type: Package
Title: Efficient and Customized Preconditioned Conjugate Gradient Method for Solving System of Linear Equations
Version: 1.0
Date: 2018-12-30
Author: Yongwen Zhuang
Maintainer: Yongwen Zhuang <******************>
Description: Solves system of linear equations using (preconditioned) conjugate gradient algorithm, with improved efficiency using Armadillo templated 'C++' linear algebra library, and flexibility for user-specified preconditioning method. Please check <https:///styvon/cPCG> for latest updates.
Depends: R (>= 3.0.0)
License: GPL (>= 2)
Imports: Rcpp (>= 0.12.19)
LinkingTo: Rcpp, RcppArmadillo
RoxygenNote: 6.1.1
Encoding: UTF-8
Suggests: knitr, rmarkdown
VignetteBuilder: knitr
NeedsCompilation: yes
Repository: CRAN
Date/Publication: 2019-01-11 17:00:10 UTC

R topics documented: cPCG-package, cgsolve, icc, pcgsolve

cPCG-package: Efficient and Customized Preconditioned Conjugate Gradient Method for Solving System of Linear Equations

Description

Solves system of linear equations using (preconditioned) conjugate gradient algorithm, with improved efficiency using Armadillo templated 'C++' linear algebra library, and flexibility for user-specified preconditioning method. Please check <https:///styvon/cPCG> for latest updates.

Details

Functions in this package serve the purpose of solving for x in Ax = b, where A is a symmetric and positive definite matrix and b is a column vector.

To improve scalability of conjugate gradient methods for larger matrices, the Armadillo templated C++ linear algebra library is used for the implementation. The package also provides flexibility to have user-specified preconditioner options to cater for different optimization needs.

Index of help topics:

cPCG-package   Efficient and Customized Preconditioned Conjugate Gradient Method for Solving System of Linear Equations
cgsolve        Conjugate gradient method
icc            Incomplete Cholesky Factorization
pcgsolve       Preconditioned conjugate gradient method

Author(s)

Yongwen Zhuang

References

[1] Reeves Fletcher and Colin M Reeves. "Function minimization by conjugate gradients". In: The Computer Journal 7.2 (1964), pp. 149-154.
[2] David S Kershaw. "The incomplete Cholesky-conjugate gradient method for the iterative solution of systems of linear equations". In: Journal of Computational Physics 26.1 (1978), pp. 43-65.
[3] Yousef Saad. Iterative Methods for Sparse Linear Systems. Vol. 82. SIAM, 2003.
[4] David Young. "Iterative methods for solving partial difference equations of elliptic type". In: Transactions of the American Mathematical Society 76.1 (1954), pp. 92-111.

Examples

# generate test data
test_A <- matrix(c(4, 1, 1, 3), ncol = 2)
test_b <- matrix(1:2, ncol = 1)

# conjugate gradient method solver
cgsolve(test_A, test_b, 1e-6, 1000)

# preconditioned conjugate gradient method solver,
# with incomplete Cholesky factorization as preconditioner
pcgsolve(test_A, test_b, "ICC")

cgsolve: Conjugate gradient method

Description

Conjugate gradient method for solving system of linear equations Ax = b, where A is symmetric and positive definite and b is a column vector.
Usage

cgsolve(A, b, tol = 1e-6, maxIter = 1000)

Arguments

A        matrix, symmetric and positive definite.
b        vector, with same dimension as number of rows of A.
tol      numeric, threshold for convergence, default is 1e-6.
maxIter  numeric, maximum iteration, default is 1000.

Details

The idea of the conjugate gradient method is to find a set of mutually conjugate directions for the unconstrained problem

    argmin_x f(x)

where f(x) = 0.5 x^T A x - b^T x + z and z is a constant. The problem is equivalent to solving Ax = b.

This function implements an iterative procedure to reduce the number of matrix-vector multiplications [1]. The conjugate gradient method improves memory efficiency and computational complexity, especially when A is relatively sparse.

Value

Returns a vector representing solution x.

Warning

Users need to check that input matrix A is symmetric and positive definite before applying the function.

References

[1] Yousef Saad. Iterative Methods for Sparse Linear Systems. Vol. 82. SIAM, 2003.

See Also

pcgsolve

Examples

## Not run:
test_A <- matrix(c(4, 1, 1, 3), ncol = 2)
test_b <- matrix(1:2, ncol = 1)
cgsolve(test_A, test_b, 1e-6, 1000)
## End(Not run)

icc: Incomplete Cholesky Factorization

Description

Incomplete Cholesky factorization method to generate a preconditioning matrix for the conjugate gradient method.

Usage

icc(A)

Arguments

A  matrix, symmetric and positive definite.

Details

Performs incomplete Cholesky factorization on the input matrix A; the output matrix is used for preconditioning in pcgsolve() if "ICC" is specified as the preconditioner.

Value

Returns a matrix after incomplete Cholesky factorization.

Warning

Users need to check that input matrix A is symmetric and positive definite before applying the function.

See Also

pcgsolve

Examples

## Not run:
test_A <- matrix(c(4, 1, 1, 3), ncol = 2)
out <- icc(test_A)
## End(Not run)

pcgsolve: Preconditioned conjugate gradient method

Description

Preconditioned conjugate gradient method for solving system of linear equations Ax = b, where A is symmetric and positive definite and b is a column vector.

Usage

pcgsolve(A, b, preconditioner = "Jacobi", tol = 1e-6, maxIter = 1000)

Arguments

A               matrix, symmetric and positive definite.
b               vector, with same dimension as number of rows of A.
preconditioner  string, method for preconditioning: "Jacobi" (default), "SSOR", or "ICC".
tol             numeric, threshold for convergence, default is 1e-6.
maxIter         numeric, maximum iteration, default is 1000.
iter-ativesolution of systems of linear equations”.In:Journal of computational physics26.1(1978),pp.43–65.See AlsocgsolveExamples##Not run:test_A<-matrix(c(4,1,1,3),ncol=2)test_b<-matrix(1:2,ncol=1)pcgsolve(test_A,test_b,"ICC")##End(Not run)Index∗methodscgsolve,3icc,4pcgsolve,5∗optimizecgsolve,3pcgsolve,5∗packagecPCG-package,2cgsolve,3,6cPCG(cPCG-package),2cPCG-package,2icc,4pcgsolve,4,5,5preconditioner(pcgsolve),57。
Analysis of principle errors and data-processing methods for thin-film resistance thermometers

Zeng Lei; Shi You'an; Kong Rongzong; He Lixin; Gui Yewei

Abstract: Thin film resistance thermometers are usually used in shock wave tunnels to measure the heat flux. A post-processing method based on the one-dimensional inverse heat conduction problem is studied, and the coupling relationship between temperature rise and heat flux is considered, to improve the data precision of heat flux measurement. One-dimensional inverse heat conduction analysis is used to convert the voltage signal to heat flux. The temperature distribution of the platinum film is calculated by the finite element method based on the three-dimensional and the one-dimensional semi-infinite heat conduction equations respectively, and the difference is studied to correct the post-processing error. The post-processing method is validated by comparison with the Cook-Felderman and thermoelectric analog network methods, and provides a "software" way to improve the precision of heat flux measurement.

Abstract (Chinese version, translated): The thin-film resistance thermometer is a sensor commonly used in hypersonic heat-transfer tests, mostly in shock tunnels. Improving the post-processing of its measurement data, analyzing its principle errors, and proposing correction methods can further improve heat flux measurement accuracy and provide reliable data for thermal protection design. Applying three-dimensional heat conduction theory and accounting for the coupled influence of heat flux and temperature rise, the structural temperature rise of the thin-film resistance thermometer under aerodynamic heating was computed, yielding the temperature distribution within the platinum layer; comparing this with the surface temperature given by the one-dimensional semi-infinite simplified theory gives the principle error introduced by the model simplification. An inverse heat conduction method for computing surface heat flux from surface temperature rise was established; comparison with the classical Cook-Felderman formula and the thermoelectric analog network method led to a method for correcting the heat flux value, providing a feasible means of improving heat flux measurement accuracy.

Journal: Journal of Experimental Fluid Mechanics (实验流体力学)
Year (volume), issue: 2011, 25(1)
Pages: 5 (pp. 79-83)
Keywords: thin-film resistance thermometer; heat flux identification; data processing
Authors: Zeng Lei; Shi You'an; Kong Rongzong; He Lixin; Gui Yewei
Affiliation: China Aerodynamics Research and Development Center, Mianyang, Sichuan 621000
Language: Chinese
CLC number: TH765.2+3

0 Introduction

Ground-based wind tunnel testing of aerothermal environments is an important means of validating theoretical computation methods and providing a basis for thermal protection design.
Six Supervised Training Algorithms for Neural Networks

Neural networks can be trained in two ways: supervised and unsupervised. Propagation training algorithms are a very effective family of supervised training algorithms. The six propagation algorithms, two of which are summarized below, are:

1. Backpropagation Training
2. Quick Propagation Training (QPROP)
3. Manhattan Update Rule
4. Resilient Propagation Training (RPROP)
5. Scaled Conjugate Gradient (SCG)
6. Levenberg-Marquardt (LMA)

1. Backpropagation Training

Backpropagation is one of the oldest training methods for feedforward neural networks. It uses two parameters in conjunction with the gradient descent direction computed for the weights. The first parameter is the learning rate, essentially a percentage that determines how directly the gradient descent step should be applied to the weight matrix: the gradient is multiplied by the learning rate and then added to the weight matrix, slowly moving the weights toward values that produce a lower error.

One problem with backpropagation is that gradient descent seeks out local minima: points of low error that may not be the global minimum. The second parameter, momentum, helps backpropagation escape local minima. Momentum specifies to what degree the weight changes from the previous iteration should be reapplied in the current iteration; like the learning rate, it is essentially a percentage. To use momentum, the algorithm must keep track of the changes applied to the weight matrix in the previous iteration; these are reapplied in the current iteration, scaled by the momentum parameter. The momentum parameter is usually less than one, so the previous iteration's weight changes count for less than the changes computed for the current iteration. For example, a momentum of 0.5 causes 50% of the previous iteration's changes to be applied to the current weight matrix.

Summary: the earliest of these methods; requires a learning rate and a momentum parameter.

2. Manhattan Update Rule

One problem with the backpropagation algorithm is the degree to which the weights are changed: gradient descent can often apply too large a change to the weight matrix. The Manhattan Update Rule and resilient propagation algorithms use only the sign of the gradient and discard its magnitude; all that matters is whether the gradient is positive, negative, or near zero. Under the Manhattan Update Rule, this sign determines how each weight is updated: if the gradient is near zero, the weight is left unchanged; if it is positive, the weight is decreased by a fixed amount; if it is negative, the weight is increased by a fixed amount. This amount is a constant that must be supplied to the algorithm, for example 0.00001; Manhattan propagation generally requires a small value.

Summary: takes a single learning-rate-like constant; the change applied to each weight is a fixed amount, which addresses the problem that weight changes computed by plain gradient descent are often too large. Both update rules are sketched in code below.
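To make the two update rules concrete, here is a minimal R sketch of a single weight update under each rule. This is illustrative only, not any particular library's implementation, and the default parameter values are arbitrary.

# Backpropagation step with momentum: scale the downhill gradient direction
# by the learning rate and reapply a fraction of the previous change.
backprop_update <- function(w, grad, prev_delta, learn_rate = 0.7, momentum = 0.3) {
  delta <- -learn_rate * grad + momentum * prev_delta
  list(w = w + delta, delta = delta)   # keep delta for the next iteration
}

# Manhattan update rule: use only the sign of the gradient and move each
# weight by a fixed constant in the downhill direction.
manhattan_update <- function(w, grad, step = 1e-5) {
  w - step * sign(grad)
}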
The Conjugate Gradient Algorithm

The conjugate gradient (CG) algorithm is an iterative algorithm for solving systems of linear equations. Its defining feature is that each iteration searches along a direction conjugate to the previous ones, which accelerates convergence. Its principle, steps, and applications are described in detail below.

1. Principle: the algorithm is the classical conjugate gradient method, an optimization method for solving symmetric positive-definite linear systems. It exploits the special structure of such systems: by choosing appropriate search directions, it converts the problem into a sequence of independent one-dimensional optimizations, from which an approximate solution of the linear system is obtained.
2. Steps:

(1) Initialization: choose a starting point x0 and an initial direction d0; the step-length formula below assumes d0 is taken as the initial residual r0 = b - A x0.

(2) Iterative update: exploiting the conjugacy of the directions, solve the successive one-dimensional subproblems. The update formulas are:

    αk   = (rk^T rk) / (dk^T A dk)
    xk+1 = xk + αk dk
    rk+1 = rk - αk A dk
    βk+1 = (rk+1^T rk+1) / (rk^T rk)
    dk+1 = rk+1 + βk+1 dk

where A is the coefficient matrix of the linear system, rk is the current residual, dk is the search direction, and αk is the step length.

(3) Repeat step (2) until the convergence criterion (for example, a sufficiently small residual norm) is met; a direct R transcription of these steps is given below.
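The following sketch transcribes the steps above into R, assuming d0 is initialized to the residual r0 and working with dense matrices.

# Conjugate gradient iteration, transcribing the update formulas above.
cg <- function(A, b, x0 = rep(0, nrow(A)), tol = 1e-6, maxIter = 1000) {
  x <- x0
  r <- as.vector(b - A %*% x)      # r0, the initial residual
  d <- r                           # d0 = r0
  for (k in seq_len(maxIter)) {
    Ad    <- as.vector(A %*% d)
    alpha <- sum(r * r) / sum(d * Ad)         # alpha_k
    x     <- x + alpha * d                    # x_{k+1}
    r_new <- r - alpha * Ad                   # r_{k+1}
    if (sqrt(sum(r_new^2)) < tol) break       # convergence check on ||r||
    beta  <- sum(r_new * r_new) / sum(r * r)  # beta_{k+1}
    d     <- r_new + beta * d                 # d_{k+1}
    r     <- r_new
  }
  x
}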
3. Applications:

(1) Solving linear systems: the CG algorithm solves symmetric positive-definite linear systems efficiently and is especially well suited to large, sparse systems.

(2) Optimization: CG can be used to solve convex optimization problems, such as least-squares and maximum-likelihood estimation problems (a small least-squares sketch follows after this list).

(3) Training machine learning algorithms: CG can be used to train models such as logistic regression and support vector machines, improving convergence speed.

(4) Image processing: CG is widely used in image processing, for example in image restoration, image segmentation, and image compression.
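As a concrete instance of application (2), a least-squares problem argmin_x ||M x - y||^2 can be solved by applying CG to the normal equations M^T M x = M^T y, since M^T M is symmetric positive definite whenever M has full column rank. The sketch below reuses the cg function defined above.

# Least squares via CG on the normal equations (illustrative).
set.seed(1)
M <- matrix(rnorm(20), nrow = 10, ncol = 2)   # 10 observations, 2 coefficients
y <- rnorm(10)
x_ls <- cg(crossprod(M), crossprod(M, y), x0 = c(0, 0))
# x_ls should agree with qr.solve(M, y) up to the CG tolerance.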
Summary: the conjugate gradient algorithm is an iterative method for solving linear systems that exploits conjugate search directions to accelerate convergence. Its principle and steps are relatively simple, yet it has wide practical applications.
SIAM Review, Volume 38, Number 3: Contents

On Projection Algorithms for Solving Convex Feasibility Problems
  Heinz H. Bauschke, Jonathan M. Borwein (Pages 367-426)
Conjugate Gradient Methods for Toeplitz Systems
  Raymond H. Chan, Michael K. Ng (Pages 427-482)
Modelling the Stem Curve of a Palm in a Strong Wind
  C. Philipsen, S. Markvorsen, W. Kliem (Pages 483-484)
Erratum and Reformulation: On the Stem Curve of a Tall Palm in a Strong Wind
  Donald F. Winter (Pages 485-486)
Catastrophe Theory Implications for Rightsizing when Planning Interim Solutions for Progressing from a Partial Mainframe to Client-Server Distributed Databases: 3D Previewing of Possible Problems
  Barry S. Thornton, W. T. Hung (Pages 487-495)
The Use of Linear Programming in the Construction of Extremal Solutions to Linear Inverse Problems
  Stephen P. Huestis (Pages 496-506)
The Matrix Exponential
  I. E. Leonard (Pages 507-512)

SIAM Review, Volume 38, Number 4: Contents

The Decay of Axisymmetric Magnetic Fields: A Review of Cowling's Theorem
  Manuel Núñez (Pages 553-564)
Computer-Assisted Proofs in Analysis and Programming in Logic: A Case Study
  Hans Koch, Alain Schenkel, Peter Wittwer (Pages 565-604)
Lagrangian Aspects of the Kirchhoff Elastic Rod
  Joel Langer, David A. Singer (Pages 605-618)
Algorithmic Derivation of Centre Conditions
  J. M. Pearson, N. G. Lloyd, C. J. Christopher (Pages 619-636)
Vertex Latitudes on Ellipsoid Geodesics
  T. E. Wood (Pages 637-644)
Some Boundary Problems with Even or Odd Solutions
  William C. Waterhouse (Pages 645-646)
Optimal Intercept Course of Vessels to a Nonzero Range
  B. U. Nguyen, D. Nguyen (Pages 647-649)
The Potential Value of Saaty's Eigenvector Scaling Method for Short-Term Forecasting of Currency Exchange Rates
  Marvin D. Troutt, Hussein H. Elsaid (Pages 650-654)
Rectangular Parallelepipeds in Ellipsoids
  J. Duncan, D. Khavinson, H. Shapiro (Pages 655-657)
Accelerated Convergence in Newton's Method
  William F. Ford, James A. Pennline (Pages 658-659)
Geometric Properties of Factorable Planar Systems of Differential Equations
  Hassan Sedaghat (Pages 660-665)
Analytic Functions, Ideal Fluid Flow, and Bernoulli's Equation
  J. G. Simmonds (Pages 666-667)

SIAM Review, Volume 39, Number 1: Contents

Eigenmodes of Isospectral Drums
  Tobin A. Driscoll (Pages 1-17)
On Two Ways of Stabilizing the Hierarchical Basis Multilevel Methods
  Panayot S. Vassilevski (Pages 18-53)
The Mathematics of the Pentium Division Bug
  Alan Edelman (Pages 54-67)
New Conservation Laws for the Interaction of Nonlinear Waves
  A. M. Balk (Pages 68-94)
Classroom Note: The Inspection Paradox Inequality
  John E. Angus (Pages 95-97)
Classroom Note: Hoffman's Circle Untangled
  Jon Lee (Pages 98-105)
Classroom Note: An Analytic Center Manifold for a Simple Epidemiological Model
  Marc R. Roussel (Pages 106-109)
Classroom Note: Horizontal Circular Curves and Cubics
  François Dubeau (Pages 110-117)
Classroom Note: Optimum Spring-Damper Design for Mass Impact
  David A. Peters (Pages 118-122)
Problems and Solutions
  Cecil C. Rousseau and Otto G. Ruehr, Editors (Pages 123-141)
Book Reviews
  (Pages 142-178)

SIAM Review, Volume 39, Number 2: Contents

Solving a Polynomial Equation: Some History and Recent Progress
  Victor Y. Pan (Pages 187-220)
A Class of Codimension-Two Free Boundary Problems
  S. D. Howison, J. D. Morgan, J. R. Ockendon (Pages 221-253)
Computing an Eigenvector with Inverse Iteration
  Ilse C. F. Ipsen (Pages 254-291)
Classroom Note: On the Limits of the Lagrange Multiplier Rule
  Luis A. Fernández (Pages 292-297)
Classroom Note: The Lagrange-Charpit Method
  Manuel Delgado (Pages 298-304)
Classroom Note: A Unified Elementary Approach to Canonical Forms of Matrices
  John Karro, Chi-Kwong Li (Pages 305-309)
Classroom Note: Putting Constraints in Optimization for First-Year Calculus Students
  Kelly Black (Pages 310-312)
Classroom Note: Some Eigenvalue Properties of Persymmetric Matrices
  Russell M. Reid (Pages 313-316)
Problems and Solutions
  Cecil C. Rousseau and Otto G. Ruehr, Editors (Pages 317-332)
Book Reviews
  R. B. Kellogg, Editor (Pages 333-374)

SIAM Review, Volume 39, Number 3: Contents

Pseudospectra of Linear Operators
  Lloyd N. Trefethen (Pages 383-406)
Molecular Modeling of Proteins and Mathematical Prediction of Protein Structure
  Arnold Neumaier (Pages 407-460)
Case Studies from Industry: Optimal and Dominating Strategies for Determining Continuous Caster Product Dimensions
  Dicky Yan (Pages 461-471)
Case Studies from Industry: Skiving Addition to the Cutting Stock Problem in the Paper Industry
  M. P. Johnson, C. Rennick, E. Zak (Pages 472-483)
Classroom Note: Numerical and Analytical Solutions of Volterra's Population Model
  Kevin G. TeBeest (Pages 484-493)
Classroom Note: A Study of a Semi-Infinite Integral
  Y. Villacampa, A. Balaguer, J. L. Usó (Pages 494-495)
Classroom Note: Global Stability in an $S \to I \to R \to I$ Model
  Helmar Nunes Moreira, Wang Yuquan (Pages 496-502)
Classroom Note: An Elementary Proof of Farkas' Lemma
  Achiya Dax (Pages 503-507)
Classroom Note: Converting Matrix Riccati Equations to Second-Order Linear ODE
  R. W. R. Darling (Pages 508-510)
Classroom Note: Time-Dependent Poiseuille Flow
  S. H. Smith (Pages 511-513)
Problems and Solutions
  Cecil C. Rousseau and Otto G. Ruehr, Editors (Pages 514-527)
Book Reviews
  R. B. Kellogg, Editor (Pages 528-569)

SIAM Review, Volume 39, Number 4: Contents

Of Stable Marriages and Graphs, and Strategy and Polytopes
  Michel Balinski, Guillaume Ratier (Pages 575-604)
A Survey of Combinatorial Gray Codes
  Carla Savage (Pages 605-629)
Interference Effects in Computation
  Willard L. Miranker (Pages 630-643)
On the Gibbs Phenomenon and Its Resolution
  David Gottlieb, Chi-Wang Shu (Pages 644-668)
Engineering and Economic Applications of Complementarity Problems
  M. C. Ferris, J. S. Pang (Pages 669-713)
Case Study from Industry: Process Modeling in Resin Transfer Molding as a Method to Enhance Product Quality
  W. K. Chui, J. Glimm, F. M. Tangerman, A. P. Jardine, J. S. Madsen, T. M. Donnellan, R. Leek (Pages 714-727)
Classroom Note: Geometry and Convergence of Euler's and Halley's Methods
  A. Melman (Pages 728-735)
Classroom Note: Initialization of the Simplex Algorithm: An Artificial-Free Approach
  H. Arsham (Pages 736-744)
Classroom Note: Finding the Center of a Circular Starting Line in an Ancient Greek Stadium
  Chris Rorres, David Gilman Romano (Pages 745-754)
Classroom Note: Stability Considerations for Numerical Methods
  Johnny Snyder (Pages 755-760)
Problems and Solutions
  Cecil C. Rousseau and Otto G. Ruehr, Editors (Pages 761-789)
Book Reviews
  R. B. Kellogg, Editor (Pages 790-809)

SIAM Review, Volume 40, Number 1: Contents

Inverse Eigenvalue Problems
  Moody T. Chu (Pages 1-39)
From Semidiscrete to Fully Discrete: Stability of Runge-Kutta Schemes by the Energy Method
  Doron Levy, Eitan Tadmor (Pages 40-73)
Solution to an Inverse Problem in Diffusion
  Yves Nievergelt (Pages 74-80)
Introducing Computational Science Methods Using Parallax
  D. E. Stevenson (Pages 81-86)
Games to Teach Mathematical Modelling
  James A. Powell, James S. Cangelosi, Ann Marie Harris (Pages 87-95)
Similarity Transformations for Partial Differential Equations
  Mehmet Pakdemirli, Muhammet Yurusoy (Pages 96-101)
Fractal Basins of Attraction Associated with a Damped Newton's Method
  Bogdan I. Epureanu, Henry S. Greenside (Pages 102-109)
Using Complex Variables to Estimate Derivatives of Real Functions
  William Squire, George Trapp (Pages 110-112)
Estimating the Rate of Natural Bioattenuation of Ground Water Contaminants by a Mass Conservation Approach
  James W. Weaver, Freda Porter-Locklear (Pages 113-117)
Problems and Solutions
  Edited by C. Rousseau and O. Ruehr (Pages 118-145)
Book Reviews
  Edited by R. B. Kellogg (Pages 146-181)

SIAM Review, Volume 40, Number 2: Contents

A Probabilistic Look at the Wiener-Hopf Equation
  Soren Asmussen (Pages 189-201)
Bayesian Assessment of Network Reliability
  Nicholas Lynn, Nozer Singpurwalla, Adrian Smith (Pages 202-227)
Optimization Problems with Perturbations: A Guided Tour
  J. Frédéric Bonnans, Alexander Shapiro (Pages 228-264)
New Perspectives in Turbulence: Scaling Laws, Asymptotics, and Intermittency
  G. I. Barenblatt, A. J. Chorin (Pages 265-291)
Calculation of Cam-Form Errors (Pages 292-299)
Computing Geodetic Coordinates
  Stephen P. Keeler, Yves Nievergelt (Pages 300-309)
Numerical Verification of Second-Order Sufficiency Conditions for Nonlinear Programming
  Terrence K. Kelly, Michael Kupferschmid (Pages 310-314)
Circular Billiard
  Michael Drexler, Martin J. Gander (Pages 315-323)
Eliminating Gibb's Effect from Separation of Variables Solutions
  T. E. Peterson (Pages 324-326)
ODE Models for the Parachute Problem
  Douglas B. Meade (Pages 327-332)
A Riemann Sum Upper Bound in the Riemann-Lebesgue Theorem
  Maurice H. P. M. Van Putten (Pages 333-334)
Real Matrices with Positive Determinant are Homotopic to the Identity
  Amit Bhaya (Pages 335-340)
How to Ride a Wave: Mechanics of Surfing
  Takeshi Sugimoto (Pages 341-343)
Solutions of Linear Differential Algebraic Equations
  Mazi Shirvani, Joseph W.-H. So (Pages 344-346)
Transmission Line Modeling: A Circuit Theory Approach
  Pedro L. D. Peres, Ivanil S. Bonatti, Amauri Lopes (Pages 347-352)
The Poisson Formula Revisited (Pages 353-355)
Lithotripsy: The Treatment of Kidney Stones with Shock Waves
  Laurens Howle, David G. Schaeffer, Michael Shearer, Pei Zhong (Pages 356-371)
Problems and Solutions
  Edited by Cecil C. Rousseau and Otto G. Ruehr (Pages 372-390)
Book Reviews
  Edited by R. B. Kellogg (Pages 391-431)

SIAM Review, Volume 40, Number 3: Contents

Thin Films with High Surface Tension
  T. G. Myers (Pages 441-462)
On the Asymptotic and Numerical Solution of Linear Ordinary Differential Equations
  A. B. Olde Daalhuis, F. W. J. Olver (Pages 463-495)
Well-Solvable Special Cases of the Traveling Salesman Problem: A Survey
  Rainer E. Burkard, Vladimir G. Deineko, René van Dal, Jack A. A. van der Veen, Gerhard J. Woeginger (Pages 496-546)
From Potential Theory to Matrix Iterations in Six Steps
  Tobin A. Driscoll, Kim-Chuan Toh, Lloyd N. Trefethen (Pages 547-578)
Collective Coordinates and Length-Scale Competition in Spatially Inhomogeneous Soliton-Bearing Equations
  Angel Sánchez, A. R. Bishop (Pages 579-615)
A Similarity Approach to the Numerical Solution of Free Boundary Problems
  Riccardo Fazio (Pages 616-635)
Solving Ill-Conditioned and Singular Linear Systems: A Tutorial on Regularization
  Arnold Neumaier (Pages 636-666)
Classroom Note: A Model of Dieting
  Ronald E. Mickens, Denise N. Brewley, Matasha L. Russell (Pages 667-672)
Classroom Note: Note on the Optimal Intercept Time of Vessels to a Nonzero Range
  Martin J. Gander (Page 673)
Classroom Note: What Makes a Good Friend? The Mathematics of Rock Climbing
  Matthew Bonney, Joshua Coaplen, Erik Doeff (Pages 674-679)
Classroom Note: On Particular Solutions of Linear Difference Equations with Constant Coefficients
  Ramesh C. Gupta (Pages 680-684)
Classroom Note: Calculation of Weights in Finite Difference Formulas
  Bengt Fornberg (Pages 685-691)
Classroom Note: The Global Positioning System and the Implicit Function Theorem
  Gail Nord, David Jabon, John Nord (Pages 693-697)
Classroom Note: Centrosymmetric Matrices
  Alan L. Andrew (Pages 697-698)
Three Notes on the Exponential of a Matrix and Applications
  Jack Macki (Page 699)
Classroom Note: A Note on the Matrix Exponential
  Eduardo Liz (Pages 700-702)
Classroom Note: The Power of a Matrix
  M. Kwapisz (Pages 703-705)
Classroom Note: A Simple Proof of the Leverrier-Faddeev Characteristic Polynomial Algorithm
  Shui-Hung Hou (Pages 706-709)
Problems and Solutions
  Cecil C. Rousseau and Otto G. Ruehr, Editors (Pages 710-731)
Book Reviews
  R. B. Kellogg, Editor (Pages 732-760)

SIAM Review, Volume 40, Number 4: Contents

Generating Quasi-Random Paths for Stochastic Processes
  William J. Morokoff (Pages 765-788)
Finite Element Methods of Least-Squares Type
  Pavel B. Bochev, Max D. Gunzburger (Pages 789-837)
Fast Approximate Fourier Transforms for Irregularly Spaced Data
  Antony F. Ware (Pages 838-856)
Some Nonoverlapping Domain Decomposition Methods
  Jinchao Xu, Jun Zou (Pages 857-914)
A Simple Derivation of a Result in Electrostatics
  Eduardo Sánchez-Velasco (Pages 915-917)
Similarity Solution to a Heat Exchange Problem
  J. David Logan (Pages 918-921)
On Deriving the Quasi-Minimal Residual Method
  M. Sauren, H. M. Bücker (Pages 922-926)
The Simple Pendulum is not so Simple
  Stuart S. Antman (Pages 927-930)
Stability Implications of Bendixson's Criterion
  C. Connell McCluskey, James S. Muldowney (Pages 931-934)
A Motivational Example for the Numerical Solution of the Algebraic Eigenvalue Problem
  Stephen M. Alessandrini (Pages 935-940)
A Dynamical Proof of the Method of Lagrange
  Alan P. Knoerr (Pages 941-944)
An Elementary Demonstration of the Existence of $s\ell(3,R)$ Symmetry for all Second-Order Linear Ordinary Differential Equations
  K. S. Govinder, P. G. L. Leach (Pages 945-946)
Jordan Normal Form via Elementary Transformations
  A. Bujosa, R. Criado, C. Vega (Pages 947-956)
Turn Performance of Aircraft, Revisited
  Judah Milgram (Pages 957-958)
Basis of Eigenvectors and Principal Vectors Associated with Gauss-Seidel Matrix of A = tridiag [-1 2 -1]
  L. Kohaupt (Pages 959-964)
On the Computation of A^n
  Saber N. Elaydi, William A. Harris Jr. (Pages 965-971)
An Integral with Three Parameters
  George Boros, Victor H. Moll (Pages 972-980)
Solutions (Pages 981-997)
Book Reviews (Pages 998-1023)
Notice: Erratum and Comments on "Initialization of the Simplex Algorithm: An Artificial-Free Approach"
  Jack Macki (Page 1024)

SIAM Review, Volume 41, Number 1: Contents

Preface
  Margaret H. Wright (Pages ix-x)
Survey and Review
  Nick Trefethen, Section Editor (Pages 1-2)
Mathematical Analysis of HIV-1 Dynamics in Vivo
  Alan S. Perelson, Patrick W. Nelson (Pages 3-44)
Iterated Random Functions
  Persi Diaconis, David Freedman (Pages 45-76)
Graphical Illustration of Some Examples Related to the Article "Iterated Random Functions" by Diaconis and Freedman
  The Editors (Pages 77-82)
Problems and Techniques
  Joe Flaherty, Section Editor (Pages 83-84)
Electrical Impedance Tomography
  Margaret Cheney, David Isaacson, Jonathan C. Newell (Pages 85-101)
Ill-Conditioned Matrices Are Componentwise Near to Singularity
  Siegfried M. Rump (Pages 102-112)
SIGEST
  The Editors (Page 113)
Periodic Folding of Thin Sheets
  L. Mahadevan, Joseph B. Keller (Pages 115-131)
Education
  Bobby Schnabel, Section Editor (Pages 133-134)
The Discrete Cosine Transform
  Gilbert Strang (Pages 135-147)
Optimization Case Studies in the NEOS Guide
  Joseph Czyzyk, Timothy Wisniewski, Stephen J. Wright (Pages 148-163)
Book Reviews
  R. Bruce Kellogg, Section Editor (Pages 165-195)

SIAM Review, Volume 41, Number 2: Contents

Survey and Review
  Nick Trefethen, Section Editor (Page 197)
Fast Marching Methods
  J. A. Sethian (Pages 199-235)
The Riemann Zeros and Eigenvalue Asymptotics
  M. V. Berry, J. P. Keating (Pages 236-266)
Problems and Techniques
  Joe Flaherty, Section Editor (Pages 267-268)
On the Approximate and Null Controllability of the Navier-Stokes Equations
  Enrique Fernández-Cara (Pages 269-277)
Parallel Multilevel series k-Way Partitioning Scheme for Irregular Graphs
  George Karypis, Vipin Kumar (Pages 278-300)
SIGEST
  The Editors (Page 301)
Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer
  Peter W. Shor (Pages 303-332)
Education
  Bobby Schnabel, Section Editor (Page 333)
Matrices, Vector Spaces, and Information Retrieval
  Michael W. Berry, Zlatko Drmac, Elizabeth R. Jessup (Pages 335-362)
Bubbles in Wet, Gummed Wine Labels
  P. Broadbridge, G. R. Fulford, N. D. Fowkes, D. Y. C. Chan, C. Lassig (Pages 363-372)
Book Reviews
  R. Bruce Kellogg, Section Editor (Page 373)

SIAM Review, Volume 41, Number 3: Contents

Survey and Review
  Nick Trefethen, Section Editor (Page 415)
On the Theory of Complex Rays
  S. J. Chapman, J. M. H. Lawry, J. R. Ockendon, R. H. Tew (Pages 417-509)
Problems and Techniques
  Joe Flaherty, Section Editor (Page 511)
Robust Parameter Estimation in Computer Vision
  Charles V. Stewart (Pages 513-537)
Solving Index-1 DAEs in MATLAB and Simulink
  Lawrence F. Shampine, Mark W. Reichelt, Jacek A. Kierzenka (Pages 538-552)
SIGEST
  The Editors (Page 553)
Configuration Controllability of Simple Mechanical Control Systems
  Andrew D. Lewis, Richard M. Murray (Pages 555-574)
Education
  Bobby Schnabel, Section Editor (Page 575)
Derivation of Numerical Methods Using Computer Algebra
  Walter Gander, Dominik Gruntz (Pages 577-593)
Integer Programming and Conway's Game of Life
  Robert A. Bosch (Pages 594-604)
Book Reviews
  R. Bruce Kellogg, Section Editor (Pages 605-633)

SIAM Review, Volume 41, Number 4: Contents

Survey and Review
  Nick Trefethen, Section Editor (Page 635)
Centroidal Voronoi Tessellations: Applications and Algorithms
  Qiang Du, Vance Faber, Max Gunzburger (Pages 637-676)
Stochastic Spatial Models
  Rick Durrett (Pages 677-718)
Problems and Techniques
  Joe Flaherty, Section Editor (Page 719)
Optimizing the Delivery of Radiation Therapy to Cancer Patients
  David M. Shepard, Michael C. Ferris, Gustavo H. Olivera, T. Rockwell Mackie (Pages 721-744)
Green's Functions for Multiply Connected Domains via Conformal Mapping
  Mark Embree, Lloyd N. Trefethen (Pages 745-761)
On the Reversion of an Asymptotic Expansion and the Zeros of the Airy Functions
  Bruce R. Fabijonas, F. W. J. Olver (Pages 762-773)
SIGEST
  The Editors (Page 775)
The Ring Loading Problem
  Alexander Schrijver, Paul Seymour, Peter Winkler (Pages 777-791)
Education
  Bobby Schnabel, Section Editor (Page 793)
Pseudolinear Programming
  Serge Kruk, Henry Wolkowicz (Pages 795-805)
Book Reviews Introduction
  R. Bruce Kellogg, Section Editor (Page 807)
Book Reviews (Pages 809-851)
SIAM Review, Volume 42, Number 1: Contents

Survey and Review
  Nick Trefethen, Section Editor (Page 1)
Rigid-Body Dynamics with Friction and Impact
  David E. Stewart (Pages 3-39)
Problems and Techniques
  Joe Flaherty, Section Editor (Page 41)
Fractional Splines and Wavelets
  Michael Unser, Thierry Blu (Pages 43-67)
Two Purposes for Matrix Factorization: A Historical Appraisal
  Lawrence Hubert, Jacqueline Meulman, Willem Heiser (Pages 68-82)
Extension of the Herglotz Algorithm to Nonautonomous Canonical Transformations
  J. C. Orum, R. T. Hudspeth, W. Black, R. B. Guenther (Pages 83-90)
SIGEST
  The Editors (Page 91)
Blowup in Reaction-Diffusion Systems with Dissipation of Mass
  Michel Pierre, Didier Schmitt (Pages 93-106)
Education
  Bobby Schnabel, Section Editor (Page 107)
Equal Area World Maps: A Case Study
  Timothy G. Feeman (Pages 109-114)
Planetary Motion and the Duality of Force Laws
  Rachel W. Hall, Kresimir Josic (Pages 115-124)
Book Reviews
  R. Bruce Kellogg, Section Editor (Pages 125-157)

SIAM Review, Volume 42, Number 2: Contents

Survey and Review
  Nick Trefethen, Section Editor (Page 159)
Front Propagation in Heterogeneous Media
  Jack Xin (Pages 161-230)
Problems and Techniques
  Joe Flaherty, Section Editor (Page 231)
Extremal Characterizations of the Schur Complement and Resulting Inequalities
  Chi-Kwong Li, Roy Mathias (Pages 233-246)
Adjoint Recovery of Superconvergent Functionals from PDE Approximations
  Niles A. Pierce, Michael B. Giles (Pages 247-264)
SIGEST
  The Editors (Page 265)
A Jacobi-Davidson Iteration Method for Linear Eigenvalue Problems
  Gerard L. G. Sleijpen, Henk A. Van der Vorst (Pages 267-293)
Education
  Bobby Schnabel, Section Editor (Page 295)
Applications of Contouring
  Thomas A. Grandine (Pages 297-316)
An Alternative Example of the Method of Multiple Scales
  D. A. Edwards (Pages 317-332)
Book Reviews
  Bob O'Malley, Section Editor (Pages 333-365)

SIAM Review, Volume 42, Number 3: Contents

Survey and Review
  Nick Trefethen, Section Editor (Page 367)
Recent Developments in Inverse Acoustic Scattering Theory
  David Colton, Joe Coyle, Peter Monk (Pages 369-414)
Problems and Techniques
  Joe Flaherty, Section Editor (Page 415)
Numerical Study of Flows of Two Immiscible Liquids at Low Reynolds Number
  Jie Li, Yuriko Renardy (Pages 417-439)
An Efficient Method for a Class of Continuous Nonlinear Knapsack Problems
  A. Melman, G. Rabinowitz (Pages 440-448)
SIGEST
  The Editors (Page 449)
Is the Pollution Effect of the FEM Avoidable for the Helmholtz Equation Considering High Wave Numbers?
  Ivo M. Babuska, Stefan A. Sauter (Pages 451-484)
Education
  Bobby Schnabel, Section Editor (Page 485)
The Many Proofs and Applications of Perron's Theorem
  C. R. MacCluer (Pages 487-498)
Orthogonal Sampling Formulas: A Unified Approach
  Antonio G. Garcia (Pages 499-512)
Book Reviews
  Bob O'Malley, Section Editor (Pages 513-552)
SIAM Review, Volume 42, Number 4: Contents

Survey and Review
  Nick Trefethen, Section Editor (Page 553)
A Hierarchy of Models for Type-II Superconductors
  S. J. Chapman (Pages 555-598)
The Mathematics of Infectious Diseases
  Herbert W. Hethcote (Pages 599-653)
Problems and Techniques
  Joe Flaherty, Section Editor (Page 655)
On Periodic Billiard Trajectories in Obtuse Triangles
  Lorenz Halbeisen, Norbert Hungerbühler (Pages 657-670)
Reduction of Polynomial Planar Hamiltonians with Quadratic Unperturbed Part
  Jesús Palacián, Patricia Yanguas (Pages 671-691)
SIGEST
  The Editors (Page 693)
Free Material Design via Semidefinite Programming: The Multiload Case with Contact Conditions
  A. Ben-Tal, M. Kočvara, A. Nemirovski, J. Zowe (Pages 695-715)
Education
  Bobby Schnabel, Section Editor (Page 717)
Hydrodynamics of a Water Rocket
  Joseph M. Prusa (Pages 719-726)
Exploring Reflection: Designing Light Reflectors for Uniform Illumination
  Gary W. De Young (Pages 727-735)
Book Reviews
  Bob O'Malley, Section Editor (Pages 737-768)