Artificial Intelligence techniques
ARTIFICIAL INTELLIGENCE

Artificial intelligence (AI) is, in theory, the ability of an artificial mechanism to demonstrate some form of intelligent behavior equivalent to the behaviors observed in intelligent living organisms. Artificial intelligence is also the name of the field of science and technology in which artificial mechanisms that exhibit behavior resembling intelligence are developed and studied.

The term AI itself, and the phenomena actually observed, invite (indeed demand) philosophical speculation about what in fact constitutes mind or intelligence. Such questions can be considered separately, however, from a description of the various endeavors to construct increasingly sophisticated mechanisms that exhibit "intelligence."

Research into all aspects of AI is vigorous. Some concern exists among workers in the field, however, that both the progress and the expectations of AI have been overstated. AI programs are primitive when compared to the kinds of intuitive reasoning and induction of which the human brain, or even the brains of much less advanced organisms, is capable. AI has indeed shown great promise in the area of expert systems, that is, knowledge-based expert programs; but while these programs are powerful when answering questions within a specific domain, they are nevertheless incapable of any truly adaptable, or truly intelligent, reasoning.

Examples of AI systems include computer programs that perform such tasks as medical diagnosis and mineral prospecting. Computers have also been programmed to display some degree of legal reasoning, speech understanding, vision interpretation, natural-language processing, problem solving, and learning.
Although most of these systems have proved valuable either as research vehicles or in specific, practical applications, most of them are also still very far from being perfected.

CHARACTERISTICS OF AI: No generally accepted theories have yet emerged within the field of AI, owing in part to the fact that AI is a very young science. It is assumed, however, that at the highest level an AI system must receive input from its environment, determine an action or response, and deliver an output to its environment. A mechanism for interpreting the input is needed; this need has led to research in speech understanding, vision, and natural language. The interpretation must be represented in some form that can be manipulated by the machine. To achieve this, techniques of knowledge representation are invoked. The interpretation, together with knowledge obtained previously, is manipulated within the system by means of some mechanism or algorithm, and the system thus arrives at an internal representation of the response or action. The development of such processes requires techniques of expert reasoning, common-sense reasoning, problem solving, planning, signal interpretation, and learning. Finally, the system must construct an effective response; this requires techniques of natural-language generation.

THE FIFTH-GENERATION ATTEMPT: In the 1980s, in an attempt to develop an expert system on a very large scale, the Japanese government began building powerful computers with hardware that made logical inferences in the computer language PROLOG. (Following the idea of representing knowledge declaratively, the logic programming language PROLOG had been developed in England and France. PROLOG is essentially an inference engine that searches declared facts and rules to confirm or deny a hypothesis. A drawback of PROLOG is that its inference engine cannot be altered by the programmer.)
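The passage above characterizes PROLOG as an inference engine that searches declared facts and rules to confirm or deny a hypothesis. A minimal sketch of that idea follows, written in Python rather than PROLOG for illustration; the facts, rules, and single-variable rule format are all invented here, not taken from the text.

```python
# Declarative knowledge: facts are (predicate, argument) tuples, and rules
# say a head goal holds if every goal in its body holds. The variable "X"
# is the only variable form this toy engine supports.
FACTS = {("human", "socrates"), ("philosopher", "plato"), ("god", "zeus")}
RULES = [
    (("mortal", "X"), [("human", "X")]),
    (("human", "X"), [("philosopher", "X")]),
]

def prove(goal):
    """Backward chaining: confirm or deny a hypothesis against FACTS/RULES."""
    if goal in FACTS:
        return True
    pred, arg = goal
    for (head_pred, _head_var), body in RULES:
        if head_pred == pred:
            # substitute the queried argument for the variable X in the body
            subgoals = [(b_pred, arg if b_arg == "X" else b_arg)
                        for b_pred, b_arg in body]
            if all(prove(g) for g in subgoals):
                return True
    return False

print(prove(("mortal", "socrates")))  # True: human(socrates) is a fact
print(prove(("mortal", "plato")))     # True: philosopher -> human -> mortal
print(prove(("mortal", "zeus")))      # False: no supporting chain
```

Note how the engine itself is fixed while the knowledge is pure data, the separation the article returns to later.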
The Japanese referred to such machines as "fifth-generation" computers. By the early 1990s, however, Japan had forsaken this plan and even announced that it was ready to release the project's software. Although the Japanese did not detail their reasons for abandoning the fifth-generation program, U.S. scientists faulted the effort as leaning too much toward computer-type logic and too little toward human thinking processes. The choice of PROLOG was also criticized: other nations were by then not developing software in that language and were showing little further enthusiasm for it. Furthermore, the Japanese were not making much progress in parallel processing, a kind of computer architecture in which many independent processors work together in parallel, a method of increasing importance in computer science. The Japanese have since defined a "sixth-generation" goal instead, called the Real World Computing Project, which veers away from the expert-systems approach of working only by built-in logical rules.

THE FUTURE OF AI RESEARCH: One impediment to building even more useful expert systems has been, from the start, the problem of input, in particular the feeding of raw data into an AI system. To this end, much effort has been devoted to speech recognition, character recognition, machine vision, and natural-language processing. A second problem is obtaining knowledge. It has proved arduous to extract knowledge from an expert and then code it for use by the machine, so a great deal of effort is also being devoted to learning and knowledge acquisition. One of the most useful ideas to emerge from AI research, however, is that facts and rules (declarative knowledge) can be represented separately from decision-making algorithms (procedural knowledge). This realization has had a profound effect both on the way that scientists approach problems and on the engineering techniques used to produce AI systems.
Once a particular procedural element, called an inference engine, is adopted, development of an AI system is reduced to obtaining and codifying sufficient rules and facts from the problem domain. This codification process is called knowledge engineering. Reducing system development to knowledge engineering has opened the door to non-AI practitioners. In addition, business and industry have been recruiting AI scientists to build expert systems.

A large number of problems in the AI field have been associated with robotics. There are, first of all, the mechanical problems of getting a machine to make very precise or delicate movements. Beyond that are the much more difficult problems of programming sequences of movements that will enable a robot to interact effectively with a natural environment, rather than some carefully designed laboratory setting. Much work in this area involves problem solving and planning.

A radical approach to such problems has been to abandon the aim of developing "reasoning" AI systems and to produce, instead, robots that function "reflexively." A leading figure in this field has been Rodney Brooks of the Massachusetts Institute of Technology. These researchers felt that preceding efforts in robotics were doomed to failure because the systems produced could not function in the real world. Rather than trying to construct integrated networks that operate under a centralizing control and maintain a logically consistent model of the world, they are pursuing a behavior-based approach named subsumption architecture. Subsumption architecture employs a design technique called "layering," a form of parallel processing in which each layer is a separate behavior-producing network that functions on its own, with no central control. No true separation exists, in these layers, between data and computation; both are distributed over the same networks.
Connections between sensors and actuators in these systems are kept short as well. The resulting robots might be called "mindless," but in fact they have demonstrated remarkable abilities to learn and to adapt to real-life circumstances. The apparent successes of this new approach have not convinced many supporters of integrated-systems development that the alternative is a valid one for drawing nearer to the goal of producing true AI. The arguments that have arisen between practitioners of the two methodologies are in fact profound ones, with implications about the nature of intelligence in general, whether natural or artificial.
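The layered, no-central-control design described above can be sketched in a few lines. This is a toy illustration of the subsumption idea, not Brooks's actual architecture; the sensor fields and behaviors are invented for the example.

```python
# Toy subsumption architecture: each layer maps raw sensor readings to a
# command on its own, with no shared world model; a higher-priority layer,
# when active, subsumes (overrides) every layer below it.

def avoid_obstacle(sensors):
    # highest priority: reflexively turn away from a nearby obstacle
    if sensors.get("obstacle_cm", 999) < 30:
        return "turn_left"
    return None  # inactive: defer to lower layers

def seek_light(sensors):
    # middle layer: steer toward the brighter side
    if sensors.get("light_right", 0) > sensors.get("light_left", 0):
        return "turn_right"
    return None

def wander(sensors):
    return "forward"  # lowest layer is always active

LAYERS = [avoid_obstacle, seek_light, wander]  # highest priority first

def act(sensors):
    """Return the command of the highest-priority active layer."""
    for layer in LAYERS:
        command = layer(sensors)
        if command is not None:  # this layer subsumes all lower ones
            return command

print(act({"obstacle_cm": 10}))                  # turn_left
print(act({"light_right": 5, "light_left": 1}))  # turn_right
print(act({}))                                   # forward
```

Each layer is independently useful, so behaviors can be added or removed without redesigning a central controller, which is the property the text emphasizes.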
Senior-Three English Multiple-Choice Questions on Artificial Intelligence (40 items)

1. Artificial intelligence is a branch of computer science that aims to create intelligent _____.
A. machines  B. devices  C. equipments  D. instruments
Answer: A. In the AI field, "machines" usually refers to machines endowed with some intelligence; "devices" are general apparatus; "equipments" is an incorrect plural of the uncountable noun "equipment"; "instruments" are tools or implements.

2. In the field of artificial intelligence, algorithms are used to train _____.
A. models  B. patterns  C. shapes  D. forms
Answer: A. In AI, "models" are what algorithms train; "patterns" are designs or regularities; "shapes" are geometric figures; "forms" are general formats.

3. Deep learning is a powerful technique in artificial intelligence that uses neural _____.
A. networks  B. systems  C. structures  D. organizations
Answer: A. "Networks" is the term used in deep learning (neural networks); "systems" means systems, "structures" means structures, and "organizations" means organizations, none of which collocates with "neural" here.

4. Artificial intelligence can process large amounts of data to make accurate _____.
A. predictions  B. forecasts  C. projections  D. expectations
Answer: A.
"Artificial intelligence technology" in English:
Pronunciation: UK [ˌɑːtɪˈfɪʃl ɪnˈtelɪdʒəns tekˈnɒlədʒi]; US [ˌɑːrtɪˈfɪʃl ɪnˈtelɪdʒəns tekˈnɑːlədʒi]
Example sentences:
- Although automated grading systems for multiple-choice and true-false tests are now widespread, the use of artificial intelligence technology to grade essay answers has not yet received widespread acceptance by educators and has many critics.
- Introducing CAD and artificial intelligence technology into modular fixture design can improve production efficiency, lighten working intensity, reduce manufacturing lead time, and shorten time to market.
- That sounds impressive enough, but the company is already working on a much more advanced version that incorporates cameras and artificial intelligence to detect not only obstacles but also their nature.
- Characteristics of the Java Language and Its Influence on and Promotion of AI Technology.
Science and Technology & Innovation, 2021, No. 07. Article ID: 2095-6835(2021)07-0172-02.

A Brief Analysis of the Development and Application of Artificial Intelligence Technology in the 5G Era
王君 (1,4), 廖华杰 (1), 宋泽生 (2), 欧子龙 (1), 梁薇薇 (3), 廖君 (1), 夏愚然 (1)
(1. Nanfang College of Sun Yat-sen University, Guangzhou, Guangdong 510970; 2. Nanchang University, Nanchang, Jiangxi 330027; 3. Chongqing University of Posts and Telecommunications, Chongqing 400065; 4. Guangzhou Hengtong Zhilian Technology Co., Ltd., Guangzhou, Guangdong 510630)

Abstract: With the continuous development of science and technology, artificial intelligence has advanced from its initial start-up period to today's period of rapid growth; its main concentrated stages of development can be divided into an application period and a rapid-growth period. With the arrival of the 5G era, the level of service in fields such as health care, education and training, and transportation has struggled to meet the human demand for intelligent systems. Artificial intelligence technology arose to address these problems and to satisfy the growing demand for services and efficiency. Artificial intelligence technology is a new technical science that studies and develops theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence. Its current development centers mainly on face recognition, robotics, and expert systems. The combination of artificial intelligence with 5G networks has accelerated the development of the digital age. While artificial intelligence brings people many benefits, it also brings shortcomings: the maturity of its core technologies is insufficient, its dependence on specific application scenarios is strong, and data silos and data fragmentation persist.
Keywords: artificial intelligence; 5G era; technological revolution; intelligent machines
CLC number: TN929.5. Document code: A. DOI: 10.15913/j.cnki.kjycx.2021.07.073

With the development of modern high technology, artificial intelligence technology has come to occupy a leading position, fusing intelligent machines with machine intelligence. Earlier artificial intelligence could no longer satisfy the demands of modern technological development, so a new generation of artificial intelligence technology was proposed and has been widely applied across many fields. It represents a genuine combination of computers with information networks and has become a new round of technological revolution. Artificial intelligence technology is a new technical science that studies and develops theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence. Built on computers and genuinely combined with the Internet, it achieves intelligent behavior and pursues a high level of human-machine and brain-machine coordination and fusion [1-4].
English for Intelligence Science and Technology

I. Vocabulary

1. Artificial Intelligence (AI)
- Definition: The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
- Usage: "Artificial Intelligence" is often abbreviated as "AI" and can be used as a subject or in phrases like "AI technology" or "the field of AI."
- Example sentences:
  - Artificial Intelligence has made great progress in recent years.
  - Many companies are investing heavily in artificial intelligence research.

2. Algorithm
- Definition: A set of computational steps and rules for performing a specific task.
- Usage: A countable noun, e.g. "This algorithm is very efficient."
- Example sentence:
  - The new algorithm can solve the problem much faster.
Foreign-Language Literature on Market Research Methods, with Translations

1. Market Research Methods: Incorporating Social Media into Traditional Approaches
This article describes how social media can be used in market research to help firms better understand consumers. The researchers combine social media with traditional quantitative and qualitative research to obtain more complete information, collecting social media data to analyze consumer behavior and preferences as well as feedback on products or services.

2. Using Eye Tracking in Market Research: A Guide to Best Practices
This work introduces the application of eye-tracking technology in market research. The authors note that eye tracking can help researchers understand how consumers allocate attention and behave while browsing a product or service. The article presents best practices and testing methods for eye-tracking applications suited to market research.

3. Conjoint Analysis in Marketing: New Developments with Implications for Research and Practice
This article introduces a research method known as conjoint analysis, which helps researchers understand consumers' preferences and decision processes when purchasing a product or service. According to the literature, conjoint analysis has become one of the most widely used tools in marketing. The article also reviews the latest research and practical applications and discusses the method's limitations in certain situations.

4. Qualitative Market Research: An International Journal
This journal focuses on qualitative market research methods. It covers research related to identifying consumer needs, analyzing competitors, and building brands. It emphasizes that qualitative market research can provide deep insights and a clearer understanding of products or services, helping firms make wiser marketing and business decisions. Each issue includes articles from experts in the field, along with case studies and best practices.

5. Use of Artificial Intelligence Techniques in Market Research: A Review
This work reviews how artificial intelligence techniques can be used in market research.
APPLICATION OF ARTIFICIAL INTELLIGENCE (AI) PROGRAMMING TECHNIQUES TO TACTICAL GUIDANCE FOR FIGHTER AIRCRAFT

John W. McManus and Kenneth H. Goodrich
NASA Langley Research Center, Mail Stop 489, Hampton, Virginia 23665-5225
(804) 864-4037 / (804) 864-4009

AIAA Guidance, Navigation, and Control Conference, August 14-16, 1989, Boston, Massachusetts
Autonomous Systems and Mission Planning

ABSTRACT

A research program investigating the use of Artificial Intelligence (AI) techniques to aid in the development of a Tactical Decision Generator (TDG) for Within-Visual-Range (WVR) air combat engagements is discussed. The application of AI methods for development and implementation of the TDG is presented. The history of the Adaptive Maneuvering Logic (AML) program is traced, and current versions of the AML program are compared and contrasted with the TDG system. The Knowledge-Based Systems (KBS) used by the TDG to aid in the decision-making process are outlined in detail, and example rules are presented. The results of tests to evaluate the performance of the TDG versus a version of AML and versus human pilots in the Langley Differential Maneuvering Simulator (DMS) are presented. To date, these results have shown significant performance gains in one-versus-one air combat engagements, and the AI-based TDG software has proven to be much easier to modify than the updated FORTRAN AML programs.

INTRODUCTION

The development of all-aspect and "fire and forget" weapons has increased the complexity of the air-to-air combat environment. Modern sensors provide critical tactical information to the aircraft at a range and precision that were impossible 20 years ago. This increased complexity, combined with the expanded capabilities of high-performance aircraft, has changed the future of air combat engagements. The need for a modern, realistic air combat simulation that can be used to evaluate the current and future air combat environment has been well documented [Burgin 1975, 1986, 1988; Hankins 1979].
Existing tools such as the Adaptive Maneuvering Logic program (AML) [Burgin 1975, 1986, 1988], TAC Brawler [Kerchner 1985], and AASPEM have generally centered their efforts on the development and refinement of high-fidelity aircraft dynamics modeling techniques and not on the development and refinement of tactical decision generation logic for WVR engagements. In support of the study of superagile aircraft at Langley Research Center (LaRC), a Tactical Guidance Research and Evaluation System (TGRES, pronounced "tigress") is being developed [Goodrich 1989].

Figure 1. TGRES SYSTEM.

TGRES DESCRIPTION

The TGRES system, shown in figure 1, provides a means by which researchers can develop and evaluate, in a tactically significant environment, various systems for high-performance aircraft. While TGRES is aimed specifically at the development and evaluation of maneuvering strategies and advanced guidance/control systems for superagile aircraft, its modularity will make it easily adaptable to the analysis of other types of aircraft systems. TGRES is composed of three main elements: the TDG, the Tactical Maneuver Simulator (TMS), and the DMS.

The TDG is a knowledge-based guidance system designed to provide insight into the tactical benefits and costs of enhanced aircraft controllability and maneuverability throughout an expanded flight envelope (i.e., superagility). The two remaining elements of TGRES, the TMS and the DMS, provide simulation environments in which the TDG is exercised. The TMS simulation environment was developed using conventional computer languages on a VAXStation 3200, while the TDG was developed on a Symbolics 3650 workstation. The separation of the aircraft simulation and decision logic components allows each module to be developed using hardware and programming techniques specifically designed for its function. This separation of tasks also increases the efficiency of the simulation by allowing some parallel processing.
The two processes are executed as co-tasks and communicate via an Ethernet connection (see fig. 2).

Figure 2. CURRENT HARDWARE CONFIGURATION.

The user interface system consists of a color graphics package designed to replay both TMS and DMS engagements, and a mouse-sensitive representation of the TDG aircraft and its basic systems that allows the user to interact with the TDG aircraft during the execution of TMS runs. The Engagement Replay System (ERS) software is available for a VAX color workstation and a Symbolics color workstation. The ERS display, shown in figure 3, presents the two aircraft on a three-dimensional axis and has dedicated windows used to display several aircraft variables, including the thrust, Mach number, and deviation angles of the two aircraft. The viewing angle for each engagement can be rotated 360° around both the X and Z axes to provide the most information to the user. The interactive TMS display includes a graphical representation of the TDG aircraft's major systems, such as engines, offensive and defensive systems, and a system status display. During the simulation run the user can enable and/or disable the aircraft's systems using the mouse-sensitive display and evaluate how the changes affect the TDG's decision generation process.

Figure 3. ERS DISPLAY.

The final element of TGRES is the Differential Maneuvering Simulator. The DMS consists of two 40-foot-diameter domes located at Langley. The facility is intended for the real-time simulation of engagements between piloted aircraft. By using the TDG to drive one of the airplanes, it is possible to test the TDG against a human opponent. This feature allows the guidance logic to be evaluated against an unpredictable and adaptive opponent. A third dome (20 feet in diameter) is being added to the DMS facility.
This addition will allow the guidance logic to be evaluated in one-versus-two or two-versus-one scenarios, further enhancing the tactical capability of the DMS environment.

THE AML PROGRAM

The TDG is being developed as a KBS incorporating some of the features first outlined in the AML program [Burgin 1975; Hankins 1979]. The AML program was selected as a baseline for several reasons, including its past performance as a real-time WVR tactical adversary in the Langley DMS and the modular design of its FORTRAN source code. The tactical decision generation method developed for the original AML program, outlined in figure 4, is a unique approach that attempts to model the goal-seeking behavior of a pilot by mapping the physical situation between the two aircraft into a finite, abstract situation space. A set of the three basic control variables (bank angle, load factor, and thrust) can be determined to maximize some performance index in the situation space [Burgin 1976]. Each triplet of control variables defines an "elemental maneuver," and a sequence of these elemental maneuvers may form classical or "textbook" air combat maneuvers.

Figure 4. HOW AML WORKS.

Although the logic and geometry used by AML to make tactical decisions are complex, the basic concepts are simple. At each decision interval, the "attacking" aircraft predicts the future position and velocity of its opponent using a curve-fitting algorithm and past known positions of the opponent.
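The papers cited do not give the exact curve-fitting algorithm used for this prediction step, so the sketch below is an illustrative stand-in: it assumes uniformly spaced position samples and fits a quadratic through the last three points, which reduces to a simple three-point extrapolation per axis.

```python
# Predict the next sample of an opponent's position track by quadratic
# extrapolation. For uniformly spaced samples, fitting a quadratic through
# the last three points gives p[n+1] = 3*p[n] - 3*p[n-1] + p[n-2] per axis.
# (The quadratic order and uniform spacing are assumptions, not from AML.)

def predict_next(track):
    """Extrapolate the next (x, y, z) sample from the last three samples."""
    p_nm2, p_nm1, p_n = track[-3], track[-2], track[-1]
    return tuple(3 * a - 3 * b + c for a, b, c in zip(p_n, p_nm1, p_nm2))

# Opponent in steady acceleration along x (x = 5*t**2), level at 15,000 ft:
track = [(0, 0, 15000), (5, 0, 15000), (20, 0, 15000)]
print(predict_next(track))  # (45, 0, 15000): exact for quadratic motion
```

The extrapolation is exact for constant-velocity and constant-acceleration motion, which is why a low-order fit over recent samples is a plausible choice for short prediction horizons.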
The attacker then uses a set of elemental maneuvers (described above) to predict a set of positions that it can reach from its current state. The AML program forms a "situation state vector" for each trial maneuver evaluated. The vector is used to represent the responses to a set of questions about the current situation. Figure 5 shows the binary scoring method (0 = NO, 1 = YES) used to determine the value of each cell in the vector. This vector is multiplied by a "scoring weight" vector to form a scalar product that represents the situation-space value for the current maneuver. A detailed description of the trial maneuver generation and scoring process, and an explanation of how the scoring weights have evolved, can be found in [Burgin 1988]. The questions used to form the situation state vector were obtained from several sources, including air combat maneuvering manuals, interviews with fighter pilots, and detailed analysis of the original DMS engagements. In the original version of AML, each question had a positive, non-zero weight. The questions were formulated so that a "YES" answer reflects a favorable condition, increasing the score for the maneuver. It is important to note that in the original AML research "no systematic investigation was made to optimize these weight factors; they were usually all set to one." The early AML versions [Burgin 1975; Hankins 1979] were designed to perform as a conservative opponent. The scoring rules rated offense and defense evenly and risked giving up some positional advantage to the opponent only when there was reasonable assurance the attacker would gain at least as much offsetting advantage. This conservative approach may be the product of a philosophy stated in [Burgin 1988]: "The objective of the decision-making process is to derive maneuvers which will bring one's own weapons to bear on the target while at the same time minimizing exposure to the other side's weapons." This is a one-dimensional approach to the problem.
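The situation-vector scoring scheme just described reduces to a dot product: binary answers to situation questions form a vector, and multiplying it by the scoring-weight vector ranks each trial maneuver. The three questions and two trial maneuvers below are invented stand-ins for AML's actual question set, sketched in Python for illustration.

```python
# AML-style maneuver scoring sketch: binary answers (0 = NO, 1 = YES) to
# situation questions form a state vector; its dot product with a
# scoring-weight vector gives the situation-space value of a maneuver.
# Question names and trial maneuvers are illustrative only.

QUESTIONS = [
    lambda s: s["can_see_opponent"],
    lambda s: s["in_gun_range"],
    lambda s: s["opponent_cannot_fire"],
]

def score(situation, weights):
    """Dot product of the binary situation state vector with the weights."""
    vector = [1 if q(situation) else 0 for q in QUESTIONS]
    return sum(v * w for v, w in zip(vector, weights))

def best_maneuver(trials, weights):
    """Pick the trial maneuver whose predicted situation scores highest."""
    return max(trials, key=lambda name: score(trials[name], weights))

weights = [1.0, 1.0, 1.0]  # original AML usually set all weights to one
trials = {
    "hard_left": {"can_see_opponent": True, "in_gun_range": False,
                  "opponent_cannot_fire": True},
    "pull_up":   {"can_see_opponent": True, "in_gun_range": True,
                  "opponent_cannot_fire": True},
}
print(best_maneuver(trials, weights))  # pull_up (score 3 beats 2)
```

With all weights set to one, as in the original research, the score is simply the count of favorable conditions, which is why the early AML rated offense and defense evenly.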
It outlines a logic that handles only the neutral and aggressive cases effectively, and it does not recognize that there are several Modes of Operation (MO), outlined in figure 6, that a pilot may use during an engagement. In many situations when the opponent has a distinct positional advantage, the AML aircraft will perform "kamikaze" maneuvers, giving up one or more clear shots to the opponent while it maneuvers to a position of "advantage." In these situations the AML aircraft would not survive to exploit the positional advantage, having been "killed" while obtaining it.

• AGGRESSIVE
• DEFENSIVE
• NEUTRAL
• EVASIVE
• EVADING OPPONENT'S "LOCK AND FIRE"
• EVADING MISSILE (AAM & SAM)
• GROUND / STALL EVASION

Figure 6. TDG MODES OF OPERATION.

The existing trial maneuver versions of AML do a good job of getting behind an opponent but, because of the grain of the trial maneuvers, lack the ability to fine-track the opponent. Several changes were made to the AML program [Burgin 1988] to address this problem. The requirement that only the opponent's positional data be passed to the algorithm was relaxed, and "complete and accurate information about the opponent's past and present states" is now provided. The 1986 version of the AML program, AML86 [Burgin 1988], also made several major changes to the tactical decision generation process, abandoning the trial maneuver concept for a rule-based approach and a set of canned "Basic Fighter Maneuvers." [Burgin 1988] contains an extensive history of the "trial maneuver" concept and a description of how the new rule-based version of the program, AML86, was developed. A "pointing" control system was also developed to aid the fine-tracking process. The pointing control system directly commands roll and pitch rates to point the aircraft's longitudinal axis at the opponent.
AML86 is a first step toward a multi-dimensional approach and is similar to the decision logic incorporated in the TDG.

THE DEVELOPMENT OF THE TDG SYSTEM

The development of the TDG has been a multi-stage process using the COSMIC version of AML as a starting point. The COSMIC version of AML was updated by Dynamics Engineering Incorporated (DEI) while under contract to NASA Langley. This version of AML (DEI-AML) has a scoring module that uses a set of 15 binary questions and a fixed set of weights to evaluate the trial maneuvers. DEI installed aerodynamic data and engine characteristics provided by the Aircraft Guidance and Controls Branch (AGCB) into the AML data tables and made all changes to the AML software outlined in [Burgin 1986]. DEI-AML was tested by AGCB to ensure symmetry of the engagements given symmetric initial conditions. During the testing process several software bugs were found and corrected; a full description of the bugs and corrections is given in [McManus 1989]. The resulting code, dubbed AML´, was again tested for symmetry, and a DMS-ready version, DMS-AML´, was prepared. AML´ and DMS-AML´ are being used as the baseline during development of the TDG system.

Figure 7. HOW TDG WORKS.

The TDG system, outlined in figure 7, currently uses the trial maneuver concept outlined in the AML program, with several extensions. The original set of five to nine trial maneuvers has been expanded to more than 40 trial maneuvers. Although this is a "brute force" solution, the new trial maneuvers allow the TDG to perform target tracking more effectively and improve the system's overall performance. The TDG uses an object-oriented programming approach to represent each aircraft and the current state of its offensive systems, defensive systems, and engines. This information is used to help guide the TDG's reasoning process.
The original FORTRAN AML throttle controller and the maneuver scoring modules have been redesigned using a rule-based programming approach and ported to the AI workstation. Examples of rules for each of the KBS modules are shown in figure 8.

((AND (EQUAL (GET-MISSION PALADIN) *AGGRESSIVE*)
      (EQUAL (GET-POSITION PALADIN) *NEUTRAL*))
 ((SETF (GET-MODE PALADIN) *AGGRESSIVE*)
  (AGGRESSIVE-WEIGHT)))

EXAMPLE MODE SELECTION RULE.

((AND I-SEE-HIM I-CAN-FIRE HE-CANT-FIRE (<= RANGE 12500))
 ((SETF THROTTLE 0.94)))

EXAMPLE THROTTLE CONTROL RULE.

((OR (<= (ABS HIM-UNABLE-TO-FIRE) (I-CAN-FIRE))
     (AND (> HIM-UNABLE-TO-FIRE 0.0) (>= I-CAN-FIRE 0.0))
     (= GUNA 1.0)
     (= ALLA 1.0)
     (= HEATA 1.0))
 (SETF (GET-POSITION PALADIN) *AGGRESSIVE*))

EXAMPLE SITUATION ASSESSMENT RULE.

Figure 8. EXAMPLE RULES.

KBS MODULES OF THE TDG

The TDG system has a knowledge-based Situation Assessment (SA) module that is executed at each decision interval, before the maneuver scoring module, to determine the TDG's current MO. This determination is based on the TDG's current mission, the current state of the aircraft's systems, the relative geometry between the aircraft and its opponent, and the opponent's instantaneous intent (*in-int*). Each of the modes shown in figure 6 has a unique set of scoring weights and a decision interval associated with it. The weights for each mode have been adjusted during the design and testing process to maximize the TDG's performance in that mode. Test results have shown that a short decision interval (0.5 sec) improves the TDG's fine-tracking performance, but the same short interval produces a "thrashing" motion in neutral situations and degrades system performance. A longer decision interval (1.0 sec) is therefore used in neutral situations.
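The mode-dependent machinery described above (a situation assessment selecting a Mode of Operation, each mode carrying its own scoring weights and decision interval, and fuzzy question responses in [0.0, 1.0]) can be sketched as follows. The mode table, threshold, and two-question weight vectors are illustrative inventions, not the TDG's actual values.

```python
# Sketch of TDG-style mode-dependent scoring: a crude situation assessment
# picks a Mode of Operation; each mode has its own scoring weights and
# decision interval; question responses are fuzzy values in [0.0, 1.0].
# All numbers below are invented for illustration.

MODES = {
    #            ([offense wt, defense wt], decision interval in seconds)
    "aggressive": ([2.0, 0.5], 0.5),  # short interval aids fine tracking
    "neutral":    ([1.0, 1.0], 1.0),  # longer interval avoids "thrashing"
    "evasive":    ([0.3, 2.5], 0.5),
}

def assess_mode(situation):
    """Toy situation assessment: choose a mode from relative advantage."""
    if situation["advantage"] > 0.3:
        return "aggressive"
    if situation["advantage"] < -0.3:
        return "evasive"
    return "neutral"

def score_maneuver(fuzzy_answers, mode):
    """Weighted sum of fuzzy responses (0.0 = NO ... 1.0 = YES)."""
    weights, _interval = MODES[mode]
    return sum(a * w for a, w in zip(fuzzy_answers, weights))

mode = assess_mode({"advantage": 0.6})
print(mode)                                        # aggressive
print(round(score_maneuver([0.8, 0.1], mode), 2))  # 1.65
```

Swapping the weight vector per mode is what lets the same scoring machinery behave conservatively in one situation and aggressively in another, which is the flexibility the text attributes to the use of MOs.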
The opponent's *in-int* is defined to be an estimate of the opponent's intent at the current point in time, based on available sensor, positional, and geometric data. Currently, no attempt is made to use a history of *in-int* to derive a long-term opponent intent. The flexibility provided by the use of MOs allows the system to model more closely a pilot's changing strategies during an engagement. The COSMIC version of AML, and most AML variants before AML86, cannot change their decision generation strategy in response to the changing environment. The TDG Scoring Module (SM) is a KBS that uses a set of 17 fuzzy-logic questions, with responses ranging from 0 = NO to 1.0 = YES (fig. 8), together with the set of mode-specific scoring weights selected by the SA module, to score each of the trial maneuvers.

A rule-based active Throttle Controller (TC) has been developed to replace the existing throttle control subroutine. The TC is called at the start of each decision interval and can set the throttle at any position from idle to full afterburner [0, ..., 2]. The existing AML throttle control subroutine had only three positions (0 = idle, 1 = military, 2 = full afterburner) and had been turned off in the COSMIC version; all engagements were being flown with the throttle set at full afterburner.

Figure 10. SET OF 64 INITIAL CONDITIONS.

A statistics module is used to calculate the amount of time that each aircraft has its weapons locked on its opponent, along with the deviation angle and angle-off. The Line-Of-Sight (LOS) vector is defined as the vector between the ownship c.g. and the opponent's c.g. The LOS angle is defined as the angle between the LOS vector and the ownship body x-axis; the deviation angle is the angle between the LOS vector and the ownship velocity vector; and the angle-off is the angle between the LOS vector and the opponent's velocity vector (fig. 11).

Figure 11. ANGLE DEFINITIONS.

The weapons cones used represent a generic all-aspect missile, a generic tail-aspect missile, and a 20 mm cannon (the gun cone is 5°, with a range of 0 to 5000 ft). Four metrics are currently used to evaluate each engagement. The first metric is calculated every second and computes the total time that each airplane has its weapons locked on the opponent, the probability that the shot will hit, the distance between the opponents, the angle-off, and the deviation angle. The results are printed in a table format at the completion of each run. The second metric computes a Probability of Survival (PS) using the data computed by the first metric. The missiles are treated as a limited resource, and a probability to hit of 0.65 is required to launch the first missile. The firing threshold increases by 0.05 for each missile launched, and each missile must complete its flight to the target before the next missile is fired. The third scoring metric attempts to determine a Lethal Time (LT) value for each engagement. The LT value for a run is equal to ((TDG gun time - AML gun time) / 2) + 2 * (TDG tail-aspect time - AML tail-aspect time) + (TDG all-aspect time - AML all-aspect time). A positive LT value shows TDG with an advantage; a negative LT shows AML with an advantage. The fourth metric is Time on Offense (TOF). TOF is the sum of all weapons lock time for each airplane. ∆TOF is computed as TDG TOF minus AML TOF.

Figure 13. ∆TOF FOR SET OF 64 ENGAGEMENTS.

TEST RESULTS

A set of nine engagements presented in [Eidetics 1989] was used to compare the performance of the TDG system with that of AML´ in the lab, and against pilots in the DMS. AML´ was used as the A airplane in both sets of lab test engagements, and the human pilot flew the A airplane during the DMS runs. Airplanes with identical performance characteristics were used in both the DMS and the lab.
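The engagement metrics defined in this section reduce to a few arithmetic formulas; the sketch below expresses them as code. The field names are invented, the LT and ∆TOF functions follow the definitions quoted in the text, and the OER function encodes the A-killed to B-killed ratio used to score the engagements.

```python
# Engagement metrics from the text, as code. A positive lethal time favors
# the TDG; a negative value favors AML. Dictionary field names are
# illustrative, not from the paper.

def lethal_time(tdg, aml):
    """LT = (gun diff)/2 + 2*(tail-aspect diff) + (all-aspect diff)."""
    return ((tdg["gun"] - aml["gun"]) / 2
            + 2 * (tdg["tail"] - aml["tail"])
            + (tdg["all_aspect"] - aml["all_aspect"]))

def delta_tof(tdg, aml):
    """Delta time-on-offense: total TDG weapons-lock time minus AML's."""
    return sum(tdg.values()) - sum(aml.values())

def overall_exchange_ratio(a_killed, b_killed):
    """OER = number of A airplanes killed / number of B airplanes killed."""
    return a_killed / b_killed

tdg_times = {"gun": 6.0, "tail": 4.0, "all_aspect": 10.0}
aml_times = {"gun": 2.0, "tail": 1.0, "all_aspect": 5.0}
print(lethal_time(tdg_times, aml_times))  # 4/2 + 2*3 + 5 = 13.0
print(delta_tof(tdg_times, aml_times))    # 20 - 8 = 12.0
print(overall_exchange_ratio(9, 6))       # 1.5
```

Note that LT weights tail-aspect lock time twice as heavily as all-aspect time and halves gun time, so it rewards sustained positions of advantage rather than fleeting snapshots.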
The set of nine initial conditions (fig. 14) favors the A airplane.

Figure 14. EIDETICS INITIAL CONDITIONS (2 NM separation, 540 kts airspeed, 15,000 ft altitude).

The B airplane has five neutral starting positions (runs 3, 5, 6, 7, and 9), one offensive starting position (run 8), and three defensive starting positions (runs 1, 2, and 4). There is a 2-nautical-mile separation between the opponents, and each airplane starts at an altitude of 15,000 feet and an airspeed of 540 knots. All of the engagements were run for 60 seconds. The scoring metric used was an Overall Exchange Ratio (OER), defined as the number of A airplanes killed divided by the number of B airplanes killed. The Eidetics study was conducted using a modified version of the AASPEM program and produced an OER of ≈ 0.72; the OER was less than 1.0 because the set of initial conditions is non-symmetric. In the first set of engagements the AML´ program was flown against itself and produced an OER of 0.75.

Figure 15. AML´ vs AML´ TOF.

In the second set of engagements the TDG was used to control the B airplane and achieved an OER of 1.50, a 100 percent improvement. The test results (figs. 15 and 16) clearly show the superior performance of the TDG system. It is also interesting to note that the maximum OER Eidetics achieved by modifying aircraft performance characteristics was ≈ 0.85 [Eidetics 1989].

Figure 16. TDG vs AML´ TOF.

The DMS runs were conducted using the research pilot with the most DMS flight time as the opponent of the DMS-TDG. The pilot flew against the set of initial conditions three times, providing a total of 27 runs. TOF data for the DMS runs is not available at this time. The OER for the set of 27 runs was 0.83. As stated earlier, studies done in the lab have shown that the reduced set of trial maneuvers used by the DMS-TDG cannot fine-track an opponent as effectively as the expanded set used by the TDG.
The reduced set of trial maneuvers used by DMS-TDG may account for most of the performance difference between the TDG and DMS-TDG.

FUTURE WORK

Several enhancements to the existing TDG system are planned. The maneuver selection logic will be expanded to replace the use of the trial maneuvers for modes of operation where conventional guidance algorithms provide better performance. This change to the logic and selection module will improve the TDG's ability to track its opponent. Initial lab results have shown that the development of mode-specific maneuver sets will increase system efficiency by reducing the number of maneuvers evaluated for some MO's. The development of logic for two-vs-one engagements is underway. The third aircraft will be dynamically allocated to either the TDG or the opponent at the start of each run. This feature will allow researchers to evaluate the TDG in both two-vs-one and one-vs-two engagements. A system for connecting the Symbolics workstation directly to the DMS real-time computing facilities is also being investigated. The development of such a link would allow the full TDG system to be tested in the DMS against human pilots.

The TGRES system presents an excellent opportunity to evaluate the use of AI programming techniques and knowledge-based systems in a real-time environment. It also clearly shows that the maneuver selection and scoring techniques developed in the late 1960's and early 1970's cannot perform well in the modern tactical environment and are not well suited for evaluating agile aircraft. Figure 17 shows many of the changes in the tactical and simulation environments since the original AML tactical decision generation logic was developed.
The use of KBS and AI programming techniques in developing the TDG has allowed a complex tactical decision generation system to be developed that addresses the modern combat environment and agile aircraft in a clear and concise manner.

Figure 17. 1968 AML vs 1989 TDG.
1968: heat-seeking weapons dominate the tactical situation; limited computing and modeling resources; short-range radar; short-range weapons; 1 vs 1.
1989: all-aspect weapons dominate the tactical situation (longer range, fire and forget, ...); better computing and modeling resources; long-range radar; long-range weapons; 2 vs 1, M vs N; supermaneuverable aircraft, point-and-shoot capability.

CONCLUDING REMARKS

A KBS TDG is being developed to study WVR air combat engagements. The system incorporates modern airplane simulation techniques, sensors, and weapons systems. The system was developed using several concepts first outlined in the AML program originally developed for use in the LaRC DMS. An updated AML system is being used as a baseline to assess the functional and performance tradeoffs between a conventionally coded system and the AI-based system. Test results have shown that the AI-based TDG system has performed better than AML´ in both the TMS and the DMS. The use of a KBS SA module and MO's allows the TDG to more accurately represent the complex decision making process carried out by a pilot. The use of a more extensive set of trial maneuvers and a KBS TC module allows the TDG to fine-track the opponent more effectively than AML´. The KBS decision generation logic has proved to be much easier to modify than the AML´ FORTRAN source code. The ability to integrate the TDG into the DMS offers a unique opportunity to evaluate the performance of the AI-based TDG software in a real-time tactical environment against human pilots.

REFERENCES

1. Burgin, G. H.; et al.: An Adaptive Maneuvering Logic Computer Program for the Simulation of One-on-One Air-to-Air Combat. Vol. I and II. NASA CR-2582, CR-2583, 1975.
2. Burgin, G. H.:
Improvements to the Adaptive Maneuvering Logic Program. NASA CR-3985, 1986.
3. Burgin, G. H.; and Sidor, L. B.: Rule-Based Air Combat Simulation. NASA CR-4160, 1988.
4. Hankins III, W. W.: Computer-Automated Opponent for Manned Air-to-Air Combat Simulations. NASA TP-1518, 1979.
5. Kerchner, R. M.; et al.: The TAC Brawler Air Combat Simulation Analyst Manual (Revision 3.0). DSA Report #668.
6. Buttrill, C. S.; et al.: Draft NASA TM, 1989.
7. Taylor, Robert T.; et al.: Simulated Combat for Advanced Technology Assessments Utilizing The Adaptive Maneuvering Logic Concepts. NASA Order no. L-24468C, Coastal Dynamics Technical Report No. 87-001.
8. McManus, John W.; Goodrich, Kenneth H.: Draft NASA TM, 1989.
9. Goodrich, Kenneth H.; McManus, John W.: AIAA Paper #...
Artificial Intelligence techniques: An introduction to their use for modelling environmental systems

Serena H. Chen, Anthony J. Jakeman*, John P. Norton
Integrated Catchment Assessment and Management (iCAM), Fenner School of Environment and Society, Building 48a, The Australian National University, Canberra, ACT 0200, Australia

Available online 20 January 2008

Abstract

Knowledge-based or Artificial Intelligence techniques are used increasingly as alternatives to more classical techniques to model environmental systems. We review some of them and their environmental applicability, with examples and a reference list. The techniques covered are case-based reasoning, rule-based systems, artificial neural networks, fuzzy models, genetic algorithms, cellular automata, multi-agent systems, swarm intelligence, reinforcement learning and hybrid systems.
© 2008 IMACS. Published by Elsevier B.V. All rights reserved.

Keywords: Case-based reasoning; Cellular automata; Multi-agent; Environmental modelling

1. Introduction

Use of Artificial Intelligence (AI) in environmental modelling has increased with recognition of its potential.
AI mimics human perception, learning and reasoning to solve complex problems. This paper describes a range of AI techniques: case-based reasoning, rule-based systems, artificial neural networks, genetic algorithms, cellular automata, fuzzy models, multi-agent systems, swarm intelligence, reinforcement learning and hybrid systems. Other arguably AI techniques such as Bayesian networks and data mining [21,148] are not discussed.

2. Case-based reasoning

2.1. Description

Case-based reasoning (CBR) solves a problem by recalling similar past problems [57] assumed to have similar solutions. Numerous past cases are needed to adapt their solutions or methods to the new problem. CBR recognises that problems are easier to solve by repeated attempts, accruing learning. It involves four steps [1] (Fig. 1): (1) retrieve the most relevant past cases from the database; (2) use the retrieved case to produce a solution of the new problem; (3) revise the proposed solution by simulation or test execution; and (4) retain the solution for future use after successful adaptation.

* Corresponding author. Tel.: +61 2 6125 4742; fax: +61 2 6125 8395. E-mail address: tony.jakeman@.au (A.J. Jakeman).
0378-4754/$32.00 © 2008 IMACS. Published by Elsevier B.V. All rights reserved. doi:10.1016/j.matcom.2008.01.028
S.H. Chen et al. / Mathematics and Computers in Simulation 78 (2008) 379–400

Fig. 1. The CBR four-step process (adapted from [1]).

Retrieval recognises either syntactical (grammatical structure) or semantic (meaning) similarity to the new case. Syntactic similarities tend to be superficial but readily applied. Semantic matching according to context is used by advanced CBR [1]. The main retrieval methods are nearest-neighbour, inductive and knowledge-guided [136]. Nearest-neighbour retrieval finds the cases sharing most features with the new one and weights them by importance.
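As a rough sketch of weighted nearest-neighbour retrieval, consider the following; the feature names, weights and cases are invented for illustration, not taken from [136]:

```python
# Illustrative weighted nearest-neighbour case retrieval.
# The case base, features and weights below are hypothetical.

def similarity(case, query, weights):
    """Weighted fraction of matching features between a stored case
    and the new problem (1.0 means identical on all weighted features)."""
    total = sum(weights.values())
    score = sum(w for f, w in weights.items() if case[f] == query[f])
    return score / total

case_base = [
    {"soil": "clay", "slope": "steep", "cover": "forest", "solution": "terracing"},
    {"soil": "sand", "slope": "flat",  "cover": "crop",   "solution": "drip irrigation"},
]
weights = {"soil": 2.0, "slope": 1.0, "cover": 1.0}  # feature importance

query = {"soil": "clay", "slope": "flat", "cover": "forest"}
best = max(case_base, key=lambda c: similarity(c, query, weights))
print(best["solution"])  # retrieve step: this solution is then reused/adapted
```

Choosing the weights is the crux, as the text notes next.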
Determining the weight is the biggest difficulty [166]. The method overlooks the fact that any feature's importance, influenced by other features, is case-specific [12]. Retrieval time increases linearly with database size, so the method is time-consuming [166]. The inductive method decides which features discriminate cases best [166]. The cases are organised in a decision tree according to these features, reducing retrieval time. However, the method demands a case database of reasonable size and quality [136]. Knowledge-guided retrieval applies existing knowledge to identify the important features in each case, assessing all cases independently. It is expensive for large databases and thus often used with others [166].

After retrieval, CBR adapts past solutions to the new problem. Adaptation is structural or derivational [166]. Structural adaptation creates a solution to the new problem by modifying the solution of the past case; derivational adaptation applies the algorithms, methods or rules used in the past case to the new case. The proposed solution is then evaluated and revised if necessary. After it is confirmed, the solution is stored in the database. Redundant cases can be removed and existing cases combined [57].

2.2. Discussion

By updating the database, a CBR system continually improves its reasoning capability and accuracy and thus performance. CBR can handle large amounts of data and multiple variables. It organises experience efficiently. However, CBR cannot draw inferences about problems with no similar past cases. Also, CBR is a black-box approach offering little insight into the system and processes involved. This is not a drawback for complex processes where understanding is neither possible nor necessary.

CBR can be employed for diagnosis, prediction, control and planning [57]. Environmental applications include decision support [90,121], modelling estuarine behaviour [126], planning fire-fighting [9], managing
wastewater treatment plants [138,137,132,164], monitoring air quality [86], minimising environmental impact through chemical process design [96] and weather prediction [139].

Box 1. An application of case-based reasoning. "Constructed wetlands: performance prediction by case-based reasoning." [101]

Lee et al. [100] tested the efficiency of constructed wetland filters by analysing the quality of water fed through them. Half the filters received inflow contaminated with heavy metals. CBR was applied [101] to improve water-quality monitoring and interpretation. Biochemical oxygen demand (BOD) and suspended solids (SS) concentrations are water-quality indicators commonly used for constructed wetlands. Measuring them is expensive, time-consuming and labour-intensive. To reduce costs, Lee et al. [101] applied CBR to predict BOD and SS concentrations of treated samples. It stored past cases with up to six variables, turbidity, conductivity, redox potential, outflow water temperature, dissolved oxygen and pH, selected for their predictive potential for BOD and SS, cost-effectiveness and ease of use. Using statistical equations, local similarities between each past case and the new problem were calculated with respect to one variable and global similarity with respect to all variables. The three to five past cases ranked highest by global similarity were selected. The target variables were predicted by combining their values for those cases. The local similarity of variable i for past case c and problem case p is

l_i = f(|(V_ip − V_ic) / MV_i|)

where V_ip and V_ic are the values of variable i for the problem case and past case, respectively, MV_i is the mean of variable i in the case base; |(V_ip − V_ic)/MV_i| is the local difference; and f maps local difference to local similarity. Global similarity is

g = Σ_i (l_i · w_i) / Σ_i w_i,  i = 1, 2, ..., n

where n variables represent a case and w_i is the weight of variable i. The proportion P_j of the prediction from past case j is

P_j = g_j / g_T,  j = 1, 2, 3, 4, 5

and g_T is the sum of the global
similarities of the 3–5 selected cases. Prediction P is

P = Σ_j (P_j × TV_j)

where TV_j is the target variable of past case j.

Outflow BOD and SS were successfully predicted by CBR, with an 85% success rate in predicting whether water samples complied with regulatory requirements. Constructed wetlands are considered hard to model because of their highly complex biochemical processes. However, this study showed that CBR can be applied to such systems.

3. Rule-based systems

3.1. Description

Rule-based systems (RBS) solve problems by rules derived from expert knowledge [74]. The rules have condition and action parts, if and then, and are fed to an inference engine, which has a working memory of information about the problem, a pattern matcher and a rule applier. The pattern matcher refers to the working memory to decide which rules are relevant, then the rule applier chooses what rule to apply. New information created by the action (then-) part of the rule applied is added to the working memory and the match-select-act cycle between working memory and knowledge base repeated until no more relevant rules are found [118].

The facts and rules are imprecise. Uncertainty can be incorporated in RBS by approaches such as subjective probability theory, Dempster-Shafer theory, possibility theory, certainty factors and Prospector's subjective Bayesian method, or qualitative approaches such as Cohen's theory of endorsements [118]. They assign to the facts and rules uncertainty values (probabilities, belief functions, membership values) given by human experts.

There are two rule systems, forward and backward chaining [167]. Forward chaining is data-driven: from the initial facts it draws conclusions using the rules.
Backward chaining is goal-driven: beginning with a hypothesis, it looks for rules allowing it to be sustained. That is, forward chaining discovers what can be derived from the data, backward chaining seeks justification for decisions [56].

3.2. Discussion

RBS are easy to understand, implement and maintain, as knowledge is prescribed in a uniform way, as conditional rules. However, the solutions are generated from established rules and RBS involve no learning. They cannot automatically add or modify rules. So a rule-based system can only be implemented if comprehensive knowledge is available [41], and its application is quite limited. RBS are well suited to problems where experts can articulate decisions confidently and where variables interact little. They may be difficult to scale up, as interactions then emerge. Ecological systems, with complex interactions and processes often not well understood, do not favour RBS. RBS are often used in narrower areas such as plant and animal identification or disease and pest diagnosis [107,177,52].
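The match-select-act cycle of forward chaining described above can be sketched as follows; the facts and rules are invented for illustration (loosely echoing the pest-diagnosis applications just mentioned):

```python
# Minimal forward-chaining sketch of a rule-based system.
# Rules are (condition set, fact added by the then-part); all hypothetical.

rules = [
    ({"larva affected", "larva mummified"}, "chalkbrood suspected"),
    ({"chalkbrood suspected", "cappings scattered"}, "diagnosis: chalkbrood"),
]

def forward_chain(facts, rules):
    """Repeatedly fire every rule whose if-part is satisfied by working
    memory, adding the then-part, until no rule adds new information."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition <= facts and action not in facts:
                facts.add(action)   # then-part extends working memory
                changed = True
    return facts

memory = forward_chain({"larva affected", "larva mummified",
                        "cappings scattered"}, rules)
print("diagnosis: chalkbrood" in memory)  # True
```

Note the fixed rule set: the system can derive new facts but, as the discussion says, cannot add or modify rules itself.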
RBS can also be employed as an assessment tool, e.g. in evaluating regional environments [91] or assessing the impact of water-regime changes on wetland functions [84]. Other studies have applied elements of RBS with other modelling techniques (Section 11) in landscape-change modelling based on sea-level-rise scenarios [23] and assessing the environmental life cycle of alternative energy sources [158].

Box 2. An application of rule-based systems. "A diagnostic expert system for honeybee pests." [106]

Mahaman et al. developed a rule-based system to diagnose honeybee pests and recommend treatments. Pests (diseases, insects, mites, birds and mammals) may reduce the quantity and quality of honey production. Threats to bee health must be identified and managed. The first step was to learn from the literature and experts about pest symptoms. The results were converted to conditional rules by backward chaining, specifying a goal then seeking facts to support it. The system asks the user questions from the rules until a diagnosis is reached, e.g.:

• If the affected stage of the bee is the larva
• and the colour of the infected larva is black or white
• and the infected larva appears mummified
• or the cell cappings appear scattered
• then chalkbrood disease confidence = 80%.

The user is shown images of the symptoms and given possible control measures. Mahaman et al. [106] checked diagnosis by the expert system with human experts. The results showed that the system was effective, with performance close to that of a human expert.

4. Artificial neural networks

4.1. Description

Artificial neural networks (ANNs) employ a caricature of the way the human brain processes information. An ANN has many processing units (neurons or nodes) working in unison. They are highly interconnected by links (synapses) with weights [170,133,5]. The network has an input layer, an output layer and any number of hidden layers. A neuron is linked to all neurons in the next layer [71], as
shown in Fig. 2. Neuron x has n inputs and one output

y(x) = g(Σ_{i=0}^{n} w_i x_i)

where (w_0, ..., w_n) are the input weights and g is the non-linear activation function [119], usually a step (threshold) function or sigmoid (Fig. 3). The step-function output is y = 1 if x ≥ θ, 0 if x < θ. The sigmoid function, more commonly used, is asymptotic to 0 and 1 [83] and antisymmetric about (0, 0.5):

g(x) = 1 / (1 + e^{−βx}),  β > 0

ANNs may be feedforward (the commonest) or feedback. Information flow is unidirectional in feedforward ANNs, with no cycles, but in both directions in feedback ANNs so they have cycles, by which their state evolves to equilibrium [151].

Fig. 2. An example of an artificial neural network [134].

Fig. 3. (a) Step function and (b) sigmoid function.

Fig. 4. Seven categories of tasks ANN can perform [83].

ANNs require few prior assumptions, learning from examples [133] by adjusting the connection weights. Learning may be supervised or unsupervised. Supervised learning gives the ANN the correct output for every input pattern. The weights are varied to minimise error between ANN output and the given output. One form of supervised learning, reinforcement learning, tells the ANN if its output is right rather than providing the correct value [170]. Reinforcement learning is discussed in Section 10. Unsupervised learning gives the ANN several input patterns. The ANN then explores the relations between the patterns and learns to categorise the input [83]. Some ANNs combine supervised and unsupervised learning.

4.2. Discussion

ANNs are useful in solving data-intensive problems where the algorithm or rules to solve the problem are unknown or difficult to express [178]. The data structure and non-linear computations of ANNs allow good fits to complex, multivariable data. ANNs process information in parallel and are robust to data errors. They can generalise, finding relations in imperfect data so long as they do not have enough neurons to overfit data imperfections. A disadvantage of
ANNs is that they are uninformative black-box models and thus unsuitable for problems requiring process explanation. If an ANN fails to converge, there is no means to find out why.

ANNs can be applied to seven categories of problems [83] (Fig. 4): pattern classification, clustering, function approximation, prediction, optimisation, retrieval by content and process control. Pattern classification assigns an input pattern to one of the pre-determined classes, e.g. land classification from satellite imagery [140] or sewage odour classification [122]. Clustering is unsupervised pattern classification, e.g. of input patterns to predict ecological status of streams [163]. Function approximation, also called regression, generates a function from a given set of training patterns, e.g. modelling river sediment yield [33] or catchment water supply [79], predicting ozone concentration [147], modelling leachate flow rate [88] or estimating the nitrate distribution in groundwater [8]. Prediction estimates output from previous samples in a time series, e.g. of weather [95], air quality [7,145] or water quality [178]. Optimisation maximises or minimises a cost function subject to constraints, e.g. calibrating infiltration equations [82]. Retrieval by content recalls memory, even if the input is partial or distorted, e.g. producing water-quality proxies from satellite imagery [129]. An example of process control is engine speed control, keeping speed near-constant under varying load torque by changing throttle angle [83].

Box 3. An example of an application of artificial neural networks. "Use of ANNs for modelling cyanobacteria Anabaena spp. in the River Murray, South Australia."

Maier et al. [108] used ANNs to model the occurrence of cyanobacteria Anabaena spp. in the River Murray at Morgan, South Australia. This species group, the commonest in the lower Murray, is of great concern as Adelaide gets its water from there. Other studies had shown failure of process-based and traditional
statistical approaches to predict the size and timing of incidence of algae species. ANNs seemed well suited to modelling ecological data, not requiring the probability distribution of the input data. The concentrations of Anabaena from 1985/1986 to 1992/1993 were tested against eight weekly variables: colour, turbidity, temperature, flow, total phosphorus, soluble phosphorus, oxidised nitrogen and total iron. The model included time lags of up to 21 weeks. Various network geometries were examined. The number of hidden layer nodes was limited to ensure the ANN could approximate a wide range of continuous functions yet avoid overfitting the training data:

N_H ≤ 2 N_I + 1;  N_H ≤ N_TR / (N_I + 1)

where N_H is the number of hidden layer nodes, N_I is the number of inputs and N_TR is the number of training samples (here 365). Training data were from 1985/1986 to 1991/1992. Earlier trials had determined the optimum learning rate and number of training samples. After training, data for November 1992 to April 1993 tested the network's ability to provide 4-week forecasts. Sensitivity analyses were conducted on the best models to determine the relative strength of relations between the input and output variables.

The ANN model indicated that flow, temperature and colour had most influence on the Anabaena spp. population. Addition of nutrients, iron and turbidity did not significantly improve the forecasts. The ANN successfully forecast the major peak in Anabaena spp. concentration in the River Murray, although it missed a minor peak early in the test period. It was suggested that the dataset used for model calibration contained insufficient data on that type of event. Maier et al. [108] concluded that ANNs were a useful tool for modelling cyanobacteria incidences in rivers.

5. Genetic algorithms

5.1. Description

A genetic algorithm (GA) is a search technique mimicking natural selection [24]. The algorithm evolves until it satisfactorily solves the problem, through the fitter solutions in a population surviving and passing their traits to
offspring which replace the poorer solutions. Each potential solution is encoded, for example as a binary string, called a chromosome. Successive populations are known as generations [23].

The initial population G(0) is generated randomly. Thereafter G(t) produces G(t+1) through selection and reproduction [24]. A proportion of the population is selected to breed and produce new chromosomes. Selection is according to fitness of individual solutions, i.e. proximity to a perfect solution [26], most often by roulette selection and deterministic sampling. Roulette selection randomly selects a parent with probability calculated from the fitness f_i of each individual by [24]:

F_i = f_i / Σ_i f_i

Deterministic sampling assigns a value C_i = RND(n F_i) + 1 to organism i of the n organisms in the population, where RND rounds its argument to the nearest integer. Each organism is selected as a parent C_i times [24].

Reproduction is by genetic crossover and mutation. Crossover produces offspring by exchanging chromosome segments from two parents [26]. Mutation randomly changes part of one parent's chromosome. This occurs infrequently and introduces new genetic material [24]. Although mutation plays a smaller part than crossover in advancing the search, it is critical in maintaining genetic diversity. If diversity is lost, evolution is retarded or may stop [76].
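The selection, crossover and mutation operators just described can be sketched as a minimal generational GA; the chromosome length, population size and "count the ones" fitness function below are invented for illustration:

```python
import random

# Minimal generational GA sketch: roulette selection (F_i = f_i / sum f),
# one-point crossover, rare mutation. All parameters are hypothetical.

random.seed(1)
L, N, GENS = 12, 30, 60          # chromosome length, population, generations

def fitness(chrom):              # proximity to the perfect all-ones string
    return sum(chrom) + 1e-9     # tiny offset keeps selection weights valid

def roulette(pop):               # pick a parent with probability F_i
    total = sum(fitness(c) for c in pop)
    return random.choices(pop, weights=[fitness(c) / total for c in pop])[0]

def crossover(a, b):             # exchange chromosome segments at one point
    k = random.randrange(1, L)
    return a[:k] + b[k:]

def mutate(chrom, rate=0.01):    # infrequent random bit flips
    return [1 - g if random.random() < rate else g for g in chrom]

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(N)]
for _ in range(GENS):            # each generation fully replaces the last
    pop = [mutate(crossover(roulette(pop), roulette(pop))) for _ in range(N)]

best = max(pop, key=fitness)
print(sum(best))                 # typically close to L after evolution
```

Replacing the whole population each generation makes this a simple (generational) GA in the sense discussed next; a steady-state variant would instead replace only the least fit members.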
In steady-state GAs, offspring generated by the genetic operators replace less fit members, resulting in higher average fitness. Simple or generational algorithms replace each entire generation [24]. Selection and reproduction are repeated until a stopping criterion is met [24], e.g. all organisms are identical or very similar, a given number of evaluations has been completed, or maximum fitness has been reached; evolution no longer yields better results.

5.2. Discussion

GAs are computationally simple and robust, and balance load and efficacy well [67]. This partly results from only examining fitness, ignoring other information such as derivatives [94]. GAs treat the model as a black box, an advantage when detailed information is unavailable. An important strength of GAs is implicit parallelism; a much larger number of code sequences are indirectly sampled than are actually tested by the GA [76]. Unlike most stochastic search techniques, which adjust a single solution, GA keeps a population of solutions. Maintaining several possible solutions reduces the probability of reaching a false (local) optimum [67]. Therefore GAs can be very useful in searching noisy and multimodal relations. However, the latter may take a large computation time. In most cases, GAs use randomisation in selection. They avoid picking only the best individual and thus prevent the population from converging to that individual. However, premature convergence on a local optimum can occur if the GA magnifies a small sampling error [63]. If a very fit individual emerges early and reproduces abundantly, early loss of diversity may lead to convergence on that local optimum.

GAs often optimise model parameters or resource management. Some application examples include modelling species distribution [153], air quality forecasting [87], estimating soil bulk density [38], calibrating water-quality models [127], finding the best solution in an integrated management system [157] and water management [80].

6. Cellular automata

6.1. Description

Cellular automata are
dynamic models, discrete in space, time and state. They consist of a regular lattice of cells which interact with their neighbours. The cell states are synchronously updated in time according to local rules, which calculate the new state of a cell at time t+1 using its state and those of neighbouring cells at time t [34]. The neighbours are all those in given proximity, as illustrated in Fig. 5. The simplest 'elementary' cellular automaton (CA) is a linear chain of n cells. The cell states in a one-dimensional CA are

a_i(t+1) = f(a_{i}(t))

Box 4. An example of an application of genetic algorithms. "Using genetic algorithms to optimise model parameters." [165]

Wang [165] used a GA to calibrate a conceptual rainfall-runoff model of the Bird Creek catchment, Oklahoma. The model had nine parameters, dealing with water balance and routing. The GA optimised the parameters by minimising the sum of squares of differences between computed and observed daily discharges. The GA had a population size of n = 100 potential parameter solutions. The chromosomes had 7 bits, allowing 128 values. Selection assigned probability p_j to point j:

p_j = p_1 + (j − 1)(p_n − p_1) / (n − 1)

The average probability was 1/n, the best point's was 1.5 times the average and the probabilities summed to 1 over all points. After ranking, the best point is j = n, with the largest probability value p_n. The worst is j = 1, with smallest probability value, p_1. Two points A and B are selected at random according to the probability distribution. Two bit positions, k_1 and k_2, randomly selected in the chromosome coding, determine where crossover occurs. A new point is produced by taking the values of bits k_1 to k_2 − 1 of A's coding and values of bits from k_2 to the end and from 1 to k_1 − 1 of B's coding. Mutation of new points occurs with probability 0.01.

The GA generated 10 runs with randomly selected initial populations. Each run was stopped after 10,000 fitness evaluations. For all runs, the best objective function values were very
close to the global minimum. When the search converged on a local optimum, the local optimum had a similar objective value to that of the global optimum. Wang [165] concluded that GAs are capable and robust.

where a_i is the state of cell i at time t and f(a_{i}(t)) is a function of the states of its neighbouring cells [150]. If the neighbourhood is the immediately adjacent cells, then

f(a_{i}(t)) ≡ f(a_{i−1}(t), a_i(t), a_{i+1}(t)).

For a 5-neighbour square two-dimensional CA with cells (i, j+1), (i+1, j), (i, j−1), (i−1, j) as the neighbours of cell (i, j) [150],

a_{i,j}(t+1) = f(a_{i,j}(t), a_{i,j+1}(t), a_{i+1,j}(t), a_{i,j−1}(t), a_{i−1,j}(t))

As the cells are updated, the disordered initial state of the CA usually evolves [150] into one of four patterns I–IV illustrated in Fig. 6: homogeneous, simple separated periodic structures, chaotic aperiodic patterns or a complex pattern of local structures. These patterns correspond to four classes of CA described by Wolfram [168]. Classes I, II and III are analogous to the limit points, limit cycles and chaotic attractors observed in continuous dynamical systems. Class IV CAs have no direct analogue, exhibiting complex behaviour neither completely repetitive nor completely random.

6.2. Discussion

One of the main problems with cellular automata concerns boundary conditions. In the periodic condition, the cells on one edge are made neighbours with those on the opposite edge, producing a torus. Other possibilities are reflecting and absorbing boundary conditions, which assume no off-lattice neighbours and assign the border states unique values or rules. Unfortunately these boundary conditions are unlikely to reflect real life. The problem could be avoided by making the lattice significantly larger than the area under study, at a higher cost in resources and computation [62].

Cellular automata are simple mathematical models that can simulate complex physical systems. They can incorporate interactions and spatial variations, and also spatial expansion with time [54]. For some parameter values, they are
very sensitive to the initial state and transition functions. Therefore, they have limited capacity to make precise predictions [81], but are often used to understand real environments. Examples of CA applications include modelling population dynamics [51,152], animal migration [143], unsaturated flow [61], landscape changes [55], debris flow [39], earthquake activity [65], lava flow [37] and fire spread [75,89]. In these examples it is clear how the state of a cell is influenced by the previous state of its neighbours.

Fig. 5. Common neighbourhood configurations: (a) neighbourhood {−1, 0, 1}, (b) 5-neighbour square, also referred to as 'von Neumann neighbourhood', and (c) 9-neighbour square, also referred to as 'Moore neighbourhood' (adapted from [68]).

Fig. 6. The four classes of evolved patterns in CA: (I) homogeneous, (II) periodic, (III) chaotic, and (IV) complex pattern of localised structures [114].

Box 5. An example of an application of a cellular automaton. "Simulation of vegetable population dynamics based on Cellular Automata." [10]

Bandini and Pavesi [10] developed a CA-based model that simulated the population dynamics of robiniae, oak and pine trees on the foothills of the Italian Alps. A two-dimensional CA was created, with individual cells representing portions of a given area. The state of each cell was defined by: (i) the presence or absence of a tree, (ii) the amount of resources (water, light, N and K) present, and (iii) if a tree was present, its features (species, size and resource needs).

A cell can host a tree if the resources are suitable. The sprouting of a tree also requires that the cell contains a seed and no other tree. Seeds can be introduced into the cell from fruiting trees in neighbouring cells. As the tree grows, it has a greater resource requirement, obtainable from neighbouring cells. The development of this model involved defining rules describing tree sustenance and reproduction, resource production and flow, and the influence of a tree on neighbouring
cells. Two neighbourhood configurations, 'von Neumann' (5-neighbour) and 'Moore' (9-neighbour), were considered. The initial parameters of each cell are defined by the user. As the model runs, the system evolves step by step. Bandini and Pavesi [10] found that the simulations were qualitatively similar to real cases.

7. Fuzzy systems

7.1. Description

Fuzzy systems (FS) use fuzzy sets to deal with imprecise and incomplete data. In conventional set theory an object is a member of a set or not, but fuzzy set membership takes any value between 0 and 1. Thus fuzzy models can describe vague statements as in natural language [117]. Roberts [131] gives an example of a FS, a vegetation model assigning plant communities to community types, with fuzzy membership indicating the degree to which the vegetation meets each type definition. The membership values for each plant community sum, over all community types, to 1. For example, plant community χ had fuzzy memberships {μ_i}: μ(PIPO/PIPO) = 0.2, μ(PIPO/PSME) = 0.3, μ(PIPO/PIPU) = 0.1 and μ(PIPO/ABCO) = 0.4 [131].

Fuzzification transforms exact (crisp) input values into fuzzy memberships [93]. Fuzzy models are built on prior rules, combined with fuzzified data by the fuzzy inference machine. The resulting fuzzy output is transformed to a crisp number (defuzzification [142]). Techniques include maximum, mean-of-maxima and centroid defuzzification. Fig. 7 shows the components of a fuzzy system.

7.2. Discussion

Human reasoning handles vague or imprecise information. The ability of fuzzy systems to handle such information is one of its main strengths over other AI techniques, although they are mostly easier to understand and apply. One of the main difficulties in developing a fuzzy system is determining good membership functions. Fuzzy systems have no learning capability or memory [64]. To overcome such limitations, fuzzy modelling is often combined with other techniques to form hybrid systems, e.g. with neural networks to form neuro-fuzzy systems (see Section 11).

Fuzzy systems handle incomplete or imprecise data in
applications including function approximation, classification/clustering, control and prediction. Instances include modelling vegetation dynamics [131], estimating soil hydraulic properties [58], modelling macroinvertebrate habitat suitability [162], evaluating habitat suitability for riverine forests