Artificial Intelligence techniques
ARTIFICIAL INTELLIGENCE——人工智能1 Artificial intelligence (AI) is, in theory, the ability of an artificial mechanism to demonstrate some form of intelligent behavior equivalent to the behaviors observed in intelligent living organisms. Artificial intelligence is also the name of the field of science and technology in which artificial mechanisms that exhibit behavior resembling intelligence are developed and studied.2 The term AI itself, and the phenomena actually observed, invite --- indeed demand --- philosophical speculation about what in fact constitutes the mind or intelligence. These kinds of questions can be considered separately, however, from a description of the various endeavors to construct increasingly sophisticated mechanisms that exhibit “intelligence.”3 Research into all aspects of AI is vigorous. Some concern exists among workers in the field, however, that both the progress and expectations of AI have been overstated. AI programs are primitive when compared to the kinds of intuitive reasoning and induction of which the human brain or even the brains of much less advanced organisms are capable. AI has indeed shown great promise in the area of expert systems --- that is, knowledge-based expert programs --- but while these programs are powerful when answering questions within a specific domain, they are nevertheless incapable of any type of adaptable, or truly intelligent, reasoning.4 Examples of AI systems include computer programs that perform such tasks as medical diagnoses and mineral prospecting. Computers have also been programmed to display some degree of legal reasoning, speech understanding, vision interpretation, natural-language processing, problem solving, and learning. Although most of these systems have proved valuable either as research vehicles or in specific, practical applications, most of them are also still very far from being perfected.5 CHARACTERISTICS OF AI: No generally accepted theories have yet emerged within the field of AI, owing in part to the fact that AI is a very young science. It is assumed, however, that on the highest level an AI system must receive input from its environment, determine an action or response, and deliver an output to its environment. A mechanism for interpreting the input is needed. This need has led to research in speech understanding, vision, and natural language. The interpretation must be represented in some form that can be manipulated by the machine.6 In order to achieve this goal, techniques of knowledge representation are invoked. The AI interpretation of this, together with knowledge obtained previously, ismanipulated within the system under study by means of some mechanism or algorithm. The system thus arrives at an internal representation of the response or action. The development of such processes requires techniques of expert reasoning, common-sense reasoning, problem solving, planning, signal interpretation, and learning. Finally, the system must网construct an effective response. This requires techniques of natural-language generation.7 THE FIFTH-GENERATION ATTEMPT: In the 1980s, in an attempt to develop an expert system on a very large scale, the Japanese government began building powerful computers with hardware that made logical inferences in the computer language PROLOG. (Following the idea of representing knowledge declaratively, the logic programming PROLOG had been developed in England and France. PROLOG is actually an inference engine that searches declared facts and rules to confirm or deny a hypothesis. 
A drawback of PROLOG is that it cannot be altered by the programmer.) The Japanese referred to such machines as “fifth-generation” computers.8 By the early 1990s, however, Japan had forsaken this plan and even announced that they were ready to release its software. Although they did not detail reasons for their abandonment of the fifth-generation program, U.S scientists faulted their efforts at AI as being too much in the direction of computer-type logic and too little in the direction of human thinking processes. The choice of PROLOG was also criticized. Other nations were by then not developing software in that computer language and were showing little further enthusiasm for it. Furthermore, the Japanese were not making much progress in parallel processing, a kind of computer architecture involving many independent processors working together in parallel—a method increasingly important in the field of computer science. The Japanese have now defined a “sixth-generation” goal instead, called the Real World Computing Project, that veers away from the expert-systems approach that works only by built-in logical rules.9 THE FUTURE OF AI RESEARCH: One impediment to building even more useful expert systems has been, from the start, the problem of input---in particular, the feeding of raw data into an AI system. To this end, much effort has been devoted to speech recognition, character recognition, machine vision, and natural-language processing. A second problem is in obtaining knowledge. It has proved arduous toextract knowledge from an expert and then code it for use by the machine, so a great deal of effort is also being devoted to learning and knowledge acquisition.10 One of the most useful ideas that has emerged from AI research, however, is that facts and rules (declarative knowledge) can be represented separately from decision-making algorithms (procedural knowledge). This realization has had a profound effect both on the way that scientists approach problems and on the engineering techniques used to produce AI systems. By adopting a particular procedural element, called an inference engine, development of an AI system is reduced to obtaining and codifying sufficient rules and facts from the problem domain. This codification process is called knowledge engineering. Reducing system development to knowledge engineering has opened the door to non-AI practitioners. In addition, business and industry have been recruiting AI scientists to build expert systems.11 In particular, a large number of these problems in the AI field have been associated with robotics. There are, first of all, the mechanical problems of getting a machine to make very precise or delicate movements. Beyond that are the much more difficult problems of programming sequences of movements that will enable a robot to interact effectively with a natural environment, rather than some carefully designed laboratory setting. Much work in this area involves problem solving and planning.12 A radical approach to such problems has been to abandon the aim of developing “reasoning” AI systems and to produce, instead, robots that function “reflexively”. A leading figure in this field has been Rodney Brooks of the Massachusetts Institute of Technology. These AI researchers felt that preceding efforts in robotics were doomed to failure because the systems produced could not function in the real world. 
Rather than trying to construct integrated networks that operate under a centralizing control and maintain a logically consistent model of the world, they are pursuing a behavior-based approach named subsumption architecture.13 Subsumption architecture employs a design technique called “layering,”---a form of parallel processing in which each layer is a separate behavior-producing network that functions on its own, with no central control. No true separation exists, in these layers, between data and computation. Both of them are distributed over the same networks. Connections between sensors and actuators in these systems are kept short as well. The resulting robots might be called “mindless,” but in fact they have demonstrated remarkable abilities to learn and to adapt to real-life circumstances.14 The apparent successes of this new approach have not convinced many supporters of integrated-systems development that the alternative is a valid one for drawing nearer to the goal of producing true AI. The arguments that have arisen between practitioners of the two different methodologies are in fact profound ones. They have implications about the nature of intelligence in general, whether natural or artificial。
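The engineering idea stressed in the article above, that declarative knowledge (facts and rules) can be kept separate from a generic inference engine, can be sketched in a few lines of code. The toy below is a forward-chaining rule engine written in Python; it is not PROLOG (which searches backward from a hypothesis) nor any expert system named in the text, and its facts, rules, and names are invented purely for illustration.

# Declarative knowledge: facts and if-then rules, held apart from the engine.
facts = {"has_fever", "has_rash"}
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),   # IF fever AND rash THEN suspect measles
    ({"suspect_measles"}, "recommend_isolation"),
]

def forward_chain(facts, rules):
    """Procedural knowledge: fire rules repeatedly until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

def confirms(hypothesis, facts, rules):
    """Confirm or deny a hypothesis against the declared facts and rules."""
    return hypothesis in forward_chain(facts, rules)

print(confirms("recommend_isolation", facts, rules))   # True for the toy facts above

A knowledge engineer extends only the facts and rules; the engine itself never changes, which is the division of labor the article attributes to knowledge engineering.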
Senior Three English: 40 multiple-choice questions on artificial intelligence

1. Artificial intelligence is a branch of computer science that aims to create intelligent _____.
A. machines  B. devices  C. equipments  D. instruments
Answer: A. "Machines" usually means machines; in the field of artificial intelligence it often refers to machines with a certain degree of intelligence. "Devices" generally refers to devices or apparatus; "equipments" is used as a plural of "equipment", meaning gear or materiel; "instruments" refers to instruments or tools.

2. In the field of artificial intelligence, algorithms are used to train _____.
A. models  B. patterns  C. shapes  D. forms
Answer: A. In artificial intelligence, "models" refers to models, which can be trained by algorithms; "patterns" means patterns or designs; "shapes" means shapes; "forms" means forms.

3. Deep learning is a powerful technique in artificial intelligence that uses neural _____.
A. networks  B. systems  C. structures  D. organizations
Answer: A. "Networks" means networks; in deep learning this usually refers to neural networks. "Systems" means systems; "structures" means structures; "organizations" means organizations.

4. Artificial intelligence can process large amounts of data to make accurate _____.
A. predictions  B. forecasts  C. projections  D. expectations
Answer: A.
The English for 人工智能技术 is "artificial intelligence technology".
Pronunciation: UK [ˌɑːtɪˈfɪʃl ɪnˈtelɪdʒəns tekˈnɒlədʒi]; US [ˌɑːrtɪˈfɪʃl ɪnˈtelɪdʒəns tekˈnɑːlədʒi].
Example sentences:
- Although automated grading systems for multiple-choice and true-false tests are now widespread, the use of artificial intelligence technology to grade essay answers has not yet received widespread acceptance by educators and has many critics.
- Introducing CAD and artificial intelligence technology into modular fixture design can improve production efficiency, reduce labor intensity, shorten the production preparation cycle, and speed up time to market.
- That sounds impressive enough, but the company is already working on a much more advanced version that incorporates cameras and artificial intelligence to not only detect obstacles but also determine their nature.
- Characteristics of the Java Language and Its Influence on and Promotion of AI Technology (Java语言特点及其对人工智能技术的影响和促进).
Science and Technology & Innovation (科技与创新), Issue 07, 2021. Article ID: 2095-6835(2021)07-0172-02

A Brief Analysis of the Development and Application of Artificial Intelligence Technology in the 5G Era
王君 1,4, 廖华杰 1, 宋泽生 2, 欧子龙 1, 梁薇薇 3, 廖君 1, 夏愚然 1
(1. Nanfang College of Sun Yat-sen University, Guangzhou, Guangdong 510970; 2. Nanchang University, Nanchang, Jiangxi 330027; 3. Chongqing University of Posts and Telecommunications, Chongqing 400065; 4. 广州恒通智联科技有限公司, Guangzhou, Guangdong 510630)

Abstract: With the continuous development of science and technology, artificial intelligence has moved from its initial start-up stage into today's stage of rapid growth; its main concentrated phases of development can be divided into an application-development period and a rapid-development period. With the arrival of the 5G era, the level of service in fields such as medical care, education and training, and transportation can no longer satisfy people's demand for intelligent systems. Artificial intelligence technology emerged to solve these problems and to meet the growing demand for service and efficiency. Artificial intelligence technology is a new technological science that studies and develops theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence. Current development of artificial intelligence is concentrated in areas such as face recognition, robotics, and expert systems. The combination of artificial intelligence with 5G networks has accelerated the development of the digital age. While artificial intelligence brings people many benefits, it also has shortcomings, such as insufficient maturity of the core technology, strong dependence on specific application scenarios, and the persistence of data silos and data fragmentation.

Keywords: artificial intelligence; 5G era; technological revolution; intelligent machines
CLC number: TN929.5   Document code: A   DOI: 10.15913/ki.kjycx.2021.07.073

With the development of modern high technology, artificial intelligence technology has come to occupy a dominant position; it integrates intelligent machines with machine-intelligence technology. Earlier artificial intelligence can no longer satisfy the needs of modern technological development, so new artificial intelligence techniques have been proposed and are now widely applied across many fields. It is a true combination of computers and information networks and has become a new round of technological revolution. Artificial intelligence technology is a new technological science that studies and develops theories, methods, techniques, and application systems for simulating, extending, and expanding human intelligence. It is a true combination of computers and the Internet; on the basis of computers it achieves intelligence, and it increasingly pursues intelligent machines together with high-level human-machine and brain-machine coordination and integration [1-4].
English for the Intelligence Science and Technology major
I. Vocabulary
1. Artificial Intelligence (AI)
- Definition: The theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
- Usage: "Artificial Intelligence" is often abbreviated as "AI" and can be used as a subject or in phrases like "AI technology" or "the field of AI".
- Example sentences:
- Artificial Intelligence has made great progress in recent years.
- Many companies are investing heavily in artificial intelligence research.
2. Algorithm
- Definition: A set of computational steps and rules for performing a specific task.
- Usage: Can be used as a countable noun, e.g. "This algorithm is very efficient."
- Example sentence:
- The new algorithm can solve the problem much faster.
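As a concrete companion to the "Algorithm" entry above, one classic algorithm, binary search, is written out below as a short Python sketch; the function name and sample values are purely illustrative.

def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2           # inspect the middle element
        if sorted_items[mid] == target:
            return mid                    # found it
        elif sorted_items[mid] < target:
            low = mid + 1                 # discard the lower half
        else:
            high = mid - 1                # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # prints 3

Each loop iteration halves the remaining search range, which is why the entry can say an algorithm like this one "is very efficient".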
Foreign-language literature on market research methods, with translations

1. Market Research Methods: Incorporating Social Media into Traditional Approaches
The article describes how to use social media in market research to help companies better understand consumers. The researchers combine social media with traditional quantitative and qualitative research to obtain more complete information. Data collected from social media are analyzed to study consumer behavior and preferences, as well as feedback on products or services.

2. Using Eye Tracking in Market Research: A Guide to Best Practices
This reference introduces the application of eye-tracking technology in market research. The authors point out that eye tracking can help researchers understand how consumers allocate attention and behave while browsing products or services. The article presents best practices and testing methods for eye-tracking applications suited to market research.

3. Conjoint Analysis in Marketing: New Developments with Implications for Research and Practice
This article introduces a research method known as conjoint analysis, which helps researchers understand consumers' preferences and decision processes when purchasing a product or service. According to the reference, conjoint analysis has become one of the most widely used tools in marketing. The article also reviews recent research and applications in practice, and discusses the limitations of conjoint analysis in certain situations.

4. Qualitative Market Research: An International Journal
This journal focuses on qualitative market research methods. It covers research related to identifying consumer needs, analyzing competitors, building brands, and similar topics. It emphasizes that qualitative market research can provide deep insights and a clearer understanding of products or services, helping companies make wiser marketing and business decisions. Each issue includes articles from experts in the field, along with case studies and best practices.

5. Use of Artificial Intelligence Techniques in Market Research: A Review
This reference describes how artificial intelligence techniques can be used to conduct market research.
APPLICATION OF ARTIFICIAL INTELLIGENCE (AI) PROGRAMMINGTECHNIQUES TO TACTICAL GUIDANCE FOR FIGHTER AIRCRAFTJohn W. McManusandKenneth H. GoodrichNASA Langley Research CenterMail Stop 489Hampton, Virginia 23665-5225(804)864-4037/(804)864-4009AIAA Guidance, Navigation, and Control ConferenceAugust 14-16, 1989Boston, MassachusettsAutonomous Systems and Mission PlanningABSTRACTA research program investigating the use of Artificial Intelligence (AI) techniques to aid in the development of a Tactical Decision Generator (TDG) for Within-Visual-Range (WVR) air combat engagements is discussed. The application of AI methods for development and implementation of the TDG is presented. The history of the Adaptive Maneuvering Logic (AML) program is traced and current versions of the AML program are compared and contrasted with the TDG system. The Knowledge-Based Systems (KBS) used by the TDG to aid in the decision-making process are outlined in detail and example rules are presented. The results of tests to evaluate the performance of the TDG versus a version of AML and versus human pilots in the Langley Differential Maneuvering Simulator (DMS) are presented. To date, these results have shown significant performance gains in one-versus-one air combat engagements, and the AI-based TDG software has proven to be much easier to modify than the updated FORTRAN AML programs.INTRODUCTIONThe development of all-aspect and "fire and forget" weapons has increased the complexity of the air-to-air combat environment. Modern sensors provide critical tactical information to the aircraft a range and precision that were impossible 20 years ago. This increased complexity, combined with the expanded capabilities of high performance aircraft, has changed the future of air combat engagements. The need for a modern, realistic air combat simulation that can be used to evaluate the current and future air combat environment has been well documented [Burgin 1975, 1986, 1988; Hankins 1979]. Existing tools such as the Adaptive Maneuvering Logic program (AML) [Burgin 1975, 1986, 1988], TAC Brawler [Kerchner 1985], and AASPEM have generally centered their efforts on the development and refinement of high-fidelity aircraft dynamics modeling techniques and not on the developmentand refinement of tactical decision generation logic for WVR engagements. In support of the study of superagile aircraft at Langley Research Center (LaRC) a Tactical Guidance Research and Evaluation System (TGRES, pronounced "tigress") is being developed [Goodrich 1989].Figure 1. TGRES SYSTEM.TGRES DESCRIPTIONThe TGRES system, shown in figure 1, provides a means by which researchers can develop and evaluate, in a tactically significant environment, various systems for high performance aircraft. While TGRES is aimed specifically at the development and evaluation of maneuvering strategies and advanced guidance/control systems for superagile aircraft, TGRES's modularity will make it easily adaptable to the analysis of other types of aircraft systems. TGRES is composed of three main elements--the TDG, the Tactical Maneuver Simulator (TMS) , and the DMS.The TDG is a knowledge-based guidance system designed to provide insight into the tactical benefits and costs of enhanced aircraft controllability and maneuverability throughout an expanded flight envelope (i.e. superagility). The two remaining elements of TGRES, the TMS and the DMS, provide simulation environments in which the TDG is exercised. 
The TMS simulation environment was developed using conventional computer languages on a VAXStation 3200. The TDG was developed on a Symbolics 3650 workstation. The separation of the aircraft simulation and decision logic components allows each module to be developed using hardware and programming techniques specifically designed for its function. This separation of tasks also increases the efficiency of the simulation by allowing some parallel processing. The two processes are executed as co-tasks and communicate via an EtherNet connection. (See fig. 2.)Figure 2. CURRENT HARDWARE CONFIGURATION.The user interface system consists of a color graphics package designed to replayboth TMS and DMS engagements, and a mouse sensitive representation of the TDG aircraft and its basic systems that allows the user to interact with the TDG aircraft during the execution of TMS runs. The Engagement Replay System (ERS) software is available for a VAX color workstation and a Symbolics color workstation. The ERS display, shown in figure 3, displays the two aircraft on a three-dimensional axis and has dedicated windows used to display several aircraft variables including the thrust, Mach number, and deviation angles of the two aircraft. The viewing angle for each engagement can be rotated 360°around both the X and Z axis to provide the most information to the user. The interactive TMS display includes a graphical representation of the TDG aircraft's major systems suchas engines, offensive and defensive systems, and a system status display. During the simulation run the user can enable and/or disable the aircraft's systems using the mouse sensitive display and evaluate how the changes effect the TDG's decision generation process.Figure 3. ERS DISPLAY.The final element of TGRES is the Differential Maneuvering Simulator. The DMS consistsof two 40' diameter domes located at Langley. The facility is intended for the real-time simulation of engagements between piloted aircraft. By using the TDG to drive one of the airplanes, it is possible to test the TDG against a human opponent. This feature allows the guidance logic to be evaluated against an unpredictable and adaptive opponent. A thirddome (20' in diameter) is being added to the DMS facility. This addition will allow the guidance logic to be evaluated in one-versus-two or two-versus-one scenarios, further enhancing the tactical capability of the DMS environment.THE AML PROGRAMThe TDG is being developed as a KBS incorporating some of the features first outlined in the AML program [Burgin 1975, Hankins 1979]. The AML program was selected as a baseline for several reasons, including its past performance as a real-time WVR tactical adversary in the Langley DMS and the modular design of the FORTRAN source code. The tactical decision generation method developed for the original AML program, outlined in figure 4, is a unique approach that attempts to model the goal-seeking behavior of a pilot by mapping the physical situation between the two aircraft into a finite, abstract situation space. A set of the three basic control variables (bank angle, load factor, and thrust) can be determined to maximize some performance index in the situation space [Burgin 76]. Each triplet of controlvariables defines an "elemental maneuver," and a sequence of these elemental maneuvers may form classical or "text book" air combat maneuvers.Figure 4. HOW AML WORKS.Although the logic and geometry used by AML to make tactical decisions is complex,the basic concepts it uses are simple. 
At each decision interval, the "attacking" aircraft predicts the future position and velocity of its opponent using a curve-fitting algorithm and past known positions of the opponent. The attacker then uses a set of elemental maneuvers (described above) to predict a set of positions that it can reach from its current state.The AML program forms a "situation state vector" for each trial maneuver evaluated.The vector is used to represent the responses to a set of questions about the current situation.Figure 5 shows the binary scoring method (0 = NO, 1 = YES) used to determine the value of each each cell in the vector.This vector is multiplied by a "scoring weight" vector to form a scalar product that represents the situation space value for the current maneuver. A detailed description of the trial maneuver generation and scoring process and an explanation of how the scoring weights have evolved can be found in [Burgin 1988]. The questions used to form the situation state vector were obtained from several sources including air combat maneuvering manuals, interviews with fighter pilots, and detailed analysis of the original DMS engagements. In the original version of AML, each question had a positive, non-zero weight. The questions were formulated so that a "YES" answer reflects a favorable condition, increasing the score for the maneuver. It is important to note that in the original AML research "no systematic investigation was made to optimize these weight factors; they were usually all set to one." The early AML versions [Burgin 1975; Hankins 1979] were designed to perform as a conservative opponent. The scoring rules rated offense and defense evenly and risked giving up some positional advantage to the opponent only when there was reasonable assurance the attacker would gain at least as much offsetting advantage. This conservative approach may be the product of a philosophy stated in [Burgin 1988],"The objective of the decision-making process is to derive maneuvers which will bring one's own weapons to bear on the target while at the same time minimizing exposure to the other side's weapons."This is a one-dimensional approach to the problem. It outlines a logic that handles only the neutral and aggressive cases effectively and does not recognize that there are several Modes of Operation (MO), outlined in figure 6, that a pilot may use during an engagement. In many situations when the opponent has a distinct positional advantage, the AML aircraft will perform "kamikaze" maneuvers, giving up one or more clear shots to the opponent while it maneuvers to a position of "advantage." In these situations the AML aircraft would not survive to exploit the positional advantage, having been "killed" while obtaining it.•AGGRESSIVE•DEFENSIVE•NEUTRAL•EVASIVE•EVADING OPPONENTS' "LOCK AND FIRE"•EVADING MISSILE (AAM & SAM)•GROUND / STALL EVASIONFigure 6. TDG MODES OF OPERATION.The existing trial maneuver versions of AML do a good job of getting behind an opponent, but due to the grain of the trial maneuvers, lack the ability to fine-track the opponent. Several changes were made to the AML program [Burgin 1988] to address this problem. The requirement that only the opponents positional data be passed to the algorithm was relaxed and "complete and accurate information about the the opponent's past and present states" is now provided. 
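The scalar-product scoring scheme just described is simple to sketch. The fragment below is only an illustration, in Python, of how a binary situation state vector and a scoring weight vector can rank trial maneuvers; the questions, weights, and maneuver names are hypothetical, and the real AML and TDG implementations (FORTRAN code and Lisp-style rules) are far richer.

# Illustrative only: rank trial maneuvers by the dot product of a binary
# situation state vector (0 = NO, 1 = YES) with a scoring weight vector.
QUESTIONS = ["i_can_fire", "opponent_cannot_fire", "range_closing", "energy_advantage"]
WEIGHTS = [1.0, 1.0, 1.0, 1.0]   # early AML reportedly left most weights at one

def situation_vector(predicted_state):
    """Answer each situation question with 0 or 1 for a predicted end state."""
    return [1 if predicted_state.get(q, False) else 0 for q in QUESTIONS]

def score(predicted_state, weights=WEIGHTS):
    """Scalar product of the situation state vector and the scoring weights."""
    return sum(w * a for w, a in zip(weights, situation_vector(predicted_state)))

def best_maneuver(trial_maneuvers):
    """Pick the elemental trial maneuver whose predicted outcome scores highest."""
    return max(trial_maneuvers, key=lambda m: score(m["predicted_state"]))

trials = [
    {"name": "hard_left_turn", "predicted_state": {"range_closing": True}},
    {"name": "pure_pursuit", "predicted_state": {"i_can_fire": True, "range_closing": True,
                                                 "opponent_cannot_fire": True}},
]
print(best_maneuver(trials)["name"])   # -> pure_pursuit

In the TDG described below, the same structure is retained, except that the binary answers become fuzzy responses in [0, 1] and the weight vector is swapped according to the current mode of operation.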
The 1986 version of the AML program, AML86 [Burgin 1988], also made several major changes to the tactical decision generation process, abandoning the trial maneuver concept for a rule-based approach and a set of canned "Basic Fighter Maneuvers." [Burgin 1988] contains an extensive history of the "trial maneuver" concept and a description of how the new rule-based version of the program, AML68, was developed. A "pointing" control system was also developed to aid the fine-tracking process. The pointing control system directly commands roll and pitch rates to point the aircraft's longitudinal axis at the opponent. AML86 is a first step towards a multi-dimensional approach and is similar to the decision logic incorporated in the TDG.THE DEVELOPMENT OF THE TDG SYSTEMThe development of the TDG has been a multi-stage process using the COSMIC version of AML as a starting point. The COSMIC version of AML was updated by Dynamics Engineering Incorporated (DEI) while under contract to NASA Langley. This version of AML, (DEI-AML), has a scoring module that uses a set of 15 binary questions and a fixed set of weights to evaluate the trial maneuvers.DEI installed aerodynamic data and engine characteristics provided by the Aircraft Guidance and Controls Branch (AGCB) into the AML data tables and made all changes to the AML software outlined in [Burgin, 1986]. DEI-AML was tested by AGCB to insure symmetry of the engagements given symmetric initial conditions. During the testing process several software bugs were found and corrected. A full description of the bugs and corrections are outlined in [McManus 1989]. The resulting code, dubbed AML´, was again tested for symmetry and a DMS ready version of the code, DMS-AML´ was prepared. AML´ and a DMS ready version, DMS-AML´, are being used as the baseline during development of the TDG system.Figure 7. HOW TDG WORKS.The TDG system, outlined in figure 7, currently uses the trial maneuver concept outlined in the AML program with several extensions. The original set of five to nine trial maneuvers has been expanded to include over 40 trial maneuvers. Although this is a "brute force" solution the new trial maneuvers allow the TDG to perform target tracking more effectively and improve the system's overall performance. The TDG uses an object-oriented programming approach to represent each aircraft and the current state of its offensive systems, defensive systems, and engines. This information is used to help guide the TDG's reasoning process. The original FORTRAN AML throttle controller and the maneuver scoring modules have been redesigned using a rule-based programming approach and ported to the AI workstation. Examples of rules for each of the KBS modules are shown in figure 8.((AND (EQUAL (GET-MISSION PALADIN) *AGGRESSIVE*)(EQUAL (GET-POSITION PALADIN) *NEUTRAL*))((SETF (GET-MODE PALADIN) *AGGRESSIVE*)(AGGRESSIVE-WEIGHT)))EXAMPLE MODE SELECTION RULE.((AND I-SEE-HIM I-CAN-FIRE HE-CANT-FIRE (<= RANGE 12500))((SETF THROTTLE 0.94)) )EXAMPLE THROTTLE CONTROL RULE.((OR (≤ (ABS HIM-UNABLE-TO-FIRE) (I-CAN-FIRE))(AND (> HIM-UNABLE-TO-FIRE O.O) (≥ I-CAN-FIRE 0.0) )(= GUNA 1.0)(= ALLA 1.0)(= HEATA 1.0))(SETF (GET-POSITION PALADIN) *AGGRESSIVE*))EXAMPLE SITUATION ASSESSMENT RULE.Figure 8. Example Rules.KBS MODULES OF THE TDGThe TDG system has a knowledge-based Situation Assessment (SA) module that is executed at each decision interval before the trial maneuvers are evaluated. The SA module is used to determine the TDG's current MO. 
The SA is executed at each interval, before the maneuver scoring module, and determines the TDG's MO. This determination is based on the TDG's current mission, the current state of the aircraft's systems, the relative geometry between the aircraft and its opponent, and the opponent's instantaneous-intent (*in-int*). Each of the modes shown in figure 6 has a unique set of scoring weights and a decision interval associated with it. The weights for each mode have been adjusted during the design and testing process to maximize the TDG's performance in that mode. Test results have shown that a short decision interval, (0.5 sec.), improves the TDG's fine-tracking performance. The same short decision interval results in a "thrashing" motion in neutral situations resulting in degraded system performance. A longer decision interval, (1.0 sec.), is used in neutral situations. The opponent's *in-int* is defined to be an estimation of the opponent's intent at the current point in time based on available sensor, positional, and geometric data. Currently, there is no attempt to use a history of *in-int* to derive a long-term opponent intent. The flexibility provided by the use of MO's allows the system to more closely model the pilots changing strategies during the engagement. The COSMIC version of AML, and most AML variations before AML86, do not have the ability to change their decision generation strategy based on the changing environment. The TDG Scoring Module (SM) is a KBS that uses a set of 17 fuzzy logicquestions with responses ranging from [0 = NO, ..., 1.0 = YES], (fig. 8), and the set of mode-specific scoring weights selected by the SA module to score each of the trial maneuvers.A rule-based active Throttle Controller (TC) has been developed to replace the existing throttle control subroutine. The TC is called at the start of each decision interval and can set the throttle at any position from idle to full afterburner [0, .., 2]. The logic for the existing AML throttle control subroutine had only three positions (0 = idle, 1 = military, 2 = full afterburner) and had been turned off in the COSMIC version--all engagements were being flown with the throttle set at full afterburner.T D GFigure 10. SET OF 64 INITIAL CONDITIONS.A statistics module is used to calculate the amount of time that each aircraft has its weapons locked on its opponent and the deviation angle and angle-off. The Line-Of-Sight (LOS) vector is defined as the vector between ownship c.g. and opponent's c.g. The Line-Of-Sight (LOS) angle is defined as the angle between the LOS vector and ownship body x-axis; the deviation angle is defined as the angle between the LOS vector and ownship velocity vector; and the angle-off is defined as the angle between the LOS vector and opponent's velocity vector (fig.10).AMLAIRPLANETDGAIRPLANEOWNSHIP VELOCITYVECTORFigure 11. ANGLE DEFINITIONS.The weapons cones used represent a generic all-aspect missile, a generic tail-aspect missile,and a 20 mm cannon (fig. 11). Four metrics are currently used to evaluate each engagement.The first metric is calculated every second and computes the total time that each airplane has its weapons locked on the opponent, the probability that the shot will hit, the distance between the opponents, the angle-off, and the deviation angle. The results are printed in a table format at the completion of each run. The second metric computes a Probability of Survival (PS) using the data computed by the first metric. 
The missiles are treated as a limited resource and aprobability to hit of 0.65 is required to launch the first missile. The firing threshold increases by 0.05 for each missile launched, and all missiles are required to complete their flight to the target before the next missile is fired. The third scoring metric attempts to determine a Lethal Time (LT) value for each engagement. The LT value for a run is equal to ((TDG gun time -AML gun time) / 2) + 2 * (TDG tail-aspect time - AML tail-aspect time) + (TDG all-aspect time - AML all-aspect time). A positive LT value shows TDG with an advantage, a negative LTshows AML with an advantage. The fourth metric is Time on Offense (TOF). TOF is the sum of all weapons lock time for each each airplane. ∆TOF is computed as TDG TOF minus AML TOF.5° GUN CONE. RANGE 0 TO 5000 FT.-1-051122TDG- AML (time on offense)∆ TOF(seconds)Run Number.Figure 13. ∆TOF FOR SET OF 64 ENGAGEMENTS.TEST RESULTSA set of nine engagements presented in [Eidetics 89] were used to compare theperformance of the TDG system with the performance of the AML´ in the lab, and againstpilots in the DMS. AML´ was used as the A airplane in both sets of lab test engagements, and the human pilot flew the A airplane during the DMS runs. Airplanes with identicalperformance characteristics were used in both the DMS and the lab. The set of nine initial initial conditions, fig.13, favor the Aairplane.2 NM SEPARATION.540 KTS AIRSPEED.15,000 ALTITUDEFigure 14. EIDETICS INITIAL CONDITIONS.The B airplane has five neutral starting positions, runs 3, 5, 6, 7, and 9; one offensive starting position, run 8; and 3 defensive starting positions, runs 1, 2, and 4. There is a 2-nautical mile separation between the opponents and each airplane is at an initial altitude of 15,000 feet and an initial airspeed of 540 knots. All of the engagements were run for 60 seconds. The scoring metric used was an Overall Exchange Ratio (OER), defined as the # of A killed / the # of B killed. The Eidetics study was conducted using a modified version of the AASPEM program and produced an OER of ≈ 0.72. The OER was less than 1.0 due to the use of a non-symmetric set of initial conditions. In the first set of engagements the AML´ program wasflown against itself and the produced an OER of 0.75.2468101214Run Number.Time inSeconds.Figure 15. AML´ vs AML´ TOF.In the second set of engagements the TDG was used to control the B airplane and achieved an OER of 1.50, a 100 percent improvement. The test results, (figs. 14, 15), clearly show the superior performance of the TDG system. It is also interesting to note that the maximum OER Eidetics achieved by modifying aircraft performance characteristics was ≈ 0.85 [Eidetics1989].246810121416123456789Run Number.Time inSeconds.Figure 16. TDG vs AML´ TOF.The DMS runs were conducted using the research pilot with the most DMS flight time against the TDG-DMS as the opponent. The pilot flew against the set of initial conditions three times, providing a total of 27 runs. TOF data for the DMS runs is not available at this time. The OER for the set of 27 runs was 0.83. As stated earlier, studies done in the lab have shown that the reduced set of trial maneuvers used by DMS-TDG cannot fine track an opponent as effectively as the expanded set used by the TDG. The reduced set of trial maneuvers used by DMS-TDG may account for most of the performance difference between the TDG and DMS-TDG.FUTURE WORKSeveral enhancements to the existing TDG system are planned. 
The maneuver selection logic will be expanded to replace the use of the trial maneuvers for modes of operation where conventional guidance algorithms provide better performance. This change to the logic and selection module will improve the TDG's ability to track its opponent. Initial lab results have shown that the development of mode-specific maneuver sets will increase system efficiency by reducing the number of maneuvers evaluated for some MO's. The development of logic for two-vs-one engagements is underway. The third aircraft will be dynamically allocated to either the TDG or the opponent at the start of each run. This feature will allow researchers to evaluate the TDG in both two-vs-one and one-vs-two engagements. A system for connecting the Symbolics workstation directly to the DMS real-time computing facilities is also being investigated. The development of such a link would allow the full TDG system to be tested in the DMS against human pilots.The TGRES system presents an excellent opportunity to evaluate the use of AI programming techniques and knowledge-based systems in a real-time environment. It also clearly shows that the maneuver selection and scoring techniques developed in the late 1960's and early 1970's cannot perform well in the modern tactical environment and are not well suited for evaluating agile aircraft. Figure 16 shows many of the changes in the tactical and simulation environments since the original AML tactical decision generation logic was developed. The use of KBS and AI programming techniques in developing the TDG has allowed a complex tactical decision generation system to be developed that addresses the modern combat environment and agile aircraft in a clear and concise manner.1968HEAT SEEKING WEAPONS DOMINATE TACTICAL SITUATIONLIMITED COMPUTING AND MODELING RESOURCES. SHORT-RANGE RADAR. SHORT-RANGE WEAPONS.1 vs 11989ALL-ASPECT WEAPONS DOMINATE TACTICAL SITUATION. (LONGER RANGE, FIRE AND FORGET,....)BETTER COMPUTING AND MODELING RESOURCES.LONG-RANGE RADARLONG-RANGE WEAPONS.2 vs 1, M vs N SUPERMANEUVERABLE AIRCRAFT, POINT AND SHOOT CAPABILITYFigure 17. 1968 AML vs 1989 TDG.CONCLUDING REMARKSA KBS TDG is being developed to study WVR air combat engagements. The system incorporates modern airplane simulation techniques, sensors, and weapons systems. The system was developed using several concepts first outlined in the AML program originally developed for use in the LaRC DMS. An updated AML system is being used as a baseline to assess the functional and performance tradeoffs between a conventionally coded system and theAI-based system. Test results have shown that the AI-based TDG system has performed better than AML´ in both the TMS and the DMS. The use of a KBS SA module and MO's allows the TDG to more accurately represent the complex decision making process carried out by a pilot. The use of a more extensive set of trial maneuvers and a KBS TC module allows the TDG tofine track the opponent more effectively than AML´. The KBS decision generation logic has proved to be much easier to modify than the AML´ FORTRAN source code. The ability to integrate the TDG into the DMS offers a unique opportunity to evaluate the performance of theAI-based TDG software in a real-time tactical environment against human pilots.REFERENCES1. Burgin, G. H. ; et al. : An Adaptive Maneuvering Logic Computer Program for theSimulation of One-on-One Air-to-Air Combat. Vol I and II. NASA CR-2582, CR-2583, 1975.2. Burgin, G. H. : Improvements to the Adaptive Maneuvering Logic Program. 
NASA CR-3985, 1986.
3. Burgin, G. H.; and Sidor, L. B.: Rule-Based Air Combat Simulation. NASA CR-4160, 1988.
4. Hankins III, W. W.: Computer-Automated Opponent for Manned Air-to-Air Combat Simulations. NASA TP-1518, 1979.
5. Kerchner, R. M.; et al.: The TAC Brawler Air Combat Simulation Analyst Manual (Revision 3.0). DSA Report #668.
6. Buttrill, C. S.; et al.: Draft NASA TM, 1989.
7. Taylor, Robert T.; et al.: Simulated Combat for Advanced Technology Assessments Utilizing the Adaptive Maneuvering Logic Concepts. NASA Order No. L-24468C, Coastal Dynamics Technical Report No. 87-001.
8. McManus, John W.; Goodrich, Kenneth H.: Draft NASA TM, 1989.
9. Goodrich, Kenneth H.; McManus, John W.: AIAA Paper #...
电脑专业英语1500词Computer ScienceIntroduction:Computer Science is a rapidly growing field that encompasses the study of computer systems, software, and algorithms. It involves the theory, design, development, and application of computers and computer systems. In this article, we will explore various aspectsof computer science, including its history, key concepts, and career opportunities.History:Computer science has its roots in mathematics and physics. In the 1800s, pioneers such as Charles Babbage and Ada Lovelace laid the foundation for modern computing by developing early mechanical computers. However, it was not until the mid-20th century that computer science emerged as a distinct discipline.Key Concepts:1. Programming: Programming is a fundamental component of computer science. It involves writing instructions in a programming language to solve problems and perform tasks. Popular programming languages include Java, Python, and C++.2. Data Structures: Data structures refer to the organization and management of data in computer systems. Examples include arrays, linked lists, stacks, and queues.3. Algorithms: Algorithms are step-by-step procedures used to solve problems. They are an essential part of computer science andare used in various applications, such as sorting data, searching for information, and optimizing processes.4. Artificial Intelligence: Artificial Intelligence (AI) is a branch of computer science that focuses on creating intelligent machines capable of performing complex tasks, such as speech recognition, image processing, and decision making.5. Networking: Networking is the practice of connecting computers and other devices to enable communication and data exchange. It involves the design, implementation, and management of computer networks, including local area networks (LANs) and wide area networks (WANs).Career Opportunities:1. Software Engineer: Software engineers design, develop, and test computer software. They work on various projects, such as developing mobile apps, creating web applications, and designing computer games.2. Data Scientist: Data scientists analyze and interpret complex data using statistical techniques and machine learning algorithms. They work with large datasets to derive valuable insights and make informed business decisions.3. Cybersecurity Analyst: Cybersecurity analysts protect computer systems and networks from cyber threats and attacks. They implement security measures, monitor network activity, and investigate any potential breaches.4. Network Administrator: Network administrators are responsible for the maintenance and troubleshooting of computer networks. They ensure that networks are secure, efficient, and running smoothly.Conclusion:Computer science is an exciting and dynamic field with numerous opportunities for those interested in technology and innovation. From programming and data structures to artificial intelligence and networking, computer science offers a wide range of career paths. As technology continues to advance, the need for computer science professionals will only increase. Whether you're interested in software development, data analysis, or network administration, computer science will provide you with the knowledge and skillsto succeed in the digital age.Emerging Trends in Computer Science:Computer science is a field that is constantly evolving and adapting to new technologies and trends. In this section, we will explore some of the emerging trends in computer science that are shaping the future of the industry.1. 
Artificial Intelligence and Machine Learning:Artificial intelligence (AI) and machine learning (ML) have gained tremendous popularity in recent years. AI refers to the development of intelligent machines that can perform tasks that typically require human intelligence. Machine learning, on the other hand, is a subset of AI that focuses on enabling machines to learn and improve from experience without explicit programming. These technologies have applications in various fields such as healthcare, finance, and transportation. AI and ML are drivinginnovations such as autonomous vehicles, speech recognition systems, and personalized recommendations.2. Internet of Things (IoT):The Internet of Things (IoT) is a network of interconnected devices that communicate and share data with each other. IoT has the potential to revolutionize various industries by connecting everyday objects such as cars, appliances, and wearable devices to the internet. This enables these devices to collect and exchange data, leading to increased efficiency and convenience. The applications of IoT are far-reaching, from smart homes and cities to industrial automation and healthcare monitoring.3. Big Data Analytics:With the exponential growth of data in today's digital age, the need for efficient storage and analysis of vast amounts of information has become increasingly important. Big data analytics refers to the process of examining large datasets to uncover patterns, correlations, and insights that can inform decision-making. This field combines techniques from computer science, statistics, and mathematics to process and analyze structured and unstructured data. Big data analytics has applications in various industries, such as marketing, finance, and healthcare, enabling organizations to make data-driven decisions.4. Cloud Computing:Cloud computing has become an integral part of modern computing infrastructure. It involves the delivery of computing services over the internet, including storage, processing power, and software applications. Cloud computing offers numerousadvantages, such as scalability, flexibility, and cost-effectiveness. It allows businesses and individuals to access and store their data and applications remotely, eliminating the need for on-site infrastructure. With the increasing demand for data storage and processing power, cloud computing is expected to continue to grow in popularity.5. Blockchain Technology:Blockchain technology is a decentralized and distributed ledger that records transactions across multiple computers. It provides transparency, security, and immutability by utilizing cryptographic techniques. Originally developed for cryptocurrencies such as Bitcoin, blockchain technology has now found applications beyond digital currencies. It has the potential to revolutionize various sectors such as finance, supply chain management, and healthcare. Blockchain technology offers secure and transparent record-keeping, reducing the risk of fraud and improving operational efficiency.Conclusion:Computer science is a dynamic and evolving field that continues to shape the future of technology and innovation. Artificial intelligence, machine learning, IoT, big data analytics, cloud computing, and blockchain technology are just a few of the emerging trends that are driving advancements in the field. As these technologies continue to develop, they will create new opportunities and challenges for computer science professionals. 
Staying up-to-date with the latest trends and continuously learning and adapting to new technologies is essential for anyone pursuing acareer in computer science. By embracing these emerging trends, computer scientists can make significant contributions to various industries and drive innovation in the digital age.。
Artificial Intelligence Technology (人工智能技术)

Artificial Intelligence (AI) technology refers to the development of computer systems that are capable of performing tasks that would typically require human intelligence. AI technology aims to mimic human cognitive functions such as learning, problem-solving, and decision making.

There are various subfields within AI technology, including machine learning, natural language processing, computer vision, and robotics. Machine learning algorithms enable computers to learn from data and make predictions or decisions without being explicitly programmed. Natural language processing focuses on enabling computers to understand and process human language, enabling capabilities such as speech recognition and language translation. Computer vision technology aims to teach computers to interpret and understand visual information, empowering them to recognize objects or scenes. Robotics integrates AI technology into physical systems, enabling machines to interact with and manipulate their surroundings.

AI technology is being applied in various industries and sectors, including healthcare, finance, transportation, and entertainment. It is used for tasks such as diagnosing diseases, predicting stock prices, autonomous driving, and creating personalized recommendations on streaming platforms.

Overall, AI technology is revolutionizing numerous aspects of our daily lives and has the potential to significantly impact how we work, communicate, and make decisions in the future.
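To make "learning from data without being explicitly programmed" concrete, a minimal nearest-neighbour classifier in plain Python follows; the data points and labels are invented for the example, and no particular library or production technique is implied.

import math

# Invented training data: [height_cm, weight_kg] paired with a label.
training_data = [
    ([150, 45], "small"),
    ([160, 55], "small"),
    ([180, 80], "large"),
    ([190, 95], "large"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict(sample):
    """1-nearest-neighbour: reuse the label of the closest training example."""
    _, label = min(training_data, key=lambda item: distance(item[0], sample))
    return label

print(predict([175, 78]))   # -> "large"; the answer comes from the data, not from hand-written rules

Swapping in different training data changes the predictions without changing a single line of the program, which is the sense in which the behaviour is learned rather than explicitly programmed.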
附件四英文文献原文Artificial Intelligence"Artificial intelligence" is a word was originally Dartmouth in 1956 to put forward. From then on, researchers have developed many theories and principles, the concept of artificial intelligence is also expands. Artificial intelligence is a challenging job of science, the person must know computer knowledge, psychology and philosophy. Artificial intelligence is included a wide range of science, it is composed of different fields, such as machine learning, computer vision, etc, on the whole, the research on artificial intelligence is one of the main goals of the machine can do some usually need to perform complex human intelligence. But in different times and different people in the "complex" understanding is different. Such as heavy science and engineering calculation was supposed to be the brain to undertake, now computer can not only complete this calculation, and faster than the human brain can more accurately, and thus the people no longer put this calculation is regarded as "the need to perform complex human intelligence, complex tasks" work is defined as the development of The Times and the progress of technology, artificial intelligence is the science of specific target and nature as The Times change and development. On the one hand it continues to gain new progress on the one hand, and turning to more meaningful, the more difficult the target. Current can be used to study the main material of artificial intelligence and artificial intelligence technology to realize the machine is a computer, the development history of artificial intelligence is computer science and technology and the development together. Besides the computer science and artificial intelligence also involves information, cybernetics, automation, bionics, biology, psychology, logic, linguistics, medicine and philosophy and multi-discipline. Artificial intelligence research include: knowledge representation, automatic reasoning and search method, machine learning and knowledge acquisition and processing of knowledge system, natural language processing, computer vision, intelligent robot, automatic program design, etc.Practical application of machine vision: fingerprint identification, face recognition, retina identification, iris identification, palm, expert system, intelligent identification, search, theorem proving game, automatic programming, and aerospace applications.Artificial intelligence is a subject categories, belong to the door edge discipline of natural science and social science.Involving scientific philosophy and cognitive science, mathematics, neurophysiological, psychology, computer science, information theory, cybernetics, not qualitative theory, bionics.The research category of natural language processing, knowledge representation, intelligent search, reasoning, planning, machine learning, knowledge acquisition, combined scheduling problem, perception, pattern recognition, logic design program, soft calculation, inaccurate and uncertainty, the management of artificial life, neural network, and complex system, human thinking mode of genetic algorithm.Applications of intelligent control, robotics, language and image understanding, genetic programming robot factory.Safety problemsArtificial intelligence is currently in the study, but some scholars think that letting computers have IQ is very dangerous, it may be against humanity. 
The hidden danger in many movie happened.The definition of artificial intelligenceDefinition of artificial intelligence can be divided into two parts, namely "artificial" or "intelligent". "Artificial" better understanding, also is controversial. Sometimes we will consider what people can make, or people have high degree of intelligence to create artificial intelligence, etc. But generally speaking, "artificial system" is usually significance of artificial system.What is the "smart", with many problems. This involves other such as consciousness, ego, thinking (including the unconscious thoughts etc. People only know of intelligence is one intelligent, this is the universal view of our own. But we are very limited understanding of the intelligence of the intelligent people constitute elements are necessary to find, so it is difficult to define what is "artificial" manufacturing "intelligent". So the artificial intelligence research often involved in the study of intelligent itself. Other about animal or other artificial intelligence system is widely considered to be related to the study of artificial intelligence.Artificial intelligence is currently in the computer field, the more extensive attention. And in the robot, economic and political decisions, control system, simulation system application. In other areas, it also played an indispensable role.The famous American Stanford university professor nelson artificial intelligence research center of artificial intelligence under such a definition: "artificial intelligence about the knowledge of the subject is and how to represent knowledge -- how to gain knowledge and use of scientific knowledge. But another American MIT professor Winston thought: "artificial intelligence is how to make the computer to do what only can do intelligent work." These comments reflect the artificial intelligence discipline basic ideas and basic content. Namely artificial intelligence is the study of human intelligence activities, has certain law, research of artificial intelligence system, how to make the computer to complete before the intelligence needs to do work, also is to study how the application of computer hardware and software to simulate human some intelligent behavior of the basic theory, methods and techniques.Artificial intelligence is a branch of computer science, since the 1970s, known as one of the three technologies (space technology, energy technology, artificial intelligence). Also considered the 21st century (genetic engineering, nano science, artificial intelligence) is one of the three technologies. It is nearly three years it has been developed rapidly, and in many fields are widely applied, and have made great achievements, artificial intelligence has gradually become an independent branch, both in theory and practice are already becomes a system. Its research results are gradually integrated into people's lives, and create more happiness for mankind.Artificial intelligence is that the computer simulation research of some thinking process and intelligent behavior (such as study, reasoning, thinking, planning, etc.), including computer to realize intelligent principle, make similar to that of human intelligence, computer can achieve higher level of computer application. Artificial intelligence will involve the computer science, philosophy and linguistics, psychology, etc. 
That was almost natural science and social science disciplines, the scope of all already far beyond the scope of computer science and artificial intelligence and thinking science is the relationship between theory and practice, artificial intelligence is in the mode of thinking science technology application level, is one of its application. From theview of thinking, artificial intelligence is not limited to logical thinking, want to consider the thinking in image, the inspiration of thought of artificial intelligence can promote the development of the breakthrough, mathematics are often thought of as a variety of basic science, mathematics and language, thought into fields, artificial intelligence subject also must not use mathematical tool, mathematical logic, the fuzzy mathematics in standard etc, mathematics into the scope of artificial intelligence discipline, they will promote each other and develop faster.A brief history of artificial intelligenceArtificial intelligence can be traced back to ancient Egypt's legend, but with 1941, since the development of computer technology has finally can create machine intelligence, "artificial intelligence" is a word in 1956 was first proposed, Dartmouth learned since then, researchers have developed many theories and principles, the concept of artificial intelligence, it expands and not in the long history of the development of artificial intelligence, the slower than expected, but has been in advance, from 40 years ago, now appears to have many AI programs, and they also affected the development of other technologies. The emergence of AI programs, creating immeasurable wealth for the community, promoting the development of human civilization.The computer era1941 an invention that information storage and handling all aspects of the revolution happened. This also appeared in the U.S. and Germany's invention is the first electronic computer. Take a few big pack of air conditioning room, the programmer's nightmare: just run a program for thousands of lines to set the 1949. After improvement can be stored procedure computer programs that make it easier to input, and the development of the theory of computer science, and ultimately computer ai. This in electronic computer processing methods of data, for the invention of artificial intelligence could provide a kind of media.The beginning of AIAlthough the computer AI provides necessary for technical basis, but until the early 1950s, people noticed between machine and human intelligence. Norbert Wiener is the study of the theory of American feedback. Most familiar feedback control example is the thermostat. It will be collected room temperature and hope, and reaction temperature compared to open or close small heater, thus controlling environmentaltemperature. The importance of the study lies in the feedback loop Wiener: all theoretically the intelligence activities are a result of feedback mechanism and feedback mechanism is. Can use machine. The findings of the simulation of early development of AI.1955, Simon and end Newell called "a logical experts" program. This program is considered by many to be the first AI programs. It will each problem is expressed as a tree, then choose the model may be correct conclusion that a problem to solve. 
"logic" to the public and the AI expert research field effect makes it AI developing an important milestone in 1956, is considered to be the father of artificial intelligence of John McCarthy organized a society, will be a lot of interest machine intelligence experts and scholars together for a month. He asked them to Vermont Dartmouth in "artificial intelligence research in summer." since then, this area was named "artificial intelligence" although Dartmouth learn not very successful, but it was the founder of the centralized and AI AI research for later laid a foundation.After the meeting of Dartmouth, AI research started seven years. Although the rapid development of field haven't define some of the ideas, meeting has been reconsidered and Carnegie Mellon university. And MIT began to build AI research center is confronted with new challenges. Research needs to establish the: more effective to solve the problem of the system, such as "logic" in reducing search; expert There is the establishment of the system can be self learning.In 1957, "a new program general problem-solving machine" first version was tested. This program is by the same logic "experts" group development. The GPS expanded Wiener feedback principle, can solve many common problem. Two years later, IBM has established a grind investigate group Herbert AI. Gelerneter spent three years to make a geometric theorem of solutions of the program. This achievement was a sensation.When more and more programs, McCarthy busy emerge in the history of an AI. 1958 McCarthy announced his new fruit: LISP until today still LISP language. In. "" mean" LISP list processing ", it quickly adopted for most AI developers.In 1963 MIT from the United States government got a pen is 22millions dollars funding for research funding. The machine auxiliary recognition from the defense advanced research program, have guaranteed in the technological progress on this plan ahead of the Soviet union. Attracted worldwide computer scientists, accelerate the pace of development of AIresearch.Large programAfter years of program. It appeared a famous called "SHRDLU." SHRDLU "is" the tiny part of the world "project, including the world (for example, only limited quantity of geometrical form of research and programming). In the MIT leadership of Minsky Marvin by researchers found, facing the object, the small computer programs can solve the problem space and logic. Other as in the late 1960's STUDENT", "can solve algebraic problems," SIR "can understand the simple English sentence. These procedures for handling the language understanding and logic.In the 1970s another expert system. An expert system is a intelligent computer program system, and its internal contains a lot of certain areas of experience and knowledge with expert level, can use the human experts' knowledge and methods to solve the problems to deal with this problem domain. That is, the expert system is a specialized knowledge and experience of the program system. Progress is the expert system could predict under certain conditions, the probability of a solution for the computer already has. Great capacity, expert systems possible from the data of expert system. It is widely used in the market. Ten years, expert system used in stock, advance help doctors diagnose diseases, and determine the position of mineral instructions miners. 
All of this became possible because of the rules of the expert system and the growing capacity for information storage.

During the 1970s new methods were also brought into AI development, notably Minsky's frame theory and the new theory of machine vision put forward by David Marr, which considered, for example, how an image can be understood from basic information about shading, shape, colour, texture and edges, and how analysing this information allows inferences about what the image might be. In the same period another language, PROLOG, appeared in 1972.

In the 1980s AI progressed more rapidly and moved further into business. In 1986 sales of AI-related software and hardware reached 4.25 billion dollars. Expert systems were in particular demand because of their usefulness. Companies such as Digital Equipment Corporation used the XCON expert system to configure their VAX computers. DuPont, General Motors and Boeing relied heavily on expert systems. Companies producing software to assist the building of expert systems, such as Teknowledge and Intellicorp, were established, and to find and correct mistakes in existing expert systems still other expert systems were designed, such as TVC, an expert system that teaches users how to use an operating system.

From the lab to daily life

People began to feel the influence of computer technology and artificial intelligence. Computer technology no longer belonged only to a group of researchers in the laboratory; personal computers and numerous technical magazines brought it before ordinary people, and bodies such as the American Association for Artificial Intelligence were founded. Driven by the demand for development, AI research also boomed in private companies: more than 150 companies, among them DEC (which employed more than 700 people on AI research), spent 10 billion dollars on internal AI teams.

Other AI areas also entered the market during the 1980s. One was machine vision, building on the achievements of Marr and Minsky. Cameras and computers were now used on production lines for quality control. Although still very modest, these systems could distinguish objects by differences in shape. By 1985 more than one hundred companies in America were producing machine vision systems, with sales of 8 million US dollars.

But not every year of the 1980s was good for the AI industry. In 1986-87 demand for AI systems fell, and the industry lost nearly five hundred million dollars. Teknowledge and Intellicorp together lost more than 6 million dollars, about one third of their profits; the huge losses forced many research leaders to cut funding. Another disappointment was the so-called "smart truck" project supported by the Defense Advanced Research Projects Agency, whose purpose was to develop a robot that could carry out tasks on many battlefields. Because of defects in the project and no hope of success, the Pentagon stopped funding it.

Despite these setbacks, AI slowly continued to develop new technologies. Fuzzy logic, developed in Japan and the United States, can make decisions under uncertain conditions, and neural networks are regarded as a possible approach to realising artificial intelligence. In any case, the 1980s brought AI into the market and demonstrated its practical value, and it is sure to be a key technology of the 21st century. AI technology was put to the test of war in the Desert Storm operation, where it was used in missile systems, warning displays and other advanced weapons. AI technology has also entered the home, and intelligent computers attract increasing public interest.
The emergence of networked games has enriched people's lives. Application software for the main Macintosh and IBM platforms, such as voice and character recognition, can now be bought off the shelf, and AI techniques using fuzzy logic are used to simplify camera equipment. The demand for AI-related technology keeps prompting new progress. In a word, artificial intelligence has changed, and will inevitably continue to change, our lives.
Can artificial intelligence replace artists? (An English essay in three sample pieces, for readers' reference.)

Sample essay 1

Can Artificial Intelligence Replace Artists?

Artificial intelligence (AI) has made significant advancements in recent years, with the ability to perform tasks that were once thought to be exclusive to human beings. From driving cars to diagnosing diseases, AI has proven to be incredibly versatile and capable. However, when it comes to art, the question arises: can AI truly replace artists?

On the one hand, AI has already shown its ability to create art in various forms. For example, there are AI programs that can compose music, write poetry, and even paint images. These creations can be quite impressive and often indistinguishable from those produced by human artists. In some cases, AI has even been praised for its innovative and unique approach to art.

One of the main arguments in favor of AI replacing artists is its efficiency and speed. AI can generate art at a rapid pace, continuously producing new works without the need for breaks or inspiration. This can be advantageous in industries where time is of the essence, such as advertising or graphic design. Additionally, AI can analyze vast amounts of data and extract insights that human artists may overlook, leading to new and exciting artistic possibilities.

Furthermore, AI does not have the same limitations as human artists. It is not bound by emotions, biases, or physical constraints, allowing it to explore artistic avenues that may be beyond the reach of human creativity. This freedom can lead to groundbreaking and unconventional works of art that challenge traditional norms and perceptions.

However, despite these advantages, there are those who argue that AI can never truly replace artists. Art is a deeply personal and emotional expression of the human experience, and many believe that AI lacks the capacity to convey this depth and complexity. While AI may be able to replicate certain aspects of art, such as technique and style, it cannot capture the essence of what it means to be human.

Human artists bring their unique perspectives, experiences, and emotions to their work, infusing it with a sense of authenticity and soul. This intangible quality is what sets human-created art apart from AI-generated art and resonates with audiences on a profound level. In essence, art is not just the end product but also the process of creation, which involves intuition, imagination, and emotion.

Furthermore, the idea of AI replacing artists raises ethical concerns about the role of creativity and expression in society. If AI can produce art as well as or even better than humans, what place is there for human artists in the world? Will creativity become commodified and standardized, leading to a loss of diversity and individuality in art?

In conclusion, while AI has the potential to create remarkable works of art, it is unlikely to fully replace human artists. The essence of art lies in its humanity, and this is something that AI cannot replicate. Instead of viewing AI as a threat to artists, it can be seen as a tool that complements and enhances human creativity. By combining the strengths of both AI and human artists, new and exciting possibilities can emerge in the world of art.

Sample essay 2

Can Artificial Intelligence Replace Artists?

Introduction

The rise of artificial intelligence (AI) has sparked debates about its impact on various industries, including the field of arts. Can AI truly replace artists in creating compelling and innovative works of art?
This question has stirred much discussion among art enthusiasts, creators, and technologists. In this essay, we will explore the capabilities of AI in art creation and analyze whether it can truly replace human artists.

The Role of AI in Art Creation

AI has made significant advancements in various fields, including the arts. With the development of algorithms and machine learning techniques, AI systems can now generate music, paintings, sculptures, and even literature. These AI-generated artworks have gained considerable attention and recognition from the public and the art community.

One of the most notable AI applications in art creation is the use of generative adversarial networks (GANs). GANs are a type of deep learning model that consists of two neural networks: a generator and a discriminator. The generator generates new artworks based on existing data, while the discriminator evaluates the generated artworks for authenticity. Through this process of iteration and feedback, GANs can produce realistic and unique artworks that mimic the style of human artists.

AI-generated artworks have been featured in exhibitions, galleries, and even auctions. For example, the painting "Portrait of Edmond de Belamy" created by a GAN was sold at Christie's auction house for $432,500. This event raised questions about the role of AI in art creation and its potential to replace human artists.

The Limitations of AI in Art Creation

While AI has shown promise in generating artworks, it still has limitations that prevent it from fully replacing human artists. One of the main challenges of AI-generated art is its lack of emotional depth and creativity. AI systems rely on algorithms and data sets to generate artworks, which may result in predictable and repetitive creations. Human artists, on the other hand, draw inspiration from their emotions, experiences, and imagination to create unique and meaningful works of art.

Another limitation of AI-generated art is its inability to capture the complexity and nuances of human expression. Art is a form of communication that conveys emotions, thoughts, and ideas. Human artists infuse their artworks with personal insights and perspectives, making them authentic and relatable to viewers. AI-generated art lacks this human touch, which may limit its ability to connect with audiences on a deeper level.

The Debate on AI vs. Artists

The debate on whether AI can replace artists is ongoing and multifaceted. Some argue that AI has the potential to revolutionize the art world by expanding creative possibilities and pushing the boundaries of traditional art forms. AI-generated artworks have introduced new styles, techniques, and aesthetics that challenge conventional notions of art and creativity.

On the other hand, critics argue that AI-generated art lacks the soul and authenticity that are essential to human creativity. Art is a form of expression that reflects the human experience, and AI may struggle to replicate the depth and complexity of human emotions in its creations. While AI can generate artworks based on existing data, it may struggle to produce original and innovative works that capture the essence of human creativity.

Conclusion

In conclusion, AI has made significant advancements in art creation, but it still has limitations that prevent it from fully replacing human artists. While AI-generated artworks have gained recognition and acceptance in the art world, they lack the emotional depth, creativity, and authenticity that human artists bring to their creations.
Art is a form of expression that reflects the human experience, and AI may struggle to replicate the complexity and nuance of human emotions in its creations. Ultimately, while AI can enhance the creative process and inspire new forms of art, it cannot replace the unique talents and insights of human artists. The future of art will likely involve a collaboration between AI and artists, blending technology with human creativity to create innovative and compelling works of art.

Sample essay 3

Can Artificial Intelligence Replace Artists?

Introduction

Artificial intelligence (AI) has made significant advancements in recent years, with machines now capable of performing a wide range of tasks that were once thought to be exclusive to humans. From autonomous vehicles to advanced medical diagnosis, AI has proven its capabilities in various fields. However, the question remains: can AI replace artists? This essay will explore both sides of the argument and provide insight into the potential role of AI in the field of art.

Arguments for AI replacing artists

Supporters of AI argue that machines can effectively replicate the creative process of artists. AI algorithms can analyze vast amounts of data and generate original artworks based on patterns and trends. For example, the AI-generated painting "Portrait of Edmond de Belamy" sold for $432,500 at auction in 2018, showcasing the potential of machines in producing valuable artistic creations.

Furthermore, AI can also assist artists in their creative process by providing suggestions and feedback. For instance, AI-powered software like Adobe Sensei can analyze images and offer design recommendations to users, helping artists enhance their work. This collaborative approach between humans and machines can lead to the creation of innovative and compelling artworks.

Arguments against AI replacing artists

On the other hand, critics argue that AI lacks the emotional depth and authenticity that human artists bring to their work. Art is often a reflection of human experiences, emotions, and perspectives, elements that machines may struggle to capture. While AI can mimic artistic styles and techniques, it may not possess the same level of creativity and originality that human artists exhibit.

Additionally, the process of creating art is not solely about the end product but also the journey and personal growth that artists experience. The act of painting, sculpting, or composing music is a form of self-expression and self-discovery that may be difficult for AI to replicate. Without the human touch and intuition, artworks produced by machines may lack the soul and essence that make art truly impactful.

The role of AI in the field of art

Despite the debate on whether AI can replace artists, it is evident that machines have the potential to revolutionize the art world. AI can be used as a tool to enhance creativity, streamline the creative process, and push the boundaries of artistic expression. By combining the strengths of AI with the unique abilities of human artists, new forms of art that blend technology and creativity can emerge.

For example, AI can be used to generate initial design concepts or assist artists in experimenting with different styles and techniques. This collaborative approach can open up new possibilities for artistic exploration and innovation.
Additionally, AI-powered artworks can serve as a catalyst for discussions on the intersection of technology and art, sparking new ideas and perspectives in the creative community.

Conclusion

In conclusion, while AI may have the ability to replicate certain aspects of artistic creation, it is unlikely to fully replace human artists. Art is a deeply personal and emotive form of expression that stems from the human experience, something that machines may struggle to emulate. Instead of viewing AI as a threat to artists, we should embrace it as a tool to enhance creativity and push the boundaries of artistic exploration. By leveraging the capabilities of AI alongside human ingenuity, we can create a more vibrant and dynamic art world that celebrates the fusion of technology and creativity.
Artificial intelligence (AI)

Artificial intelligence is a rapidly growing field of computer science that encompasses a wide range of technologies and techniques. AI systems are designed to solve complex problems, and can often outperform humans in terms of speed, accuracy, and efficiency. AI is used in a variety of applications, ranging from natural language processing and computer vision to robotics and medical diagnosis. AI systems have the potential to revolutionize the way people live, work, and interact with each other, and are likely to lead to numerous advances in the coming years.

AI has several potential applications in healthcare. One example is the use of AI for diagnosing and treating diseases. AI can analyze medical images such as X-rays or CT scans and provide more accurate diagnoses than humans. AI can also be used to monitor patients remotely, allowing doctors to keep track of their condition without having to be in the same room. AI has also been used to create personalized treatments for diseases using big data analysis, allowing for more effective treatments for a range of conditions. Additionally, AI can be used to optimize resources in hospitals, helping reduce costs and improve overall efficiency. Finally, AI can help automate mundane tasks such as filling out forms or scheduling appointments, freeing up time for healthcare professionals to focus on more important work.

AI can play a huge role in drug discovery and development. AI systems can be used to analyze large datasets of genetic and molecular data, helping to identify new targets for drug development or to suggest alternative treatments for existing drugs. AI can also be used to quickly test thousands of compounds and combinations, selecting the most promising ones for further study. AI can be used to predict the safety and efficacy of potential drugs, helping to reduce the time and cost associated with drug development and making it easier to get drugs to market. Finally, AI can help to identify adverse reactions to drugs, helping to find better treatments and potentially reducing the risk of serious side effects.

AI can also be used to improve patient care in many ways. AI can provide more accurate diagnoses and recommend personalized treatments for patients based on their individual needs. AI can analyze medical images, such as X-rays or CT scans, to quickly identify issues that may not be visible to the naked eye. AI can also be used to monitor the vital signs of patients in real time, allowing healthcare professionals to react quickly to any changes in their condition. Additionally, AI can assist in the creation of preventive care plans and provide reminders to ensure patients are staying on track with their treatment.
Core technical principles of artificial intelligence

Artificial intelligence (AI) is a branch of computer science that aims to create intelligent machines that can mimic and perform tasks that typically require human intelligence.
The core technologies that underpin artificial intelligence include machine learning, neural networks, natural language processing, and robotics.
Machine learning is a subfield of AI that involves developing algorithms and statistical models that enable computers to improve their performance on a specific task through experience.
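As a minimal, purely illustrative sketch of that idea, the Python fragment below fits a single weight w so that y ≈ w·x by gradient descent on squared error; the data points, learning rate and number of passes are invented for the example.

```python
# Toy "learning from experience": each pass over the data improves the fit.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]   # made-up (x, y) pairs

w, lr = 0.0, 0.01                       # initial weight and learning rate
for epoch in range(200):                # repeated experience with the same task
    for x, y in data:
        error = w * x - y
        w -= lr * error * x             # gradient step on (w*x - y)^2 / 2
print(round(w, 3))                      # approaches the underlying slope (~2)
```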
Neural networks are a set of algorithms that are designed to recognize patterns and interpret data in a way that is similar to the human brain.
Artificial Intelligence techniques: An introduction to their use for modelling environmental systems

Serena H. Chen, Anthony J. Jakeman*, John P. Norton
Integrated Catchment Assessment and Management (iCAM), Fenner School of Environment and Society, Building 48a, The Australian National University, Canberra, ACT 0200, Australia

Available online 20 January 2008

Abstract

Knowledge-based or Artificial Intelligence techniques are used increasingly as alternatives to more classical techniques to model environmental systems. We review some of them and their environmental applicability, with examples and a reference list. The techniques covered are case-based reasoning, rule-based systems, artificial neural networks, fuzzy models, genetic algorithms, cellular automata, multi-agent systems, swarm intelligence, reinforcement learning and hybrid systems.
© 2008 IMACS. Published by Elsevier B.V. All rights reserved.

Keywords: Case-based reasoning; Cellular automata; Multi-agent; Environmental modelling

1. Introduction

Use of Artificial Intelligence (AI) in environmental modelling has increased with recognition of its potential. AI mimics human perception, learning and reasoning to solve complex problems. This paper describes a range of AI techniques: case-based reasoning, rule-based systems, artificial neural networks, genetic algorithms, cellular automata, fuzzy models, multi-agent systems, swarm intelligence, reinforcement learning and hybrid systems. Other arguably AI techniques such as Bayesian networks and data mining [21,148] are not discussed.

2. Case-based reasoning

2.1. Description

Case-based reasoning (CBR) solves a problem by recalling similar past problems [57] assumed to have similar solutions. Numerous past cases are needed to adapt their solutions or methods to the new problem. CBR recognises that problems are easier to solve by repeated attempts, accruing learning. It involves four steps [1] (Fig. 1): (1) retrieve the most relevant past cases from the database; (2) use the retrieved case to produce a solution of the new problem; (3) revise the proposed solution by simulation or test execution; and (4) retain the solution for future use after successful adaptation.

* Corresponding author. Tel.: +61 2 6125 4742; fax: +61 2 6125 8395. E-mail address: tony.jakeman@.au (A.J. Jakeman). doi:10.1016/j.matcom.2008.01.028

Fig. 1. The CBR four-step process (adapted from [1]).

Retrieval recognises either syntactical (grammatical structure) or semantic (meaning) similarity to the new case. Syntactic similarities tend to be superficial but readily applied. Semantic matching according to context is used by advanced CBR [1]. The main retrieval methods are nearest-neighbour, inductive and knowledge-guided [136]. Nearest-neighbour retrieval finds the cases sharing most features with the new one and weights them by importance.
Determining the weight is the biggest difficulty [166]. The method overlooks the fact that any feature's importance, influenced by other features, is case-specific [12]. Retrieval time increases linearly with database size, so the method is time-consuming [166]. The inductive method decides which features discriminate cases best [166]. The cases are organised in a decision tree according to these features, reducing retrieval time. However, the method demands a case database of reasonable size and quality [136]. Knowledge-guided retrieval applies existing knowledge to identify the important features in each case, assessing all cases independently. It is expensive for large databases and thus often used with others [166].

After retrieval, CBR adapts past solutions to the new problem. Adaptation is structural or derivational [166]. Structural adaptation creates a solution to the new problem by modifying the solution of the past case; derivational adaptation applies the algorithms, methods or rules used in the past case to the new case. The proposed solution is then evaluated and revised if necessary. After it is confirmed, the solution is stored in the database. Redundant cases can be removed and existing cases combined [57].

2.2. Discussion

By updating the database, a CBR system continually improves its reasoning capability and accuracy and thus performance. CBR can handle large amounts of data and multiple variables. It organises experience efficiently. However, CBR cannot draw inferences about problems with no similar past cases. Also, CBR is a black-box approach offering little insight into the system and processes involved. This is not a drawback for complex processes where understanding is neither possible nor necessary.

CBR can be employed for diagnosis, prediction, control and planning [57]. Environmental applications include decision support [90,121], modelling estuarine behaviour [126], planning fire-fighting [9], managing wastewater treatment plants [138,137,132,164], monitoring air quality [86], minimising environmental impact through chemical process design [96] and weather prediction [139].

Box 1. An application of case-based reasoning: "Constructed wetlands: performance prediction by case-based reasoning." [101]

Lee et al. [100] tested the efficiency of constructed wetland filters by analysing the quality of water fed through them. Half the filters received inflow contaminated with heavy metals. CBR was applied [101] to improve water-quality monitoring and interpretation. Biochemical oxygen demand (BOD) and suspended solids (SS) concentrations are water-quality indicators commonly used for constructed wetlands. Measuring them is expensive, time-consuming and labour-intensive. To reduce costs, Lee et al. [101] applied CBR to predict BOD and SS concentrations of treated samples. It stored past cases with up to six variables (turbidity, conductivity, redox potential, outflow water temperature, dissolved oxygen and pH), selected for their predictive potential for BOD and SS, cost-effectiveness and ease of measurement. Using statistical equations, local similarities between each past case and the new problem were calculated with respect to one variable and global similarity with respect to all variables. The three to five past cases ranked highest by global similarity were selected. The target variables were predicted by combining their values for those cases.

The local similarity of variable i for past case c and problem case p is

l_i = f( |(V_{ip} - V_{ic}) / MV_i| )

where V_{ip} and V_{ic} are the values of variable i for the problem case and past case, respectively, MV_i is the mean of variable i in the case base, |(V_{ip} - V_{ic}) / MV_i| is the local difference, and f maps local difference to local similarity. Global similarity is

g = \sum_i (l_i w_i) / \sum_i w_i ,  i = 1, 2, ..., n

where n variables represent a case and w_i is the weight of variable i. The proportion P_j of the prediction from past case j is

P_j = g_j / g_T ,  j = 1, 2, 3, 4, 5

where g_T is the sum of the global similarities of the 3-5 selected cases. The prediction P is

P = \sum_j (P_j × TV_j)

where TV_j is the target variable of past case j.

Outflow BOD and SS were successfully predicted by CBR, with an 85% success rate in predicting whether water samples complied with regulatory requirements. Constructed wetlands are considered hard to model because of their highly complex biochemical processes. However, this study showed that CBR can be applied to such systems.
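As a concrete illustration of the retrieval-and-prediction scheme in Box 1, the Python sketch below implements weighted nearest-neighbour retrieval and similarity-weighted prediction. The case layout, the linear form assumed for the mapping f and the feature weights are illustrative assumptions, not the settings used by Lee et al. [101].

```python
# Illustrative CBR retrieval and prediction in the style of Box 1.
# Each case is a dict of variable values plus a target value (e.g. "BOD").

def local_similarity(v_new, v_old, mean_v):
    """Assumed linear mapping f from the local difference |(v_new - v_old)/mean_v| to [0, 1]."""
    return max(0.0, 1.0 - abs((v_new - v_old) / mean_v))

def global_similarity(new_case, old_case, means, weights):
    """Weighted average of local similarities over the case variables."""
    num = sum(w * local_similarity(new_case[k], old_case[k], means[k])
              for k, w in weights.items())
    return num / sum(weights.values())

def predict(new_case, case_base, means, weights, target="BOD", k=5):
    """Retrieve the k most similar past cases and combine their target values."""
    ranked = sorted(case_base,
                    key=lambda c: global_similarity(new_case, c, means, weights),
                    reverse=True)[:k]
    sims = [global_similarity(new_case, c, means, weights) for c in ranked]
    g_total = sum(sims) or 1.0          # avoid division by zero if nothing matches
    return sum((g / g_total) * c[target] for g, c in zip(sims, ranked))
```

Retaining each solved problem as a new case in case_base would complete the retrieve-reuse-revise-retain cycle of Fig. 1.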
3. Rule-based systems

3.1. Description

Rule-based systems (RBS) solve problems by rules derived from expert knowledge [74]. The rules have condition and action parts, if and then, and are fed to an inference engine, which has a working memory of information about the problem, a pattern matcher and a rule applier. The pattern matcher refers to the working memory to decide which rules are relevant, then the rule applier chooses what rule to apply. New information created by the action (then-) part of the rule applied is added to the working memory and the match-select-act cycle between working memory and knowledge base repeated until no more relevant rules are found [118].

The facts and rules are imprecise. Uncertainty can be incorporated in RBS by approaches such as subjective probability theory, Dempster-Shafer theory, possibility theory, certainty factors and Prospector's subjective Bayesian method, or qualitative approaches such as Cohen's theory of endorsements [118]. They assign to the facts and rules uncertainty values (probabilities, belief functions, membership values) given by human experts. There are two rule systems, forward and backward chaining [167]. Forward chaining is data-driven: from the initial facts it draws conclusions using the rules. Backward chaining is goal-driven: beginning with a hypothesis, it looks for rules allowing it to be sustained. That is, forward chaining discovers what can be derived from the data, backward chaining seeks justification for decisions [56].

3.2. Discussion

RBS are easy to understand, implement and maintain, as knowledge is prescribed in a uniform way, as conditional rules. However, the solutions are generated from established rules and RBS involve no learning. They cannot automatically add or modify rules. So a rule-based system can only be implemented if comprehensive knowledge is available [41], and its application is quite limited. RBS are well suited to problems where experts can articulate decisions confidently and where variables interact little. They may be difficult to scale up, as interactions then emerge. Ecological systems, with complex interactions and processes often not well understood, do not favour RBS. RBS are often used in narrower areas such as plant and animal identification or disease and pest diagnosis [107,177,52]. RBS can also be employed as an assessment tool, e.g. in evaluating regional environments [91] or assessing the impact of water-regime changes on wetland functions [84]. Other studies have applied elements of RBS with other modelling techniques (Section 11) in landscape-change modelling based on sea-level-rise scenarios [23] and assessing the environmental life cycle of alternative energy sources [158].

Box 2. An application of rule-based systems: "A diagnostic expert system for honeybee pests." [106]

Mahaman et al. developed a rule-based system to diagnose honeybee pests and recommend treatments. Pests (diseases, insects, mites, birds and mammals) may reduce the quantity and quality of honey production. Threats to bee health must be identified and managed. The first step was to learn from the literature and experts about pest symptoms. The results were converted to conditional rules by backward chaining, specifying a goal then seeking facts to support it. The system asks the user questions from the rules until a diagnosis is reached, e.g.:

• If the affected stage of the bee is the larva
• and the colour of the infected larva is black or white
• and the infected larva appears mummified
• or the cell cappings appear scattered
• then chalkbrood disease, confidence = 80%.

The user is shown images of the symptoms and given possible control measures. Mahaman et al. [106] checked diagnoses by the expert system with human experts. The results showed that the system was effective, with performance close to that of a human expert.
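A forward-chaining engine of the kind described in Section 3.1 can be sketched in a few lines of Python. The single chalkbrood rule echoes Box 2 but is simplified to a conjunctive condition; the attribute names and the way certainty is stored are illustrative assumptions, not the design of Mahaman et al. [106].

```python
# Minimal forward-chaining (data-driven) rule engine.
# Each rule: (conditions that must all hold, fact to assert, certainty factor).
RULES = [
    ({"affected_stage": "larva",
      "larva_colour": "black or white",
      "larva_mummified": True},
     ("diagnosis", "chalkbrood disease"), 0.8),
]

def forward_chain(facts, rules=RULES):
    memory = dict(facts)                      # working memory
    changed = True
    while changed:                            # match-select-act cycle
        changed = False
        for conditions, (key, value), cf in rules:
            matches = all(memory.get(attr) == val for attr, val in conditions.items())
            if matches and key not in memory:
                memory[key] = (value, cf)     # action part adds new information
                changed = True
    return memory

result = forward_chain({"affected_stage": "larva",
                        "larva_colour": "black or white",
                        "larva_mummified": True})
# result["diagnosis"] -> ("chalkbrood disease", 0.8)
```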
RBS can also be employed as an assessment tool,e.g.in evaluating regional environments[91]or assessing the impact of water-regime changes on wetland functions[84].Other studies have applied elements of RBS with other modelling techniques(Section11)in landscape-change modelling based on sea-level-rise scenarios[23]and assessing the environmental life cycle of alternative energy sources[158].Box2An application of rule-based systems.“A diagnostic expert system for honeybee pests.”[106]Mahaman et al.developed a rule-based system to diagnose honeybee pests and recommend treatments.Pests(diseases,insects,mites,birds and mammals)may reduce the quantity and quality of honey production.Threats to bee health must be identified and managed.Thefirst step was to learn from the literature and experts about pest symptoms.The results were converted to conditional rules by backward chaining,specifying a goal then seeking facts to support it.The system asks the user questions from the rules until a diagnosis is reached,e.g.:•If the affected stage of the bee is the larva•and the colour of the infected larva is black or white•and the infected larva appears mummified•or the cell cappings appear scattered•then chalkbrood disease confidence=80%.The user is shown images of the symptoms and given possible control measures.Mahaman et al.[106]checked diagnosis by the expert system with human experts.The results showed that the system was effective,with performance close to that of a human expert.S.H.Chen et al./Mathematics and Computers in Simulation 78(2008)379–4003834.Artificial neural networks4.1.DescriptionArtificial neural networks (ANNs)employ a caricature of the way the human brain processes information.An ANN has many processing units (neurons or nodes)working in unison.They are highly interconnected by links (synapses)with weights [170,133,5].The network has an input layer,an output layer and any number of hidden layers.A neuron is linked to all neurons in the next layer [71],as shown in Fig.2.Neuron x has n inputs and one outputy (x )=g n i =0w i x iwhere (w 0,...,w n )are the input weights and g is the non-linear activation function [119],usually a step (threshold)function or sigmoid (Fig.3).The step-function output is y =1if x ≥θ,0if x ≤θ.The sigmoid function,more commonly used,is asymptotic to 0and 1[83]and antisymmetric about (0,0.5):g (x )=11+e −βx ,β>0ANNs may be feedforward (the commonest)or rmation flow is unidirectional in feedforward ANNs,with no cycles,but in both directions in feedback ANNs so they have cycles,by which their state evolves to equilibrium [151].Fig.2.An example of an artificial neural network [134].Fig.3.(a)Step function and (b)sigmoid function.384S.H.Chen et al./Mathematics and Computers in Simulation78(2008)379–400Fig.4.Seven categories of tasks ANN can perform[83].ANNs require few prior assumptions,learning from examples[133]by adjusting the connection weights.Learning may be supervised or unsupervised.Supervised learning gives the ANN the correct output for every input pattern.The weights are varied to minimise error between ANN output and the given output.One form of supervised learning, reinforcement learning,tells the ANN if its output is right rather than providing the correct value[170].Reinforcement learning is discussed in Section10.Unsupervised learning gives the ANN several input patterns.The ANN then explores the relations between the patterns and learns to categorise the input[83].Some ANNs combine supervised and unsupervised learning.4.2.DiscussionANNs are useful 
4.2. Discussion

ANNs are useful in solving data-intensive problems where the algorithm or rules to solve the problem are unknown or difficult to express [178]. The data structure and non-linear computations of ANNs allow good fits to complex, multivariable data. ANNs process information in parallel and are robust to data errors. They can generalise, finding relations in imperfect data so long as they do not have enough neurons to overfit data imperfections. A disadvantage of ANNs is that they are uninformative black-box models and thus unsuitable for problems requiring process explanation. If an ANN fails to converge, there is no means to find out why.

ANNs can be applied to seven categories of problems [83] (Fig. 4): pattern classification, clustering, function approximation, prediction, optimisation, retrieval by content and process control. Pattern classification assigns an input pattern to one of the pre-determined classes, e.g. land classification from satellite imagery [140] or sewage odour classification [122]. Clustering is unsupervised pattern classification, e.g. of input patterns to predict ecological status of streams [163]. Function approximation, also called regression, generates a function from a given set of training patterns, e.g. modelling river sediment yield [33] or catchment water supply [79], predicting ozone concentration [147], modelling leachate flow rate [88] or estimating the nitrate distribution in groundwater [8]. Prediction estimates output from previous samples in a time series, e.g. of weather [95], air quality [7,145] or water quality [178]. Optimisation maximises or minimises a cost function subject to constraints, e.g. calibrating infiltration equations [82]. Retrieval by content recalls memory, even if the input is partial or distorted, e.g. producing water-quality proxies from satellite imagery [129]. An example of process control is engine speed control, keeping speed near-constant under varying load torque by changing throttle angle [83].

Box 3. An example of an application of artificial neural networks: "Use of ANNs for modelling cyanobacteria Anabaena spp. in the River Murray, South Australia."

Maier et al. [108] used ANNs to model the occurrence of cyanobacteria Anabaena spp. in the River Murray at Morgan, South Australia. This species group, the commonest in the lower Murray, is of great concern as Adelaide gets its water from there. Other studies had shown failure of process-based and traditional statistical approaches to predict the size and timing of incidence of algae species. ANNs seemed well suited to modelling ecological data, not requiring the probability distribution of the input data. The concentrations of Anabaena from 1985/1986 to 1992/1993 were tested against eight weekly variables: colour, turbidity, temperature, flow, total phosphorus, soluble phosphorus, oxidised nitrogen and total iron. The model included time lags of up to 21 weeks. Various network geometries were examined. The number of hidden layer nodes was limited to ensure the ANN could approximate a wide range of continuous functions yet avoid overfitting the training data:

N_H ≤ 2 N_I + 1 ;  N_H ≤ N_TR / (N_I + 1)

where N_H is the number of hidden layer nodes, N_I is the number of inputs and N_TR is the number of training samples (here 365). Training data were from 1985/1986 to 1991/1992. Earlier trials had determined the optimum learning rate and number of training samples. After training, data for November 1992 to April 1993 tested the network's ability to provide 4-week forecasts. Sensitivity analyses were conducted on the best models to determine the relative strength of relations between the input and output variables. The ANN model indicated that flow, temperature and colour had most influence on the Anabaena spp. population. Addition of nutrients, iron and turbidity did not significantly improve the forecasts. The ANN successfully forecast the major peak in Anabaena spp. concentration in the River Murray, although it missed a minor peak early in the test period. It was suggested that the dataset used for model calibration contained insufficient data on that type of event. Maier et al. [108] concluded that ANNs were a useful tool for modelling cyanobacteria incidences in rivers.
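To make the notation of Section 4.1 concrete, the sketch below computes a forward pass through a small feedforward network with one hidden layer and sigmoid activations. The layer sizes, random weights and omission of bias terms are arbitrary choices for illustration; they have nothing to do with the network geometry of Box 3.

```python
import numpy as np

def sigmoid(x, beta=1.0):
    """Sigmoid activation g(x) = 1 / (1 + exp(-beta * x))."""
    return 1.0 / (1.0 + np.exp(-beta * x))

rng = np.random.default_rng(seed=1)
W1 = rng.normal(size=(8, 3))        # weights: 3 inputs -> 8 hidden nodes
W2 = rng.normal(size=(1, 8))        # weights: 8 hidden nodes -> 1 output

def forward(x):
    """Forward pass: each node applies g to the weighted sum of its inputs."""
    hidden = sigmoid(W1 @ x)
    return sigmoid(W2 @ hidden)

y = forward(np.array([0.2, 0.5, 0.1]))   # network output, in (0, 1)
```

Supervised training would then adjust W1 and W2 to reduce the error between y and the known target output, for example by back-propagation.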
5. Genetic algorithms

5.1. Description

A genetic algorithm (GA) is a search technique mimicking natural selection [24]. The algorithm evolves until it satisfactorily solves the problem, through the fitter solutions in a population surviving and passing their traits to offspring which replace the poorer solutions. Each potential solution is encoded, for example as a binary string, called a chromosome. Successive populations are known as generations [23].

The initial population G(0) is generated randomly. Thereafter G(t) produces G(t+1) through selection and reproduction [24]. A proportion of the population is selected to breed and produce new chromosomes. Selection is according to fitness of individual solutions, i.e. proximity to a perfect solution [26], most often by roulette selection and deterministic sampling. Roulette selection randomly selects a parent with probability calculated from the fitness f_i of each individual by [24]:

F_i = f_i / \sum_i f_i

Deterministic sampling assigns a value C_i = RND(n F_i) + 1 to organism i of the n organisms in the population, where RND rounds its argument to the nearest integer. Each organism is selected as a parent C_i times [24].

Reproduction is by genetic crossover and mutation. Crossover produces offspring by exchanging chromosome segments from two parents [26]. Mutation randomly changes part of one parent's chromosome. This occurs infrequently and introduces new genetic material [24]. Although mutation plays a smaller part than crossover in advancing the search, it is critical in maintaining genetic diversity. If diversity is lost, evolution is retarded or may stop [76]. In steady-state GAs, offspring generated by the genetic operators replace less fit members, resulting in higher average fitness. Simple or generational algorithms replace each entire generation [24]. Selection and reproduction are repeated until a stopping criterion is met [24], e.g. all organisms are identical or very similar, a given number of evaluations has been completed, or maximum fitness has been reached; evolution no longer yields better results.

5.2. Discussion

GAs are computationally simple and robust, and balance load and efficacy well [67]. This partly results from only examining fitness, ignoring other information such as derivatives [94]. GAs treat the model as a black box, an advantage when detailed information is unavailable. An important strength of GAs is implicit parallelism; a much larger number of code sequences are indirectly sampled than are actually tested by the GA [76]. Unlike most stochastic search techniques, which adjust a single solution, a GA keeps a population of solutions. Maintaining several possible solutions reduces the probability of reaching a false (local) optimum [67]. Therefore GAs can be very useful in searching noisy and multimodal relations. However, the latter may take a large computation time. In most cases, GAs use randomisation in selection. They avoid picking only the best individual and thus prevent the population from converging to that individual. However, premature convergence on a local optimum can occur if the GA magnifies a small sampling error [63]. If a very fit individual emerges early and reproduces abundantly, early loss of diversity may lead to convergence on that local optimum. GAs often optimise model parameters or resource management. Some application examples include modelling species distribution [153], air quality forecasting [87], estimating soil bulk density [38], calibrating water-quality models [127], finding the best solution in an integrated management system [157] and water management [80].

Box 4. An example of an application of genetic algorithms: "Using genetic algorithms to optimise model parameters." [165]

Wang [165] used a GA to calibrate a conceptual rainfall-runoff model of the Bird Creek catchment, Oklahoma. The model had nine parameters, dealing with water balance and routing. The GA optimised the parameters by minimising the sum of squares of differences between computed and observed daily discharges. The GA had a population size of n = 100 potential parameter solutions. The chromosomes had 7 bits, allowing 128 values. Selection assigned probability p_j to point j:

p_j = p_1 + (j - 1)(p_n - p_1) / (n - 1)

The average probability was 1/n, the best point's was 1.5 times the average and the probabilities summed to 1 over all points. After ranking, the best point is j = n, with the largest probability value p_n. The worst is j = 1, with the smallest probability value, p_1. Two points A and B are selected at random according to the probability distribution. Two bit positions, k_1 and k_2, randomly selected in the chromosome coding, determine where crossover occurs. A new point is produced by taking the values of bits k_1 to k_2 - 1 of A's coding and values of bits from k_2 to the end and from 1 to k_1 - 1 of B's coding. Mutation of new points occurs with probability 0.01. The GA generated 10 runs with randomly selected initial populations. Each run was stopped after 10,000 fitness evaluations. For all runs, the best objective function values were very close to the global minimum. When the search converged on a local optimum, the local optimum had a similar objective value to that of the global optimum. Wang [165] concluded that GAs are capable and robust.
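The selection, crossover and mutation operators of Section 5.1 can be assembled into a toy generational GA as below. The bit-counting fitness function, population size, mutation rate and single-point crossover are placeholders chosen only to make the example self-contained; Wang's calibration problem in Box 4 used a real objective function and two-point crossover.

```python
import random

def fitness(chrom):
    return sum(chrom)                 # toy objective: number of 1-bits

def roulette_select(pop):
    """Pick a parent with probability proportional to fitness (F_i = f_i / sum f_i)."""
    total = sum(fitness(c) for c in pop) or 1
    r, acc = random.uniform(0, total), 0.0
    for c in pop:
        acc += fitness(c)
        if acc >= r:
            return c
    return pop[-1]

def crossover(a, b):
    k = random.randrange(1, len(a))   # single crossover point
    return a[:k] + b[k:]

def mutate(chrom, rate=0.01):
    return [bit ^ 1 if random.random() < rate else bit for bit in chrom]

pop = [[random.randint(0, 1) for _ in range(16)] for _ in range(30)]
for generation in range(50):          # simple generational replacement
    pop = [mutate(crossover(roulette_select(pop), roulette_select(pop)))
           for _ in range(len(pop))]
best = max(pop, key=fitness)
```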
6. Cellular automata

6.1. Description

Cellular automata are dynamic models, discrete in space, time and state. They consist of a regular lattice of cells which interact with their neighbours. The cell states are synchronously updated in time according to local rules, which calculate the new state of a cell at time t+1 using its state and those of neighbouring cells at time t [34]. The neighbours are all those in given proximity, as illustrated in Fig. 5.

Fig. 5. Common neighbourhood configurations: (a) neighbourhood {-1, 0, 1}, (b) 5-neighbour square, also referred to as 'von Neumann neighbourhood', and (c) 9-neighbour square, also referred to as 'Moore neighbourhood' (adapted from [68]).

The simplest 'elementary' cellular automaton (CA) is a linear chain of n cells. The cell states in a one-dimensional CA are

a_i(t+1) = f(a_{\{i\}}(t))

where a_i is the state of cell i at time t and f(a_{\{i\}}(t)) is a function of the states of its neighbouring cells [150]. If the neighbourhood is the immediately adjacent cells, then

f(a_{\{i\}}(t)) ≡ f(a_{i-1}(t), a_i(t), a_{i+1}(t)).

For a 5-neighbour square two-dimensional CA with cells (i, j+1), (i+1, j), (i, j-1), (i-1, j) as the neighbours of cell (i, j) [150],

a_{i,j}(t+1) = f(a_{i,j}(t), a_{i,j+1}(t), a_{i+1,j}(t), a_{i,j-1}(t), a_{i-1,j}(t))

As the cells are updated, the disordered initial state of the CA usually evolves [150] into one of four patterns I-IV illustrated in Fig. 6: homogeneous, simple separated periodic structures, chaotic aperiodic patterns or a complex pattern of local structures. These patterns correspond to four classes of CA described by Wolfram [168]. Classes I, II and III are analogous to the limit points, limit cycles and chaotic attractors observed in continuous dynamical systems. Class IV CAs have no direct analogue, exhibiting complex behaviour neither completely repetitive nor completely random.

Fig. 6. The four classes of evolved patterns in CA: (I) homogeneous, (II) periodic, (III) chaotic, and (IV) complex pattern of localised structures [114].
6.2. Discussion

One of the main problems with cellular automata concerns boundary conditions. In the periodic condition, the cells on one edge are made neighbours with those on the opposite edge, producing a torus. Other possibilities are reflecting and absorbing boundary conditions, which assume no off-lattice neighbours and assign the border states unique values or rules. Unfortunately these boundary conditions are unlikely to reflect real life. The problem could be avoided by making the lattice significantly larger than the area under study, at a higher cost in resources and computation [62].

Cellular automata are simple mathematical models that can simulate complex physical systems. They can incorporate interactions and spatial variations, and also spatial expansion with time [54]. For some parameter values, they are very sensitive to the initial state and transition functions. Therefore, they have limited capacity to make precise predictions [81], but are often used to understand real environments. Examples of CA applications include modelling population dynamics [51,152], animal migration [143], unsaturated flow [61], landscape changes [55], debris flow [39], earthquake activity [65], lava flow [37] and fire spread [75,89]. In these examples it is clear how the state of a cell is influenced by the previous state of its neighbours.

Box 5. An example of an application of a cellular automaton: "Simulation of vegetable population dynamics based on Cellular Automata." [10]

Bandini and Pavesi [10] developed a CA-based model that simulated the population dynamics of robiniae, oak and pine trees on the foothills of the Italian Alps. A two-dimensional CA was created, with individual cells representing portions of a given area. The state of each cell was defined by: (i) the presence or absence of a tree, (ii) the amount of resources (water, light, N and K) present, and (iii) if a tree was present, its features (species, size and resource needs).

A cell can host a tree if the resources are suitable. The sprouting of a tree also requires that the cell contains a seed and no other tree. Seeds can be introduced into the cell from fruiting trees in neighbouring cells. As the tree grows, it has a greater resource requirement, obtainable from neighbouring cells. The development of this model involved defining rules describing tree sustenance and reproduction, resource production and flow, and the influence of a tree on neighbouring cells. Two neighbourhood configurations, 'von Neumann' (5-neighbour) and 'Moore' (9-neighbour), were considered. The initial parameters of each cell are defined by the user. As the model runs, the system evolves step by step. Bandini and Pavesi [10] found that the simulations were qualitatively similar to real cases.
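The elementary one-dimensional CA of Section 6.1 can be written directly from its definition. The sketch below uses periodic boundaries and Wolfram's rule 30 purely as an example of a local update function f; it is not the rule set used by Bandini and Pavesi [10].

```python
RULE = 30                              # example update rule (Wolfram numbering)

def f(left, centre, right):
    """Local rule: map the 3-cell neighbourhood state to the new cell state."""
    index = (left << 2) | (centre << 1) | right
    return (RULE >> index) & 1

def step(cells):
    """Synchronous update of all cells with periodic boundary conditions."""
    n = len(cells)
    return [f(cells[(i - 1) % n], cells[i], cells[(i + 1) % n]) for i in range(n)]

cells = [0] * 31
cells[15] = 1                          # single occupied cell as initial state
for t in range(15):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Changing RULE changes which of Wolfram's four behaviour classes the evolved pattern falls into.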
7. Fuzzy systems

7.1. Description

Fuzzy systems (FS) use fuzzy sets to deal with imprecise and incomplete data. In conventional set theory an object is a member of a set or not, but fuzzy set membership takes any value between 0 and 1. Thus fuzzy models can describe vague statements as in natural language [117]. Roberts [131] gives an example of a FS, a vegetation model assigning plant communities to community types, with fuzzy membership indicating the degree to which the vegetation meets each type definition. The membership values for each plant community sum, over all community types, to 1. For example, plant community χ had fuzzy memberships {μ_i}: μ(PIPO/PIPO) = 0.2, μ(PIPO/PSME) = 0.3, μ(PIPO/PIPU) = 0.1 and μ(PIPO/ABCO) = 0.4 [131].

Fuzzification transforms exact (crisp) input values into fuzzy memberships [93]. Fuzzy models are built on prior rules, combined with fuzzified data by the fuzzy inference machine. The resulting fuzzy output is transformed to a crisp number (defuzzification [142]). Techniques include maximum, mean-of-maxima and centroid defuzzification. Fig. 7 shows the components of a fuzzy system.

7.2. Discussion

Human reasoning handles vague or imprecise information. The ability of fuzzy systems to handle such information is one of their main strengths over other AI techniques, although they are mostly easier to understand and apply. One of the main difficulties in developing a fuzzy system is determining good membership functions. Fuzzy systems have no learning capability or memory [64]. To overcome such limitations, fuzzy modelling is often combined with other techniques to form hybrid systems, e.g. with neural networks to form neuro-fuzzy systems (see Section 11).

Fuzzy systems handle incomplete or imprecise data in applications including function approximation, classification/clustering, control and prediction. Instances include modelling vegetation dynamics [131], estimating soil hydraulic properties [58], modelling macroinvertebrate habitat suitability [162], evaluating habitat suitability for riverine forests.
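A minimal sketch of fuzzification and centroid defuzzification, in the spirit of Section 7.1, is given below. The triangular membership functions, the temperature sets and the output centres are invented for illustration and do not come from any of the cited studies.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzify(temp):
    """Crisp temperature (deg C) -> fuzzy memberships of three assumed sets."""
    return {"cool": tri(temp, 0, 10, 20),
            "mild": tri(temp, 10, 20, 30),
            "warm": tri(temp, 20, 30, 40)}

CENTRES = {"cool": 5.0, "mild": 50.0, "warm": 95.0}   # assumed crisp output value per set

def defuzzify_centroid(memberships):
    """Membership-weighted average of the set centres (centroid defuzzification)."""
    den = sum(memberships.values())
    if den == 0.0:
        return 0.0
    return sum(mu * CENTRES[label] for label, mu in memberships.items()) / den

memberships = fuzzify(24.0)            # fuzzification of a crisp input
crisp_output = defuzzify_centroid(memberships)   # defuzzified result
```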