Numerical control machining simulation: a comprehensive survey

Yu Zhang a,b*, Xun Xu b and Yongxian Liu a
a School of Mechanical Engineering and Automation, Northeastern University, Shenyang 110004, PRC; b Department of Mechanical Engineering, School of Engineering, University of Auckland, Auckland 1010, New Zealand
(Received 20 July 2010; final version received 22 February 2011; available online 31 May 2011)
International Journal of Computer Integrated Manufacturing, Vol. 24, No. 7, July 2011, 593–609. ISSN 0951-192X print / ISSN 1362-3052 online. © 2011 Taylor & Francis. DOI: 10.1080/0951192X.2011.566283

Since the first numerical control (NC) machine tool was created at Massachusetts Institute of Technology in the 1950s, productivity and quality of machined parts have been increased through using NC and later computer numerical control (CNC) machine tools. Like other computer programs, errors may occur in a CNC program, which may lead to scrap or even accidents. Therefore, NC programs need to be verified before actual machining. Computer-based NC machining simulation is an economic and safe verification method. So far, much research effort concerning NC machining simulation has been made. This paper aims to provide a comprehensive review of such research work and a clear understanding of the direction of the field. First, the definition, common errors, programming approaches and verification methods of NC programs are introduced. Then, the definitions of geometric and physical NC machining simulation are presented. Four categories of NC machining simulation methods are discussed: solid-based, object space-based, image space-based and Web-based NC machining simulations. Finally, future trends and concluding remarks are presented.

Keywords: computer numerical control; CNC machining; machining simulation; NC program

1. Introduction

Over the years, technologies such as numerical control (NC), computer numerical control (CNC) and virtual manufacturing (VM) have changed the way products are made. These developments have improved machine tools and forever changed manufacturing processes, so that today it is possible to automatically produce high-quality products quickly, accurately and at lower cost than ever before (Krar et al. 2002). As one of these developments, VM technology refers broadly to the modelling of manufacturing systems and components with an effective use of audiovisual and/or other sensory features to simulate or design alternatives for a real manufacturing environment, mainly through computers. The motivation is to enhance our ability to predict potential problems and inefficiencies in product functionality and manufacturability before real manufacturing occurs (Banerjee and Zetu 2001). NC machining simulation constitutes an important part of VM technology. Since the development of the first NC machine tools in 1952 at Massachusetts Institute of Technology, NC machining has become a dominant manufacturing mode. NC machining denotes that coded numerical information is used to control most of the machining actions, such as spindle speed, feed rate and tool path, while making the final workpiece.
The various approaches used to generate these NC codes may be classified into three groups: manual part programming, computer-assisted part programming and computer aided design (CAD) part programming. Simple programs can be created manually, perhaps with the aid of a calculator, while more complex programs are usually created using a computer or an automatically programmed tool. The manual method, while adequate for many simple point-to-point processes, requires the programmer to perform all calculations required to define the cutter-path geometry and can be time consuming. Errors made by the programmer are often not discovered until the program is tested graphically or on the machine tool. Error correction is cumbersome at a machine tool. In addition, because most machine tools have their own languages, the programmer is required to work with different instruction sets, which further complicates part-program creation. The computer-assisted part-programming language approach simplifies the process because the programmer uses the same language for each program, regardless of the target machine tool. Moreover, translation of the program to NC code is made by a post-processor, which is needed for each and every machine tool. These post-processors may not guarantee the correctness of a part program. Although the computer-assisted approach offers advantages over the manual approach, both approaches require the programmer to translate geometric information from one form (usually an engineering drawing) into another, which can be error prone. Creation of NC programs from a CAD model provides yet another option by allowing the part programmer to access the computational capabilities of a computer via an interactive graphics display console. This allows geometry to be described in the form of points, lines, arcs, and so on, just as it is on an engineering drawing, rather than requiring a translation to a text-oriented language. Use of a graphics display terminal also allows the system to display the resulting cutter-path geometry, allowing earlier verification of a program, which can avoid costly machine setups for program testing (Chang et al. 2006).

NC program errors are mainly those related to NC language grammar and machining parameters. Because an NC machining program is used to control NC machine tools, NC program errors may cause workpiece undercut and overcut; collision between the workpiece and a cutter, fixture, and/or machine tool; and even machine damage and personnel injury. Hence, after an NC program is generated, it must be carefully verified before being used in real machining. In general, there are three verification methods for an NC program, namely manual verification, shop-floor verification on NC machine tools and computer-based machining simulation. Manual verification involves reading and checking an NC program by an operator. This method can only be used to check simple and short NC programs or to correct some easy-to-find errors such as functional errors, grammatical errors and spindle speed errors. With the shop-floor verification method, NC programs are verified by machining wooden, plastic, wax or soft metal workpieces instead of an actual workpiece on a machine tool.
Although it is a reliable verification approach and a physical object can be obtained through the verification process, this approach is expensive and time consuming. In addition, it was reported more than a decade ago that the US industry spent $1.8 billion each year to prove or verify NC machining programs (Meister 1988). Another verification method is computer-based machining simulation. Without consumption of actual material and occupation of machine tools, it can graphically reveal the real machining process, check collisions, evaluate machining parameters, and reveal and iron out the bugs in a computer aided manufacturing (CAM) system. Therefore, it is more intuitive, faster, safer and more cost-effective. In addition, it can also be used for training machine tool operators.

NC simulation started to make inroads into commercial systems some 20 years ago. These tools come in three styles. Most of the current commercial CAD/CAM systems have their own NC simulation modules, e.g. Catia's DELMIA NC machine tool simulation tool, NX's CAM Integrated Simulation and Verification software and Pro/E Wildfire's Vericut. Most of the CAM tools (e.g. MasterCAM, GibbCAM and SmartCAM) are also equipped with simulation options. The third type of NC simulation tools are more or less standalone tools, such as ICAM's Virtual Machine, MachineWorks and ModuleWorks. All of these commercial simulation tools have limited functionalities, which is why research in the field of NC machining simulation is still ongoing.

The objective of this paper is to provide a technical review of computer-based NC machining simulation, categorised as geometric and physical simulations. The remainder of this paper is organised as follows. Section 2, the main section, describes different methods of machining simulation, i.e. solid-based NC machining simulation, object space-based NC machining simulation, image space-based NC machining simulation and Web-based NC machining simulation. Discussions and future trends are presented in Sections 3 and 4, respectively. Concluding remarks are given at the end.

2. Research of NC machining simulation

In this paper, machining simulations are divided into two categories, i.e. geometric simulation and physical simulation. As shown in Figure 1, geometric simulation is used to graphically check whether the cutters interfere with the fixture, workpiece and machine tool, gouge the part, or leave excess stock behind. In addition, it can provide geometric information, such as the entry and exit angles of the cutter, to physical simulation. As the name implies, physical simulation of an NC machining process aims to reveal the physical aspects of a machining process, such as cutting force, vibration, surface roughness, machining temperature and tool wear. It is based on geometric simulation and conventional metal cutting research (Lorong et al. 2006). Considering different methods of realising geometric and physical simulation, this section consists of five subsections, i.e. wireframe-based, solid-based, object space-based, image space-based and Web-based NC machining simulation. Solid-based NC machining simulation is further classified into constructive solid geometry (CSG)-based and boundary representation (B-rep)-based NC machining simulation. Object space-based NC machining simulation is categorised into Z-map-based, vector-based and octree-based NC machining simulation.
2.1. Wireframe-based NC machining simulation

In wireframe-based NC machining simulation, the tool path and the shape of the machined workpiece are displayed in the form of a wireframe. Because the wireframe model has a simple data structure and fast data operation, it was widely applied to early NC machining simulations. However, this representation makes complex three-dimensional (3D) objects ambiguous and does not provide an actual solid geometric model and information. Hence, wireframe-based NC machining simulation can only be applied to a simple workpiece and produce simple geometric simulation. EMC2 (2010) is one such free software package, which is a descendant of the original EMC software.

2.2. Solid-based NC machining simulation

Solid modelling is a much more complete 3D modelling representation. Solid models are useful for the geometric as well as physical simulation of machining processes, in which in-process workpiece, cutter and chip geometries can be accurately represented. This section discusses two types of solid-based machining simulations: CSG-based machining simulation and B-rep-based machining simulation.

2.2.1. CSG-based NC machining simulation

In CSG-based NC machining simulation, parts are represented by a CSG model. An early piece of work was done by Hunt and Voelcker (1982), who used the Part and Assembly Description Language (PADL) modelling system for 2.5-axis machining. Later on, by considering the physical aspects of the machining operation, Spence et al. (1990) and Spence and Altintas (1994) developed a CSG-based process simulation system for 2.5-axis milling, which consisted of a geometric simulator and a physical simulator. In this system, parts were first described using a CSG solid model. Then, along each path, after the cutter, represented by a semicircle, was intersected with the individual geometric primitives describing the part, the cutter/part immersion geometry was generated. Finally, the cutter-part intersection data were extracted from the cutter/part immersion geometry. The physical simulator used the cutter-part intersection data and mechanistic models (Tlusty and MacNeil 1975) to carry out cutting-force prediction. Applications in predicting cutting force in face-milling and end-milling operations confirmed the validity of the technique. However, this study is limited to 2.5-axis milling. In addition, two CSG-based methods for collision detection were proposed (Su et al. 1999, Ho et al. 2001). Taking advantage of the CSG 'divide-and-conquer' paradigm and distance-aided collision detection for convex bounding volumes, one of the methods realised efficient and precise collision detection. The other method adopted a heterogeneous representation in which CSG was used to represent the tool, and a cloud of over 10,000 points was used to represent the workpiece, for rapidly detecting collision and penetration depth. However, this method tends to lose efficiency and requires a substantial amount of memory as the number of sampled points increases. Furthermore, collisions between the tool and other static parts of the NC machine are not handled by the two above-mentioned methods.

2.2.2. B-rep-based NC machining simulation

The B-rep technique for solid modelling, where the surfaces, edges and vertices of an object are explicitly represented, has found wide applications in design and manufacturing (Requicha and Rossignac 1992). In an early work by O'Connell and Jabolkow (1993), B-rep solid models of the machined part were constructed from NC programs in a Cutter Location (CL) data format for 3-axis milling simulation.
Figure 1. General architecture of NC machining simulation.

A limitation in this study is that there is no consideration of physical simulation. To address this limitation, El-Mounayri et al. (1997, 1998, 2002), Imani et al. (1998), Imani and Elbestawi (2001) and Bailey et al. (2002a, 2002b) integrated geometric simulation with physical simulation. A good agreement between the simulated and experimentally measured results was obtained. El-Mounayri et al. (1997, 1998) developed a generic solid modeller-based ball-end milling process simulation system for 3-axis milling. First, a part was described using a B-rep solid model, and the cutting edges of a cutter were fitted with cubic Bezier curves. Second, for every completed tool path, i.e. one NC block, the tool swept volume was generated and intersected with the part to give the corresponding removed volume, in-process parts and final part. Third, the tool cutting edges were intersected with the removed volume to produce the tool-part immersion geometry. Finally, after the geometric information was extracted from the tool-part immersion geometry, the cutting force was predicted through the cutting-force model for ball-end mills developed by Abrari et al. (1998). This model is based on the concept of equivalent orthogonal cutting conditions and empirical equations for computing the shear angle, friction angle, and shear strength of the workpiece material. Later on, El-Mounayri et al. (2002) improved this physical simulation using Artificial Neural Network (ANN) technology and realised an integration of prediction and optimisation using the same ANN model (El-Mounayri and Deng 2010). Imani et al. (1998) did similar research. In contrast with El-Mounayri's work mentioned above, they represented the cutting edge by B-spline curves, which can also be used to represent any shape of cutting edge and are better suited than cubic Bezier curves. In addition, a new three-component mechanistic force model was developed to calculate instantaneous ball-end milling cutting forces. This force model not only takes into account the geometry of the cutting edge (i.e. rake and helix angles) but also considers the variations of the chip-flow angle and cutting coefficients in the axial direction. The geometric simulation module was then extended to simulate parts with free-form surfaces by using advanced sweeping/skinning techniques, and the physical simulation module was extended to surface roughness prediction (Imani and Elbestawi 2001). Subsequently, Bailey et al. (2002a, 2002b) extended Imani's work by representing an arbitrary cutting edge design using non-uniform rational B-spline curves, which are better than B-spline curves. In addition, to further guard against process planning errors or unexpected factory floor events, geometric simulation and physical simulation were integrated with factory floor monitoring and control (Saturley and Spence 2000, Spence et al. 2000). Since B-rep-based machining simulation becomes more time consuming with increased part complexity, parallel processing techniques were used (Fleisig and Spence 2005). To overcome this limitation and to improve computational efficiency, Spence et al. (1990), Spence and Altintas (1994) and Yip-Hoi and Huang (2006) used a semi-cylinder to represent the cutter instead of a semicircle, B-rep to model the part instead of CSG, and cutter engagement features to characterise cutter/workpiece engagement.
Later, Aras and Yip-Hoi (2008) and Ferry and Yip-Hoi (2008) extended Yip-Hoi and Huang's work to 3-axis machining and 5-axis machining, respectively. B-rep-based collision detection for 5-axis NC machining was also reported by Ilushin et al. (2005) and Wein et al. (2005).

2.3. Object space-based NC machining simulation

In an object space-based machining simulation, parts are represented by a collection of discrete points (with vectors), surfaces with vectors, or certain volume elements. Since objects are discretised, Boolean operations between objects become two-dimensional or even one-dimensional, so that the simulation computation is improved. Up to now, there are three major decomposition methods for the models in object space-based NC machining simulation: the Z-map method, the vector method and the octree method. These methods are described below.

2.3.1. Z-map method

The Z-map method is used to decompose the model of a part into many 3D histograms. Each 3D histogram starts with the value of the height of the stock. During the simulation process, each tool movement updates the heights of the 3D histograms it passes over if it cuts lower than the currently stored height. Therefore, the Boolean operation in this kind of simulation is one dimensional, which means simulation speed is very fast.
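To make this one-dimensional Boolean update concrete, the sketch below shows how a single linear tool movement could update a Z-map height field for an assumed flat-end cutter. It is a minimal illustration only, not taken from any of the surveyed systems; the grid layout, sampling density and function names are assumptions made for the example.

```python
import numpy as np

def zmap_cut(heights, cell_size, tool_path, tool_radius, tool_tip_z):
    """Update a Z-map (height field) for one linear tool movement.

    heights    : 2D array of stored stock heights, one value per XY grid cell
    cell_size  : grid spacing in XY (assumed uniform)
    tool_path  : ((x0, y0), (x1, y1)) start and end of the movement
    tool_radius: radius of an assumed flat-end cutter
    tool_tip_z : Z level of the cutter tip during this movement
    """
    (x0, y0), (x1, y1) = tool_path
    ny, nx = heights.shape
    # Sample the movement densely enough that consecutive samples overlap.
    steps = max(2, int(np.hypot(x1 - x0, y1 - y0) / (0.5 * cell_size)) + 1)
    for t in np.linspace(0.0, 1.0, steps):
        cx, cy = x0 + t * (x1 - x0), y0 + t * (y1 - y0)
        # Visit only the grid cells inside the cutter's bounding box.
        i_lo = max(0, int((cy - tool_radius) / cell_size))
        i_hi = min(ny - 1, int((cy + tool_radius) / cell_size))
        j_lo = max(0, int((cx - tool_radius) / cell_size))
        j_hi = min(nx - 1, int((cx + tool_radius) / cell_size))
        for i in range(i_lo, i_hi + 1):
            for j in range(j_lo, j_hi + 1):
                px, py = j * cell_size, i * cell_size
                if (px - cx) ** 2 + (py - cy) ** 2 <= tool_radius ** 2:
                    # One-dimensional Boolean subtraction: keep the lower height.
                    heights[i, j] = min(heights[i, j], tool_tip_z)
    return heights
```

The per-cell minimum is the entire Boolean operation, which is why Z-map simulation is fast; a real simulator would additionally handle ball-end tool geometry, the swept envelope between samples and, for 4- or 5-axis machining, non-vertical tool orientations.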
Anderson (1978) used a 3D histogram to approximate the billet and cutter assembly shape to detect collisions in NC machining. However, no provision is made for 4- or 5-axis machining when the cutter is not orthogonal to the cubes. Subsequently, this representation was called the Z-map. To get better geometric accuracy, computational efficiency and graphical quality, many researchers have used different technologies to enhance the traditional Z-map model. For example, Hsu and Yang (1993) used isometric projection and raster display to enhance the Z-map model for 3-axis milling processes. In 2002, Lee and Ko (2002) and Kang et al. (2002) used the inclined sampling method to enhance the Z-map model. In the same year, Lee and Lee (2002) developed a local mesh decimation method to achieve a smoother rendering as well as a dynamic viewing capability of the Z-map model in a 3-axis milling simulation. The overall algorithm for the periodic local mesh decimation is as follows.

(1) The Z-map data of the workpiece are divided into small regions, and meshes for each region are generated, decimated and stored;
(2) A tool movement command of the NC program is read and the Z-map data are updated accordingly. The display mode of a region cut by the tool is set to the Z-map rendering mode;
(3) Meshes for the regions are rendered by calling the rendering functions of a graphics library, such as OpenGL. If a region is set in the Z-map rendering mode, a mesh is generated from the Z-coordinates of the grids in the region and is passed to the rendering functions. Otherwise, the decimated mesh for the region is passed to the rendering functions;
(4) If the current iteration coincides with the decimation period, the meshes are generated and decimated for the regions in the Z-map rendering mode, then go to step (2). Otherwise, go to step (3) directly.

Because of its simple data structure and computational efficiency, the Z-map method has been employed for physical simulation such as cutting-force prediction and surface roughness prediction. It was shown that the method could effectively predict cutting force and surface roughness. The cutting force in ball-end milling of sculptured surfaces was predicted by Kim et al. (2000). In this study, the cutter contact area could be obtained by comparing the Z-map data of the cutter with that of the machined surface. Then, after the cutting edge elements were calculated, the 3D contact area and the cutting edge elements were projected onto the cutter plane, which was defined as a circular plane perpendicular to the cutter axis. By comparing cutting edge element positions with cutter contact area data on the cutter plane, the cutting edge elements engaged in the cutting process could be identified. Finally, cutting forces acting on the engaged cutting edge elements were calculated using a cutting-force model. Later, based on the cutting force, cutter deflection and form error were predicted accurately (Kim et al. 2003). Instead of the semi-spherical surface used in the research by Kim et al., Jung et al. (2001) proposed an exact chip engagement surface derived from the cutting edge geometry to more accurately update the workpiece model and predict the cutting force. However, a simple cutting-force model with no consideration of the size effect of the workpiece material was developed and used in this study, so the accuracy of the predicted cutting force was limited. Later, Zhu et al. (2001) extended cutting-force prediction to a 5-axis ball-end milling process. In the above research, average cutting coefficients depending on the cutting condition and the traditional Z-map model were used to predict the cutting forces in transient cuts, which resulted in inaccurate and inefficient prediction of the instantaneous cutting force, in particular at peak or valley points. To overcome these limitations, Ko et al. (2002) and Yun et al. (2002a, 2002b) proposed a new method of calculating a cutting-condition-independent coefficient considering the size effect and developed the moving edge node Z-map (ME Z-map). As shown in Figure 2, the fundamental idea in the ME Z-map is that the edge node, which refers to the node closest to the cutting edge, is moved towards the boundary of the cutter movement. Subsequently, physical simulation was extended to surface roughness prediction based on the traditional Z-map model (Liu et al. 2006).

Figure 2. Fundamental notions for ME Z-map (Adapted from Yun et al. 2002b).

2.3.2. Vector method

In the vector method, the surfaces of a part are approximated by a set of points with direction vectors that are normal and/or vertical to the surface at each point. A vector extends until it reaches the boundary of the original stock or intersects another surface of the part. To simulate the cutting process, the intersection of each vector with each tool movement's envelope is calculated. The length of a vector is reduced if it intersects the envelope. An analogy can be made to mowing a field of grass. Each vector in the simulation corresponds to a blade of grass 'growing' from the desired object. As the simulation progresses, the blades are 'mowed down'. The lengths of the final vectors correspond to the amount of excess material (if above the surface) or the depth of the gouge (if below the surface) at that point (Jerard et al. 1989). Therefore, this method is usually used to judge whether the part is undercut or overcut.
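The 'mowing' update just described can be sketched as follows. This is a minimal illustration under simplifying assumptions (vertical vectors and a flat-end cutter sampled at a single position); the class and function names are invented for the example, and a real system would intersect each vector with the swept envelope of every tool movement.

```python
from dataclasses import dataclass

@dataclass
class SurfaceVector:
    """A 'blade of grass': a vertical vector anchored on the design surface."""
    x: float
    y: float
    base_z: float   # Z of the design surface at (x, y)
    length: float   # current length above base_z (initially stock height - base_z)

def mow(vectors, tool_x, tool_y, tool_radius, tool_tip_z):
    """Shorten every vector whose tip lies inside the tool envelope."""
    for v in vectors:
        inside = (v.x - tool_x) ** 2 + (v.y - tool_y) ** 2 <= tool_radius ** 2
        if inside and tool_tip_z < v.base_z + v.length:
            # Cut the vector down to the tool tip height; a negative length
            # means the cut went below the design surface (a gouge).
            v.length = tool_tip_z - v.base_z

def verify(vectors, tolerance):
    """Classify the final vectors as excess material, in tolerance, or gouged."""
    report = {"excess": 0, "ok": 0, "gouge": 0}
    for v in vectors:
        if v.length > tolerance:
            report["excess"] += 1
        elif v.length < -tolerance:
            report["gouge"] += 1
        else:
            report["ok"] += 1
    return report
```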
As shown in Figure 3, Chappel (1983) proposed the 'point-vector' method, in which the part surface was approximated by a set of points with vectors to simulate the material removal process. However, he did not mention how to select the points. Later, Oliver and Goodman (1986) developed a system similar to Chappel's, which used a computer graphics image of the desired surface to select the points (Figure 4). This system was considered the first module of an NC verification technique. Subsequently, another two modules of this NC verification technique were developed to obtain an efficient simulation. The second module provided a means of extracting a subset of eligible points (Oliver and Goodman 1990). The third module realised the intersection of normal vectors with swept-volume models (Oliver 1992). Later on, as Figure 5 illustrates, Jerard et al. (1989) proposed an object-based surface discretisation modelling method that not only shared the characteristics of the methods of Chappel (1983) and Oliver and Goodman (1986) but also contained features that improved simulation efficiency. To implement highly accurate NC machining simulation for mould and die parts, Park et al. (2005) divided the discrete vector model into the discrete normal vector (DNV) and discrete vertical vector (DVV) models, and used DNV and DVV hybridly in the machining simulation. As shown in Figure 6, the strategy for modelling a shape that consists of the 'features' and the 'others' (i.e. smooth and non-steep areas) can be described as follows:

(1) Vertical wall, sharp edge and overhang features on the input design surfaces are identified, as shown in Figure 6(a);
(2) As shown in Figure 6(b), all feature areas are converted into a set of DNV elements;
(3) As shown in Figure 6(c), the whole surface area is converted into a single DVV model;
(4) The DNV and DVV models are used hybridly to compensate for each other's shortcomings. Figure 6(d) shows the conceptual hybrid model, where the DNV and DVV models are superimposed for better understanding.

Figure 3. Point-vector model (Adapted from Chappel 1983).
Figure 4. Image-based point-vector model (Adapted from Karunakaran et al. 2010). (a) Discrete model with outward vectors, (b) discrete model with vertical vectors.

2.3.3. Octree-based method

As illustrated in Figure 7, the octree schema represents parts in a tree structure. The nodes of the tree are cuboids and are recursively subdivided into eight mutually disjoint child nodes until all nodes contain no parts of the modelled object, or the desired accuracy of the object is reached. That is, each node is checked to see whether it is fully, partially or not (empty) occupied. If a node is empty or fully occupied, it does not need to be subdivided. If it is partially occupied, the node is subdivided further. The subdivision process is repeated until all nodes are either full or not occupied, or until the required geometric accuracy has been reached (Dyllong and Grimm 2008).
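A minimal sketch of the classify-and-subdivide loop just described is given below. The occupancy test is deliberately left abstract as a user-supplied classify function returning 'empty', 'full' or 'partial'; all names are illustrative assumptions rather than code from the cited systems.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional, Tuple

Box = Tuple[float, float, float, float, float, float]  # (xmin, ymin, zmin, xmax, ymax, zmax)

@dataclass
class OctreeNode:
    box: Box
    state: str = "partial"                       # 'empty', 'full' or 'partial'
    children: Optional[List["OctreeNode"]] = None

def build_octree(box: Box, classify: Callable[[Box], str],
                 min_size: float) -> OctreeNode:
    """Recursively subdivide a cuboid until every node is empty, full,
    or smaller than the accuracy threshold min_size."""
    node = OctreeNode(box, classify(box))
    x0, y0, z0, x1, y1, z1 = box
    too_small = max(x1 - x0, y1 - y0, z1 - z0) <= min_size
    if node.state != "partial" or too_small:
        return node                              # leaf: no further subdivision needed
    xm, ym, zm = (x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2
    node.children = [
        build_octree((xa, ya, za, xb, yb, zb), classify, min_size)
        for (xa, xb) in ((x0, xm), (xm, x1))
        for (ya, yb) in ((y0, ym), (ym, y1))
        for (za, zb) in ((z0, zm), (zm, z1))
    ]
    return node
```

In a machining simulator, the classify test would compare each cuboid against the in-process stock minus the swept volume of the tool, and cutting would amount to re-classifying (and, where necessary, re-subdividing) the nodes the tool passes through.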
Hierarchical octree representations offer an attractive alternative for NC machining simulation because their Boolean operation is simple, and the spatial ordering of the data structure maintains this simplicity even when the local cutting region becomes complex. Karunakaran and Shringi (2007) developed a machining simulation system in which the part was represented by a traditional octree for model creation and modification, and then by a B-rep for downstream applications such as animated display, verification and optimisation. To this end, an algorithm to convert the octree model of the instantaneous workpiece into a B-rep model was presented. This algorithm essentially decomposed the octree model into three quadtree models that store the geometry along the three principal directions (Karunakaran and Shringi 2007). Subsequently, this system was extended to physical simulation for cutting-force prediction using the material removal rate-based average cutting-force model (Karunakaran and Shringi 2008). However, this cutting-force model is inherently incapable of determining the instantaneous cutting force that is essential for optimised cutting and for arriving at optimal values of machining parameters such as feed rate. Therefore, a general instantaneous cutting-force model developed by Altintas and Lee (1996) was used to predict the cutting force (Karunakaran et al. 2010). It was shown that the estimated cutting force agreed well with the experimental results. Traditional octree-based machining simulation demands a large amount of memory and often results in inexact geometric representations. It has been shown that maintaining a tolerance of 0.01 mm over a volume of the order of 100 mm per side requires 10^8 octree nodes (Liu et al. 1996). Therefore, to decrease memory use and improve the geometric representation, a number of methods have been proposed. Brunet and Navazo (1990) developed an extended octree model to more accurately represent 3D objects. In the extended octree model, boundary nodes contain additional

Figure 5. Discrete models with vectors (Adapted from Jerard et al. 1989).
Figure 6. Feature shapes and conceptual hybrid models. (a) Feature shapes, (b) DNV model, (c) DVV model, (d) conceptual hybrid model (Adapted from Park et al. 2005).
Low-carbon economy (foreign literature translation)

Low-carbon economy
From Wikipedia, the free encyclopedia

A Low-Carbon Economy (LCE) or Low-Fossil-Fuel Economy (LFFE)[1] is an economy which has a minimal output of greenhouse gas (GHG) emissions into the biosphere, but specifically refers to the greenhouse gas carbon dioxide. Recently, most scientific and public opinion has come to the conclusion that there is such an accumulation of GHGs (especially CO2) in the atmosphere, due to anthropogenic causes, that the climate is changing. The over-concentration of these gases is producing global warming that affects long-term climate, with negative impacts on humanity in the foreseeable future.[2] Globally implemented LCEs are therefore proposed as a means to avoid catastrophic climate change, and as a precursor to the more advanced zero-carbon society and renewable-energy economy.

Rationale and aims

Nations seek to become low-carbon economies as part of a national global warming mitigation strategy. A comprehensive strategy to manage global warming comprises carbon neutrality, geoengineering and adaptation to global warming. The aim of an LCE is to integrate all aspects of itself, from manufacturing, agriculture, transportation and power generation, etc., around technologies that produce energy and materials with little GHG emission, and thus around populations, buildings, machines and devices which use those energies and materials efficiently and dispose of or recycle wastes so as to have a minimal output of GHGs. Furthermore, it has been proposed that to make the transition to an LCE economically viable we would have to attribute a cost per unit output to GHGs through means such as emissions trading and/or a carbon tax.

Some nations are presently low carbon: societies which are not heavily industrialised or populated. In order to avoid climate change on a global level, all nations considered carbon-intensive societies and societies which are heavily populated might have to become zero-carbon societies and economies. Several of these countries have pledged to cut their emissions by 100% via offsetting emissions rather than ceasing all emissions (carbon neutrality); in other words, emitting will not cease but will continue and will be offset in a different geographical area.

Energy policy

A country's energy policy will be immediately impacted by a transition toward a low-carbon economy. Advisory bodies and techno-economic modelling such as the POLES energy model can be used by governments and NGOs in order to study transition pathways. Nuclear power, or the proposed strategies of carbon capture and storage (CCS), have been proposed as the primary means to achieve an LCE while continuing to exploit non-renewable resources; there is concern, however, with the matter of spent-nuclear-fuel storage and security, and with the uncertainty of the costs and time needed to successfully implement CCS worldwide, together with guarantees that the stored emissions will not leak into the biosphere. Alternatively, many have proposed that renewable energy should be the main basis of an LCE, although renewables have their associated problems of high cost and inefficiency; this is changing, however, since investment and production have been growing significantly in recent times.[3] Furthermore, regardless of the effect of GHG emissions on the biosphere, the growing issue of peak oil may also be reason enough for a transition to an LCE.

Agriculture

See also: Low carbon diet

Foodstuffs should be produced as close as possible to the final consumers, preferably within walking/cycling distance.
This will reduce the amount of carbon-based energy necessary to transport the foodstuffs. Consumers can also buy fresh food rather than processed food, since carbon-based energy might be used to process the food. Cooking presents another opportunity to conserve energy. Energy could be saved if farmers produced more foods that people would eat raw. Also, most of the agricultural facilities in the developed world are mechanized due to rural electrification. Rural electrification has produced significant productivity gains, but it also uses a lot of energy. For this and other reasons, such as transport costs, in a low-carbon society rural areas would need available supplies of renewably produced electricity. Irrigation can be one of the main components of an agricultural facility's energy consumption; in parts of California it can be up to 90%.[4] In the low-carbon economy, irrigation equipment will be maintained and continually updated and farms will use less irrigation water.

Crops

Different crops require different amounts of energy input. For example, glasshouse crops, irrigated crops, and orchards require a lot of energy to maintain, while row crops and field crops don't need as much maintenance. Those glasshouse and irrigated crops that do exist will incorporate the following improvements:[5]

Livestock

Livestock operations can also use a lot of energy depending on how they are run. Feed lots use animal feed made from corn, soybeans, and other crops. Energy must be expended to produce these crops and to process and transport them. Free-range animals find their own vegetation to feed on. The farmer may expend energy to take care of that vegetation, but not nearly as much as the farmer who grows cereal and oil-seed crops. Many livestock operations currently use a lot of energy to water their livestock. In the low-carbon economy, such operations will use more water conservation methods such as rainwater collection, water cisterns, etc., and they will also pump/distribute that water with on-site renewable energy sources, most likely wind and solar. Due to rural electrification, most agricultural facilities in the developed world use a lot of electricity. In a low-carbon economy, farms will be run and equipped to allow for greater energy efficiency. The dairy industry, for example, will incorporate the following changes (irrigated dairy): chemical substitutes for hot water wash.[5]

Hunting and Fishing

Fishing is quite energy intensive. Improvements such as heat recovery on refrigeration and trawl net technology will be common in the low-carbon economy.[5]

Forestry

Main article: Wood economy

In the low-carbon economy, forestry operations will be focused on low-impact practices and regrowth. Forest managers will make sure that they do not disturb soil-based carbon reserves too much. Specialized tree farms will be the main source of material for many products. Quick-maturing tree varieties will be grown on short rotations in order to maximize output.[6]

Mining

Main article: Gas flare

Flaring and venting of natural gas in oil wells is a significant source of greenhouse gas emissions.
Its contribution to greenhouse gases has declined by three-quarters in absolute terms since a peak in the 1970s of approximately 110 million metric tons/year and now accounts for about one-half of one percent of all anthropogenic carbon dioxide emissions.[7] The World Bank estimates that 100 billion cubic meters of natural gas are flared or vented annually, an amount equivalent to the combined annual gas consumption of Germany and France, twice the annual gas consumption of Africa, three-quarters of Russian gas exports, or enough to supply the entire world with gas for 20 days. This flaring is highly concentrated: 10 countries account for 75% of emissions, and twenty for 90%.[8] The largest flaring operations occur in the Niger Delta region of Nigeria. The leading contributors to gas flaring are, in declining order: Nigeria, Russia, Iran, Algeria, Mexico, Venezuela, Indonesia, and the United States.[9]

Retail

Retail operations in the low-carbon economy will have several new features. One will be high-efficiency lighting such as compact fluorescent, halogen, and eventually LED light sources. Many retail stores will also feature roof-top solar panel arrays. These make sense because solar panels produce the most energy during the daytime and during the summer. These are the same times that electricity is the most expensive and also the same times that stores use the most electricity.[10]

Transportation Services

- More energy efficiency and alternative propulsion:
  - Increased focus on fuel-efficient vehicle shapes and configurations, with more vehicle electrification, particularly through plug-in hybrids
  - More alternative and flex-fuel vehicles based on local conditions and availability
  - Driver training for more fuel efficiency
  - Low-carbon biofuels (cellulosic biodiesel, bioethanol, biobutanol)
  - Petroleum fuel surcharges will be a more significant part of consumer costs
- Less international trade of physical objects, despite more overall trade as measured by value of goods
- Greater use of marine and electric rail transport, less use of air and truck transport
- Increased bicycle and public transport usage, less reliance on private motor vehicles
- More pipeline capacity for common fluid commodities such as water, ethanol, butanol, natural gas, petroleum, and hydrogen in addition to gasoline and diesel

See [11][12][13]

Health Services

There have been some moves to investigate the ways and extent to which health systems contribute to greenhouse gas emissions and how they may need to change to become part of a low-carbon world. The Sustainable Development Unit[14] of the NHS in the UK is one of the first official bodies to have been set up in this area, whilst organisations such as the Campaign for Greener Healthcare[15] are also producing influential changes at a clinical level. This work includes:

- Quantification of where the health services' emissions stem from
- Information on the environmental impacts of alternative models of treatment and service provision

Some of the suggested changes needed are:

- Greater efficiency and lower ecological impact of energy, buildings, and procurement choices (e.g. in-patient meals, pharmaceuticals and medical equipment)
- A shift from focusing solely on cure to prevention, through the promotion of healthier, lower-carbon lifestyles, e.g. diets lower in red meat and dairy products, walking or cycling wherever possible, better town planning to encourage more outdoor lifestyles
- Improving public transport and lift-sharing options for transport to and from hospitals and clinics

Initial steps

Internationally, the most prominent early step in the direction of a low-carbon economy was the signing of the Kyoto Protocol, which came into force on February 16, 2005, under which most industrialized countries committed to reduce their carbon emissions.[16][17] Importantly, all member nations of the Organization for Economic Co-operation and Development except the United States have ratified the protocol.

Countries

Costa Rica

Costa Rica sources much of its energy needs from renewables and is undertaking reforestation projects. In 2007 the Costa Rican government announced the commitment for Costa Rica to become the first carbon-neutral country by 2021.[18][19][20]

Iceland

Main article: Renewable energy in Iceland

Iceland began utilising renewable energy early in the 20th century and has since been a low-carbon economy. However, following dramatic economic growth, Iceland's emissions per capita have increased significantly. As of 2009, Iceland's energy is sourced mostly from geothermal energy and hydropower; renewable energy has, since 1999, provided over 70% of the nation's primary energy and 99.9% of Iceland's electricity.[21] As a result, Iceland's carbon emissions per capita are 62% lower than those of the United States[22] despite using more primary energy per capita,[23] due to the fact that this energy is renewable, and thus limitless, and costs Icelanders almost nothing. Iceland seeks carbon neutrality and expects to use 100% renewable energy by 2050 by generating hydrogen fuel from renewable energy sources.

Australia

Main article: Renewable energy in Australia

Australia has implemented schemes to start the transition to a low-carbon economy, but carbon neutrality has not been mentioned, and since the introduction of such schemes emissions have increased. The current government has mentioned the concept but has done little, and has pledged to lower emissions by 5-15%. In 2001, the Howard Government introduced a Mandatory Renewable Energy Target (MRET) scheme. In 2007, the Government revised the MRET: 20 per cent of Australia's electricity supply is to come from renewable energy sources by 2020. In 2009, the Rudd Government will legislate a short-term emissions reduction target, another revision to the Mandatory Renewable Energy Target, as well as an emissions trading scheme. Renewable energy sources provide 8-10% of the nation's energy, and this figure will increase significantly in the coming years. However, coal dependence and exporting conflict with the concept of Australia as a low-carbon economy. Carbon-neutral businesses have received no incentive; they have done so voluntarily. Carbon offset companies offer assessments based on life-cycle impacts to businesses that seek carbon neutrality. The Carbon Reduction Institute is one such offset provider, which has produced a Low Carbon Directory to promote a low-carbon economy in Australia.

New Zealand

China

Main article: Renewable energy in China

In China, the city of Dongtan is to be built to produce zero net greenhouse gas emissions.[24] The Chinese State Council has announced its aim to cut China's carbon dioxide emissions per unit of GDP by 40%-45% by 2020 from 2005 levels.[25]

Sweden

Oil phase-out in Sweden

United Kingdom

In the United Kingdom, the Climate Change Act outlining a framework for the transition to a low-carbon economy became law on November 26, 2008.
This legislation requires an 80% cut in the UK's carbon emissions by 2050 compared to 1990 levels, with an intermediate target of between 26% and 32% by 2020.[26] Thus, the UK became the first country to set such a long-range and significant carbon reduction target into law. A meeting at the Royal Society on 17–18 November 2008 concluded that an integrated approach, making best use of all available technologies, is required to move towards a low-carbon future. It was suggested by participants that it would be possible to move to a low-carbon economy within a few decades, but that 'urgent and sustained action is needed on several fronts'.[27]

United States

Low Carbon Economy Act of 2007.[28]
Graph rewriting

Graph transformation, or graph rewriting, concerns the technique of creating a new graph out of an original graph algorithmically. It has numerous applications, ranging from software engineering (software construction and also software verification) to layout algorithms and picture generation.

Graph transformations can be used as a computation abstraction. The basic idea is that the state of a computation can be represented as a graph; further steps in that computation can then be represented as transformation rules on that graph. Such rules consist of an original graph, which is to be matched to a subgraph in the complete state, and a replacing graph, which will replace the matched subgraph.

Formally, a graph rewriting system usually consists of a set of graph rewrite rules of the form L → R, with L being called the pattern graph (or left-hand side) and R being called the replacement graph (or right-hand side of the rule). A graph rewrite rule is applied to the host graph by searching for an occurrence of the pattern graph (pattern matching, thus solving the subgraph isomorphism problem) and by replacing the found occurrence by an instance of the replacement graph. Rewrite rules can be further regulated in the case of labeled graphs, such as in string-regulated graph grammars.

Sometimes graph grammar is used as a synonym for graph rewriting system, especially in the context of formal languages; the different wording is used to emphasize the goal of constructions, like the enumeration of all graphs from some starting graph, i.e. the generation of a graph language, instead of simply transforming a given state (host graph) into a new state.

1 Graph rewriting approaches

There are several approaches to graph rewriting. One of them is the algebraic approach, which is based upon category theory. The algebraic approach is divided into some sub-approaches, the double-pushout (DPO) approach and the single-pushout (SPO) approach being the most common ones; further on there are the sesqui-pushout and the pullback approach.

From the perspective of the DPO approach, a graph rewriting rule is a pair of morphisms in the category of graphs with total graph morphisms as arrows: r = (L ← K → R) (or L ⊇ K ⊆ R), where K → L is injective. The graph K is called the invariant or sometimes the gluing graph. A rewriting step, or application of a rule r to a host graph G, is defined by two pushout diagrams both originating in the same morphism k: K → G (this is where the name double-pushout comes from). Another graph morphism m: L → G models an occurrence of L in G and is called a match. The practical understanding of this is that L is a subgraph that is matched in G (see subgraph isomorphism problem), and after a match is found, L is replaced with R in the host graph G, where K serves as an interface containing the nodes and edges which are preserved when applying the rule. The graph K is needed to attach the pattern being matched to its context: if it is empty, the match can only designate a whole connected component of the graph G.

In contrast, a graph rewriting rule of the SPO approach is a single morphism in the category of labeled multigraphs with partial graph morphisms as arrows: r: L → R.
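Setting the categorical formalism aside, the operational reading of rule application (find an occurrence of the pattern graph, then replace it) can be illustrated with a minimal sketch. The graph encoding, the rule and all identifiers below are invented for the example and are not tied to any particular approach or tool discussed in this article.

```python
from itertools import permutations

# A host graph as: nodes = {node_id: label}, edges = set of (src, dst) pairs.
host_nodes = {1: "a", 2: "b", 3: "a", 4: "c"}
host_edges = {(1, 2), (3, 2), (2, 4)}

# Rule L -> R: pattern L is an edge from an 'a' node to a 'b' node; the
# replacement R keeps both matched nodes and relabels the 'b' node to 'd'.
pattern_nodes = {"x": "a", "y": "b"}
pattern_edges = {("x", "y")}

def find_match(nodes, edges):
    """Return a mapping pattern-node -> host-node for one occurrence of L, or None.
    Brute force over injective assignments; subgraph isomorphism is NP-complete,
    so real tools use far more careful search."""
    p_ids = list(pattern_nodes)
    for assignment in permutations(nodes, len(p_ids)):
        m = dict(zip(p_ids, assignment))
        if all(nodes[m[p]] == lbl for p, lbl in pattern_nodes.items()) and \
           all((m[u], m[v]) in edges for (u, v) in pattern_edges):
            return m
    return None

def apply_rule(nodes, edges):
    """Apply the rule at the first match found; here R only relabels node y."""
    m = find_match(nodes, edges)
    if m is None:
        return False
    nodes[m["y"]] = "d"   # replacement: the matched 'b' node becomes 'd'
    return True

apply_rule(host_nodes, host_edges)
print(host_nodes)  # {1: 'a', 2: 'd', 3: 'a', 4: 'c'}
```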
In the SPO approach, a rewriting step is thus defined by a single pushout diagram. The practical understanding of this is similar to the DPO approach. The difference is that there is no interface between the host graph G and the graph G' that is the result of the rewriting step.

There is also another algebraic-like approach to graph rewriting, based mainly on Boolean algebra and an algebra of matrices, called matrix graph grammars.[1][2]

Yet another approach to graph rewriting, known as determinate graph rewriting, came out of logic and database theory. In this approach, graphs are treated as database instances, and rewriting operations as a mechanism for defining queries and views; therefore, all rewriting is required to yield unique results (up to isomorphism), and this is achieved by applying any rewriting rule concurrently throughout the graph, wherever it applies, in such a way that the result is indeed uniquely defined.

2 Term graph rewriting

Another approach to graph rewriting is term graph rewriting, which involves the processing or transformation of term graphs (also known as abstract semantic graphs) by a set of syntactic rewrite rules. Term graphs are a prominent topic in programming language research since term graph rewriting rules are capable of formally expressing a compiler's operational semantics. Term graphs are also used as abstract machines capable of modelling chemical and biological computations as well as graphical calculi such as concurrency models. Term graphs can perform automated verification and logical programming since they are well-suited to representing quantified statements in first-order logic. Symbolic programming software is another application for term graphs, which are capable of representing and performing computation with abstract algebraic structures such as groups, fields and rings. The TERMGRAPH conference [3] focuses entirely on research into term graph rewriting and its applications.
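As an illustration of such rules, the compiler-construction example used as the figure in the next section (multiplication by 2 replaced by addition) can be written as a term rewrite over an expression graph. The representation below is a hedged sketch with invented names, not the encoding used by any of the tools listed later.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Node:
    """A term-graph node: an operator, constant or variable with optional children."""
    op: str                      # e.g. '*', '+', 'const', 'var'
    value: Optional[int] = None  # for 'const'
    name: Optional[str] = None   # for 'var'
    kids: Tuple["Node", ...] = ()

def strength_reduce(node: Node) -> Node:
    """Rewrite rule  x * 2  ->  x + x, applied bottom-up over the whole term."""
    kids = tuple(strength_reduce(k) for k in node.kids)
    node = Node(node.op, node.value, node.name, kids)
    if node.op == "*" and len(kids) == 2:
        left, right = kids
        if right.op == "const" and right.value == 2:
            # Replacement graph: an addition whose two edges share the same
            # subterm, which is exactly what a term *graph* (rather than a
            # tree) lets us express without duplicating the subterm.
            return Node("+", kids=(left, left))
    return node

expr = Node("*", kids=(Node("var", name="x"), Node("const", value=2)))
reduced = strength_reduce(expr)
print(reduced.op, [k.name for k in reduced.kids])  # + ['x', 'x']
```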
3 Implementations and applications

Example of a graph rewrite rule (optimization from compiler construction: multiplication with 2 replaced by addition).

Graphs are an expressive, visual and mathematically precise formalism for the modelling of objects (entities) linked by relations; objects are represented by nodes and relations between them by edges. Nodes and edges are commonly typed and attributed. Computations are described in this model by changes in the relations between the entities or by attribute changes of the graph elements. They are encoded in graph rewrite/graph transformation rules and executed by graph rewrite systems/graph transformation tools.

• Tools that are application domain neutral:
  • GrGen.NET, the graph rewrite generator, a graph transformation tool emitting C# code or .NET assemblies
  • AGG, the attributed graph grammar system (Java)
  • GP (Graph Programs), a programming language for computing on graphs by the directed application of graph transformation rules
  • GMTE, the Graph Matching and Transformation Engine for graph matching and transformation; an implementation of an extension of Messmer's algorithm using C++
  • GROOVE, a Java-based tool set for editing graphs and graph transformation rules, exploring the state spaces of graph grammars, and model checking those state spaces; it can also be used as a graph transformation engine
• Tools that solve software engineering tasks (mainly MDA) with graph rewriting:
  • eMoflon, an EMF-compliant model-transformation tool with support for Story-Driven Modeling and Triple Graph Grammars
  • GReAT
  • VIATRA
• Graph databases often support dynamic rewriting of graphs
  • Gremlin, a graph-based programming language (see Graph Rewriting)
• PROGRES, an integrated environment and very high level language for PROgrammed Graph REwriting Systems
• Fujaba uses Story driven modelling, a graph rewrite language based on PROGRES
• EMorF and Henshin, graph rewriting systems based on EMF, supporting in-place model transformation and model-to-model transformation
• Mechanical engineering tools
  • GraphSynth is an interpreter and UI environment for creating unrestricted graph grammars as well as testing and searching the resultant language variant. It saves graphs and graph grammar rules as XML files and is written in C#.
  • Soley Studio is an integrated development environment for graph transformation systems. Its main application focus is data analytics in the field of engineering.
• Biology applications
  • Functional-structural plant modeling with a graph grammar based language
  • Multicellular development modeling with string-regulated graph grammars
• Artificial Intelligence/Natural Language Processing
  • OpenCog provides a basic pattern matcher (on hypergraphs) which is used to implement various AI algorithms.
  • RelEx is an English-language parser that employs graph re-writing to convert a link parse into a dependency parse.

4 See also

• Category theory
• Graph theory
• Shape grammar
• Term graph

5 Notes

[1] Perez 2009 covers this approach in detail.
[2] This topic is expanded at .
[3] "TERMGRAPH".

6 References

• Rozenberg, Grzegorz (1997), Handbook of Graph Grammars and Computing by Graph Transformations, World Scientific Publishing, volumes 1–3, ISBN 9810228848.
• Perez, P.P. (2009), Matrix Graph Grammars: An Algebraic Approach to Graph Dynamics, VDM Verlag, ISBN 978-3-639-21255-6.
• Heckel, R. (2006). Graph transformation in a nutshell. Electronic Notes in Theoretical Computer Science 148 (1 SPEC. ISS.), pp. 187–198.
• König, Barbara (2004). Analysis and Verification of Systems with Dynamically Evolving Structure. Habilitation thesis, Universität Stuttgart, pp. 65–180.
• Lobo, D. et al. (2011). Graph grammars with string-regulated rewriting. Theoretical Computer Science, 412(43), pp. 6101–6111.
A user-friendly PC-based GIS for forest entomology: an attempt to combine existing software

MARIUS GILBERT
Laboratoire de Biologie animale et cellulaire, CP 160/12, Université Libre de Bruxelles, 50 av. F. D. Roosevelt, 1050 Brussels, Belgium

ABSTRACT We present a combination of existing software which should facilitate the use of GIS by forest entomologists. The use of existing GIS by ecologists who want to take advantage of the spatial component of their analyses is often hindered by the difficulty of use and cost of such tools. Moreover, it might be useful to have a system which can be carried to remote locations where access to a mainframe computer or powerful workstations is unavailable. It is now possible to have a good system running on a single personal computer due to the increasing power and the low cost of this type of hardware. We have gathered programs characterised by their simplicity, low cost, and performance. IDRISI is a simple and inexpensive GIS which has a simple file structure that allows the user to create his/her own analysis scripts with basic knowledge of computer science. VARIOWIN is a geostatistical package which performs exploratory variography and 2D modelling. SURFER is a surface mapping system able to create and display grids using several methods of spatial interpolation, including two-dimensional anisotropic kriging. FRAGSTATS is a program which performs landscape analyses and, finally, COREL DRAW is a vector-oriented package that can be used for both bitmap and vector output. These programs were not designed to work together and problems may occur due to the heterogeneous nature of the system. File conversion difficulties were encountered and we were faced with the complexity of using different programs to perform one analysis sequence. An additional module is being developed to solve these problems, and the whole system should be able to manage data, analyze their spatial component and display map output in a simple and user-friendly way.

KEY WORDS: GIS, spatial analysis, geostatistics, software.

GEOGRAPHICAL INFORMATION SYSTEMS (GIS) are widely used in environmental management (Burrough 1993). In ecology, and particularly in forest entomology, they may be of considerable interest for the study of spatial pattern and insect spatial distribution (Liebhold et al. 1993, Coulson et al. 1993, Turner 1989). For example, the measurement of spatial dependence is essential in sampling methodology (Rossi et al. 1992, Legendre 1993, Fortin et al. 1989). Moreover, the interaction between insects and their environment always bears a spatial component (Borth and Hubert 1987, Hohn et al. 1993).

A GIS is a computer program designed to collect, retrieve, transform, display, and analyze spatial data. GIS can incorporate georeferenced data to produce maps or layers. Usually, a map layer or theme is composed of only one type of data. GIS have the ability to import and manage data from different sources: mapped data, alphanumeric data and remotely sensed data. These types of data may then be combined to build a GIS database. Using this database, the user may create map outputs or display views relative to specific questions. These systems have recently improved their abilities to carry out spatial analyses by integrating new built-in functions (spatial interpolation, spatial autocorrelation, overlay analysis, etc.). Furthermore, the user may create his own analysis functions with personal scripts.

Pages 54-61 in J.C. Grégoire, A.M. Liebhold, F.M. Stephen, K.R. Day, and S.M. Salom, editors. 1997. Proceedings: Integrating cultural tactics into the management of bark beetle and reforestation pests. USDA Forest Service General Technical Report NE-236.
Salom, editors. 1997. Proceedings: Integrating cultural tactics into the management of bark beetle and reforestation pests. USDA Forest Service General Technical Report NE-236.

New GIS users often must choose between the performance of their system and the time they will have to spend learning to use it. Simple systems are easy to learn, but often lack integrated functions, which obliges the user to write his or her own scripts to achieve specific goals. Powerful systems are created for a diversity of purposes. They offer useful built-in functions, but users must become familiar with the whole GIS environment to use them properly. These powerful systems usually run on workstations. This type of hardware and the associated software are expensive and demand knowledge of a new operating-system environment. For example, the use of a major application such as ARC/INFO demands knowledge of the UNIX operating system, of the ARC/INFO file structure and topology, of ARC/INFO's large number of commands, and of AML (Arc Macro Language).

However, users who just want to take advantage of the spatial component of their data have different hardware and software requirements than users who are planning complete GIS research projects involving extensive data collection. The first kind of user cannot afford expensive systems just to add a spatial analysis component to their ecological studies. This is why an inexpensive system working on a single PC might interest such users. Moreover, the increasing power of PCs makes GIS more and more efficient on this hardware. Finally, a system running on a single PC might be useful in the field or at remote locations where access to a mainframe computer or powerful workstations is unavailable (Carver et al. 1995).

We present a combination of inexpensive and user-friendly PC-based software which should help potential users to integrate spatial components into their ecological studies.

System Presentation

First, information about existing software was collected, mostly from Web pages on the Internet. Additionally, user discussions in newsgroups related to each software package were followed. The software applications were tested on a Pentium 90 MHz PC with 8 Mbytes of RAM, under the WINDOWS 95 operating system. ARC/INFO PC, MAPINFO and IDRISI were tested for GIS functionality. VARIOWIN, Geostatistical Toolbox, Geo-EAS, FRAGSTATS and SURFER were tested as additional spatial analysis tools. These applications were first tested with external sample data and secondly with our own data. The GIS were tested for their ability to create a new database, to manage it and, especially, to exchange data with other sources. The spatial analysis software was tested following the authors' instructions.

The chosen combination of software is presented in Figure 1. IDRISI for WINDOWS (Clark University; Eastman 1988, Cartwright 1991) is a raster GIS which includes vector-data management and display. It is highly user-friendly and includes many built-in spatial analysis functions. Its very simple file structure facilitates the creation of new analysis scripts. IDRISI for Windows includes a database manager which allows users to relate geographic features to a database (DBase, Access, text files). We used it in our system because of its simple file structure in both vector and raster formats and its ability to import and export data in a large range of formats (raster or vector geographical data or database data).
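To give a concrete sense of what writing one's own analysis script on top of such a simple file structure can look like, the short Python sketch below reads a hypothetical plain-ASCII raster (a one-line header giving the number of columns, the number of rows and a no-data code, followed by whitespace-separated cell values) and reclassifies it against a density threshold. The file layout, file name and threshold are illustrative assumptions and do not reproduce the actual IDRISI format.

```python
# Minimal sketch: read a hypothetical ASCII raster, reclassify it, and report a summary.
# The header layout (ncols, nrows, nodata on one line) is an illustrative assumption,
# not the real IDRISI file structure.

def read_ascii_raster(path):
    with open(path) as f:
        ncols, nrows, nodata = f.readline().split()
        ncols, nrows, nodata = int(ncols), int(nrows), float(nodata)
        values = [float(v) for v in f.read().split()]
    if len(values) != ncols * nrows:
        raise ValueError("cell count does not match header")
    return ncols, nrows, nodata, values

def reclassify(values, nodata, threshold):
    # 1 = cell above the density threshold, 0 = below; no-data cells are passed through
    return [v if v == nodata else (1.0 if v >= threshold else 0.0) for v in values]

if __name__ == "__main__":
    ncols, nrows, nodata, values = read_ascii_raster("density.txt")  # hypothetical file
    classes = reclassify(values, nodata, threshold=5.0)
    valid = [v for v in values if v != nodata]
    print(f"{ncols} x {nrows} raster, mean density of valid cells: {sum(valid) / len(valid):.2f}")
    print(f"cells above threshold: {sum(1 for c in classes if c == 1.0)}")
```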
IDRISI is very inexpensive and will be used as the central geographical data manager of our system.

[Figure 1: Structure of the system showing inputs and outputs between software. Components in the diagram: IDRISI (raster and vector GIS); VARIOWIN (exploratory variography, 2D modelling); SURFER (surface mapping system); EXCEL (tabular data management); FRAGSTATS (spatial pattern analysis); COREL DRAW (object-oriented and bitmapped graphic design); ADDITIONAL MODULE (spatial analysis tools, conversion tools, help flow charts).]

VARIOWIN, written by Yvan Pannatier (University of Lausanne, Switzerland; Pannatier 1994), has been released with a manual (Pannatier 1996). Its aim is to compute geostatistical analyses and variogram modelling in 2D. There are three modules. The first creates a pair-comparison file on the basis of an ASCII file containing XY co-ordinates and attributes. The second module computes variogram surfaces, directional variograms, and a general variogram. It is also possible to estimate the semi-variogram with other estimators such as the non-ergodic covariance or the non-ergodic correlogram, which are often used in ecology (Sokal and Oden 1978a, 1978b, Johnson 1989). Moreover, the user may also create H-scatterplots and interactively identify potential outliers affecting the measure of spatial continuity. The last module offers an excellent tool to interactively model the semi-variogram.

SURFER (Golden Software, Inc.) is a surface mapping system designed to manage and display 3D raster-based data. Its first aim is to build surfaces by spatial interpolation on the basis of georeferenced data points. The interpolation methods include kriging, but an input model created with other software has to be specified to perform it. The input and output files of VARIOWIN are fully compatible with the SURFER format, and their combination has proved to be excellent for practising geostatistics. It is possible to export the interpolated grid in a format readable by IDRISI. The latest version of SURFER has the capability to perform 2D anisotropic kriging with three nested structures.

FRAGSTATS is a DOS-based spatial analysis program for quantifying landscape structure, written by K. McGarigal and B. J. Marks (Oregon State University). Landscape ecology involves the study of the landscape pattern, which can be associated with other ecological characteristics, including vertebrate and invertebrate populations (Saunders et al. 1991, Turner 1989, Wiens et al. 1993). FRAGSTATS has been developed to quantify landscape structure by offering a comprehensive choice of landscape metrics. The PC version creates IDRISI raster-format files.

COREL DRAW (Corel Corporation) is a well-known object-oriented vector and bitmapped graphic design package which is able to import and export most graphic file formats. It is very flexible and can be efficiently used to manipulate and print map output.

EXCEL (Microsoft) is another well-known package which is installed on many PCs. It may be used to manage input tabular data and to output graphs and statistics.

Test With Our Own Data

This combination of software has been tested in two study cases. The aim of this test was to assess the system's inadequacies in specific analyses and to list any problems encountered. These test studies concerned the spatial distribution of Pulvinaria regalis Canard in the city of Oxford (Speight et al. 1996) and of Dendroctonus micans (Kug.) in the Massif Central (unpublished). The use of the combined software in both cases is presented in Table 1.
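As background for the variography steps that appear in Table 1, the following Python sketch illustrates how an empirical semi-variogram is estimated by grouping pairs of georeferenced samples into distance classes. It is a minimal illustration of the computation that VARIOWIN automates, not VARIOWIN itself; the sample points and lag settings are invented for the example.

```python
import math

def empirical_semivariogram(points, lag_width, n_lags):
    """points: list of (x, y, value) tuples.
    Returns, for each lag class, (mean distance, semivariance, pair count)."""
    sums = [0.0] * n_lags      # sum of squared value differences per lag class
    dists = [0.0] * n_lags     # sum of pair distances per lag class
    counts = [0] * n_lags
    for i in range(len(points)):
        xi, yi, vi = points[i]
        for j in range(i + 1, len(points)):
            xj, yj, vj = points[j]
            h = math.hypot(xi - xj, yi - yj)
            k = int(h // lag_width)
            if k < n_lags:
                sums[k] += (vi - vj) ** 2
                dists[k] += h
                counts[k] += 1
    result = []
    for k in range(n_lags):
        if counts[k] > 0:
            # gamma(h) = sum of squared differences / (2 * number of pairs in the lag class)
            result.append((dists[k] / counts[k], sums[k] / (2 * counts[k]), counts[k]))
    return result

# Illustrative use with made-up sample points (x, y, insect count):
samples = [(0, 0, 2), (10, 0, 3), (0, 10, 1), (10, 10, 5), (20, 5, 8), (5, 20, 2)]
for mean_h, gamma, n in empirical_semivariogram(samples, lag_width=8.0, n_lags=4):
    print(f"lag ~{mean_h:.1f}: gamma = {gamma:.2f} ({n} pairs)")
```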
In both studies, IDRISI was used to manage the spatial data; VARIOWIN was used to calculate variograms, correlograms and correlogram surfaces, and for 2D isotropic and anisotropic modelling; SURFER was used to process kriging and to generate surfaces and map outputs; and COREL DRAW was used to assemble the final map outputs. FRAGSTATS has not yet been tested with actual data.

Table 1: Use of the software for different stages of the study cases.
Software | Pulvinaria regalis in the city of Oxford | Dendroctonus micans in the Massif Central
EXCEL | Data encoding; exploratory data analysis; graph outputs | Data encoding; exploratory data analysis; print output of the correlogram
IDRISI | Database construction; plotting of the samples on a map; quadrat analysis; moving-windows analysis | Database construction; plotting of the samples on a map; quadrat analysis; moving-windows analysis
SURFER | Print output of the correlogram surface; anisotropic 2D kriging; map output of the density distribution | Isotropic 2D ordinary kriging; creation of a digital elevation model; map output of the density distribution
VARIOWIN | Creation of a pair-comparison file; semi-variogram and semi-variogram surfaces; 2D anisotropic semi-variogram modelling | Creation of a pair-comparison file; semi-variogram and semi-variogram surfaces; 2D isotropic semi-variogram modelling
COREL DRAW | Digitizing street contour lines; map output of the sample points | Map output of the sample points; digitizing forest-stand contour lines; digitizing altitude contour lines

Results and Discussion

This combination of software is very easy to learn and use. The interactivity of the exploratory variography performed with VARIOWIN is an excellent way to become familiar with the basics of geostatistics and is highly recommended to beginners. The kriging function of SURFER is elementary, and other software must be used to perform other types of kriging (Varekamp et al. 1996). Moreover, the user must take care not to use the kriging function of SURFER as a black-box tool; users should be aware of the hypotheses and assumptions involved in a kriging process (Isaaks and Srivastava 1989). The surface-management and map-output abilities of SURFER are highly complementary to a GIS such as IDRISI. FRAGSTATS has not been tested with our data, but its use with the data provided with the software is very easy. EXCEL and COREL DRAW are well-known software and we do not need to comment on them. Taken as a whole, the system met all the requirements of these two case studies, and we plan to carry on with almost the same configuration for further studies. The proposed combination of software allowed us to perform file input, data management, spatial analyses and map outputs.

However, considerable time was spent solving computing problems. Basically, these problems were caused by the fact that the applications were not designed to work together. First, some spatial analyses are not covered by this software combination. For example, a simple quadrat analysis might be used to determine whether a sample's spatial distribution is aggregative or not (Myers 1978). Second, the file formats used by the software are often different, and conversion may require a very detailed knowledge of the different file structures. Fortunately, IDRISI allows the conversion of data from/to a wide range of formats. However, we had problems even with the proper tools.
For example, the missing-data value used in the *.grd files produced by VARIOWIN is not the same as in the SURFER *.grd files (although these files are described as having the same format). The grid files used by SURFER are binary files and must first be converted to ASCII files before they can be converted to IDRISI files. These problems are not serious if they can be easily identified, but they may otherwise cause important losses of time. Sometimes the automatic data transformations were not possible and we had to proceed manually within a text file. Third, because of the modular and heterogeneous structure of our system, we had to use different software to carry out an analysis sequence. For example, a basic geostatistical analysis involves exploratory data analysis (Tukey 1977), which can be computed with EXCEL; map output of the samples, created with IDRISI; moving-windows analysis to detect sample-point aggregation and proportional effects, calculated with separate programs (not included in the IDRISI analysis scripts); calculation of the estimated semi-variogram and its modelling, carried out with VARIOWIN; and finally ordinary kriging, processed with SURFER. This sequential analysis procedure was split into different small procedures within each software package, each with its own file input and output characteristics. Without an excellent knowledge of the use and limitations of each package, much time may be wasted.

To address these problems, we plan to write scripts in Visual Basic gathered in one new additional module. First, they will compute spatial analyses which are not covered by the other software, e.g. quadrat and moving-windows analyses (a minimal sketch of such a quadrat test is given below). Second, to simplify the file conversions, we will write flow charts indicating the steps needed to convert files. These flow charts should underline sensitive steps (e.g. replacement of one no-data value by another). If a file-conversion tool does not exist in the software combination, we plan to write it. Finally, we plan to write flow charts for typical spatial analysis procedures. These flow charts will be designed to help users know how and where they can perform a given analysis. This additional module (Figure 1) should complement the system in a user-friendly way: the additional tools will complement the existing software, and the conversion procedures and flow charts should reduce the time needed to become familiar with all the individual programs.

Varekamp et al. (1996) showed that it is possible to perform a wide range of geostatistical procedures with public-domain software available on the Internet (Englund and Sparks 1988, Pebesma 1993, Deutsch and Journel 1992). Their system is based on the use of DOS executable programs or FORTRAN routines, which provide more geostatistical analysis functions than our system. We have chosen to focus on user-friendly and basic software. The kriging abilities of SURFER are limited, but if users want to do more than two-dimensional ordinary kriging, they can turn to other, more powerful software (e.g. GSLIB), which is more complex and requires a better knowledge of the spatial interpolation methods used in geostatistics. However, we insist on the fact that VARIOWIN is excellent for performing basic exploratory and interactive variography and can be highly recommended to beginners in geostatistics. Carver et al. (1995) showed the advantages of having a GIS like IDRISI installed on a portable PC for expedition fieldwork (i.e. the ability to develop a sampling strategy as a result of immediate data visualisation).
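As an illustration of the kind of quadrat test the planned additional module is intended to cover, the Python sketch below computes the variance-to-mean ratio (index of dispersion) of counts per quadrat: values well above 1 suggest an aggregated distribution, values near 1 a random one, and values below 1 a regular one. The counts and the choice of this particular index are illustrative assumptions rather than the module's actual design.

```python
def index_of_dispersion(counts):
    """Variance-to-mean ratio of counts per quadrat (sample variance, n - 1 denominator)."""
    n = len(counts)
    mean = sum(counts) / n
    variance = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return variance / mean

# Hypothetical counts of insects per quadrat:
quadrat_counts = [0, 0, 1, 0, 7, 12, 0, 1, 0, 9, 0, 0]
ratio = index_of_dispersion(quadrat_counts)
if ratio > 1:
    tendency = "aggregated"
elif ratio < 1:
    tendency = "regular"
else:
    tendency = "random"
print(f"variance/mean = {ratio:.2f} -> distribution appears {tendency}")
```

In practice the ratio would normally be tested for significance (e.g. against a chi-square distribution with n - 1 degrees of freedom) before drawing conclusions about aggregation.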
Additional spatial analysis modules enhance these advantages by a direct spatial analysis treatment of the data. For example, a strong spatial discontinuity in a species spatial distribution or a detection of data outliers might reveal interesting local environmental discontinuities. Such discontinuities detected afterwards are difficult to explain or interpret. Such fieldwork use of the GIS might then be of a considerable interest.Our system is basically designed for beginners in GIS and geostatistics, but we believe that even experienced users will find interest in a portable and inexpensive system which allows small budget projects. Moreover, experienced users may use the same combination of software as a base which can be supplemented with additional software whenever they reach limitations. The flexible structure of the system and the simple file structure used should facilitate the integration of additional modules.ConclusionTaken as a whole, this system could be used as a reliable, portable, and inexpensive PC-based GIS and spatial analysis tool. Problems due to the heterogeneous nature of the system have been encountered but they can be solved easily by a good knowledge of each software characteristics. An additional module should facilitate the use of this system by beginners. The flexible and simple structure of the system will facilitate the integration of additional modules by experienced users interested in different aspects of spatial analysis.AcknowledgementThe author would like to thank the Wiener-Anspach foundation for its funding support during all the stages of this work. This work was also funded by the FNRS (Fond National de la Recherche Scientifique) and the Loterie Nationale F 5/4/85 - OL - 9.708.References CitedBorth, P.W., and R.T. Huber.1987. Modelling pink bollworm establishment and dispersion in cotton with the Kriging technique, pp. 264. In Proceedings Beltwide Cotton Production Research Conference. Cotton Council Am. Memphis. 1987. Burrough, P.A.1986. Principle of Geographical Information Systems for land resources assessment. Clarendon Press, Oxford.Cartwright, J.C.1991. IDRISI - spatial analysis at a modest price. Gis-World, 4: 96-99.G ILBERT60Carver, S.J., S.C. Cornelius, D.I. Heywood, and D.A. Sear.1995. Using computer and Geographical Information Systems for expedition fieldwork. The Geographical Journal, 161: 167-176.Coulson R.N., J.W. Fitzgerald, M.C. Saunders, and F.L. Oliveira.1993. Spatial analysis and integrated pest management in a landscape ecological context, pp. 93. In Spatial Analysis and Forest Pest Management U.S. Dep. Agric., 1993.Deutsch, C.V., and A.G. Journel.1992. GSLIB Geostatistical Software Library and User's Guide. Oxford University Press, Oxford.Eastman, J.R.1988. IDRISI: a geographic analysis system for research applications.Operational-Geographer. 15: 17-21.Englund, E., and A. Sparks.1988. Geo-EAS 1.2.1 Users Guide. Report Number 60018-91/008, Environmental Protection Agency, EPA-EMSL, Las-Vegas, Nevada. Fortin, M.J., P. Drapeau, and P. Legendre.1989. Spatial autocorrelation and sampling design in plant ecology. Vegetatio, 83: 209-222.Hohn, M.E., A.M. Liebhold, and L.S. Gribko.1993. Geostatistical model for forecasting spatial dynamic of defoliation caused by the gypsy moth (Lepidoptera: Lymantriidae).Environ. Entomol. 23: 1066-1075.Isaaks, E.H., and R.M. Srivastava.1989. An Introduction to Applied Geostatistics.Oxford University Press, New-York.Jonhson, D.L.1989. 
Spatial autocorrelation, spatial modeling, and improvement in grasshopper survey methodology. Can. Entomol. 121: 579-588.Legendre, P.1993. Spatial autocorrelation: Trouble or new paradigm? Ecology (tempe), 74: 1659-1673.Liebhold, A.M., R.E. Rossi, and W.P. Kemp.1993. Geostatistics and geographic information systems in applied insect ecology. Ann. Rev. Entomol. 38: 303-327. Myers, J.H.1978. Selecting a measure of dispersion. Environ. Entomol. 7: 619-621. Pannatier, Y.1994. Statistics of Spatial Processes: Theory and Applications. pp. 165-170-In V. Capasso, G. Girone, and D. Posa (eds.), Bari.Pannatier Y.1996. VARIOWIN: Software for Spatial Data Analysis in 2D. Springer-Verlag, New York.Pebesma, E.J.1993. GSTAT 0.99h - Multivariate Geostatistical Toolbox. National Institute for Public Health and Environmental Protection, Bilthoven, The Netherlands. Rossi, R.E., D.J. Mulla, A.G. Journel, and E.H. Franz. 1992. Geostatistical tools for modeling and interpreting ecological spatial dependence. Ecological Monographs, 62: 277-314.Saunders, D., R.J. Hobbs, and C.R. Margules.1991. Biological consequences of ecosystem fragmentation: a review. Conserv. Biol. 5: 18-32.Sokal, R.R., and N.L. Oden.1978. Spatial autocorrelation in biology 1. Methodology.Biol. J. Linnean Soc. 10: 199-228.Sokal, R.R., and N.L. Oden. 1978. Spatial autocorrelation in biology 2. Some biological implication and four applications of evolutionary and ecological interest. Biol. J.Linnean Soc. 10: 229-249.Turner, M.G.1989. Landscape ecology: the effect of pattern on process. Annual Review of Ecology and Systematics, 20: 171-197.G ILBERT61 Varekamp, C., A.K. Skidmore, and P.A. Burrough. 1996. Using public domain geostatistical and GIS software for spatial interpolation. Photogrammetric Engineering & Remote Sensing, 62: 845-854.Wiens, J.A., N.C. Stenseth, and B. Van Horne.1993. Ecological mechanism and landscape ecology. OIKOS, 66: 369-380.。
Informatica 29 (2005) 391–400

Agent Modeling Language (AML): A Comprehensive Approach to Modeling MAS

Ivan Trencansky and Radovan Cervenka
Whitestein Technologies, Panenska 28, 811 03 Bratislava, Slovakia
Tel +421 (2) 5443-5502, Fax +421 (2) 5443-5512
E-mail: {itr, rce}@

Keywords: agent, multi-agent system, modeling language, agent-oriented software engineering

Received: May 6, 2005

The Agent Modeling Language (AML) is a semi-formal visual modeling language for specifying, modeling and documenting systems that incorporate features drawn from multi-agent systems theory. It is specified as an extension to UML 2.0 in accordance with major OMG modeling frameworks (MDA, MOF, UML, and OCL). The ultimate objective of AML is to provide software engineers with a ready-to-use, complete and highly expressive modeling language suitable for the development of commercial software solutions based on multi-agent technologies. This paper presents an overview of AML. The scope of the language, its structure and extensibility mechanisms are discussed, and the core AML modeling constructs and mechanisms are introduced and demonstrated by examples.

Povzetek: Opisana je vizualizacija agentnega jezika za modeliranje.

1 Introduction

The Agent Modeling Language (AML) [3, 5, 4] is a semi-formal1 visual modeling language for specifying, modeling and documenting systems that incorporate concepts drawn from Multi-Agent Systems (MAS) theory. The most significant motivation driving the development of AML was the extant need for a ready-to-use, comprehensive, versatile and highly expressive modeling language suitable for the development of commercial software solutions based on multi-agent technologies. To qualify this more precisely, AML was intended to be a language that: (1) is built on proved technical foundations, (2) integrates best practices from agent-oriented software engineering (AOSE) and object-oriented software engineering (OOSE) domains, (3) is well specified and documented, (4) is internally consistent from the conceptual, semantic and syntactic perspectives, (6) is versatile and easy to extend, (7) is independent of any particular theory, software development process or implementation environment, and (8) is supported by Computer-Aided Software Engineering (CASE) tools.

1 The term "semi-formal" implies that the language offers the means to specify systems using a combination of natural language, graphical notation, and formal language specification.

Given these requirements, AML is designed to address the most significant deficiencies with current state-of-the-art and practice in the area of MAS oriented modeling languages, which are often: (1) insufficiently documented and/or specified, or (2) using proprietary and/or non-intuitive modeling constructs, or (3) aimed at modeling only a limited set of MAS aspects, or (4) applicable only to a specific theory, application domain, MAS architecture, or technology, or (5) mutually incompatible, or (6) insufficiently supported by CASE tools.

The objective of this paper is to present the approach applied to the specification of AML, and a brief overview of the various modeling constructs AML provides to model MASs. Due to limitations in paper length, a comprehensive description of AML abstract syntax, semantics, and notation is not provided.

The rest of the paper is structured as follows: Section 2 presents the approach applied to the specification of AML and the available extensibility mechanisms. Section 3 explains the AML fundamental entities and their features, and sections 4, 5, 6, 7 and 8 present an overview of the AML approach to modeling different aspects of agents and MASs, like social
aspects,different kinds of interactions,capabil-ities,mobility,and mental attitudes.In the end the conclu-sions are drawn.2The AML ApproachToward achieving the stated goals and overcoming the de-ficiencies associated with many existing approaches,AML has been designed as a language,which:–incorporates and unifies the most significant concepts from the broadest set of existing multi-agent theo-ries and abstract models(e.g.DAI[24],BDI[17], SMART[9]),modeling and specification languages(e.g.AUML[1,11,12],TAO[18],OPM/MAS[20],AOR[23],UML[15],OCL[14],OWL[19],UML-based ontology modeling[7],methodologies(e.g.MESSAGE[10],Gaia[25],TROPOS[2],PASSI[6],392Informatica 29(2005)391–400I.Trencansky et al.Prometheus [16],MaSE [8]),agent platforms (e.g.Jade,FIPA-OS,Jack,Cougaar)and multi-agent driven applications,–extends the above with new modeling concepts to ac-count for aspects of multi-agent systems thus far cov-ered insufficiently,inappropriately or not at all,–assembles them into a consistent framework specified by the AML meta-model (covering abstract syntax and semantics of the language)and notation (cover-ing the concrete syntax),and–is specified as an extension to UML in accordance with the OMG modeling frameworks (MDA,MOF,UML,and OCL).2.1The Language DefinitionAML is built upon the Unified Modeling Language (UML)2.0Superstructure [15],augmenting it with several new modeling concepts appropriate for capturing the typical features of multi-agent systems (see Fig.1).The main advantages of this approach are:–Reuse of well-defined,well-founded,and commonly used concepts of UML.–Use of existing mechanisms for specifying and ex-tending UML-based languages (metamodel exten-sions and UML profiles).–Ease of incorporation into existing UML-based CASE tools.The abstract syntax,semantics and notation of the lan-guage are defined at the AML Metamodel and Notation level.The AML Metamodel is further structured into two main packages:AML Kernel and UML Extension for AML.UML 2.0 Profile of AMLUML 1.* Profile of AMLUML 1.* Profiles Extending AML UML 2.0 Profiles Extending AML AML MetamodelAML NotationAML Kernel UML Extension for AMLFigure 1:Levels of AML definitionThe AML Kernel is a conservative 2extension of UML 2.0,comprising specification of all the AML modeling ele-ments.It is logically structured into several packages,each of which contains specification of modeling elements ded-icated for modeling specific aspect of MAS.The UML Extension for AML package adds some meta-properties and structural constraints to the standard UML2Aconservative extension of UML is an extension of UML which re-tains the standard UML semantics in unaltered form [22].elements.It is thus a non-conservative extension of UML,and therefore an optional part of the language.However,the extensions contained within are simple and can be eas-ily implemented in most existing UML-based CASE tools.Upon the AML Metamodel and Notation two UML pro-files of AML are specified:UML 1.*Profile for AML (based on UML 1.*)and UML 2.0Profile for AML (based on UML 2.0).The primary objective of these profiles is to enable implementation of AML into existing UML 1.*and UML 2.0based CASE tools,respectively.2.2Extensibility of AMLAML is designed to encompass a broad set of relevant the-ories and modeling approaches,it being essentially impos-sible to cover all inclusively.In those cases where AML is insufficient,several mechanisms can be used to extend or customize it as required:–Metamodel extension offers first-class extensibility (as defined by MOF [13])of 
the AML metamodel and notation.–AML profile extension offers the possibility to adapt AML for a given domain,platform or development method by means of UML Profiles,without the need to modify the underlying AML Metamodel and Nota-tion.–Concrete model extension allows to employ alterna-tive MAS modeling approaches as complementary specifications to the AML model.3Modeling MAS EntitiesIn general,entities are objects that can exist independently of others.In order to maximize reuse and comprehensi-bility of the metamodel AML defines several auxiliary ab-stract metamodeling concepts called semi-entities and their types.Semi-entity types are specialized UML classes used to specify coherent set of features,logically grouped ac-cording to particular aspects of MASs.They are used to specify features of other types of modeling elements.3.1AML Semi-entitiesAML defines the following semi-entities:Behaviored semi-entities represent elements,which can own capabilities,observe and/or effect their environment by means of perceptors and effectors,provide and use ser-vices,and can be (de)composed into behavior fragments.Socialized semi-entities represent elements,which can form societies,can participate in social relationships and can own social properties.Mental semi-entities represent elements which can be characterized in terms of their mental attitudes,e.g.which information they believe in,what are their objectives,AGENT MODELING rmatica29(2005)391–400393 needs,motivations,desires,what goal(s)they are commit-ted to,when and how a particular goal is to be achieved,which plan to execute,etc.3.2AML Fundamental EntitiesThe fundamental entities that compose MASs are:agents, resources,and environments.AML therefore defines three modeling concepts,which can be used to model the above mentioned fundamental entities at both type and instance levels:Agent type is used to specify the type of agents,i.e.self contained entities that are capable of interactions,observa-tions and autonomous behavior within their environment. 
Resource type is used to model the type of resources within the system,i.e.physical or informational en-tities with which the main concern is their availability (in terms of its quantity,access rights,conditions of us-age/consumption,etc.).Environment type is used to model the type of a system’s inner environment3,i.e.the logical or physical surround-ings of entities which provide conditions under which the entities exist and function.In AML,all the aforementioned entity types are special-ized UML classes,and thus can utilize all the features de-fined for UML classes,i.e.can be instantiated,can own structural and behavioral features,behaviors,can be struc-tured into parts and ports,participate in interactions,can participate in various kinds of relationships(e.g.associa-tions,generalizations,dependencies),etc.The instances of the entity types(called entities)can be modeled by means of UML instance specifications classified according to the corresponding types.Furthermore,all the AML fundamental entity types in-herit features of behaviored semi-entities,and in addition to these,agent and environment types are also socialized and mental semi-entities.Fig.2shows an example of a definition of an abstract class3DObject that represents spatial objects,charac-terized by shape and position,existing inside a containing space.An abstract environment type3DSpace represents a three dimensional space.This is a special3DObject and as such can contain other spatial objects.3DSpace provides a service Motion to the objects contained within (for details about services see Sect.5.4).Three con-crete3DObject s,an agent type Person,a resource type Ball and a class Goal are defined as specialized 3DObject s.3DSpace is further specialized into a con-crete environment type Pitch representing a soccer pitch containing two goals and a ball.3Inner environment is that part of an entity’s environment that is con-tained within the boundaries of the system.Figure2:Example of entities,their relationships,service provision and usage4Modeling Social AspectsMASs are commonly perceived as systems comprised of a number of autonomous agents,situated in a common envi-ronment,and interacting with each other in order that the desired functionality and properties of the systems could emerge.These properties of MAS are not always derivable or representable solely on the basis of properties and capa-bilities of individual agents,but are usually given also by their mutual relationships,interactions,coordination mech-anisms,social attitudes,etc.Such aspects of MASs are commonly referred to as social aspects.From the social perspective the following aspects of MAS are commonly considered in MAS models(for de-taisl see[4]):–Social structure concerning mainly with the identifi-cation of societies which can evolve within the sys-tem,specification of their properties,structure,identi-fication of comprised roles,individual entities that can participate in such societies,what roles they can play, their mutual relationships,etc.–Social behavior covering such phenomena as social dynamics(i.e.the ability of a society to react to inter-nal and external events),norms(i.e.rules or standards of behavior shared by members of a society),social interactions(how individuals and/or societies interact with others in order to exchange information,coordi-nate their activities,etc.),and social activities of in-dividual entities and societies(e.g.how they change their attitudes,roles they play,social relationships), etc.–Social attitudes addressing the 
individual and/or com-mon tendencies(usually expressed in terms of moti-vations,needs,wishes,intentions,goals,beliefs,com-mitments,etc.)to anything of a social value.In this section the focus is on modeling social structure of multi-agent systems.AML modeling constructs which can be used to model social behavior and social attitudes are outlined in the subsequent sections,mainly5,6,and8.394Informatica29(2005)391–400I.Trencansky et al.In order to accommodate special needs for modeling so-cial aspects,AML utilizes concepts of:organization units, social relationships,entity roles,and role properties.4.1Organization UnitsOrganization unit type is a specialized environment type, and thus inherits features of behaviored,socialized and mental semi-entity types.They are used to specify the type of societies that can evolve within the system from both the external as well as internal perspectives.From an external perspective,organization units repre-sent coherent autonomous entities,which can be character-ized in terms of their mental and social attitudes,can per-form behavior,participate in different kinds of(social)rela-tionships,can observe and interact with their environment, offer and use services,play roles,etc.Their properties and behavior are both(1)emergent properties and behavior of all their constituents,their mutual relationships,observa-tions and interactions,and(2)the features and behavior of organization units themselves.For modeling organization units from external perspec-tives,in addition to features defined for UML classes (structural and behavioral features,owned behaviors,rela-tionships,etc.),also all the features of behaviored,social-ized,and mental semi-entities can be utilized.From an internal perspective,organization units are types of environment that specify the social arrangements of entities in terms of structures,interactions,roles,con-straints,norms,etc.For this purpose organization unit types usually utilize the possibilities inherited from UML structured classifier, and model their internal structure by contained parts and connectors,in combination with entity role types used as types of the parts.For an example of an organization unit see Fig.3(b).4.2Social RelationshipsSocial relationship is a particular type of connection be-tween social entities related to or having dealings with each other.For modeling such relationships,AML defines a spe-cial type of UML property,called social property.The so-cial property can be used either in the form of an owned social attribute,or as the end of a social association,and can specify its social role kind4.For an example of modeling social relationships see Fig.3.4.3Roles and Role PropertiesRoles are used to define a normative behavioral repertoire of entities,and thus provide the basic building blocks of MAS societies.For modeling roles,AML provides entity role type,a specialized behaviored,socialized and mental 4AML predefines peer,subordinate and superordinate social role kinds,but this set can be extended as required.semi-entity type.Entity role types are used to model ab-stractions of coherent set of features,capabilities,behav-iors,observations,relationships,participation in interac-tions,and services offered or required by entities partici-pating in a particular context.Each entity role type should be realized by a specific implementation possessed by an entity that can play that entity role type.An instance of an entity role type is called entity role and exists only while some behavioral entity plays it.For modeling 
the ability of an entity to play an entity role type,AML provides role properties.Role property is a specialized UML property,used to specify that an instance of its owner(i.e.a behavioral entity)can play one or several roles of a particular entity role type.The role property can be used either in the form of a role attribute or as the end of a play association.One entity can at each time play several entity roles. These entity roles can be of the same as well as of dif-ferent types.The multiplicity defined for a role property constraints the number of entity roles of given type the par-ticular entity can play concurrently.Additional constraints which govern playing of entity roles can be specified by UML constraints.To allow explicit manipulation of entity roles in UML activities and state machines,AML defines a set of actions for entity role creation and disposal,particularly create role action and dispose role action.Fig.3(a)contains the diagram depicting an agent of type Person which can play entity roles of type Player, Captain,Coach,and Referee.The possibility of playing entity roles of a particular type is modeled by play associations.Fig.3(b)depicts an organization unit SoccerMatch,which comprises three referee s(of the Referee entity role type)and two team s(of the SoccerTeam organization unit type).The SoccerTeam itself consists of one to three coach es,and eleven to fifteen player s of which one is the captain.The player s are peers to each other(the cooperate con-nector),and subordinates to the coach es(the manage connector),and the captain(the lead connector).The referee s are superordinate to the both SoccerTeam s (the control connector).Fig.4shows the instantiation of the previously defined types in a model of a system’s snapshot,where the agent Lampard,of type Person,plays the entity role player,and the agent Terry,also of type Person,plays the entity role captain and leads Lampard.The agent Mourinho,play-ing the entity role coach manages both players Lampard and Terry.5Modeling InteractionsTo support modeling of interactions in MAS,AML pro-vides a number of UML extensions,which can be logi-cally subdivided into:(1)generic extensions to UML in-teractions,(2)speech act based extensions to UML inter-AGENT MODELING rmatica 29(2005)391–400395(b)Figure 3:Example of social structure modelingFigure 4:Example of the entity role instantiation and play-ingactions,(3)observations and effecting interactions,and (4)services.5.1Generic Extensions to UML InteractionsGeneric extensions to UML interactions provide means to model:(1)interactions between groups of entities (multi-message and multi-lifeline),(2)dynamic change of object’s attributes to express changes in internal structure of orga-nization units,social relationships,or played entity roles,etc.,induced by interactions (attribute change),(3)model-ing of messages and signals not explicitly associated with the invocation of corresponding methods and receptions (decoupled message),(4)mechanisms for modification of interaction roles of entities (not necessary entity roles)in-duced by interactions (subset and join dependencies),and (5)modeling the actions of dispatch and reception of de-coupled messages in activities (send and decoupled mes-sage actions,and associated triggers).Multi-message is a specialized UML message which is used to model a particular communication between (unlike UML message)multiple participants,i.e.multiple senders and/or multiple receivers.Multi-lifeline is a specialized UML lifeline,used to rep-resent (unlike 
UML lifeline)multiple participants in inter-actions.Decoupled message is a specialized multi-message used to model the asynchronous dispatch and reception of a mes-sage payload without (unlike UML message)explicit spec-ification of the behavior invoked on the side of the receiver.The decision of which behavior should be invoked when the decoupled message is received is up to the receiver what allows to preserve its autonomy in processing messages.Attribute change is a specialized UML interaction frag-ment used to model the change of attribute values (state)of interacting entities induced by the interaction.Attribute change thus enables to express addition,removal,or mod-ification of attribute values,and also to express the added attribute values by sub-lifelines.The most likely utiliza-tion of attribute change is in modeling of dynamic change of entity roles played by behavioral entities represented by lifelines in interactions,and the modeling of entity inter-actions with respect to the played entity roles (i.e.each sub-lifeline representing a played entity role can be used to model interaction of its player with respect to this entity role).Subset is a specialized UML dependency between event occurrences owned by two distinct (superset and subset)lifelines used to specify that since the event occurrence on the superset lifeline,some of the instances it represents (specified by the corresponding selector)are also repre-sented by another,the subset lifeline.Similarly,join dependency is also a specialized UML de-pendency between two event occurrences on lifelines (sub-set and union ones),used to specify that a subset of in-stances,which have been until the subset event occurrence represented by the subset lifeline,is after the union eventoccurrence represented by the ¸Sunion ˇTlifeline.The union lifeline,thus after the union event occurrence represents the union of the instances it has been representing before,and the instances specified by the join dependency.Send decoupled message action is a specialized UML send object action used to model the action of dispatch-ing a decoupled message,and accept decoupled message action is a specialized UML accept event action used to model reception of a decoupled message action that meets the conditions specified by the associated decoupled mes-sage trigger.A simplified interaction between entities taking part in a player substitution is depicted in Fig.5.Once the main coach decides which players are to be substituted (p1to be substituted and p2the substitute),he first notifies player p2to get ready and then asks the main referee for per-mission to make the substitution.The main referee in turn replies by an answer .If the answer is “yes”,the substitution process waits until the game is interrupted.If so,the coach instructs player p1to exit and p2to enter.Player p1then leaves the pitch and joins the group of in-active players and p2joins the pitch and thereby the group of active players.Fig.6shows an example of the communicative inter-action in which the attribute change elements are used to model changes of entity roles played by agents.The dia-gram realizes the scenario of a captain change caused by the original captain (player2)substitution.At the beginning of the scenario the agent396Informatica 29(2005)391–400I.Trencansky et al.Figure 5:Example of a communicative interaction player2is captain (modeled by its role prop-erty captain ).During the substitution,the main coach gives the player2order to hand the cap-tainship over 
(handCaptainshipOver()message)and the player1the order to become the captain (becomeCaptain()message).After receiving these messages,the player2stops playing the entity role captain (and starts playing the entity role ofordinary player )and the player1changes from ordinary player to captain .Figure 6:Example of a social interaction with entity role changes5.2Speech Act Specific Extensions to UMLInteractionsSpeech act specific extensions to UML interactions com-prise modeling of speech-acts (communication message),speech act based interactions (communicative interac-tions),patterns of interactions (interaction protocols),and modeling the actions of dispatch and reception of speech-act based messages in activities (send and accept commu-nicative message actions,and associated triggers).Communication message is a specialized decoupled message used to model communicative acts of speech act based communication within communicative interaction s (a specialized UML interaction)with the possibility of ex-plicit specification of the message performative and pay-load.Both the communication message and communica-tive interaction can also specify used agent communication and content languages,ontology and payload encoding.Interaction protocol is a parametrized communicative interaction template used to model reusable templates of communicative interactions.5.3Observations and Effecting InteractionsAML provides several mechanisms for modeling observa-tions and effecting interactions in order to (1)allow model-ing of the ability of an entity to observe and/or to bring about an effect on others (perceptors and effectors),(2)specify what observation and effecting interactions the en-tity is capable of (perceptor and effector types and perceiv-ing and effecting acts),(3)specify what entities can ob-serve and/or effect others (perceives and effects dependen-cies),and (4)explicitly model the actions of observations and effecting interactions in activities (percept and effect actions).Observations are in AML modeled as the ability of an entity to perceive the state of (or to receive a signal from)an observed object by means of perceptors ,which are spe-cialized UML ports.Perceptor types are used to specify (by means of owned perceiving acts )the observations an owner of a perceptor of that type can make.Perceiving acts are specialized UML operations which can be owned by perceptor types and thus used to specify what perceptions their owners,or perceptors of given type,can perform.The specification of which entities can observe others,is modeled by a perceives dependency.For modeling behav-ioral aspects of observations,AML provides a specialized percept action .Different aspects of effecting interactions are modeled analogously,by means of effectors ,effector types ,effecting acts ,effects dependencies,and effect actions .An example is depicted in Fig.8(a)which shows an entity role type Player with two eyes–perceptors called eye of type Eye ,and two legs–effectors called leg of type Leg .Eyes are used to see other players,the pitch and the ball,and to provide localization information to the in-ternal parts of a player.Legs are used to change the player’s position within the pitch (modeled by changing of internal state implying that no effects dependency need be placed in the diagram),and to manipulate the ball.5.4ServicesThe AML support for modeling services comprises (1)the means for the specification of the functionality of a service and the way a service can be accessed (service specification and service 
protocol),(2)the means for the specification of what entities provide/use services (service provision,ser-vice usage,and serviced property),and (if applicable)by what means (serviced port).A service is a coherent block of functionality provided by a behaviored semi-entity,called service provider,that can be accessed by other behaviored semi-entities (whichAGENT MODELING rmatica29(2005)391–400397can be either external or internal parts of the service provider),called service clients.Service specification is used to specify a service by means of owned service protocols,i.e.specialized inter-action protocols extended with the ability to specify two mandatory,disjoint and nonempty sets of(not bound)pa-rameters,particularly:provider and client template param-eters.The provider template parameters of all contained ser-vice protocols specify the set of the template parame-ters that must be bound by the service providers,and the client template parameters of all contained service proto-cols specify the set of template parameters that must be bound by the service clients.Binding of these complemen-tary template parameters specifies the features of the par-ticular service provision/usage which are dependent on its providers and clients.Service provision/usage are specialized dependencies used to model provision/use of a service by particular enti-ties,together with the binding of template parameters that are declared to be bound by service providers/clients. Fig.7shows a specification of the Motion service defined as a collection of three service protocols.The CanMove service protocol is based on the standard FIPA protocol FIPA-Query-Protocol5[21]and binds the proposition parameter(the content of a query-if message)to the capability canMove(what,to)of a service provider.The participant parameter of the FIPA-Query-Protocol is mapped to a service provider and the initiator parameter to a service client.The CanMove service protocol is used by the ser-vice client to ask if an object referred by the what parame-ter can be moved to the position referred by the to param-eter.The remaining service protocols Move and Turn are based on the FIPA-Request-Protocol[21]and are used to change the position or direction of a spatial object. Binding of the Motion service specification to the provider3DSpace and the client3DObject is depicted in Fig.2.Figure7:Example of service specification 5The AML specification of the interaction protocol can be found in[3].6Modeling Capabilities andBehaviorAML extends the capacity of UML to abstract and decom-pose behavior by another two modeling elements:capabil-ity and behavior fragment.Capability is an abstract specification of a behavior which allows reasoning about and operations on that spec-ification.Technically,a capability represents a unification of the common specification properties of UML’s behav-ioral features and behaviors expressed in terms of their in-puts,outputs,pre-and post-conditions.Behavior fragment is a specialized behaviored semi-entity type used to model a coherent re-usable fragment of behavior and related structural and behavioral features.It enables the(possibly recursive)decomposition of a com-plex behavior into simpler and(possibly)concurrently ex-ecutable fragments,as well as the dynamic modification of an entities behavior in run-time.The decomposition of a behavior of an entity is modeled by owned aggregate at-tributes of the corresponding behavior fragment type. 
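AML itself is a visual, UML-based notation, so there is no AML source code as such; purely as a programmatic analogue of the decomposition described above, the Python sketch below represents an entity whose behavior is assembled from behavior fragments, each owning named capabilities characterized by inputs, outputs and pre-/post-conditions. The classes and their fields are invented for illustration, loosely following the soccer example of Fig. 8, and are not part of the AML specification.

```python
class Capability:
    """Abstract specification of a behavior: inputs, outputs, pre- and post-conditions."""
    def __init__(self, name, inputs=(), outputs=(),
                 precondition=lambda state: True, postcondition=lambda state: True):
        self.name, self.inputs, self.outputs = name, tuple(inputs), tuple(outputs)
        self.precondition, self.postcondition = precondition, postcondition

class BehaviorFragment:
    """A reusable, coherent fragment of behavior owning a set of capabilities."""
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = {c.name: c for c in capabilities}

class Entity:
    """A behaviored entity whose behavior is decomposed into behavior fragments."""
    def __init__(self, name, fragments):
        self.name = name
        self.fragments = list(fragments)   # fragments may be added or removed at run-time

    def capabilities(self):
        return [c for f in self.fragments for c in f.capabilities.values()]

# Invented example loosely following the paper's soccer illustration (Fig. 8):
mobility = BehaviorFragment("Mobility", [
    Capability("turn", inputs=["angle"]),
    Capability("walk", inputs=["target_position"]),
])
ball_handling = BehaviorFragment("BallHandling", [
    Capability("catch", inputs=["ball"]),
    # ... further capabilities elided
])
player = Entity("Player", [mobility, ball_handling])
print([c.name for c in player.capabilities()])
```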
Fig.8(a)shows the decomposition of the Player entity role type’s behavior into a structure of behavior fragments.In part(b)two fragments,Mobility and BallHandling,are described in terms of their owned capabilities(turn,walk,catch,etc.).(a)(b)Figure8:Example of behavior fragments,observations and effecting interactions7Modeling MAS Deployment and MobilityThe means provided by AML to support modeling of MAS deployment and agent mobility comprise:(1)the support for modeling the physical infrastructure onto which MAS entities are deployed(agent execution environment),(2) what entities can occur on which nodes of the physical in-frastructure and what is the relationship of deployed enti-ties to those nodes(hosting property),(3)how entities can get to a particular node of the physical infrastructure(move and clone dependencies),and(4)what can cause the en-。