Discovery-Driven Ontology Evolution
- 格式:pdf
- 大小:429.98 KB
- 文档页数:7
WebODE: a Scalable Workbench for OntologicalEngineeringJulio C. Arpírez, Oscar Corcho, Mariano Fernández-López, Asunción Gómez-Pérez Facultad de Informática. Universidad Politécnica de MadridCampus de Montegancedo, s/n. 28660 Boadilla del Monte. Madrid. Spain+34 91 336 7439jarpirez@delicias.dia.fi.upm.es; {ocorcho, mfernandez, asun}@fi.upm.esABSTRACTThis paper presents WebODE as a workbench for ontological engineering that not only allows the collaborative edition of ontologies at the knowledge level, but also provides a scalable architecture for the development of other ontology development tools and ontology-based applications. First, we will describe the knowledge model of WebODE, which has been mainly extracted and improved from the reference model of METHONTOLOGY’s intermediate repre-sentations. Later, we will present its architecture, together with the main functionalities of the WebODE ontology editor, such as its import/export service, translation services, ontology browser, inference engine and axiom generator, and some services that have been integrated in the workbench: WebPicker, OntoMerge and the OntoCatalogue.KeywordsWebODE, ontology engineering workbench, ontology building, translation, integration and merge.1. INTRODUCTIONIn the last years, several tools for building ontologies have been developed: Ontolingua [11], OntoSaurus [24], WebOnto [8], Protégé2000 [25], OilEd [20], OntoEdit [21], etc. A study comparing some of them can be found in [10]. Additional ontology tools and services have been built for other purposes: ontology merging (Chimaera [18], Ontomorph [4], PROMPT [14]), ontology access (OKBC [5]), etc. Finally, many applications have been built upon ontologies: Ontobroker [12], PlanetOnto [9], (KA)2 [1], MKBEEM [19], etc. All these tools and applications have contributed to a high development of the ontology community, and have laid the foundations of an emergent research and technological area: the Semantic Web [2]. However, current ontological technology suffers from the following problems, which must be solved prior to its transfer to the enterprise world: • There is no correspondence between existing methodologies and environments for building ontologies,except ODE and METHONTOLOGY [13].• Existing environments just give support for designing and implementing ontologies, but they do not support allthe activities of the ontology life cycle.• There are a lot of isolated ontology development tools that cannot interoperate easily, because they are based ondifferent technologies, on different knowledge modelsfor representing ontologies, etc.Consequently, there is a need for a common workbench to ensure a wide acceptance and use of ontological technology. We foresee three main areas in this workbench, as shown in figure 1:• Ontology development and management, which comprises technology that gives support to ontologydevelopment activities: knowledge acquisition, edition,browsing, integration, merging, reengineering, evaluation, implementation, etc.; ontology managementactivities: configuration management, ontology evolution, ontology libraries, etc.; and ontology supportactivities: scheduling, documentation, etc.• Ontology middleware services, which include different kinds of services that will allow the easy use andintegration of ontological technology into existing andfuture information systems, such as services foraccessing ontologies, integration with databases,ontology upgrading, query services, etc.• Ontology-based applications development suites, which will allow the rapid development and integration ofontology-based applications. They will be the last steptowards a real integration of ontologies into enterpriseinformation systems.In this paper, we will present WebODE as an scalable ontological engineering workbench that gives support to activities from the first two areas of the workbench previously identified. WebODE’s ontology editor allows the collaborative edition of ontologies at the knowledge level, supporting the conceptualization phase of METHONTOLOGY and most of the activities of the ontology’s life cycle (reengineering, conceptualization,Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee.K-CAP’01, October 22-23, 2001, Victoria, British Columbia, Canada. Copyright 2001 ACM 1-58113-380-4/01/0010…$5.00.implementation, etc). Besides, WebODE provides high extensibility in an application server basis, allowing the creation of middleware services that will allow the use of ontologies from applications.This paper is organized as follows: section WebODE in a nutshell gives a general overview of the main features of this ontological engineering workbench. Section WebODE’s knowledge model presents the knowledge model used for representing ontologies in the WebODE workbench. Section WebODE architecture describes its main services and the WebODE ontology editor, as an applications that uses most of the services. Section Related Work gives a short overview of existing ontology editing applications. Finally, the Conclusions section summarizes the main conclusions of this work, projects in which WebODE has already been used, ontologies developed using the WebODE ontology editor and further work.2. WEBODE IN A NUTSHELLWebODE is not an isolated tool for the development of ontologies, but an advanced ontological engineering workbench that provides varied ontology related services and covers and gives support to most of the activities involved in the ontology development process.WebODE workbench is built on an application server basis, which provides high extensibility and usability by allowing the addition of new services and the use of existing services. Examples of these services are WebPicker, OntoMerge and OntoCatalogue.Ontologies in WebODE are stored in a relational database. Moreover, WebODE provides a well-defined service-oriented API for ontology access that makes it easy the integration with other systems. Ontologies built with WebODE can be easily integrated with other systems by using its automatic exportation and importation services from and into XML, and its translation services into and from varied ontology specification languages (currently, RDF(S) [23], OIL [16], DAML+OIL [7], X-CARIN [19] and FLogic [17]).WebODE’s ontology editor allows the collaborative edition of ontologies at the knowledge level. Its knowledge model, which is described in depth in the next section, is mainly based on the set of intermediate representations of METHONTOLOGY and provides additional features. Ontology edition is aided both by form based and graphical user interfaces, a user-defined-views manager, a consistency checker, an inference engine, an axiom builder and the documentation service.Two interesting and novel features of WebODE with respect to other ontology engineering tools are: instance sets, which allow to instantiate the same conceptual model for different scenarios, and conceptual views from the same conceptual model, which allow creating and storing different parts of the ontology, highlighting and/or customizing the visualization of the ontology for each user.The graphical user interface allows browsing all the relationships defined on the ontology as well as graphical-pruning these views with respect to selected types of relationships. Mathematical properties such as reflexive, symmetric, etc. and other user-defined properties can be also attached to the "ad hoc" relationships.The collaborative edition of ontologies is ensured by a mechanism that allows users to establish the type of access of the ontologies developed, through the notion of groups of users. Synchronization mechanisms also exist that allow several users to edit the same ontology without errors.Figure 1. An ontological engineering workbench.Constraint checking capabilities are also provided for type constraints, numerical values constraints, cardinality constraints and taxonomic consistency verification [15] (i.e., common instances of disjoint classes, loops, etc.)Finally, WebODE’s inference service has been developed in Ciao Prolog. Although WebODE is not OKBC compliant yet, all the OKBC primitives have been defined in prolog for their use in its inference engine.3. WEBODE’S KNOWLEDGE MODELWebODE’s knowledge model is extracted from the set of intermediate representations of METHONTOLOGY. It allows the representation of concepts and their attributes (both class and instance attributes), taxonomies of concepts, disjoint and exhaustive class partitions, ad-hoc binary relations between concepts, properties of relations, constants, axioms and instances. It also allows the inclusion of bibliographic references for any of them and the importation of terms from other ontologies.Additionally, WebODE improves the reusability of ontologies defining sets of instances, which allow the instantiation of the same conceptual model for different scenarios it may be used for.In the following subsections we will describe each one of the components of the WebODE’s knowledge model:3.1. ConceptsIn short, a concept (also known as a class) can be anything about which something is said, and, therefore, can also be the description of a task, function, action, strategy, reasoning process, etc.Concepts are identified by their name, although they can also have synonyms and abbreviations attached to them. A natural language (NL) description can be also included.The same applies to references and formulae, which will be described later in this section. Any component in WebODE may have any amount of references and reasoning formulae attached to it.Class attributes are attributes whose value must be the same for all instances of the concept. They are not components themselves in WebODE's knowledge model, as they are always attached to a concept (and to its subclasses, because of the inheritance mechanism).The information stored for a class attribute is the following: its name (which must be different from the rest of attribute names of the same concept); the name of the concept it belongs to (attributes are local to concepts, that is, two different concepts can have attributes with the same name); its value type, also called range, which can be a basic data type (String, Integer, Cardinal, Float, Boolean, Date, Numeric Range, Enumerated, URL) or an instance of a concept (in this case, the name of the concept must be specified), and, finally, its minimum and maximum cardinality, which constrains the number of values that the class attribute may have. Optional information for class attributes consists of its NL description, the measurement unit and its precision (the last two ones just in case of numeric attributes).Finally, the value(s) of the class attribute can be specified once it has been defined completely. These values will be attached to the class attribute where they have been defined. Instance attributes are attributes whose value may be different for each instance of the concept. They have the same properties than class attributes and two additional properties, minimum value and maximum value, which are used in attributes with numeric value types. Values inserted for instance attributes are interpreted as default values for them. 3.2. GroupsGroups, also called partitions, are used to create disjoint and exhaustive class partitions. They are sets of disjoint concepts that have a name, the set of concepts they group together and, optionally, a NL description. A concept can belong to several groups.3.3. Built-in RelationsThis subsection deals with predefined relations in the WebODE’s knowledge model, related to the representation of taxonomies of concepts and mereology relationships between concepts. They are divided into three groups: Taxonomical relations between concepts. Two predefined relations are included: subclass-of and not-subclass-of. Single and multiple inheritance are allowed.Taxonomical relations between groups and concepts. A group is a set of disjoint concepts. There are two predefined relations available, whose semantics is also explained:Disjoint-subclass-partition. A disjoint subclass partition Y of class X defines the set Y of disjoint classes as subclasses of class X. This classification is not necessarily complete: there may be instances of X that are not included in any subclass of the partition.Exhaustive-subclass-partition. An exhaustive subclass partition Y of class X defines the set Y of disjoint subclasses as subclasses of the class X, where X can be defined as union of all the classes of the partition Mereological relations between concepts. Two relations are included: transitive-part-of and intransitive-part-of.3.4. Ad-hoc relationsWebODE allows just binary ad-hoc relations to be created between concepts. The creation of relations of higher arity must be made by reification (creating a concept for the relation itself and n binary relations between the concepts that appear in the relation and the concept that is used for representing the relation).Ad-hoc relations are characterized by their name, the name of the origin (source) and destination (target) concepts, and its cardinality, which establishes the number of facts (instances of the relation) that can hold between the origin and the destination term. Their cardinality can be restricted to 1 (only one fact) or N (any number of facts).Additionally, there is some optional information that can be provided for an ad-hoc relation, such as its NL description and its properties (they are used to describe algebraic properties of the relation).References and formulae can be also attached to the ad-hoc relations, as happened with the concepts.3.5. ConstantsConstants are components that have always the same value. They are included in the knowledge model of WebODE to ease the maintenance of ontologies. They are available for their use in any expression in the ontology.The information needed for a constant is: name, value type (the same as shown for attributes of concepts, except for instances of concepts), value and measurement unit. Its NL description can be optionally provided.3.6. FormulaeThere are three types of formulae that can be created in WebODE: axioms, rules and procedures. All of them are represented by their name, an optional NL description and a formal expression in first order logic, using a syntax provided by WebODE.Axioms model sentences, using first order logic, that are always true. They may be included in the ontologies for several purposes, such as constraining its information, verifying its correctness or deducting new information.Rules are included in the ontology for the inference of new knowledge in the ontology from the knowledge already included in it. Their chaining mechanism is not explicitly declared, although WebODE’s inference engine uses backward chaining.Procedures are used for declaring sequences of actions. Currently, the user is free to use any syntax for these components, because it is too much tight to the target language in which the ontology will be used.The axiom generator, which will be described later in this paper, allows the user create axioms more easily than if they were created from scratch. WebODE also provides a library of axiom patterns for common used expressions.3.7. InstancesThere are two kinds of instances that can be created in WebODE: instances of concepts and instances of relations (also called facts).Instances of concepts represent elements of a given concept. They have their own name, and a set of instance attributes with their values. Instance attributes are inherited from the concept they belong to and its superclasses.Instances of relations are used to represent a relation that holds between individuals (instances of concepts) in the ontology. They have their own name, the names of the relation and the instances that participate in it.WebODE allows grouping both kinds of instances in sets of instances. Instance sets, which are described by their name and an optional description,allow the distributed use of the ontology in different frameworks. In other words, the same ontology can be instantiated for different applications, and instances in an instance set are independent from instances in another one. This, along with the import/export features, permits the isolation of the main two parts of an ontology: its conceptualization and its instances (the knowledge base).3.8. ReferencesReferences are used for adding bibliographic references in the ontology. The information needed for references is their name and an optional description. They can be attached to any component of the WebODE’s knowledge model.3.9. PropertiesThey are used to describe algebraic properties of ad-hoc relations. They are divided in two groups:Built-in properties: reflexive, irreflexive, symmetric, asymmetric, antisymmetric and transitive.Ad-hoc properties. The user can define them and attach them to ad-hoc binary relations to describe either algebraic or other kinds of properties of them.3.10. Imported termsImported terms are components that are included in the current ontology from other ontologies. The user must provide their name, the host for retrieving the term from, the name of the ontology where to retrieve the term from and the original term name.Currently, only concepts from other ontologies can be imported into WebODE. In the future, this will be expanded to any kind of components of the ontology (groups, relations, axioms, etc.).4. WEBODE ARCHITECTUREThe architecture of the WebODE workbench is explained in this section, according to the classical three tiers architecture commonly found in web applications: data tier, business logic tier and presentation tier.4.1. Data TierOntologies are the central element in our workbench. They can be stored in a relational database with JDBC support (it has been tested both in Oracle and MySQL).The module for database access is included as a core service inside the Minerva Application Server (which is explained later in this section). Its main features are the optimization of connections to the database (connection pooling) and transparent fault tolerance capabilities.4.2. Business Logic TierThis tier usually is divided in two different ones: the presentation sub-tier and the logic sub-tier.The presentation sub-tier is responsible for generating the content to be presented in the user’s browser. It also handles user requests from the client (form handling, queries, etc.) and forward them to business logic services. Servlets and r JSPs (Java Server Pages) are used in it.The logic sub-tier comprises the applications’ business-logic services. All the implemented services are available from the Minerva Application Server, through RMI-IIOP technology. We distinguish two groups of services: services from the Minerva Application Server, which are not tied specifically to the WebODE workbench but can be used by any other service, and business-logic services for WebODE, which are specific to this workbench.Modules from Minerva Application ServerThis application server has been developed in our lab. In this subsection we will describe its main modules: Authentication module. All the authentication and security controls in the application server are based on this module. It allows managing access control lists for all the services of applications built upon the server, groups for sharing ontologies, information protection, etc.Currently, it uses an internal format for storing and accessing information. However, it is possible to develop additional modules for the authentication of users using other authentication systems (from Windows NT, UNIX, LDAP, etc.). This would allow the integration of the workbench in the authentication schema of the organization.Log module. This module is in charge of auditing tasks. Its verbose level can be configured, depending on the audit needs for the system.Administration module. It allows the administration of the application server by using the Minerva Management Console (MMC), which allows the server administrator to manage locally or remotely every installed service, to start and stop services, to manage users, groups and access control lists through the authenticator service, etc.Thread management module. This module optimizes the use of threads in the server for any task, using thread-pooling techniques, which improve drastically their execution time. Additionally, it is possible to change thread priorities: some tasks can be executed before other ones.Planning module. This module, which depends on the thread management module, allows the planning of periodical tasks, such as cache management, periodical backups, ontology consistency checking, etc.Backup module. Using this module, ontologies can be safely stored in any destination (which is configurable). Backups can be scheduled as needed.This service makes use of the planning and the ontology XML exportation services. This last service will be explained later in this section.Business logic modules for WebODE workbenchThese modules provide services for the WebODE ontology editor, although they can be used for any other application. Ontology access service. This module is in charge of managing the ontologies’ conceptual model, by inserting, deleting and modifying the definitions of all the terms in a domain. It uses the database access service, and, optionally, cache and consistency check services, which are explained below.Cache module. This module, which uses the database access and planning services, speeds up the access to ontologies, using several caching techniques that increase the performance of the ontology access service.Consistency check module. This module also uses the database access and planning services from the Minerva application server. It performs consistency checks during taxonomy building, as presented in [15], decoupling these verifications from the ontology access service.Ontology access API. Ontologies can be accessed from other applications through this well-defined API. This API is supported by the Minerva application server and can be accessed through RMI-IIOP.XML ontology exportation module. It exports ontologies to valid XML, according to a well-defined DTD. This XML code can be used by other applications able to use this formator for later importations of the ontologies into other instancesof the WebODE workbench.XML ontology importation module. It imports ontologiesin the XML format described by the DTD used in the XML exportation service. These ontologies must also accomplish consistency rules used by the consistency check service. Ontology languages exportation/importation. Currently, several services exist in WebODE for the exportation and importation of ontologies to the following languages: RDF(S), OIL, DAML+OIL, X-CARIN and FLogic.OKBC-based inference engine. It allows the ontology developer to perform queries and inferences on the ontology. The user can use predefined access primitives, which are based in the OKBC protocol, and create his/her own Prolog programs to perform inferences reusing the primitives already provided. It is based on Ciao Prolog.Axiom prover module. It makes use of the inference engine, allowing the ontology developer to test if knowledge currently included in the ontology is consistent with its axioms. Each axiom is translated into Horn clauses and can be tested independently from the other ones.Documentation module. WebODE ontologies are automatically documented in different formats, such as HTML tables (intermediate representations of METHONTOLOGY), HTML documents or XML files. The whole ontology or parts of it can be selected for this documentation generation. Views generated with OntoDesigner can be also selected for their documentation. WebPicker: Ontology Acquisition from Web Resources. WebPicker is a service for the ontology acquisition from web resources that has been used for the acquisition of several standards and initiatives of products and services classifications in the e-commerce domain (UNSPSC, e-cl@ss and RosettaNet) as described in detail in [6].Information represented in web resources is transformed into a conceptual model specified in the XML syntax of WebODE, which is imported later into WebODE, so that its ontology editor can be used to redesign it.OntoMerge: Ontology Merge. This service performs the merge of concepts (and their attributes) and relations of two ontologies built for the same domain. First, it assists the revision of both ontologies, based on a set of design criteria and semantic and syntactic relationships among the components of the ontology. Later, it uses natural language resources for establishing relationships between both ontologies. It performs a supervised merge of components from both ontologies using this information. Finally, it assists the final revision of the resulting merged ontology.OntoCatalogue: Catalogue Generation from Ontologies. This service generates electronic catalogues out from ontologies, taking into account several configuration parameters, such as the depth of the taxonomy of products, attributes to be generated, the mappings between relations in the ontology and links in the catalogue, navigation hints through the catalogue, parts of the taxonomy to be generated, etc.The catalogue generation from ontologies ensures a good and rich classification of products/services in it.4.2.1. User Interface Tier: WebODE Ontology EditorThe WebODE ontology editor is an application for the development of ontologies at the knowledge level, based on the knowledge model already presented, which uses most of the services that have been presented above. Its user interface uses HTML, CSS (Cascading Style Sheets ) and XML (Extended Mark-up Language ). JavaScript and Java are used for several kinds of user validations.Some specialized applets have been also included in the user interface, such as OntoDesigner, the axiom manager, the ontology browser and the clipboard.The design rationale for this user interface is based on an easy-to-use and clarity basis. Figure 2 shows one of the screens of the editor, while including a new instance attribute for a concept. We will explain the most relevant components in this figure:The clipboard applet is available in the upper part of the screen. It is used to copy and paste components’ definitions, which is useful when creating components that are very similar to others. It has enough space for four definitions. The ontology browser is placed on the left. It aids the navigation through the taxonomy of concepts, formulae, references and imported terms in the ontology. New components can be added by just double clicking on it andFigure 2.Snapshot of WebODE’s ontology editor while editing an instance attribute of a concept.filling the form that appears in the middle of the screen, and contextual menus arise when right-clicking on any of the visualized components.This user interface also includes the functionalities of exporting/importing ontologies into XML or varied ontology languages, inference engine and documentation. OntoDesigner. OntoDesigner is a graphical user interface for the visual construction of taxonomies of concepts and ad-hoc relations between concepts, which is integrated in the WebODE ontology editor as an applet. Figure 3 shows a snapshot of OntoDesigner while editing an ontology on the domain of office furniture.Using OntoDesigner, the user can create different views of the edited ontology, so that the visualization of parts of the ontology can be customized while creating it. Moreover, the user can decide at any time whether showing or hiding different kinds of relations (either predefined or ad-hoc) between concepts, in the sense of a graphical prune.Axiom Manager. This applet is used to ease the management of formulae in the WebODE ontology editor. It allows the user to create axioms using a graphical interface and provides functionalities such as an axiom library, axiom patterns and axiom parsing and verification.5. RELATED WORKWebODE has a strong relationship with ODE [3]. Both applications allow building ontologies at the knowledge level, and translators are used to implement them in different ontology languages. ODE was created as a classical application for single users and was difficult to extend. Furthermore, ontologies were stored in a Microsoft Access database, which proved to be inefficient when dealing with large ontologies. However, while ODE knowledge model is flexible, WebODE knowledge model is fixed, as has been explained in this paper. Protégé2000 and OntoEdit are ontology development tools developed at the same time than WebODE, and using a similar design rationale, although they are not web-based but stand-alone applications. In fact, they share many functionalities (ontology edition, ontology documentation, ontology exportation and importation into XML and other languages). Moreover, Protégé2000 has been developed using a plug-in architecture, where new services can be added easily to the environment. However, WebODE integrates all its services in a well-defined architecture, stores its ontologies in a relational database (avoiding the use of text files) and provides additional services such as the inference engine, the axiom builder, ontology acquisition or catalogue generation. OilEd was developed in the context of the OntoKnowledge [22] EU project for the easy development of OIL ontologies. It is not intended as a complete ontology editor, but just “the Notepad for OIL ontologies”.Other “classic” editors, such as WebOnto, Ontolingua and OntoSaurus, can be used for the edition of ontologies in a specific language (OCML, Ontolingua and LOOM, respectively). They do not use databases for storing ontologies.6. CONCLUSIONSIn this paper, we have stated the need for a workbench for ontological engineering that allows:• the development and management of ontologies,• a wide use and integration of ontologies using a set of useful ontology middleware services, and• the rapid development of ontology-based applications for their integration in enterprise information systems.We have presented the WebODE workbench as a solution for this needs, describing its expressive knowledge model for representing ontologies, several built-in services and additional reusable services, such as WebPicker, OntoMerge and OntoCatalogue.Figure 3. OntoDesigner.。
evolution and ethics - prolegomena译文蜜蜂进化与伦理 - 序言第一部分:进化的基本原理众所周知,生物进化是指生物种群随时间逐渐改变和适应环境的过程。
达尔文的进化论提出了自然选择的理论,即根据环境的变化,有利于适应环境的个体将更有可能生存和繁殖,从而传递其有利的遗传特征给下一代。
这一理论成为了现代进化生物学的基础。
第二部分:进化伦理的概念随着进化理论的发展,人们开始探讨进化与伦理之间的关系。
进化伦理是指通过考察人类和其他生物的进化历程来解决伦理问题的一种方法。
它探讨了人类行为和道德观念的起源,以及我们是否可以从进化的角度来解释道德标准的形成和变化。
第三部分:进化对道德观念的影响进化理论对道德观念的影响是一个令人感兴趣的话题。
有些人认为,道德价值观是人类文化的产物,与进化无关。
然而,进化伦理学家提出,道德观念在一定程度上可以被解释为进化过程中的适应策略。
例如,合作和互助行为可能会增加个体的生存机会,因此在进化中获得了优势。
第四部分:道德决策的进化基础在日常生活中,我们面临着许多道德决策。
从进化伦理的角度来看,我们可以探讨道德决策是否受到进化过程的影响。
一些研究表明,我们的道德判断可能受到基因和环境交互作用的影响。
通过研究进化过程中的合作与互助生存策略,人们可以更好地理解道德决策的进化基础。
第五部分:进化伦理的争议与挑战尽管进化伦理提供了一种有趣的角度来解释道德观念的起源和变化,但它也面临着争议和挑战。
一些人担心,将道德归因于进化过程可能会削弱人类的责任感和自由意志。
此外,进化伦理也面临着伦理相对主义的批评,即由进化所解释的道德标准可能会导致不同文化间的道德观念相对化。
第六部分:进化伦理的价值与应用尽管进化伦理存在争议,但它仍具有许多价值和应用。
通过理解进化与道德之间的关系,我们可以更好地解答伦理难题,并为伦理决策提供更清晰的指导。
此外,进化伦理还有助于我们更好地理解人类行为和社会互动,为构建更和谐和谦逊的社会提供了新的思考方式。
人类进化之谜:探索科学与文化的交融1. Introduction:1.1 Overview:The study of human evolution has been a fascinating and challenging field, exploring the convergence of science and culture. Humans have evolved over millions of years, adapting to their environment and developing complex societies with unique cultural practices. This article aims to delve into the mystery of human evolution and examine how science and culture intersect in shaping our understanding of this process.1.2 Background:Human evolution is a complex subject that has intrigued scientists, researchers, and philosophers for centuries. Charles Darwin's theory of evolution by natural selection revolutionized our understanding of how species, including humans, have undergone gradual changes over time. However, the study of human evolution extends beyond Darwin's theory to encompass diverse scientific perspectives like modern genetics, paleoanthropology, and comparative anatomy.1.3 Purpose:The purpose of this article is to explore the various scientific theories associated with human evolution as well as investigate the profound influence of technology on our evolutionary trajectory. Moreover, it will delve into the role of cultural inheritance in shaping social group dynamics and examine how cooperation among individuals within cultural communities leads to societal advancement. By critically analyzing these aspects, we can gain insights into both past and future possibilities for human development.Note: The above text is written based on information provided in the given outline while following a general style/format for an introductory section. Feel free to make any necessary modifications or adjustments according to your writing style or content requirements.2. 人类进化的科学理论2.1 达尔文进化论引言:在人类进化的科学理论中,达尔文进化论是最重要也是最具有影响力的理论之一。
A Web Search Contextual Crawler Using OntologyRelation MiningWu ChenshengBeijing Municipal Institute of Science and TechnologyInformationBeijing, ChinaHou Wei, Shi Yanqin, Liu TongSchool of Information EngineeringUniversity of Science and Technology Beijing Beijing Municipal Institute of Science and TechnologyInformationBeijing, ChinaAbstract—In order to increase the correctness and the recall of vertical web search system, a novel web crawler ORC is proposed in this paper. By introducing ontologies, the semantic concepts are defined. The relations between ontologies are exploited to evaluate the importance of web documents’ context. These ontologies are organized by a network structure that is updated periodically. Thanking to the ontology relation mining, the crawling space is expanded and focused more. A primitive vertical search system is built based on ORC at the end of this paper.Keywords-vertical search; ontology; data mining; web crawlerI.I NTRODUCTIONNowadays, Web Search Engine (alternatively, Information Extraction System) has already been one of indispensable applications. Several power search engines have been built, such as google, baidu and so on. However, do they really return your favorite want? Sometimes, it would be failed when you focus on something professional or uncommon concepts. This problem is relieved by vertical search engines[1], which concentrates on only certain concepts or regions. However, to the best of our knowledge, this problem is not solved properly yet up to now. How to increase the correctness and the recall is one of the most urgent issues in web search society.One ideal scenario is semantic web[2], which aims to bridge the concepts between humans and machines. Although many efforts had been made, and several methods had been proposed to establish semantic webs, the ratio of standard semantic webs is very little in the world. In other words, semantic web is not the main current at present.If the structure of web document is organized regular, such as XML, the situation would be better. But the condition is opposite. For that reason, another technique had been proposed, that is ontology. In philosophy, ontology is the study of the nature of being, existence or reality in general, as well as of the basic categories of being and their relations. Traditionally listed as a part of the major branch of philosophy known as metaphysics, ontology deals with questions concerning what entities exist or can be said to exist, and how such entities can be grouped, related within a hierarchy, and subdivided according to similarities and differences. In computer science and information science, an ontology is a formal representation of a set of concepts within a domain and the relationships between those concepts. It is used to reason about the properties of that domain, and may be used to define the domain.In theory, an ontology is a "formal, explicit specification of a shared conceptualization"[3]. An ontology provides a shared vocabulary, which can be used to model a domain — that is, the type of objects and/or concepts that exist, and their properties and relations.In some of web search engines, ontology is utilized to describe and define concepts. The crawlers (alternatively spiders), a kind of agent software, explore the World Wide Web and establish the indices of the concepts. Generally speaking, a common web search engine is composed of crawler algorithms, concept index databases, and search algorithms. Among them, crawler algorithm decides the recall and correctness at a big extent.When the concepts related closely, similar concepts should be likely the wants of users either, besides the one user submitted. Data mining is the process of extracting hidden patterns from large amounts of data. As more data is gathered, with the amount of data doubling every three years[4], data mining is becoming an increasingly important tool to transform this data into information. It is commonly used in a wide range of profiling practices, such as marketing, surveillance, fraud detection and scientific discovery.Constructing the relation networks of ontologies by data mining method, is a novel strategy to organize the concept index database and explore heuristically. That is the main topic of this paper.This paper is organized as follows: The state-of-the-art web search engine and data mining technology are introduced in section II; in section III, ontology concept description technology is discussed; a novel web crawler algorithm ORC, which is based on ontology relation mining, is proposed in section IV; in section V, a primitive search website using ORC is presented.II.B ACKGROUNDA web search engine is a tool designed to search for information on the World Wide Web. Its research could be derived from 1990s. One of the first "full text" crawler-based search engines was WebCrawler, which came out in 1994. Unlike its predecessors, it let users search for any word in any978-1-4244-4507-3/09/$25.00 ©2009 IEEEwebpage, which has become the standard for all major search engines since. It was also the first one to be widely known by the public. Also in 1994 Lycos (which started at Carnegie Mellon University) was launched, and became a major commercial endeavor.Search engines were also known as some of the brightest stars in the Internet investing frenzy that occurred in the late 1990s[5]. Several companies entered the market spectacularly, receiving record gains during their initial public offerings. Some have taken down their public search engine, and are marketing enterprise-only editions, such as Northern Light. Many search engine companies were caught up in the dot-com bubble, a speculation-driven market boom that peaked in 1999 and ended in 2001. Nowadays, a series of search engines are booming out, such as Yahoo, Google, Live search and so on.Web search engines work by storing information about many web pages, which they retrieve from the WWW itself. These pages are retrieved by a Web crawler — an automated Web browser which follows every link it sees. Exclusions can be made by the use of robots. The contents of each page are then analyzed to determine how it should be indexed (for example, words are extracted from the titles, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries. Some search engines, such as Google, store all or part of the source page as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. This cached page always holds the actual search text since it is the one that was actually indexed, so it can be very useful when the content of the current page has been updated and the search terms are no longer in it. This problem might be considered to be a mild form of linkrot, and Google's handling of it increases usability by satisfying user expectations that the search terms will be on the returned webpage. This satisfies the principle of least astonishment since the user normally expects the search terms to be on the returned pages. Increased search relevance makes these cached pages very useful, even beyond the fact that they may contain data that may no longer be available elsewhere.While data mining can be used to uncover hidden patterns in data samples that have been "mined", it is important to be aware that the use of a sample of the data may produce results that are not indicative of the domain. Data mining will not uncover patterns that are present in the domain, but not in the sample. Humans have been "manually" extracting information from data for centuries, but the increasing volume of data in modern times has called for more automatic approaches. As data sets and the information extracted from them has grown in size and complexity, direct hands-on data analysis has increasingly been supplemented and augmented with indirect, automatic data processing using more complex and sophisticated tools, methods and models. The proliferation, ubiquity and increasing power of computer technology has aided data collection, processing, management and storage. However, the captured data needs to be converted into information and knowledge to become useful. Data mining is the process of using computing power to apply methodologies, including new techniques for knowledge discovery, to data[6].Data mining identifies trends within data that go beyond simple data analysis. Through the use of sophisticated algorithms, non-statistician users have the opportunity to identify key attributes of processes and target opportunities. However, abdicating control and understanding of processes from statisticians to poorly informed or uninformed users can result in false-positives, no useful results, and worst of all, results that are misleading and/or misinterpreted.III.O NTOLOGYHistorically, ontologies arise out of the branch of philosophy known as metaphysics, which deals with the nature of reality – of what exists. This fundamental branch is concerned with analyzing various types or modes of existence, often with special attention to the relations between particulars and universals, between intrinsic and extrinsic properties, and between essence and existence. The traditional goal of ontological inquiry in particular is to divide the world "at its joints", to discover those fundamental categories, or kinds, into which the world’s objects naturally fall.During the second half of the 20th century, philosophers extensively debated the possible methods or approaches to building ontologies, without actually building any very elaborate ontologies themselves. By contrast, computer scientists were building some large and robust ontologies with comparatively little debate over how they were built.Since the mid-1970s, researchers in the field of artificial intelligence have recognized that capturing knowledge is the key to building large and powerful AI systems. AI researchers argued that they could create new ontologies as computational models that enable certain kinds of automated reasoning. In the 1980s, the AI community began to use the term ontology to refer to both a theory of a modeled world and a component of knowledge systems. Some researchers, drawing inspiration from philosophical ontologies, viewed computational ontology as a kind of applied philosophy[7].In the early 1990s, the widely cited Web page and paper "Toward Principles for the Design of Ontologies Used for Knowledge Sharing" by Tom Gruber[8] is credited with a deliberate definition of ontology as a technical term in computer science. Gruber introduced the term to mean a specification of a conceptualization. That is, an ontology is a description, like a formal specification of a program, of the concepts and relationships that can exist for an agent or a community of agents. This definition is consistent with the usage of ontology as set of concept definitions, but more general. And it is a different sense of the word than its use in philosophy.Ontologies are often equated with taxonomic hierarchies of classes, class definitions, and the subsumption relation, but ontologies need not be limited to these forms. Ontologies are also not limited to conservative definitions – that is, definitions in the traditional logic sense that only introduce terminology and do not add any knowledge about the world. To specify aconceptualization, one needs to state axioms that do constrain the possible interpretations for the defined terms[9].IV.ORCVertical search, or domain-specific search, part of a larger sub-grouping known as "specialized" search, is a relatively new tier in the Internet search, industry consisting of search engines that focus on specific slices of content. The type of content in special focus may be based on topicality or information type. For example, an intelligent medical search engine would clearly be specialized in terms of its topical focus, whereas a video search engine would seek out results within content that is in a video format. So vertical search may focus on all manner of differentiating criteria, such as particular locations, multimedia object types and so on.Broad-based search engines such as Google or Yahoo fetch very large numbers of documents using a Web crawler. Another program called an indexer then reads these documents and creates a search index based on words contained in each document. Each search engine uses a proprietary algorithm to create its indexes so that, ideally, only meaningful results are returned for each query.Vertical search engines, on the other hand, send their spiders out to a highly refined database, and their indexes contain information about a specific topic. As a result, they are of most value to people interested in a particular area; what’s more, the companies that advertise on these search engines reach a much focused audience. For example, there are search engines for veterinarians, doctors, patients, job seekers, house hunters, recruiters, travelers and corporate purchasers, to name a few.Pursuant, for a certain key word, the crawlers of vertical search usually explore broader documents than the one of general search engines. In this section, Ontology Relation Crawler, shortly ORC, is discussed. Its distinguish feature is the contextual sensitivity, which is based on ontology co-existent relation mining.A.Ontology OganizationIn ORC, the concepts described by ontologies are organized by network, as shown in Figure 1. An example of a set of ontologies in ORC about science is depicted. Every ontology is denoted by a frame, and connected with each other by curves. These curves imply the relations between the ontologies. Each curve is correspondent with a coefficient, support called. It decides the co-existent relation between the two ontologies connected by the curve.Figure 1. Example of ontologies in ORCThe supports of the ontology co-existent relations could be compute by frequent pattern mining, a kind of data mining method. The support is evident to be understood. If there are N documents the crawler explored, and s ones from them contain both of ontology A and B, then the support of the pattern A B∧is s N. All of the supports would be updated by the ORC periodically by one frequent pattern mining method modified from fp growth.B.ORC AlgorithmFigure 2. ORC AlgorithmORC algorithm is one iterative process as shown in Figure 2. Every web document online is identified by a Uniform Resource Locator (Url). The domain ontology set O is preset by expertise. In step 7, current document is indexed into the ontologies found.A template database D is maintained by ORC. This database records the information of co-existent relations of ontologies. The supports are updated, in step 11 and 12, by incremental frequent pattern mining from D. The CreateChild function, in step 14,defines the next explore directions by two ways: 1) The links ofcurrent document; 2) The indexed documents which contained co-existent ontologies.C.Rank AlgorithmFigure 3. Rank AlgorithmThe contextual sensitivity of ORC becomes from its ontology network based on co-existent relations. It is realized by its rank algorithm as shown in Figure 3. In step 5, a set of key word relative ontologies would be found for certain document. Only relative ontologies are considered here. Then the certain document’s rank is assessed on its relation degree with the key word by supports in the network in step 8. At last, the rank list would be adjusted for each document in step 9.V.A PRIMITIVE WEB SEARCH ENGINEORC algorithm is utilized in the system, which is a vertical search engine about popular science. This system is implemented by Visual Studio 2008, and the main technology taken is C#. The aim of the system is to popularize science to citizens conveniently. The configuration of the host of the system is, CPU 2.4GHz, RAM 4G.TEXTTOONTO[11] is adopted firstly to establish automatically the ontology database used here, and then the ontology database is purified by the expertise in popular science. As shown in Figure 4, the ontology database is the foundation of the deep search engine of popular science. The initial Html database is comprised of 10000 Html documents about popular science. After 138 hours’ computing, a Url Index Database, whose size was 15 million with 0.1 million ontologies, was acquired by ORC. The user interface component call the function of search operation in the Url index database, in order to feedback the wants of users. From general queries, the system unusually obtains more contextually deeper and broader results than Google and Yahoo, especially in Chinese.Figure 4. The deep search engine of popular scienceVI.CONCLUSIONTo attack the problem of correctness and recall of vertical web search system, a novel web crawler ORC is proposed in this paper. It introduces ontologies, on which the semantic concepts are defined. The relations between ontologies are exploited to evaluate the importance of web documents’ context. These ontologies are organized by a network structure that is updated periodically. Due to the ontology relation mining, the crawling space is expanded and focused more. At the end of this paper, a primitive vertical search system is built based on ORC.A CKNOWLEDGMENTThis work was supported in part by Systems Engineering Study Center Beijing.R EFERENCES[1]M. Chau and H. Chen, "Comparison of three vertical search spiders,"Computer, 2003, vol. 36, pp. 56-62.[2]S.A. McIlraith, T.C. Son and H. Zeng, "Semantic Web services,"Intelligent Systems, 2001, vol. 16, pp.46-53.[3]T. Gruber, "A translation approach to portable ontology specifications,"Knowledge Acquisition, 1993, vol. 5, pp. 199-199.[4]L., Peter and H.R. Varian, "How Much Information,"/how-much-info-2003, 2003.[5]G. Neil, "The dynamics of competition in the internet search enginemarket," International Journal of Industrial Organization, 2001, vol. 19 , pp.1103–1117, doi:10.1016/S0167-7187(01)00065-0.[6]K. Mehmed, “Data Mining: Concepts, Models, Methods, and Algorithms,”John Wiley & Sons, 2003.[7]T. Gruber, "Ontology," To appear in the Encyclopedia of DatabaseSystems, Ling Liu and M. Tamer Özsu (Eds.), Springer-Verlag, 2008.[8]T. R. Gruber, "Toward Principles for the Design of Ontologies Used forKnowledge Sharing," Proc. International Journal Human-Computer Studies, 1995, vol. 43, pp. 907-928.[9]T. R. Gruber, "A translation approach to portable ontologies," Proc.Knowledge Acquisition, 1993, vol. 5, pp. 199-220.[10]J. Han, "Data Mining and Knowledge Discovery," Springer Netherlands,2004.[11]S. Bloehdorn, A. Hotho and S. Staab, “An Ontology-based framework fortext mining,” Journal for computational linguistics and language technology, 2005, Vol.20, pp. 1-20.。
探索新生物的英语作文Title: Exploring New Frontiers: Unveiling Novel Life Forms。
In the vast expanse of the universe, the quest to discover new life forms remains one of the most intriguing endeavors of humanity. From the depths of our oceans to the far reaches of outer space, scientists are continuously pushing the boundaries of exploration in search of organisms that may challenge our understanding of life itself. In this essay, we delve into the significance of exploring new life forms and the methods employed in this exciting pursuit.First and foremost, the exploration of new life forms holds immense scientific importance. Each new discovery expands our knowledge of the diversity of life and its potential adaptations to different environments. By studying these organisms, scientists gain insights into the fundamental principles governing life, including itsorigins and evolution. Moreover, the discovery of novellife forms could have profound implications for fields such as medicine, biotechnology, and astrobiology, paving the way for groundbreaking discoveries and innovations.One of the primary methods employed in the search for new life forms is exploration of extreme environments on Earth. These environments, such as deep-sea hydrothermal vents, acidic hot springs, and polar ice caps, harbor conditions that mimic extraterrestrial habitats, making them ideal analogs for studying life beyond Earth. By venturing into these hostile environments, scientists have discovered a plethora of extremophiles – organisms capable of thriving in extreme conditions previously thought uninhabitable. These discoveries not only expand our understanding of life's limits but also provide valuable insights into the potential for life on other planets.In addition to Earth-based exploration, the search for new life forms extends beyond our planet's boundaries. Space missions, such as the Mars rovers and the Voyager probes, are equipped with instruments designed to detectsigns of life on other celestial bodies. Although no definitive evidence of extraterrestrial life has been found thus far, these missions have provided valuable data that continues to inform our understanding of the conditions necessary for life to exist elsewhere in the universe.Furthermore, advancements in technology have revolutionized our ability to explore and study new life forms. High-throughput sequencing techniques allow scientists to analyze the genetic makeup of organisms with unprecedented speed and accuracy, providing insights into their evolutionary relationships and potential functions. Likewise, robotic probes and autonomous underwater vehicles enable exploration of remote and hazardous environments with minimal human intervention, opening up new frontiers for discovery.Despite the remarkable progress made in the field of astrobiology, many questions remain unanswered. What forms might life take in environments vastly different from our own? Could there be alternative biochemistries beyond the familiar carbon-based life? These questions drivescientists to continue their quest for new life forms, fueling curiosity and inspiring future generations to explore the unknown.In conclusion, the exploration of new life forms is a multifaceted endeavor with far-reaching implications for science and society. By venturing into extreme environments on Earth and beyond, scientists are uncovering a wealth of biodiversity and expanding our understanding of the potential for life in the universe. Through continued exploration and technological innovation, we may one day unravel the mysteries of life on Earth and beyond, forever changing our perception of the cosmos.。
探索科学规律英语作文Exploring the Laws of Science。
Science is a never-ending journey of exploration and discovery. It is a systematic and logical approach to understanding the world around us. Through scientific inquiry, we uncover the laws that govern the universe and unravel the mysteries of nature. In this essay, we will delve into the importance of exploring scientific laws and how they shape our understanding of the world.To begin with, scientific laws are fundamental principles that describe natural phenomena. They are derived from repeated observations, experiments, and measurements. These laws provide a concise and precise description of how things work in the natural world. For example, Newton's laws of motion explain how objects move and interact with one another. By understanding these laws, we are able to predict and explain the behavior of objects in motion.Furthermore, exploring scientific laws allows us to make sense of the complex and intricate workings of the universe. It enables us to comprehend the underlying principles that govern various phenomena, from the microscopic world of atoms to the vast expanse of galaxies. For instance, the laws of thermodynamics help us understand the transfer of heat and energy, which is essential in fields such as engineering, chemistry, and physics. By studying these laws, we can develop new technologies, improve efficiency, and make informed decisions thatbenefit society.Moreover, exploring scientific laws fosters critical thinking and problem-solving skills. It encourages us to question the world around us and seek evidence-based explanations. Through experimentation and observation, we learn to analyze data, draw conclusions, and make informed judgments. This process of scientific inquiry not only enhances our understanding of the natural world but also equips us with valuable skills that can be applied in various aspects of life. Whether it is in the field ofmedicine, environmental conservation, or technological advancements, the ability to think critically and solve problems is crucial for progress and innovation.In addition, exploring scientific laws promotes curiosity and a sense of wonder. It encourages us to ask questions and seek answers to the unknown. By embracing a scientific mindset, we become open to new ideas, perspectives, and possibilities. This curiosity-driven exploration has led to groundbreaking discoveries and advancements throughout history. From the theory of evolution to the discovery of DNA, these scientific breakthroughs have revolutionized our understanding of life and shaped the course of human civilization.However, it is important to note that scientific laws are not absolute and can be subject to revision and refinement. As our knowledge and technology advance, new evidence may emerge that challenges existing laws or leads to the formulation of new ones. This dynamic nature of science ensures that our understanding of the world continues to evolve and expand.In conclusion, exploring scientific laws is essential for understanding the natural world, fostering critical thinking, and driving innovation. By unraveling the laws that govern the universe, we gain insights into the workings of nature and develop a deeper appreciation for its wonders. Through scientific inquiry, we continue to push the boundaries of knowledge and shape the future of humanity.。
初二人类探索与自然发现英语阅读理解20题1<背景文章>The exploration of space by humans is a remarkable journey filled with numerous challenges and significant milestones. Since the dawn of civilization, humans have been gazing at the stars with wonder and curiosity. In the early days, our attempts to reach for the cosmos were rather primitive. For example, the ancient Chinese invented rockets which were mainly used for fireworks at first, but these simple inventions were the first steps towards space exploration.As time passed, the real exploration of space began to take shape. One of the most important milestones was the launch of Sputnik 1 by the Soviet Union in 1957. This small satellite orbiting the Earth marked the beginning of the space age. It was a shock to the world and also inspired other countries to invest more in space exploration.Then came the era of manned spaceflight. Yuri Gagarin, a Soviet cosmonaut, became the first human to orbit the Earth in 1961. His flight was a huge step forward for humanity. However, space exploration is not without difficulties. One of the major challenges is the harsh environment in space. The vacuum, radiation, and microgravity can pose serious threats to astronauts and spacecraft.Another difficulty is the high cost. Building and launching spacecraft requires a vast amount of resources. Despite these challenges, humans continue to explore space, hoping to discover new planets, find signs of extraterrestrial life, and expand our understanding of the universe.1. <问题1>A. The exploration of space started recently.B. The ancient Chinese rockets had no relation to space exploration.C. The exploration of space has a long history with early attempts.D. Space exploration began with manned spaceflight.答案:C。
自然科学经典导引英语作文1. The Theory of Evolution proposed by Charles Darwin revolutionized our understanding of the natural world. It challenged the traditional belief that species were fixed and unchanging, and instead proposed that they evolved over time through a process of natural selection. This theory has had a profound impact on fields ranging from biology to anthropology, and continues to be a subject of research and debate.2. Isaac Newton's laws of motion laid the foundationfor classical physics. These laws describe the relationship between the motion of an object and the forces acting upon it. They are still widely used today to understand and predict the behavior of objects in motion, from the movement of planets to the flight of a baseball.3. Albert Einstein's theory of relativityrevolutionized our understanding of space, time, and gravity. It challenged the Newtonian view of the universeas a fixed and absolute framework, and instead proposedthat space and time are intertwined and can be influenced by mass and energy. This theory has had far-reaching implications, from the development of GPS technology to our understanding of the origins of the universe.4. The discovery of the structure of DNA by James Watson and Francis Crick in 1953 was a landmark moment in the field of biology. It revealed the double helixstructure of DNA, providing a key to understanding how genetic information is stored and passed on. This discovery has paved the way for advancements in genetics, biotechnology, and our understanding of human health and disease.5. The Big Bang theory is the prevailing scientific explanation for the origins of the universe. It proposes that the universe began as a singularity, a point ofinfinite density and temperature, and has been expanding ever since. This theory is supported by a range of observational evidence, from the cosmic microwave background radiation to the redshift of distant galaxies.It has revolutionized our understanding of the universe and opened up new avenues of research in cosmology.6. The discovery of penicillin by Alexander Fleming in 1928 marked a major breakthrough in the field of medicine.It was the first antibiotic to be discovered and revolutionized the treatment of bacterial infections. This discovery has saved countless lives and paved the way for the development of many other antibiotics.7. The periodic table of elements, developed by Dmitri Mendeleev in the late 19th century, is a fundamental toolin chemistry. It organizes all known elements based ontheir atomic number and chemical properties. This table provides a framework for understanding the relationships between elements and has been crucial in the development of new materials and the understanding of chemical reactions.8. The discovery of the Higgs boson at the Large Hadron Collider in 2012 confirmed the existence of the Higgs field, a fundamental component of the Standard Model of particle physics. This discovery provided evidence for the mechanismby which particles acquire mass and has deepened our understanding of the fundamental building blocks of the universe.9. The theory of plate tectonics, proposed in the 1960s, revolutionized our understanding of the Earth's geology. It explains how the Earth's crust is divided into a number of large plates that move and interact with each other. This theory has provided insights into the formation of mountains, earthquakes, and volcanoes, and has helped us understand the dynamic nature of our planet.10. The discovery of the double helix structure of DNA by James Watson and Francis Crick in 1953 revolutionizedour understanding of genetics and laid the foundation for the field of molecular biology. This discovery revealed how genetic information is stored and replicated, and has hadfar-reaching implications in fields ranging from medicineto agriculture.。
Discovery-Driven Ontology Evolution Silvana Castano,Alfio Ferrara,Guillermo N.HessDipartamento di Informatica e ComunicazioneUniversit`a degli Studi di MilanoMilan,Italy20135Email:{castano,ferrara,hess}@dico.unimi.itTelephone:(39)025*******Fax:(39)025*******Abstract—In this paper,we present a methodology for ontology evolution,by focusing on the specific case of multimedia ontology evolution.In particular,we discuss the situation where the ontology needs to be enriched because it does not contain any concept that could be used to explain a new multimedia resource. The paper shows how ontology matching techniques can be used to enforce the discovery of new relevant concepts by probing external knowledge sources using both the information available in the multimedia resource and the knowledge contained in the current version of the ontology.I.I NTRODUCTIONOntology evolution regards the capability of managing the modification of an ontology in a consistent way.An ontology may change because the domain or the user needs have changed[16]or simply because the shared conceptualization (i.e.,the perspective)has been modified[13].According to[11],we can define ontology evolution as the timely adaptation of the ontology to changing requirements and the consistent propagation of changes to the dependent artifacts. The problem of ontology evolution can be considered also as a special case of the more general and well studied problem of belief change as described in[5].In this respect,some of the most important concepts of the belief change literature have been revised in order to apply them to the ontology evolution context.A six-phase evolution process for ontology evolution has been proposed in[15]which is generally recognized as a comprehensive reference methodology capable of handling the evolution of multiple ontologies.In this paper,we focus on a specific case of evolution, which is multimedia ontology evolution.In our approach,the ontology is used to provide an interpretation for the elements extracted from multimedia resources,such as images,textual documents,video and audio.Our approach is defined in the framework of the BOEMIE EU Project[1],where the reader canfind further details about the semantic information extraction from multimedia resources as well as the whole evolution methodology.In the paper,we focus on the use of ontology matching techniques to support ontology enrichment when new knowledge is required tofind an interpretation for a new resource.The paper is organized as follows.In Section II we describe the requirements for ontology evolution.The methodology we propose for multimedia ontology evolution is presented in Section III.A discussion of the activity of concept discovery, together with an example is provided in Section IV.Section V presents the related work in thefield of ontology evolution. Conclusions and future work are discussed in Section VI.II.R EQUIREMENTS FOR THE ONTOLOGY EVOLUTIONIn this paper,we address ontology evolution in a scenario where:•Multimedia resources are considered,which have been already submitted to a semantic extraction process for associating them with appropriate metadata that describe their contents.Consequently,the ontology evolution is triggered by incoming multimedia resources with associ-ated metadata information.•An initial domain ontology is available,providing a description of the domain of interest(in the worst case, also a high-level,poorly detailed description).Such an ontology is used also by the extraction process and will be continuously evolved so that the new ontology version is used as input for subsequent extraction and evolution activities,according to a bootstrapping process which is iterated until a satisfactory quality of multimedia resource classification is reached.In the paper,we will consider the Athletics domain and we will discuss ontology evolution by providing some examples taken from this domain.•An open system approach is adopted,in that when back-ground knowledge in the underlying ontology is not suf-ficient to interpret new incoming multimedia resources,a concept discovery activity is triggered to acquire newknowledge from other external knowledge sources.This way,we try to limit the human involvement of the on-tology expert in the definition of new concepts,which is usually a manual and time consuming activity in ontology evolution[16].Before introducing our evolution methodology,we introduce some basic definition used throughout the paper:•A multimedia resource is any kind of multimedia object(e.g.,image,video,sound)from which information isextracted.•Each individual extracted element from a multimedia resource is called a media object(e.g.,a portion of an image).•With each media object some metadata are associated, which represent the semantic information that is extracted from the resource.•A simple concept is a concept in the domain ontology which explains a media object in terms of metadata.•An aggregate concept is a concept in the domain on-tology which is defined as an aggregation of simple concepts,such as,for instance,an event.Furthermore, an aggregate concept is an explanation of the multimedia resource,that is an interpretation of the real event or object described in the resource.To illustrate these concepts,we consider an evolution sce-nario in the Athletics domain characterized by the multimedia resource shown in Fig.1.In this case,four media objects (marked in thefigure)are identified during semantic extraction which have associated metadata stating that:(1)is a foam mattress,(2)is a pole,(3)is an horizontal bar,and(4)is an athlete,respectively.Moreover,with respect to the event(i.e., aggregate concept)described by the image,we can recognize that all these media objects are part of a bigger picture,which describes a pole vaultevent.Fig.1.An example of multimedia resourceThe metadata associated with the media objects of Fig.1 are shown in Table I.TABLE IM ETADATA ASSOCIATED WITH THE MEDIA OBJECTS OF THE MULTIMEDIARESOURCE OF F IG.1Foam mat foam mat1Pole pole1Horizontal bar horizontal bar1Athlete athlete1With respect to this example,the domain ontology con-tains the aggregate concept Pole vault,while Horizontal bar, Foam mat,Athlete and Pole are simple concepts.The ac-tivity of classifying a new multimedia resource with respect to the ontology is,in this context,the activity offinding an aggregate concept in the ontology capable of explaining the set of simple concepts detected in the multimedia resource semantic extraction.Such an activity is called interpretation of the multimedia resource,which can be performed by exploiting non-standard reasoning techniques[9].However, not always the interpretation process succeeds in determining an explanation for the resource in the ontology.In particular, depending on the background ontology knowledge and on the metadata information for the incoming resource,four cases are possible:i)one single aggregate concept explaining the multimedia resource semantics is found in the ontology;ii) more than one aggregate concept explaining the multimedia resource semantics are found in the ontology;iii)no aggregate concepts are found in the ontology capable of explaining the semantics of the multimedia resource,but all the simple concepts extracted from the resource are already defined in the ontology;iv)no aggregate concepts are found in the on-tology capable of explaining the semantics of the multimedia resource,and one or more simple concepts are missing in the ontology too.These four scenarios produce two main classes of evolution patterns,namely population and enrichment,respectively(see Fig.2).In the following we briefly describe these two classes of patterns.Fig.2.The evolution patterns•Population patterns:the domain ontology contains the aggregate concepts as well as all the simple concepts described by the media objects metadata extracted from the multimedia resource.In this case,only ontology population has to be performed.Ontology population is the activity of introducing new instances in the ontologyto describe the multimedia resource with respect to theaggregate concepts already present in the ontology.•Enrichment patterns:The ontology does not contain concepts that can be used to explain the multimediaresource.In this case,the ontology has to be enriched.Ontology enrichment is the activity of defining new con-cepts,properties,and/or semantic relations in the ontol-ogy.Basically,the idea behind our approach to ontologyenrichment is to exploit the metadata available in themultimedia resource description in order to discover newaggregate concepts in external knowledge sources(e.g.,other ontologies,web directories,lexical systems)thatcan be added to the domain ontology in order to explainthe new multimedia resource.This activity is referredas concept discovery and it is executed by exploitingontology matching techniques.In this paper,we focus on the enrichment patterns and,inparticular,we describe the case when all the simple conceptsare already stored in the domain ontology,but the aggregateconcept is missing,in order to show how the ontology can beenriched by the concept discovery activity.III.M ETHODOLOGY FOR DISCOVERY-DRIVEN ONTOLOGYEVOLUTIONIn the evolution scenario discussed in the paper,the in-terpretation of a multimedia resource produces a descriptionof the simple concept instances that are detected into theresource(e.g.,the objects that appear into an image)but noaggregate concept(e.g.,events)in the ontology can be foundthat explain the resource itself.In this section,we describea methodology for ontology evolution in this scenario andthe role of ontology matching techniques in supporting theidentified evolution phases.A.The evolution phasesThe goal of ontology evolution is to augment the back-ground knowledge in the existing domain ontology to betterclassify extracted object descriptions.The ontology is enrichedby adding concepts,properties,and/or semantic relations or bymodifying existing ones,in order to produce a new ontologyversion capable of providing interpretation of the new multi-media resource.In order to detect the ontology modifications required forexplaining the new resources,we exploit the description of themedia objects extracted from the resources.This means that,different from the usual evolution scenarios,in our case weknow from the interpretation phase that an aggregate conceptis missing in the domain ontology,but we do not know exactlywhich one.We have the instance of the missing concept andhave to create an aggregate concept which explains it.The evolution process is articulated in three phases,whichare explained in the following.Temporary concept definition:the incoming instance and itsassociated metadata information is used to define a temporaryconcept c t,which is matched against the domain ontologyto detect semantically related concepts already present in the ontology.This internal concept detection activity is optional and aims atfinding in the domain ontology some aggregate concepts which have semantic affinity with c t to be used as the starting basis in the next phase.Concept discovery:if internal concepts have been detected in the previous phase,for each retrieved matching concept c i in the ontology(i.e.,c i is semantically similar to the temporary concept c t)a probe query is created.Otherwise,the ontology expert has to define the probe query manually,by exploiting his own domain knowledge and the information available in the definition of the temporary concept c t.A probe query is a description of the properties,the constraints,and the semantic relations that should be exhibited by the(actually missing) new concept in order to consider it as a candidate for ontology enrichment.The probe query is then matched against one or more external knowledge sources to discover some external concepts that could be useful to define the new concept to be inserted in the ontology.Ontology enrichment:this phase comprises the definition and insertion of the new(aggregate)concept in the domain ontology and also the validation of the resulting ontology. The definition of the new concept is interactively performed by the ontology expert,by exploiting the definitions of the temporary concept c t and of the external matching concepts retrieved during the concept discovery phase.Once the new aggregate concept has been defined,the ontology expert inserts it in a proper position in the ontology taxonomy.The task of finding the most suitable placement of the new concept in the ontology taxonomy is performed by combining ontology matching techniques and ontology reasoning,as described in[2].Finally,the evolved ontology is validated by means of standard reasoning techniques[6].B.Ontology matching for concept discoveryAs described in the previous section,ontology matching techniques play an essential role in the ontology evolution process.In temporary concept definition,the temporary con-cept can be compared against the existing ontology tofind internal matching concepts that can help the probe query formulation activity.In concept discovery,a probe query is compared against external knowledge sources with the goal offinding potentially relevant knowledge.To perform these concept comparisons,we rely on our semantic matchmaker H-M ATCH developed in the framework of the H ELIOS project for matching independent peer ontologies[3].H-M ATCH takes a target concept description c and an ontology O as input and returns the concepts in O which match c,namely the concepts with the same or the closest intended meaning of c. In H-M ATCH we perform concept matching through affinity metrics by determining a measure of semantic affinity in the range[0,1].A threshold-based mechanism is enforced to set the minimum level of semantic affinity required to consider two concepts as matching concepts.Given two con-cepts c and c ,H-M ATCH calculates a semantic affinity value SA(c,c )as the linear combination of a linguistic affinity value LA(c,c )and a contextual affinity value CA(c,c ).Thelinguistic affinity function of H-M ATCH provides a measure of similarity between two ontology concepts c and c computed on the basis of their linguistic features(i.e.,concept names). For the linguistic affinity evaluation,H-M ATCH relies on a thesaurus of terms and terminological relationships auto-matically extracted from the WordNet lexical system[12]. The contextual affinity function of H-M ATCH provides a measure of similarity by taking into account the contextual features of the ontology concepts c and c .The context of a concept can include properties,semantic relations with other concepts,and property values.The context can be differently composed to consider different levels of semantic complexity, and four matching models,namely,surface,shallow,deep,and intensive,are defined to this end.In the surface matching,only the linguistic affinity between the concept names of c and c is considered to determine concept similarity.In the shallow, deep,and intensive matching,also contextual affinity is taken into account to determine concept similarity.In particular, the shallow matching computes the contextual affinity by considering the context of c and c as composed only by their properties.Deep and intensive matching extend the depth of concept context for the contextual affinity evaluation of c and c ,by considering also semantic relations with other concepts (deep matching model)as well as property values(intensive matching model),respectively.The comprehensive semantic affinity SA(c,c )is evaluated as the weighted sum of the Linguistic Affinity value and the Contextual Affinity value, that is:SA(c,c )=W LA·LA(c,c )+(1−W LA)·CA(c,c )(1)where W LA is a weight expressing the relevance to be given for the linguistic affinity in the semantic affinity evaluation process.A detailed description of H-M ATCH and related matching models is provided in[3].IV.C ONCEPT DISCOVERY AND ONTOLOGY ENRICHMENT In this section,we describe in more detail how concept dis-covery and ontology enrichment are performed,with respect to the jumping example of Fig.1.A.Concept discoveryThe concept discovery activity is articulated in the following steps:1)Given a multimedia resource r and the set M of mediaobjects that have been extracted from r,we build a temporary target concept c t.The context of c t(i.e., the set of properties,restrictions,and semantic relations featuring c t)is composed by exploiting the metadata associated with M,that is the set of simple concepts S explaining objects in M,that is,S={c i∈O|∀m i∈M,m i:c i}where O denotes the ontology.2)We use H-M ATCH to match c t against the existingdomain ontology.Since c t is not featured by a name,H-M ATCH is executed by exploiting only the contex-tual affinity matching techniques.In fact,contextual matching considers only the contextual affinity between the concepts to be compared without considering their names.The goal of this activity is to exploit the concepts already inside the ontology which are similar to c t,and which are featured by a name,to trigger the concept discovery phase.In other terms,we know that the missing concept is similar to something that is already known in the ontology,and we exploit this similarity between the ontology(what we know)and the external knowledge sources(what is known in other sources)in order to learn new concepts to be added to the ontology.3)For each concept c i whose affinity with c t is higherthan a threshold t,we build a probe query q i.Each probe query q i is matched against external knowledge sources using H-M ATCH.The result is a set of external candidate concepts returned by the matching process whose semantic affinity is higher than or equal to a given matching threshold.Retrieved concepts are then made available to the ontology expert,who can exploit them to define the new concept to be added to the ontology. function conceptDiscovery(Extracted_Metadata){Vector probeQueries=null;Vector candidateConcepts=null;TemporaryConcept ct=createConcept(Extracted_Metadata);foreach Concept c in Ontology{if(hmatch.contextualMatch(ct,c)>=threshold){probeQueries.add(c);}foreach ProbeQuery q in probeQueries{foreach Concept e in ExternalKnowledgeSource{if(hmatch.match(q,e)>=threshold){candidateConcepts.add(e);}}}}return candidateConcepts;}Fig.3.Concept discovery algorithmIn order to discuss an example of concept discovery,con-sider the multimedia resource shown in Fig.1and the media objects marked in thefigure.Then suppose that the domain on-tology contains information about jumping events,as defined in Fig.4.The concepts graphically represented by a white box(i.e.,Sport event,Athletic event,Jumping event and High jump)represent aggregate concepts,while gray boxes (e.g.,Foam mat,Pole,Horizontal bar,Athlete)represent simple position dependencies among simple and aggregate concepts are represented by means of dashed arrows. In the right side of thefigure,the corresponding Description Logics specification is reported.Thefirst step of concept discovery is the creation of the temporary concept c t,starting from the incoming resource information,as shown in Fig.5.The definition of c t is based on the analysis of the metadata associated with the multimedia resource,that are shown inAthlete≡P erson ∃hasP rofession.SportF oam mat SportEquipmentP ole SportEquipmentJavelin SportEquipmentHorizontal bar SportEquipmentJumping event EventHigh Jump Jumping event∃hasP art.Jumper∃hasP art.Horizontal bar∃hasP art.F oam matFig.4.The domain ontology before evolutionc t ∃hasP art.Jumper∃hasP art.Horizontal bar∃hasP art.F oam mat∃hasP art.P oleFig.5.The temporary concept c t definition from the metadata of Table I Table I.In particular,since the media object retrieved in the resource are considered as components of an unknown concepts that should be added in the ontology,we define on c t a hasP art restriction for each simple concept of Table I. The temporary concept c t describes the features(in terms of restrictions)of the required target concept.However,we do not have neither a name for c t nor any semantic relation among it and other concepts in the ontology.Because of this,if we match c t against the external knowledge source,we could notfind relevant candidate concepts.In order to address this problem,we enrich the definition of c t by exploiting the name and the context of other concepts,if any,in the ontology that are semantically similar to c t.Since a name is not available for c t,only the contextual matching is performed.In the case of the example,H-M ATCH evaluates the following semantic affinity values in the ontology of Fig.4:SA(Athlete,c t)= 0.45,and SA(High Jump,c t)=ing the H-M ATCH default threshold of0.5,only the High jump concept is actually selected for probe query composition.Having the High jump concept retrieved,we now create one probe query for High jump(i.e.,q1)to be compared against external knowledge sources.In our example,we use the Google and the Y ahoo sports directories as external knowledge sources. The probe query q1contains a description of High jump, which is composed by the name of the concept and by its context,i.e.,by the restrictions defined on it and by its super-classes and subclasses.The result of matching q1against the Google and Y ahoo external sources is shown if Fig.6,where matching concepts are shown together with corresponding semantic affinity values.All the11candidates concepts shown in Fig.6are returned to the ontology expert.High jump is discarded,since it is already present in the domain ontology.B.Ontology enrichmentBased on the matching concepts definitions retrieved during the concept discovery phase,on the incoming temporary concept c t and on the domain ontology,the ontology expert begins the concept definition activity for ontology enrichment.A new aggregate concept is defined using the simple concepts available in the domain ontology.If some simple concept is needed which is not yet described in the ontology,it is also defined.In the example,the ontology expert chooses to add the Pole Vault concept,on the basis of the information provided by the two web sources and on the fact that Pole Vault is retrieved in both the external sources.Concept definition is a manual activity,in that the ontology expert must specify all the properties and restrictions of the new concept.In our approach, the amount of manual activity is reduced in that the ontology expert works directly on the concepts specifications retrieved during the discovery activity,by properly integrating/merging those considered more relevant.Once the new concept is defined,the ontology expert inserts it into the ontology,which means that the appropriate location in the ontology hierarchy has to be found.To this end,the ontology expert may use H-M ATCH to locate a possible insertion point for the new concept and a reasoner to validate the new resulting version of the ontology[2].Fig.7show the definition of the new aggregate concept Pole vault from our example.V.R ELATED W ORKOntology evolution research work has mainly focused on the problem of evaluating the impact of requirement changes on the ontology contents[8].In[10],the authors present a framework for ontology evolution and change management based on an ontology of change operations with the aim of providing a formal description of the ontology modifications required to perform a given evolution task.The ontology of change operations is defined for the OWL knowledge model and contains basic change operations and complex change operations.A basic operation describes the procedure of mod-ifying only one specific feature of the OWL knowledge model (e.g.,type and cardinality restriction change),while a complex operation describes an articulated change procedure and isFig.6.Sample of probe queryevaluationAthlete≡P erson ∃hasP rofession.SportF oam mat SportEquipmentP ole SportEquipmentJavelin SportEquipmentHorizontal bar SportEquipmentJumping event EventHigh Jump Jumping event∃hasP art.Jumper∃hasP art.Horizontal bar∃hasP art.F oam matP ole vault Jumping event∃hasP art.Jumper∃hasP art.Horizontal bar∃hasP art.F oam mat∃hasP art.P oleFig.7.The domain ontology after evolutioncomposed of multiple basic operations.With respect to these classification of changes,the requirements that an ontology management tool should address for ontology evolution are discussed in[17].In particular,the authors emphasize that such a tool should provide a set of evolution operations according to the supported ontology models(functional requirement) and that changes should be discovered semi-automatically by analyzing user behavior(refinement requirement).During the evolution process,the tool has to reflect the user preferences (user supervision requirement)by providing advanced facili-ties,such as change-visualization and inconsistency detection (transparency and usability requirements).Moreover,history of changes needs to be supported to eventually undo any change applied to the ontology(auditing and reversibility requirements).The recent success of distributed and dynamic infrastructures for knowledge sharing has raised the need of semiautomatic/automatic ontology evolution strategies.An overview of some proposed approaches in this direction is presented in[4],even if limited concrete results have appeared in the literature.In most recent work,formal and logic-based approaches to ontology evolution are also being proposed. In[7],the authors provide a formal model for handling the semantics of change phase embedded in the evolution process of an OWL ontology.The proposed formalization allows to define and to preserve arbitrary consistency conditions(i.e., structural,logical,and user-defined).A six-phase evolution methodology has been implemented within the KAON[14]infrastructure for business-oriented ontology management.The ontology evolution process starts with the capturing phase,that identifies the ontology modifi-cations to apply either from the explicit business requirements or from the results of a change discovery activity.In therepresentation phase,the identified changes are described in a suitable format according to the specification language of the ontology to modify(e.g.,OWL).The effects of the changes are evaluated in the semantics of change phase,where the ontology consistency check is also performed.Due to the fact that an ontology can reuse or extend other ontologies (e.g.,through inclusion or mapping),the propagation phase ensures that any ontology change is propagated to the possible dependent artifacts in order to preserve the overall consistency. The subsequent implementation phase has the role to log all the performed changes in order to support the recovery facilities that in thefinal validation phase are provided to reverse an ontology change in case that an undesired effect occurs.With respect to the state of the art on ontology evolution, the original contribution of the presented approach is an evolution methodology tailored to provide a semi-automated support to the new concept detection and definition activities, which are conventionally the most human-intensive activities. In particular,we address the problem of semi-automatically discovering the new concepts required in the ontology to describe new multimedia resources by exploiting ontology matching techniques.VI.C ONCLUSIONIn this paper,we have presented a methodology for sup-porting the ontology enrichment activity in the context of multimedia ontology evolution.This problem and its peculiar requirements,is one of the main research issues addressed in the BOEMIE[1]project,where the discovery approach presented in the paper is studied.The original contribution of the work is focused on the role of ontology matching techniques with respect to the ontology evolution scenario. We have shown how ontology matching can support the ontology expert in retrieving new concepts to be added in the ontology.In the concept discovery activity,the use of matching techniques reduces the manual effort required to the ontology expert,by suggesting a set of alternatives(candidate concepts) for the definition of the new concept.Our future work will be devoted to:i)implement and test a software tool for supporting the whole discovery process;ii)study the problem of defining a new concept out of the specifications of retrieved matching concepts,by exploiting a combination of matching and reasoning techniques;iii)to investigate the problem of storing and maintaining the mappings retrieved among the ontology concept and the external knowledge sources,in order to reuse the results of an activity of concept discovery in subsequent stages of ontology enrichment.A CKNOWLEDGMENTThis work is founded by the BOEMIE Project,FP6-027538 -6th EU Framework Programme.The authors would like to thank the partners of the BOEMIE project for providing the pictures for the examples.R EFERENCES[1]BOEMIE Project,“Bootstrapping ontology evolution with multimediainformation extraction,”IST-2004-2.4.7Semantic-based Knowledge and Content Systems Proposal,2004.[2]S.Castano,A.Ferrara,and S.Montanelli,“Evolving open and inde-pendent ontologies,”Journal of Metadata,Semantics and Ontologies, 2006.[3]——,“Matching ontologies in open networked systems:Techniques andapplications,”Journal on Data Semantics(JoDS),vol.V,2006.[4]Y.Ding and S.Foo,“Ontology research and development.part2-a review of ontology mapping and evolving,”Journal of InformationScience,vol.28,no.5,pp.375–388,2002.[5]G.Flouris, D.Plexousakis,and G.Antoniou,“Evolving ontologyevolution.”in SOFSEM,ser.Lecture Notes in Computer Science,J.Wie-dermann,G.Tel,J.Pokorn´y,M.Bielikov´a,and J.Stuller,Eds.,vol.3831.Springer,2006,pp.14–29.[6]V.Haarslev and R.M¨o ller,“Description of the RACER system and itsapplications.”CEUR Electronic Workshop Proceedings,/V ol-49/,2001,pp.132–141.[7]P.Haase and L.Stojanovic,“Consistent evolution of owl ontologies.”inESWC,ser.Lecture Notes in Computer Science,A.G´o mez-P´e rez and J.Euzenat,Eds.,vol.3532.Springer,2005,pp.182–197.[8]P.Haase and Y.Sure,“State-of-the-art on ontology evolution,”Institute AIFB,University of Karlsruhe,SEKT informal deliverable 3.1.1.b,2004.[Online].Available:http://www.aifb.uni-karlsruhe.de/WBS/ysu/publications/SEKT-D3.1.1.b.pdf[9]J.R.Hobbs,M.Stickel,D.Appelt,and P.Martin,“Interpretation asabduction,”Artificial Intelligence Journal,vol.63,pp.69–142,1993.[10]M.Klein and N.Noy,“A component-based frameworkfor ontology evolution,”2003.[Online].Available:/article/klein03componentbased.html[11] A.Maedche,B.Motik,L.Stojanovic,R.Studer,and R.V olz,“Aninfrastructure for searching,reusing and evolving distributed ontologies,”in WWW.ACM,2003,pp.439–448.[12]ler,“WordNet:A Lexical Database for English,”Communica-tions of the ACM(CACM),vol.38,no.11,pp.39–41,1995.[13]N.F.Noy and M.Klein,“Ontology evolution:Not the same as schemaevolution,”Knowl.Inf.Syst.,vol.6,no.4,pp.428–440,2004. [14] D.Oberle,R.V olz,B.Motik,and S.Staab,“An extensible ontologysoftware environment,”in Handbook on Ontologies,ser.International Handbooks on Information Systems,S.Staab and R.Studer,Eds.Springer,2004,ch.III,pp.311–333.[15]L.Stojanovic,A.Madche,B.Motik,and N.Stojanovic,“User-drivenontology evolution management,”in European Conf.Knowledge Eng.and Management(EKAW2002).Springer-Verlag,2002,p.285300.[On-line].Available:http://wwwred.fzi.de/wim/publikationen.php?id=804 [16]L.Stojanovic,A.Maedche,N.Stojanovic,and R.Studer,“Ontologyevolution as reconfiguration-design problem solving.”in K-CAP,J.H.Gennari,B.W.Porter,and Y.Gil,Eds.ACM,2003,pp.162–171. [17]L.Stojanovic and B.Motik,“Ontology evolution within ontologyeditors,”in OntoWeb-SIG3Workshop at the13th International Conference on Knowledge Engineering and Knowledge Management EKAW2002;Siguenza(Spain),30th September2002,2002.[Online].Available:http://km.aifb.uni-karlsruhe.de/eon2002/program.html。