
I.J. Intelligent Systems and Applications, 2016, 4, 49-59

Published Online April 2016 in MECS

DOI: 10.5815/ijisa.2016.04.06

Modeling Uncertainty in Ontologies using Rough Set

Armand F. Donfack Kana

Ahmadu Bello University/ Department of Mathematics, Zaria, Nigeria

E-mail: donfackkana@https://www.doczj.com/doc/753145061.html,

Babatunde O. Akinkunmi

University of Ibadan/ Department of Computer Science, Ibadan, Nigeria

E-mail: bo.akinkunmi@https://www.doczj.com/doc/753145061.html,.ng

Abstract—Modeling the uncertain aspects of the world in ontologies is attracting a lot of interest among ontology builders, especially in the World Wide Web community. This paper defines a way of handling uncertainty in description logic ontologies without remodeling existing ontologies or altering the syntax of existing ontology modeling languages. We show that the sources of vagueness in an ontology are vague attributes and vague roles. Therefore, to have a clear separation between crisp concepts and vague concepts, the set of roles R is split into two distinct sets R_c and R_v representing the set of crisp roles and the set of vague roles respectively. Similarly, the set of attributes A is split into two distinct sets A_c and A_v representing the set of crisp attributes and the set of vague attributes respectively. Concepts are therefore clearly classified as crisp concepts or vague concepts depending on whether or not vague attributes or vague roles are used in their conceptualization. The concept of rough set introduced by Pawlak is used to measure the degree of satisfiability of vague concepts as well as vague roles. In this approach, the cost of reengineering existing ontologies in order to cope with reasoning over the uncertain aspects of the world is minimal.

Index Terms—Ontologies, Uncertainty, Rough set, Approximation.

I. INTRODUCTION

Ontologies are explicit representations of conceptualizations and are widely used in modelling real world domains by defining their shared vocabularies such that they can be understood without ambiguity by both humans and machines. Classical ontologies contain only concepts and relations that describe asserted facts about the real world and therefore lack consistent support for uncertainty and imprecision. As a result, they cannot handle incomplete or partial knowledge about an application domain. This is because most languages used in modelling ontologies nowadays are derived from Description Logics (DL), which are crisp logics. In modelling the real world using DL, all the knowledge or partial knowledge of the domain is captured and represented in a crisp manner. According to [1], in doing that, the obtained ontology is not uncertain in nature, but rather presents an a priori model of the world, which has to be taken as true by its users. Such representations ignore one of the main characteristics of the real world, which is vagueness. Vagueness, which could be caused by information incompleteness, randomness, limitations of measuring instruments, etc., is pervasive in many complicated problems in economics, engineering, environment, social science, medical science, etc., that involve data which are not always crisp [2]. However, for an ontology to reflect a real world domain, its uncertain aspects should be modelled as they are, allowing reasoning under uncertainty rather than interpreting its contents in a restrictive manner. As pointed out in [1], such restrictive interpretation leads to the storing of erroneous pieces of information. For example, the answer to the question “Is Aristotle the greatest philosopher the world has ever had?” should depend on perceptions rather than being an absolute true or false decision.
The inability of traditional ontology formalisms, including DL, to support the representation of uncertainty and imprecision limits them in handling incomplete knowledge, partial knowledge or vagueness about an application domain.

Modelling uncertainty in ontologies has become a concern to ontology builders, especially in the WWW community. The need for modelling and reasoning with uncertainty has been found in many different Semantic Web contexts such as matchmaking in Web services, classification of genes in bioinformatics, multimedia annotation and ontology learning [3]. In their effort to handle uncertainty on the web, the W3C founded the Uncertainty Reasoning for the World Wide Web Incubator Group. In its final report, the group presented requirements for better defining the challenge of representing and reasoning with the uncertain information available through the WWW and related WWW technologies [4]. This paper presents an approach to representing and interpreting uncertainty in DL ontologies based on Pawlak's rough set [5]. Rough set is a mathematical tool for processing information with uncertainty and vagueness and is also a useful soft computing tool in intelligent computing [6]. Rough set has been proposed for a variety of computational applications, especially machine learning, knowledge discovery, data mining, expert systems, approximate reasoning, and pattern recognition [7][8][9][10][11][12]. More specifically, this paper shows that by classifying the set of attributes of an ontology into a set of crisp attributes and a set of vague attributes, and likewise classifying the set of relations of the same ontology into sets of crisp and vague relations, one can easily detect the uncertain aspects of the ontology and approximate them using rough set. The rest of this paper is organized as follows: in the next section, the basic notions of description logics as an ontology language and of rough set are reviewed. Section 3 presents our approach for representing uncertainty in ontologies. In Section 4, the instantiation of vague concepts and vague relations is defined. Section 5 discusses some issues related to the representation of uncertainty in an expressive description logic. Section 6 presents some related work and, finally, Section 7 concludes the paper.

II. PRELIMINARIES

In this section, we present the basic notions of description logics and establish the connection between DL and the Web Ontology Language (OWL), as DL constitutes the formal logic of OWL DL. We also present an overview of rough set as defined by Pawlak.

A. Description Logics

Description logics are a family of knowledge representation formalisms which can be used to represent the terminological knowledge of an application domain in a structured and formally well-understood way. Well understood means that there is no ambiguity in interpreting the meaning of terminologies by both humans and machines. They are characterized by the use of various constructors to build complex concepts from simpler ones, an emphasis on the decidability of key reasoning tasks and by the provision of sound, complete and (empirically) tractable reasoning services[13]. Inference capability of DL makes it possible to use logical deduction to infer additional information from the facts stated explicitly in an ontology[14].

DL are made up of the following components: Instances of objects that denote singular entities in our domain of interest, Concepts which are collections or kinds of things, Attributes which describe the aspects, properties, features, characteristics, or parameters that objects can have and Relations which describe the ways in which concepts and individuals can be related to one another. It is customary to separate them into three groups: Assertional (ABox) axioms, Terminological (TBox) axioms and Relational (RBox) axioms.

• ABox axioms capture knowledge about named individuals. For example, Father(Peter) is a concept assertion which asserts that Peter is an instance of the concept Father. hasChild(Peter, Amina) is a role assertion which asserts that Peter is a parent of Amina.

• TBox axioms describe relationships between concepts, for example the general concept inclusion Mother ⊑ Parent, which defines Mother as subsumed by the concept Parent. Fig. 1, taken from Baader & Werner [15], shows a sample TBox describing family relationships.

Fig.1.TBox with Concepts about Family Relationships

• RBox axioms refer to properties of roles. They capture interdependencies between the roles of the considered knowledge base, for example the role inclusion motherOf ⊑ parentOf.

An ontology is said to be satisfiable if an interpretation exists that satisfies all its axioms. When an ontology is not satisfied in any interpretation, it is said to be unsatisfiable or inconsistent.

An interpretation I = (Δ^I, ·^I) consists of a set Δ^I called the domain of I, and an interpretation function ·^I that maps each atomic concept A to a set A^I ⊆ Δ^I, every role R to a binary relation R^I ⊆ Δ^I × Δ^I, and each individual name a to an element a^I ∈ Δ^I.

There exist several DL, and they are classified based on the types of constructors and axioms that they allow, which are often a subset of the constructors in SROIQ. The description logic ALC is the fragment of SROIQ that allows no RBox axioms and only ⊓, ⊔, ¬, ∃ and ∀ as its concept constructors. The extension of ALC to include transitively closed primitive roles is traditionally denoted by the letter S. This basic DL is extended in several ways. Some other letters used in DL names hint at a particular constructor, such as inverse roles I, nominals O (i.e., concepts having exactly one instance), qualified number restrictions Q, and role hierarchies H. For example, the DL named SHIQ is obtained from S by additionally allowing role hierarchies, inverse roles and qualified number restrictions. The letter R most commonly refers to the presence of role inclusions, local reflexivity Self, and the universal role U, as well as the additional role characteristics of transitivity, symmetry, asymmetry, role disjointness, reflexivity, and irreflexivity [14].

Although DL have a range of applications, OWL is one of the main ones. OWL is based on Description Logics but also features additional types of extra-logical information such as ontology versioning information and annotations [16]. The main building blocks of OWL are indeed very similar to those of DL, with the main difference that concepts are called classes and roles are called properties [14]. Large parts of OWL DL can indeed be considered as a syntactic variant of SROIQ. In many cases, it is enough to translate an operator symbol of SROIQ into the corresponding operator name in OWL.

B. Rough Set

Pawlak [5] defined rough set in the following way: suppose we are given a set of objects U called the universe and an indiscernibility relation R ⊆ U × U representing our lack of knowledge about the elements of U. Assume that R is an equivalence relation. Let X be a subset of U. We want to characterize the set X with respect to R.

• The R-lower approximation of X is defined by

R_*(X) = {x ∈ U : R(x) ⊆ X}   (1)

In other words, the lower approximation of a set X with respect to R is the set of all objects which can be for certain classified as X with respect to R (are certainly X with respect to R).

• The R-upper approximation of X is defined by

R^*(X) = {x ∈ U : R(x) ∩ X ≠ ∅}   (2)

In other words, the upper approximation of a set X with respect to R is the set of all objects which can be possibly classified as X with respect to R (are possibly X in view of R).

• The R-boundary region of X is defined by

RN_R(X) = R^*(X) − R_*(X)   (3)

In other words, the boundary region of a set X with respect to R is the set of all objects, which can be classified neither as X nor as not-X with respect to R.

Fig.2. Rough Set Structure

A set is rough (imprecise) if it has a nonempty boundary region; otherwise the set is crisp (precise). Fig. 2, taken from Pawlak [17], shows the structure of a rough set.
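The three approximations above can be computed directly once the indiscernibility relation R is given as a partition of U. The following Python sketch is purely illustrative (the toy universe and set X below are our own, not from the paper):

```python
# Pawlak's approximations (eqs. 1-3), with the indiscernibility relation
# represented by its partition of the universe U into equivalence classes.

def lower_approximation(partition, X):
    """R-lower approximation: union of blocks fully contained in X (eq. 1)."""
    return {x for block in partition if block <= X for x in block}

def upper_approximation(partition, X):
    """R-upper approximation: union of blocks that intersect X (eq. 2)."""
    return {x for block in partition if block & X for x in block}

def boundary_region(partition, X):
    """R-boundary region: upper minus lower approximation (eq. 3)."""
    return upper_approximation(partition, X) - lower_approximation(partition, X)

# Toy universe partitioned into equivalence classes of R.
partition = [{1, 2}, {3}, {4, 5}]
X = {1, 2, 3, 4}   # the set to approximate

print(lower_approximation(partition, X))  # {1, 2, 3}
print(upper_approximation(partition, X))  # {1, 2, 3, 4, 5}
print(boundary_region(partition, X))      # {4, 5}: nonempty, so X is rough
```

Here X is rough because the block {4, 5} straddles its boundary: it meets X without being contained in it.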

III. MODELING UNCERTAINTY IN ONTOLOGIES

In this section, we provide a formal definition of ontology and then present our approach of modelling uncertainty in ontologies, which is able to handle both the crisp and the vague aspects of the world domain. In this technique, there is a clear separation between the modelling language and the representation of uncertainty itself. Such separation is useful in generalising the representation of uncertainty in ontologies regardless of the modelling language. It is also useful in reengineering existing ontologies to enable them to cope with uncertainty with minimal change and, more importantly, without modifying the core knowledge base of the ontologies.

Definition 1: An ontology system is a 6-tuple O = (D, C, R, A, I, ∑) where D ⊆ ∪c_i, c_i ∈ C is the domain of discourse of the ontology, C = {c_1, c_2, …, c_m} is a non-empty finite set of concepts of domain D, R = {r_1, r_2, r_3, …, r_n} is a finite set of relations, A = {a_1, a_2, a_3, …, a_k} is a finite set of attributes, I = {i_1, i_2, i_3, …, i_p} is a finite set of instances and ∑ is the chosen ontology language.

The real world is made up of crisp elements and imprecise elements. Imprecision here represents our lack of full knowledge of objects; it is intended to cover a variety of forms of incomplete knowledge, including incompleteness, vagueness, ambiguity, and others. Elements in this context refer either to attributes, which describe the properties of objects, or to relations, which link individuals. Both attributes and relations can be either vague or crisp. During conceptualization, the definition of concepts is performed by using attributes, roles and already defined concepts. Since roles and attributes are either crisp or imprecise, the obtained concepts will also be crisp or imprecise depending on the types of attributes and relations used in their conceptualization.

To express the degree of knowledge of the domain, we introduce the concept of a Properties Box (PBox) as part of the ontology knowledge base and also extend the RBox to contain more information about roles instead of performing only role declarations. The PBox and the extended RBox express necessary knowledge about the properties and the relationships between individuals of the domain of discourse respectively. To achieve that, the PBox classifies the attributes into crisp attributes and vague attributes such that the set of attributes A = A_c ∪ A_v, where A_c and A_v represent the set of crisp and the set of vague attributes respectively. An attribute a_i ∈ A_c ⟺ a_i is a crisp attribute and a_i ∈ A_v ⟺ a_i is a vague attribute. Similarly, the role classification of the RBox classifies the relations into crisp relations and vague relations such that the set of roles R = R_c ∪ R_v, where R_c and R_v represent the set of crisp and the set of vague relations respectively. A relation r_i ∈ R_c ⟺ r_i is a crisp relation and r_i ∈ R_v ⟺ r_i is a vague relation. Fig. 3 presents our modelling architecture.

An ontology is said to be of a crisp domain if R_v = ∅ and A_v = ∅; otherwise, it is an ontology of a vague domain. Similarly, a concept is crisp if all roles and attributes used in its definition are crisp. Vague elements are therefore interpreted using approximation techniques while taking into consideration the degree of available knowledge about the elements. It is worth noting that crisp elements can still be interpreted in any classical way.

Example 1: Assume a DL representation of the following statement, “A happy father has a child and all beings he cares for are healthy”, as follows:

HappyFather ≡ Man ⊓ ∃hasChild.Person ⊓ ∀careFor.Healthy

The definition of the concept HappyFather contains a crisp role hasChild and a crisp concept Man. However, the role careFor and the attribute Healthy are vague, since one cannot quantify them with a true or false membership, especially when it comes to the boundary region. They can be classified as follows: Healthy ∈ A_v, hasChild ∈ R_c, and careFor ∈ R_v. This makes the concept HappyFather vague.
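The classification test of Example 1 is mechanical once the PBox/RBox split is available. A minimal illustrative sketch (the plain-set representation and the function is_vague are ours, not part of any ontology API):

```python
# A concept is vague as soon as any vague attribute (A_v) or vague role (R_v)
# appears in its (expanded) definition; otherwise it is crisp.

A_v = {"Healthy"}                      # vague attributes from Example 1
R_c, R_v = {"hasChild"}, {"careFor"}   # crisp and vague roles from Example 1

def is_vague(attributes_used, roles_used):
    """True iff the definition touches a vague attribute or a vague role."""
    return bool(attributes_used & A_v or roles_used & R_v)

# HappyFather uses Healthy (vague attribute) and careFor (vague role),
# so it is classified as a vague concept.
print(is_vague({"Healthy"}, {"hasChild", "careFor"}))  # True
```

A concept using only members of A_c and R_c would return False and be interpreted classically.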

Fig.3. Ontologies of Uncertainty Modelling

IV. INTERPRETING AN ONTOLOGY OF UNCERTAINTY

Given an ontology, an interpretation function ·^I maps a concept A ∈ C to a set A^I ⊆ Δ^I. In classical ontology interpretation, the result of such a mapping returns the set of individuals satisfying the concept A. The concept is said to be satisfiable if there exists an interpretation for which the result does not return an empty set, that is, A^I ≠ ∅; otherwise, the concept is not satisfiable. This interpretation assumes all concepts to be crisp. Consequently, if concept A is not satisfiable, then ¬A is satisfiable, since ¬A^I = Δ^I − A^I.

A. Vague Concept Approximation

In this section, we consider the problem of approximating vague concepts over the set of individuals. Because of the vagueness in their conceptualization, such concepts are perceived only through some subsets of I. We use the rough membership to approximate them. As shown in [18], a concept can be represented in attribute-value form. We therefore construct a decision table based on concepts’ attributes to approximate their rough membership. Like any information system, an ontology can be represented in tabular form, as shown in Table 1. Table columns are labelled with concepts, table rows are labelled with instances of interest and the entries of the table are concept values. The function f: I×C→V is such that f(i,c) ∈ V_c for every (i,c) ∈ I×C and represents the information knowledge of i with respect to c, where V_c represents the set of values an instance i may have with respect to c.

Table 1. Tabular Representation of Ontology Knowledge

Table 2. Tabular Representation of a Concept Knowledge Based on Its Attributes

Furthermore, since it was proved in [18] that a concept can be defined by using attributes and relations with no regard to previously defined concepts, we can define an attribute table for a specific vague concept, say c, based on the set of attributes used in its definition, as shown in Table 2. Let B be a subset of the set of attributes A such that the elements of B are the attributes used in the expanded definition of c. Let f: I×A→V be a function defined such that f(i,a) ∈ V_a, for every (i,a) ∈ I×A, represents the information knowledge of i with respect to a.

From table 2, we can now determine the rough approximation of a concept. The starting point of rough approximation is the indiscernibility relation, which is generated by information about objects of interest. Elements that exhibit the same information are indiscernible (similar) and form blocks that can be understood as elementary granules of knowledge about the universe. The indiscernibility relation expresses the fact that due to the lack of knowledge we are unable to discern some objects employing the available information. Therefore, we are unable to deal with a single object. Nevertheless, we have to consider clusters of indiscernible objects.

Given an ontology O, two individuals x, y ∈ I are said to be c_i-indiscernible (indiscernible by the concept c_i ∈ C) if and only if x, y ∈ c_i^I. Since each concept can be uniquely defined from the union or intersection of the most general concept, the set of attributes, the set of relations and the constructors, we can define the notion of indiscernible objects as follows:

Definition 2: Let O = (D, C, R, A, I, ∑) be an ontology, let c ∈ C be any concept of O and let B be the subset of A such that the elements of B are the attributes of the expanded definition of c. Two individuals x, y ∈ I are said to be c-indiscernible (indiscernible by the concept c) if and only if f(x, a) = f(y, a) for every a ∈ B, where f(x, a) denotes the value of the interpretation of x with respect to a.

Obviously, every concept c ∈ C induces a unique indiscernibility relation denoted by IND(c), which is an equivalence relation. The partition of I induced by IND(c) in ontology O is denoted by I/c, and the equivalence class of the partition containing i ∈ I is denoted by [i]_c. Since a vague concept c cannot be characterised using the available knowledge, two crisp sets are associated with c, called its lower and upper approximations.

- The c-lower approximation of X, denoted by c_*(X), where X is a given subset of I, is defined as

c_*(X) = {i ∈ I such that [i]_c ⊆ X}.   (4)

In other words, the c-lower approximation of concept c is the set of all individuals i ∈ I which can be for certain classified as instances of c^I.

- The c-upper approximation of X, denoted by c^*(X), where X is a given subset of I, is defined as

c^*(X) = {i ∈ I such that [i]_c ∩ X ≠ ∅}.   (5)

In other words, the c-upper approximation of concept c is the set of all individuals i ∈ I which can be possibly classified as instances of c^I.

- The c-boundary region of X is the set of all individuals i ∈ I which can be classified neither as instances of c^I nor as instances of ¬c^I by employing the available knowledge.

The accuracy of approximation of X with respect to c is defined as

α_c(X) = |c_*(X)| / |c^*(X)|   (6)

where |X| denotes the cardinality of X. Whenever the accuracy of approximation α_c(X) = 1, the concept is crisp.

Example 2: Assuming the following concept definition:

HealthyPerson ≡ Person ⊓ MentallyStable ⊓ EmotionallyStable ⊓ MedicallySound

where Person in this context is assumed to be the most general concept and MentallyStable, EmotionallyStable and MedicallySound are vague properties.

Let the set of individuals I = {A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y} and the set of attributes A = {MentallyStable, EmotionallyStable, MedicallySound}.

Let V_x denote the set of attribute values for attribute x, defined as follows:

V_MentallyStable = {High, Low, Mild}

V_EmotionallyStable = {High, Low, Mild}

V_MedicallySound = {High, Low, Mild}

V_HealthyPerson = {High, Low}

Table 3. Decision Table of HealthyPerson

Assume that, based on the available knowledge, which does not provide enough information to classify boundary-region individuals, the decision table of Table 3 is constructed.

Since the concept Person is crisp and at the same time is the most general concept, all individuals i ∈ I must be instances of Person. Therefore, in the approximation of HealthyPerson over I, there is no need to bother about the satisfiability of Person.

Let f(HealthyPerson: High) = {H, M, N, S} be the approximation of HealthyPerson when its corresponding attribute value is “High” over the set of individuals, denoted in the remainder of this example as f for short.

The set of equivalence classes can be defined based on the possible properties’ values as follows:

[MentallyStable: Low; EmotionallyStable: Low; MedicallySound: Low] = {A, B, F}

[MentallyStable: Low; EmotionallyStable: Mild; MedicallySound: Low] = {C, D, E}

[MentallyStable: Mild; EmotionallyStable: Mild; MedicallySound: Low] = {G}

[MentallyStable: Mild; EmotionallyStable: High; MedicallySound: Mild] = {H, I}

[MentallyStable: Low; EmotionallyStable: High; MedicallySound: Mild] = {J}

[MentallyStable: High; EmotionallyStable: Low; MedicallySound: Low] = {K, Q}

[MentallyStable: High; EmotionallyStable: Mild; MedicallySound: Low] = {L}

[MentallyStable: Mild; EmotionallyStable: Mild; MedicallySound: Mild] = {M}

[MentallyStable: Low; EmotionallyStable: High; MedicallySound: High] = {O}

[MentallyStable: Mild; EmotionallyStable: High; MedicallySound: High] = {N}

[MentallyStable: Mild; EmotionallyStable: Low; MedicallySound: Low] = {U, P, V}

[MentallyStable: High; EmotionallyStable: Low; MedicallySound: Mild] = {R}

[MentallyStable: High; EmotionallyStable: Low; MedicallySound: High] = {S}

[MentallyStable: Low; EmotionallyStable: Low; MedicallySound: High] = {T}

[MentallyStable: Mild; EmotionallyStable: Low; MedicallySound: Mild] = {W, X}

[MentallyStable: Low; EmotionallyStable: Low; MedicallySound: Mild] = {Y}

Consequently, the set of partitions of I with respect to HealthyPerson is defined as follows:

I/HealthyPerson ={{A, B, F}, {C,D,E}, {G}, {H,I}, {J}, {K,Q}, {L}, {M}, {O}, {N}, {U,P,V}, {R}, {S}, {T}, {W,X}, {Y}}
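The partition and the resulting approximations of f can be reproduced mechanically from the decision table. The following Python sketch is illustrative only, transcribing the attribute values listed above:

```python
# Example 2 reproduced: group individuals with identical attribute values
# (indiscernibility), then approximate f = {H, M, N, S}.
from collections import defaultdict
from fractions import Fraction

# (MentallyStable, EmotionallyStable, MedicallySound) per equivalence class,
# transcribed from the classes listed above; members given as strings.
classes = {
    ("Low", "Low", "Low"): "ABF",    ("Low", "Mild", "Low"): "CDE",
    ("Mild", "Mild", "Low"): "G",    ("Mild", "High", "Mild"): "HI",
    ("Low", "High", "Mild"): "J",    ("High", "Low", "Low"): "KQ",
    ("High", "Mild", "Low"): "L",    ("Mild", "Mild", "Mild"): "M",
    ("Low", "High", "High"): "O",    ("Mild", "High", "High"): "N",
    ("Mild", "Low", "Low"): "UPV",   ("High", "Low", "Mild"): "R",
    ("High", "Low", "High"): "S",    ("Low", "Low", "High"): "T",
    ("Mild", "Low", "Mild"): "WX",   ("Low", "Low", "Mild"): "Y",
}
table = {i: values for values, members in classes.items() for i in members}

# Indiscernibility: individuals with identical rows fall into one block.
blocks = defaultdict(set)
for i, values in table.items():
    blocks[values].add(i)
partition = list(blocks.values())   # this is I/HealthyPerson

f = {"H", "M", "N", "S"}
lower = {i for b in partition if b <= f for i in b}   # eq. (4)
upper = {i for b in partition if b & f for i in b}    # eq. (5)

print(sorted(lower))                     # ['M', 'N', 'S']
print(sorted(upper))                     # ['H', 'I', 'M', 'N', 'S']
print(Fraction(len(lower), len(upper)))  # 3/5, i.e. accuracy 0.6 (eq. 6)
```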

Finally,

HealthyPerson_*(f) = {{M}, {N}, {S}}

HealthyPerson^*(f) = {{H, I}, {M}, {N}, {S}}

Because the lower approximation is not empty, one can conclude that HealthyPerson is absolutely satisfiable. The accuracy of approximation of f with respect to the concept HealthyPerson is

α_HealthyPerson(f) = 3/5 = 0.6   (7)

B. Instantiating Uncertain Individuals

Uncertain individuals are approximated as absolute or rough instances of a concept, depending on whether the individual belongs to the lower approximation or to the boundary region respectively. By introducing the notion of absolute and rough instances, one clearly shows the degree of satisfiability of vague concepts or properties. Given a concept c ∈ C and an individual i ∈ I, the degree of membership of i with respect to a rough interpretation of c, denoted by μ_X^c(i), is a value in the range [0,1] defined as

μ_X^c(i) = |X ∩ [i]_c| / |[i]_c|   (8)

where X is a given subset of I, [i]_c is the equivalence class of the partition I/c containing i and |X| denotes the cardinality of X.

For example, by using the partition of equivalence classes I/HealthyPerson obtained in Example 2, we can compute the degree of certainty of the individuals f = {H, M, N, S} classified as instances of HealthyPerson as follows:

μ_{H,M,N,S}^HealthyPerson(H) = |{H, M, N, S} ∩ {H, I}| / |{H, I}| = 1/2   (9)

μ_{H,M,N,S}^HealthyPerson(M) = |{H, M, N, S} ∩ {M}| / |{M}| = 1   (10)

μ_{H,M,N,S}^HealthyPerson(N) = |{H, M, N, S} ∩ {N}| / |{N}| = 1   (11)

μ_{H,M,N,S}^HealthyPerson(S) = |{H, M, N, S} ∩ {S}| / |{S}| = 1   (12)

Consequently, H is a rough instance of HealthyPerson while M, N and S are absolute instances of HealthyPerson.
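Equation (8) reduces each of these memberships to a ratio of set cardinalities. A small illustrative sketch, with the equivalence classes read off I/HealthyPerson from Example 2 (the function name membership is ours):

```python
# Rough membership (eq. 8) of the individuals of f with respect to
# HealthyPerson, using the equivalence classes [i]_c from Example 2.
from fractions import Fraction

f = {"H", "M", "N", "S"}
# [i]_c for the individuals of interest, read off I/HealthyPerson.
eq_class = {"H": {"H", "I"}, "M": {"M"}, "N": {"N"}, "S": {"S"}}

def membership(i, X):
    """mu_X^c(i) = |X ∩ [i]_c| / |[i]_c| as an exact fraction."""
    block = eq_class[i]
    return Fraction(len(X & block), len(block))

for i in "HMNS":
    print(i, membership(i, f))
# H 1/2  -> rough instance of HealthyPerson
# M 1, N 1, S 1 -> absolute instances
```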

C. Vague Relations Interpretation

In this section, we consider the problem of approximating vague relations between sets of individuals. The conception of rough relations as defined by Pawlak [19] cannot be applied directly in the case of ontologies, because relations in rough set are assumed to be equivalence relations, which is not always the case in ontologies. It is known that for a relation R to be an equivalence relation, it must be reflexive, symmetric and transitive. For example, consider a relation motherOf(x, y) which specifies that x is the mother of y. It is obvious that reflexivity, e.g. motherOf(Peter, Peter), symmetry, e.g. motherOf(Mary, Peter) → motherOf(Peter, Mary), and transitivity, e.g. motherOf(Amina, Mary) and motherOf(Mary, Peter) → motherOf(Amina, Peter), will not hold in this context.

Because of this limitation, we extend the general binary relation based rough set [6][20][21] to derive an approximation of ontological rough relations. The general binary relation based rough set is defined based on general binary relation.

Definition 3: Let U be the universe and let R ⊆ U×U be a general binary relation on the universe. For any x ∈ U, denote R_s(x) = {(x, y) ∈ U×U | xRy}; then R_s(x), which consists of the pairs (x, y) such that xRy, is called the successor neighbourhood of x with respect to R. For any y ∈ U, denote R_p(y) = {(x, y) ∈ U×U | xRy}; then R_p(y), which consists of the pairs (x, y) such that xRy, is called the predecessor neighbourhood of y with respect to R.

In other words, given x ∈ U, the successor neighbourhood R_s(x) consists of all pairs (x, y) which satisfy xRy and, given y ∈ U, the predecessor neighbourhood R_p(y) is the set of all pairs (x, y) which satisfy xRy. They can both be treated as classes induced by the general binary relation R. We can now define an approximation of rough ontological relations based on rough set theory.

Definition 4: Let O be an ontology, let P and Q be two approximations over I and let r ∈ R be any vague relation of R. For any X ⊆ I×I, the X-lower approximation and the X-upper approximation of r with respect to P×Q are defined respectively as follows:

X_*(r) = {(x, y) ∈ P×Q | r_s(x) ⊆ X}   (13)

X^*(r) = {(x, y) ∈ P×Q | r_s(x) ∩ X ≠ ∅}   (14)

The accuracy of approximation of X with respect to r is defined as

α_r(X) = |X_*(r)| / |X^*(r)|   (15)

Example 3: Given I = {A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y}, assume two arbitrary approximations P = {A, B, C, E} and Q = {B, C, D, E, F} over I. Let X = {(A,B), (A,D), (C,D), (E,D), (E,F)} and let r ⊆ P×Q = {(A,B), (A,D), (B,C), (B,E), (C,D), (C,E), (E,D), (E,F)}. Then the partition defined by the successor neighbourhoods with respect to r is the following:

r s(A)={(A,B),(A,D)}

r s(B)={(B,C),(B,E)}

r s(C)={(C,D),(C,E)}

r s(E)={(E,D),(E,F)}

Consequently, the set of partitions of r with respect to r_s is defined as follows:

r/r_s = {{(A,B),(A,D)}, {(B,C),(B,E)}, {(C,D),(C,E)}, {(E,D),(E,F)}}

Therefore,

r_*(X) = {{(A,B),(A,D)}, {(E,D),(E,F)}}

r^*(X) = {{(A,B),(A,D)}, {(C,D),(C,E)}, {(E,D),(E,F)}}
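The successor neighbourhoods and the approximations (13)–(14) of Example 3 can be recomputed as follows; this Python sketch is illustrative only:

```python
# Example 3 reproduced: successor neighbourhoods of r, then the lower and
# upper approximations of X (eqs. 13-14) and the accuracy (eq. 15).
from collections import defaultdict
from fractions import Fraction

r = {("A", "B"), ("A", "D"), ("B", "C"), ("B", "E"),
     ("C", "D"), ("C", "E"), ("E", "D"), ("E", "F")}
X = {("A", "B"), ("A", "D"), ("C", "D"), ("E", "D"), ("E", "F")}

# r_s(x): all pairs of r that start at x.
r_s = defaultdict(set)
for (x, y) in r:
    r_s[x].add((x, y))

lower = {p for block in r_s.values() if block <= X for p in block}
upper = {p for block in r_s.values() if block & X for p in block}

print(sorted(lower))   # r_s(A) and r_s(E): 4 pairs
print(sorted(upper))   # additionally r_s(C): 6 pairs
print(Fraction(len(lower), len(upper)))  # 2/3, i.e. 4/6 ≈ 0.67
```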

The accuracy of approximation of X with respect to r is

α_r(X) = 4/6 = 0.67   (16)

D. Vague Relation Membership

A pair (x, y) can be approximated as an absolute or rough member of a relation r, depending on whether (x, y) belongs to the lower approximation or to the boundary region of r respectively. The degree of membership of (x, y) with respect to r, denoted by μ_X^{r_s(x)}(x, y), is defined as

μ_X^{r_s(x)}(x, y) = |X ∩ r_s(x)| / |r_s(x)|   (17)

where X is a given subset of I×I, |X| denotes the cardinality of X and r_s(x) is the partition block defined by the r-successors of x.

From Example 3, the following membership values can be computed:

μ_X^{r_s(A)}(A, B) = |{(A,B),(A,D),(C,D),(E,D),(E,F)} ∩ {(A,B),(A,D)}| / |{(A,B),(A,D)}| = 1   (18)

μ_X^{r_s(C)}(C, D) = |{(A,B),(A,D),(C,D),(E,D),(E,F)} ∩ {(C,D),(C,E)}| / |{(C,D),(C,E)}| = 1/2   (19)

E. Vague Relations and Vague Concepts

The approximation of a vague relation r as defined in Section IV.C assumes the approximations P and Q over I to be crisp. If P and Q are approximations of vague concepts, we would instead have a P-lower and P-upper approximation for P, and a Q-lower and Q-upper approximation for Q. In that case, for every pair (x, y) ∈ P×Q, four scenarios are possible:

Case 1: x ∈ BN(P) and y ∈ BN(Q)

Case 2: x ∈ BN(P) and y ∈ Q_*

Case 3: x ∈ P_* and y ∈ BN(Q)

Case 4: x ∈ P_* and y ∈ Q_*

In the first three cases, one cannot talk of a lower approximation of r since, for each of them, either x ∈ BN(P) or y ∈ BN(Q); that is, the instantiation of at least one of the individuals of the pair (x, y) is not certain. Because of that, it is not possible to establish a certain relation between x and y, that is a pair of the lower approximation of r, when the instantiation of the individuals x and y with respect to P and Q respectively is not certain.

In Case 4, since x ∈ P_* and y ∈ Q_*, we are certain of the instantiation of the individuals x and y with respect to P and Q respectively. Therefore, a lower approximation of r can be defined. The lower and the upper approximation of r can now be reformulated as follows:

Definition 5: Let O be an ontology, let P and Q be two approximations over I and let r ∈ R be any vague relation of R. For any X ⊆ I×I, the X-lower approximation and the X-upper approximation of r are defined respectively as follows:

X_*(r) = {(x, y) ∈ A_*(P)×B_*(Q) | r_s(x) ⊆ X}   (20)

X^*(r) = {(x, y) ∈ A^*(P)×B^*(Q) | r_s(x) ∩ X ≠ ∅}   (21)

where A_*(P) and A^*(P) denote the A-lower and A-upper approximation of P for some A ⊆ I such that dom(X) ⊆ A, and B_*(Q) and B^*(Q) denote the B-lower and B-upper approximation of Q for some B ⊆ I such that ran(X) ⊆ B. The rough membership of a pair (x, y) should now take into consideration the degree to which x belongs to P given A, that is μ_A^P(x), the degree to which y belongs to Q given B, that is μ_B^Q(y), and the degree to which (x, y) belongs to X given r, denoted by μ_X^{r_s(x)}(x, y). Thus, the degree of membership of (x, y), denoted by μ_X(x, y), is defined as

μ_X(x, y) = μ_A^P(x) × μ_X^{r_s(x)}(x, y) × μ_B^Q(y)   (22)

Example 4: Assume the following definition: HappyPerson ≡ HealthyPerson ⊓ ∃CareFor.Healthy and the set of individuals I = {A, B, C, D, E, F, G, H, I, J, K, L, M, N, O, P, Q, R, S, T, U, V, W, X, Y}, where HealthyPerson is the vague concept approximated as shown in Example 2, that is, HealthyPerson_*(f) = {{M}, {N}, {S}} and HealthyPerson^*(f) = {{H, I}, {M}, {N}, {S}}, where f(HealthyPerson: High) = {H, M, N, S}.

Suppose that, in a similar way, the vague concept Healthy is approximated to Healthy_*(h) = {{B}, {D, E}} and Healthy^*(h) = {{B}, {D, E}, {A, F}}, where h(Healthy) = {B, D, E, F}.

The approximation of the concept HappyPerson is resumed to the approximation of the relation CareFor over f×? such that

(x,y)∈CareFor ?x∈Healt?yPerson ∧

y∈Healt?y

Let g? f×?={(H,A),(H,B),(I,D),(I,E),(M,E),(N,F)} and let CareFor={(H,A),(I,D),(I,E),(M,E)} then, the partitions defined by the successors neighbourhood with respect to g are the following:

g_s(H) = {(H,A),(H,B)}

g_s(I) = {(I,D),(I,E)}

g_s(M) = {(M,E)}

g_s(N) = {(N,F)}

Consequently, the set of partitions of g with respect to g_s is defined as follows:

g/g_s = {{(H,A),(H,B)}, {(I,D),(I,E)}, {(M,E)}, {(N,F)}}

Therefore,

ḡ(CareFor) = {{(H,A),(H,B)}, {(I,D),(I,E)}, {(M,E)}}

g̲(CareFor) = {{(M,E)}}
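The partitioning by successor neighbourhoods and the two approximations of CareFor above can be reproduced with a short script (a sketch in our own encoding; the restriction of the pairs to the lower/upper approximations of HealthyPerson and Healthy follows Definition 5):

```python
from collections import defaultdict

# Sketch (our own encoding): approximate the vague relation CareFor over g,
# restricted to the approximations of HealthyPerson and Healthy (Definition 5).
g = {("H","A"), ("H","B"), ("I","D"), ("I","E"), ("M","E"), ("N","F")}
care_for = {("H","A"), ("I","D"), ("I","E"), ("M","E")}

# Successor neighbourhoods g_s(x): group the pairs of g by their first element.
g_s = defaultdict(set)
for x, y in g:
    g_s[x].add((x, y))
blocks = list(g_s.values())   # the partition g/g_s

# Lower/upper approximations of the two concepts, taken from Example 4.
hp_lo, hp_up = {"M","N","S"}, {"H","I","M","N","S"}   # HealthyPerson
he_lo, he_up = {"B","D","E"}, {"A","B","D","E","F"}   # Healthy

lower = [b for b in blocks
         if b <= care_for and all(x in hp_lo and y in he_lo for x, y in b)]
upper = [b for b in blocks
         if b & care_for and all(x in hp_up and y in he_up for x, y in b)]
# lower -> [{("M","E")}]; upper -> the three blocks listed in the text
```

Note how {(I,D),(I,E)} is fully contained in CareFor yet drops out of the lower approximation, because I is only in the upper approximation of HealthyPerson.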

The degree of membership is

μ_g((H,A)) = μ^{H,M,N,S}_HealthyPerson(H) × μ^CareFor_{g_s(H)}((H,A)) × μ^{B,D,E,F}_Healthy(A)    (23)

where

μ^{H,M,N,S}_HealthyPerson(H) = |{H,M,N,S} ∩ {H,I}| / |{H,I}| = 1/2    (24)

μ^{B,D,E,F}_Healthy(A) = |{B,D,E,F} ∩ {A,F}| / |{A,F}| = 1/2    (25)

μ^CareFor_{g_s(H)}((H,A)) = |{(H,A),(I,D),(I,E),(M,E)} ∩ {(H,A),(H,B)}| / |{(H,A),(H,B)}| = 1/2    (26)

Thus,

μ_g((H,A)) = 1/2 × 1/2 × 1/2 = 1/8    (27)

Finally, the approximation of HappyPerson, that is f(HappyPerson) = dom(g(CareFor)), is

HappyPerson̲(f(HappyPerson)) = {M}

HappyPerson̄(f(HappyPerson)) = {H,I,M}

The degree of membership μ_HappyPerson(H) = 1/8. The degrees of membership of the other individuals can be computed similarly.
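The arithmetic of equations (23)-(27) can be checked mechanically; the sketch below (our own, with illustrative names) recomputes the three rough membership factors and their product:

```python
# Recompute the degree of membership of (H, A) from Example 4 (a sketch).

def mu(target, block):
    """Pawlak rough membership: |target ∩ block| / |block|."""
    return len(target & block) / len(block)

f_high    = {"H", "M", "N", "S"}                          # f(HealthyPerson: High)
h_healthy = {"B", "D", "E", "F"}                          # h(Healthy)
care_for  = {("H","A"), ("I","D"), ("I","E"), ("M","E")}  # the relation CareFor
g_s_H     = {("H","A"), ("H","B")}                        # successor neighbourhood of H

m1 = mu(f_high, {"H", "I"})      # eq. (24): 1/2
m2 = mu(care_for, g_s_H)         # eq. (26): 1/2
m3 = mu(h_healthy, {"A", "F"})   # eq. (25): 1/2

mu_HA = m1 * m2 * m3             # eq. (27): 1/8
print(mu_HA)                     # 0.125
```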

V. DISCUSSION

The key features of DL ontologies are the richness of the available constructors and their inference capability. As pointed out in [22], it would be a severe under-usage of DL knowledge bases to deal with rough ontologies by merely copying the essentials of Pawlak's information systems, without taking into consideration the richness of the DL constructors as well as the flexibility with which ontology elements can be represented. For example, consider the definition of a vague concept HappyFather as follows: HappyFather ≡ Man ⊓ ∃hasChild.Person ⊓ ∀careFor.Healthy, where careFor and Healthy are a vague role and a vague attribute respectively. Because of the use of the constructors ∃ and ∀, this definition cannot be interpreted in the standard Pawlak conception of an information system. However, for the universal and existential quantifications (∀, ∃) as well as the number restrictions (≥, ≤), a rough interpretation can easily be defined. Let r be a vague relation, C a given concept and h ⊆ I×I. Then:

- If ∀(i,j) ∈ h̲(r_s(i)), j ∈ C̲(C^I), then ∀r.C is satisfiable.

- If ∀(i,j) ∈ h̄(r_s(i)), j ∈ C̄(C^I), then ∀r.C is roughly satisfiable.

- If ∃(i,j) ∈ h̲(r_s(i)) such that j ∈ C̲(C^I), then ∃r.C is satisfiable.

- If ∃(i,j) ∈ h̄(r_s(i)) such that j ∈ C̄(C^I), then ∃r.C is roughly satisfiable.

- If |h̲(r_s(i))| ≥ n, where |h| denotes the cardinality of h, then ≥n r.C is satisfiable.

- If |h̄(r_s(i))| ≥ n and |h̲(r_s(i))| ≤ n, then both ≥n r.C and ≤n r.C are roughly satisfiable.

- If |h̄(r_s(i))| ≤ n, then ≤n r.C is satisfiable.
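These conditions translate directly into set-level checks. The following sketch (ours; a simplified encoding in which the lower and upper approximations are given explicitly as sets of pairs) illustrates the certain and rough satisfiability tests for ∃r.C, ∀r.C and the number restrictions:

```python
# Sketch of the rough satisfiability checks (simplified, illustrative encoding).
# succ_lo / succ_up: lower/upper approximation of the successor pairs of i;
# C_lo / C_up: lower/upper approximation of the concept C.

def exists_sat(succ_lo, C_lo):
    """∃r.C is satisfiable: some certain successor is certainly in C."""
    return any(j in C_lo for _, j in succ_lo)

def exists_rough_sat(succ_up, C_up):
    """∃r.C is roughly satisfiable: some possible successor is possibly in C."""
    return any(j in C_up for _, j in succ_up)

def forall_sat(succ_lo, C_lo):
    """∀r.C is satisfiable: every certain successor is certainly in C."""
    return all(j in C_lo for _, j in succ_lo)

def at_least_sat(succ_lo, n):
    """(≥ n r.C) is satisfiable: at least n certain successors."""
    return len(succ_lo) >= n

def at_most_sat(succ_up, n):
    """(≤ n r.C) is satisfiable: at most n possible successors."""
    return len(succ_up) <= n

# Tiny example
succ_lo = {("i", "a"), ("i", "b")}
succ_up = succ_lo | {("i", "c")}
print(exists_sat(succ_lo, {"a"}))   # True
print(forall_sat(succ_lo, {"a"}))   # False ("b" is not certainly in C)
print(at_least_sat(succ_lo, 2), at_most_sat(succ_up, 3))   # True True
```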

A general solution to these limitations of rough ontologies based on Pawlak's rough sets is to define a rough interpretation of all the constructors of an expressive DL, together with a rough tableau reasoning procedure.

VI. RELATED WORK

Several techniques for handling uncertainty in ontologies have been proposed; a good survey of early work in that direction is presented in [3]. Although some techniques based on rough sets have been proposed, most are based on probability theory and fuzzy logic. They differ mostly in the ontological language considered, the supported forms of fuzzy knowledge, and the underlying fuzzy reasoning formalism.

[22] augments SROIQ(D) and its application infrastructure with rough sets. The author shows that rough concepts can be dealt with in the mapping layer that links the concepts in the ontology to queries over the data source. He experimented with his approach on rough concepts and vague instances using the HGT ontology with the HGT-DB database and septic patients.

[23] is the same as [22] except that the language considered is OWL 2.

A probabilistic generalization of RDF, which allows for representing terminological probabilistic knowledge about classes and assertional probabilistic knowledge about properties of individuals, was presented in [24]. The authors provide a technique for assertional probabilistic inference in acyclic probabilistic RDF theories, which is based on the notion of logical entailment in probabilistic logic coupled with local probabilistic semantics. [25] suggested an extension of OWL called PR-OWL that provides a consistent framework for building probabilistic ontologies. The probabilistic semantics of PR-OWL is based on multi-entity Bayesian networks. PR-OWL combines the expressive power of OWL with the flexibility and inferential power of Bayesian logic. BayesOWL [26] is a probabilistic generalization of OWL based on standard Bayesian networks. BayesOWL is used to quantify the degree of overlap or inclusion between two concepts. It provides a set of rules and procedures for the direct translation of an OWL ontology into a Bayesian network, and it also provides a method for incorporating available probability constraints when constructing the Bayesian network. The generated Bayesian network preserves the semantics of the original ontology and is consistent with all the given probability constraints.

OntoBayes [27] is an ontology-driven model which integrates Bayesian networks (BN) into OWL to preserve the advantages of both. It defines additional OWL classes which can be used to mark up probabilities and dependencies in OWL files. From the perspective of ontologies, the OntoBayes model preserves the ability to express meaningful knowledge in very large, complex domains and extends ontologies to probability-annotated OWL to facilitate meaningful knowledge representation in uncertain systems. From the perspective of probabilistic modelling, the model takes advantage of the powerful knowledge representation ability of ontologies to scale up the ability to perform uncertain reasoning.

Pronto [28][29] is a non-monotonic probabilistic reasoner for expressive DLs, developed in order to perform probabilistic reasoning on the semantic web. Pronto reasons about uncertainty in OWL ontologies by establishing probabilistic relationships between OWL classes and between an OWL class and an individual. One of the principal requirements for Pronto was that uncertainty could be introduced into OWL ontologies while existing OWL reasoning services are retained. To meet that requirement, Pronto is designed on top of an OWL reasoner, providing routines for higher-level probabilistic reasoning procedures.

A fuzzy extension of the expressive classical DL SHIN, which improves the knowledge expressiveness of ALC by allowing constructors including number restrictions, inverse roles, transitive roles, and role hierarchies, was presented in [30]. The authors provide a tableaux calculus for fuzzy SHIN without fuzzy general concept inclusions. Its semantics is based on Zadeh logic.

FuzzyDL [31] is an expressive DL reasoner supporting fuzzy logic reasoning. Its reasoning algorithm uses a combination of a tableaux algorithm and mixed-integer linear programming (MILP). FuzzyDL extends the classical DL SHIF(D) to the fuzzy case, but in addition to the constructs of SHIF(D) it allows some new concept constructs such as weighted concepts (n C), weighted sum concepts (n1 C1 + ··· + nk Ck) and threshold concepts (C[≥ n] and C[≤ n]), where C is a concept and n the weight. The degrees of the fuzzy axioms may be not only numerical constants but also variables, thus making it possible to deal with unknown degrees of truth. The most interesting feature of fuzzyDL is the expressivity of its representation language.

An algorithm to perform query-specific reasoning for inconsistent and uncertain ontologies without revising the original ontologies was proposed in [32]. Their reasoning method adopts a selection function which is capable of selecting the elements related to a specific query. Thus, the elements necessary for the reasoning can be obtained quickly and the query results can be returned efficiently. Apart from the true or false answer, the results of a query obtained by their method also include the specific reasoning route (path) and the certainty degree of each answer. In this way, the user obtains more useful information to facilitate the selection of the most credible query result according to the certainty degree of each result.

An extension of description logics using possibilistic logic to reason with inconsistent and uncertain knowledge was proposed in [33]. The authors define the semantics and syntax of possibilistic description logics, as well as two inference services, named possibilistic inference and a variation named linear order inference. The algorithms for possibilistic inference in the proposed logics are independent of the DL reasoner.

An approach to representing uncertainty based on the Dempster-Shafer theory was proposed in [1]. The authors modelled a Dempster-Shafer upper-level ontology that can be merged with any specific domain ontology and instantiated in an uncertain manner. To reason about the uncertain objects contained in the ontology, the authors implemented a Java application based on the Jena framework, a tool used to retrieve the instances associated with each uncertain concept together with the Dempster-Shafer information collected. This information is then transmitted to a common basic Dempster-Shafer library, which performs the calculations of the Dempster-Shafer theory. Finally, decision criteria can be applied to extract the chosen hypothesis, based either on the maximum of the plausibility or belief functions or on probability criteria.

A tableaux algorithm for computing the inconsistency degree of a knowledge base in the possibilistic DL ALCIR+, which extends possibilistic ALC with inverse roles and transitive roles, was proposed in [34]. The authors propose a blocking condition to ensure the termination of their algorithm.

VII. CONCLUSION

This paper has presented an approach to modeling uncertainty in ontologies, motivated by the shortcoming that classical ontologies cannot represent vague or incomplete knowledge of an application domain. Since the sources of vagueness in ontologies are vague attributes and vague relations, concepts are clearly classified as crisp or vague depending on whether vague attributes or vague relations are used in their conceptualization. The degree to which an individual is an instance of a given concept or relation is then evaluated based on the rough membership defined in Pawlak's rough sets. Subsequent work will look into achieving satisfiability reasoning over the complex constructs of an expressive description logic language. This will be based on extending the tableau-based algorithm with a rough interpretation of the various constructors defined in SROIQ, the expressive description logic upon which the core logic of OWL is based.

REFERENCES

[1] A. Bellenger and S. Gatepaille, "Uncertainty in Ontologies: Dempster-Shafer Theory for Data Fusion Applications," Workshop on the Theory of Belief Functions, Brest, France, CoRR abs/1106.3876, 2011.
[2] P. Maji, R. Biswas and A. R. Roy, "Soft set theory," Computers & Mathematics with Applications, vol. 45, pp. 555-562, 2010.
[3] J. Zhao, "Uncertainty and Rule Extensions to Description Logics and Semantic Web Ontologies," Advances in Semantic Computing, vol. 2, pp. 1-22, 2010.
[4] K. J. Laskey, K. B. Laskey, P. C. G. Costa, M. M. Kokar, T. Martin and T. Lukasiewicz, "Uncertainty Reasoning for the World Wide Web," W3C Incubator Group Report, 2008.
[5] Z. Pawlak, "Rough sets," International Journal of Information and Computer Sciences, vol. 11, pp. 341-356, 1982.
[6] W. Xu, X. Zhang, Q. Wang and S. Sun, "On General Binary Relation Based Rough Set," Journal of Information and Computing Science, 7(1), pp. 054-066, 2012.
[7] I. Masahiro and T. Tetsuzo, "Generalized rough sets and rule extraction," in Rough Sets and Current Trends in Computing, Berlin: Springer Berlin/Heidelberg, pp. 105-112, 2002.
[8] Z. Pawlak and A. Skowron, "Rough sets: Some Extensions," Information Sciences, 177(1), pp. 28-40, 2007.
[9] Z. Pawlak and A. Skowron, "Rough sets and Boolean Reasoning," Information Sciences, 177(1), pp. 41-73, 2007.
[10] Z. Pawlak and A. Skowron, "Rudiments of rough sets," Information Sciences, 177(1), pp. 3-27, 2007.
[11] R. Silvia and L.-T. Germano, "Rough Set Theory - Fundamental Concepts, Principals, Data Extraction, and Applications," in Data Mining and Knowledge Discovery in Real Life Applications, Julio Ponce and Adem Karahoca (Eds.), ISBN: 978-3-902613-53-0, InTech, 2009.
[12] S. Thabet, "Application of Rough Set Theory in Data Mining," International Journal of Computer Science & Network Solutions, 1(3), pp. 1-10, 2013.
[13] I. Horrocks, "Reasoning with expressive description logics: Theory and practice," Proceedings of the 19th International Conference on Automated Deduction (CADE 2002), number 2392 in Lecture Notes in Artificial Intelligence, pp. 1-15, 2002.
[14] M. Krötzsch, F. Simančík and I. Horrocks, "A Description Logic Primer," CoRR abs/1201.4089, 2012.
[15] F. Baader and N. Werner, "Basic Description Logics," in The Description Logics Handbook, Cambridge: Cambridge University Press, pp. 43-95, 2003.
[16] S. Rudolph, "Foundations of description logics," in Axel Polleres, Claudia d'Amato, Marcelo Arenas, Siegfried Handschuh, Paula Kroner, Sascha Ossowski, and Peter F. Patel-Schneider, editors, Reasoning Web: Semantic Technologies for the Web, 2011.
[17] Z. Pawlak, "Some Issues on Rough Sets," in Transactions on Rough Sets, James F. Peters et al. (Eds.), Volume 3100 of Lecture Notes in Computer Science, Springer, 2004.
[18] A. F. Donfack Kana and B. O. Akinkunmi, "An Algebra of Ontologies Approximation under Uncertainty," International Journal of Intelligence Science, vol. 4, pp. 54-64, doi: 10.4236/ijis.2014.42007, 2014.
[19] Z. Pawlak, "Rough Sets, Rough Relations and Rough Functions," Fundamenta Informaticae, 27(2/3), pp. 103-108, 1996.
[20] Z. William, "Relationship between generalized rough sets based on binary relation and covering," Information Sciences, vol. 179, pp. 210-225, 2009.
[21] Z. William and F. Wang, "Binary Relation Based Rough Sets," FSKD, LNAI 4223, pp. 276-285, 2006.
[22] C. M. Keet, "On the feasibility of Description Logic knowledge bases," in Proc. 23rd Int. Workshop on Description Logics (DL2010), CEUR-WS 573, Waterloo, Canada, 2010.
[23] C. M. Keet, "Ontology engineering with rough concepts and instances," in 17th International Conference on Knowledge Engineering and Knowledge Management (EKAW'10), P. Cimiano and H. S. Pinto (Eds.), 11-15 October 2010, Lisbon, Portugal, Springer LNAI 6317, 50.
[24] O. Udrea, V. Subrahmanian and Z. Majkic, "Probabilistic RDF," in Proceedings IRI-2006, IEEE Systems, Man, and Cybernetics Society, 2006.
[25] P. C. G. Costa and K. B. Laskey, "PR-OWL: a framework for probabilistic ontologies," Frontiers in Artificial Intelligence and Applications, vol. 150, pp. 237-249, 2006.
[26] Z. Ding, Y. Peng and R. Pan, "BayesOWL: Uncertainty Modelling in Semantic Web Ontologies," Soft Computing in Ontologies and Semantic Web, volume 204 of Studies in Fuzziness and Soft Computing, 2005.
[27] Y. Yang and J. Calmet, "OntoBayes: An ontology-driven uncertainty model," in Computational Intelligence for Modelling, Control and Automation, pp. 457-463, 2005.
[28] P. Klinov, "Pronto: a Non-Monotonic Probabilistic Description Logic Reasoner," in European Semantic Web Conference, 2008.
[29] P. Klinov and B. Parsia, "Pronto: A Practical Probabilistic Description Logic Reasoner," Uncertainty Reasoning for the Semantic Web II, Lecture Notes in Computer Science, vol. 7123, pp. 59-79, 2013.
[30] G. Stoilos, G. Stamou, J. Z. Pan, V. Tzouvaras and I. Horrocks, "Reasoning with Very Expressive Fuzzy Description Logics," Journal of Artificial Intelligence Research, 30(8), pp. 273-320, 2007.
[31] F. Bobillo and U. Straccia, "fuzzyDL: An Expressive Fuzzy Description Logic Reasoner," in 17th IEEE International Conference on Fuzzy Systems, 2008.
[32] B. Liu, J. Li and Y. Zhao, "A Query-specific Reasoning Method for Inconsistent and Uncertain Ontology," in International Multi-Conference of Engineers and Computer Scientists, Hong Kong, 2011.
[33] G. Qi, Q. Ji, J. Pan and J. Du, "Extending description logics with uncertainty reasoning in possibilistic logic," International Journal of Intelligent Systems, 26(4), pp. 353-381, 2011.
[34] J. Zhu, G. Qi and B. Suntisrivaraporn, "Tableaux Algorithms for Expressive Possibilistic Description Logics," in IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies, 2013.

Authors' Profiles

Armand F. Donfack Kana received the B.Sc. degree in Computer Science from the University of Ilorin, Nigeria, and the M.Sc. and Ph.D. degrees in Computer Science from the University of Ibadan, Nigeria. He is currently a Lecturer in Computer Science at the Department of Mathematics, Ahmadu Bello University, Zaria, Nigeria. His current research interests include knowledge representation and reasoning, formal ontologies and soft computing.

Babatunde O. Akinkunmi received the B.Sc. degree in Computer Science, the M.Sc. in Physics and the Ph.D. in Computer Science from the University of Ibadan, Nigeria. He is currently a Senior Lecturer in Computer Science at the Department of Computer Science, University of Ibadan, Nigeria. His current research interests include knowledge representation and reasoning, formal ontologies and software process improvement.

How to cite this paper:

Armand F. Donfack Kana, Babatunde O. Akinkunmi, "Modeling Uncertainty in Ontologies using Rough Set", International Journal of Intelligent Systems and Applications (IJISA), Vol.8, No.4, pp.49-59, 2016. DOI: 10.5815/ijisa.2016.04.06

考虑不确定性影响的仿真模型验证及校准方法研究

考虑不确定性影响的仿真模型验证及校准方法研究模型验证与校准在仿真可信度评估工作中占有重要地位,相关理论及方法研究得到重视,取得了丰硕的成果。随着对仿真模型精度的要求越来越高,建模仿真中的不确定性问题逐渐得到重视,相应的模型验证与校准工作也必须考虑不确定性的影响。 如何实现考虑不确定性影响时的仿真模型验证与校准是目前面临的关键问题。围绕这一问题,本文开展了以下研究。 首先,研究了建模仿真中的不确定性描述与传播方法。在给出基于概率框架的随机变量以及基于非概率框架的区间变量、模糊变量、不精确随机变量以及模糊随机变量等不确定性描述方法的基础上,针对非概率不确定性描述方法在不确定性传播方面存在的困难,分别给出相应的概率框架转化模型,并提出一种基于概率抽样的异类不确定性联合传播方法,实现采用不同不确定性描述方法建模描述的不确定性变量的联合传播,为后续的模型验证与校准奠定了基础。 其次,研究了考虑不确定性影响的仿真结果验证方法。针对输入仅含有固有不确定性的仿真结果验证问题,提出一种基于特征的仿真结果验证方法,通过特征提取,建立结果数据特征验证矩阵,进而利用贝叶斯因子法实现仿真结果验证;进一步,针对输入同时含有认知和固有不确定性,并且参考数据也含有不确定性的仿真结果验证问题,提出一种基于证据理论的验证方法,通过验证数据的证据理论统一建模与描述方法,将验证数据转换到证据框架,采用一种基于加权平均算子的证据融合算法实现多元验证数据的综合;最后提出一种基于证据距离的验证准则实现仿真结果验证,为存在多种来源不同种类不确定性的仿真模型验证提供了方法支持。

再次,研究了多元输出仿真模型校准方法。针对复杂系统模型校准存在的输出数量多、数据类型多样、不确定性因素众多以及校准效率低等问题,提出一种基于优化和元模型的多元输出仿真模型校准方法。 采用基于概率抽样的异类不确定性联合传播方法,实现不同种类不确定性的传播;提出一种基于特征及Mahalanobis距离的多元输出一致性度量模型,解决具有不同数据类型的多元仿真输出的一致性度量;采用基于随机Kriging和遗传算法的快速校准方法,将复杂的双层嵌套校准过程转化为静态的优化问题,提高了校准效率。同时,为保证元模型拟合的高效性与准确性,提出一种用于元模型建模的序贯试验设计方法。 最后,设计实现了仿真模型验证与校准软件平台。针对考虑不确定性影响的仿真模型验证与校准工作涉及到的大量的仿真运行、海量的仿真数据和繁琐的数据处理方法,开发了仿真模型验证与校准软件平台,主要包括不确定性建模与描述、不确定性传播、仿真结果验证以及模型校准四大功能,并将其应用于某飞行器纵向平面内末制导模型的验证与校准工作中,验证了工具的有效性和实用性。

中医药领域本体研究概述

中医药领域本体研究概述 【关键词】本体构建;中医药;综述 本体(Ontology)自20世纪90年代引入计算机人工智能领域后,在计算机及相关领域迅速形成一个研究热点。作为一种能在语义和知识层次上描述信息系统的概念模型建模工具,将在人工智能、知识工程、图书情报等领域具有重要的作用和广阔的应用前景。笔者从中医药领域本体构建、基于本体的中医药语言系统和应用系统三方面对中医药本体研究进行概述,并结合发展现状对其进行展望。 1 本体与本体构建 1.1 本体的概念 本体是源于哲学的一个概念,原指对世界上客观存在物的系统描述,即存在论,后衍生到语言、信息、知识系统等领域,被定义为“概念化的明确的规范说明”。目前,关于本体的定义有很多种说法,但不外有两层含义:一是哲学领域的存在,是本体论的研究对象;二是延伸到特定领域之中,指某套概念及其相互之间关系的形式化表达,包括概念化、规范化、形式化和共享4个特征[1]。 从本体的内涵上看,综合不同学者的认识,本体大都被认为是信息、知识的底层构架工具,用于组织较高层次的知识抽象,是领域知识概念化、形式化的说明,也可以是特定领域内“人机交流”的语义基础,即提供概念与概念之间关系的共识。按照领域依赖程度,本体可以分为顶层、领域、任务和应用本体4类;按照主题可分为知识表示本体、通用本体、领域本体、术语本体和任务本体。中医药本体主要用于描述中医领域知识的专门本体,是专业性本体,一般属于领域本体和知识表示本体。 1.2 本体构建工具与描述语言 在本体构建方面,一是利用已有的叙词表或术语词典进行改造;二是利用现有信息和领域专家从头做起,而以后者较常用。目前已经得到公认的方法包括Bemeras法(KACTUS法)、SENSUS法、“骨架”法、企业建模法(TOVE法)、Methontology法等。Gruber[2]于1995年提出了本体构建的五条规则(明确性和客观性、完全性、一致性、最大单调可扩展性、最小承诺),但本体工程构建方法尚处于相对不成熟阶段。本体的构建工具也有很多,包括protégé、WebOnto、Ontolingua、OntoEdit、Ontosaurus、OntoEdit、IBM Ontology Management System等,其中,protégé 是斯坦福大学开发的使用较为广泛的构建工具之一,目前已有4.0版本。

直线电机本体建模

基于Simulink 的直线电机本体建模 电磁发射课题组 2015年10月29日

1直线感应电动机的等效电路 直线电机在结构上可看作是沿径向剖开并将圆周展为直线的旋转电机,如图1所示。直线感应电动机的稳态特性近似计算方法基本可以沿用旋转感应电动机的等效电路[1] 定子 图1旋转电机演变为直线电机示意图 对于旋转异步电机而言,与电机绕组交链的磁通主要有两类:类是穿过气隙的相间互感磁通;另一类是只与一相绕组交链而不穿过气隙的漏磁通,前者是主要的。定子各相漏磁通所对应的电感称作定子漏感L l s,由于绕组的对称 性,各相漏感值均相等。同样,转子各相漏磁通则对应于转子漏感L l r。 对于每一相绕组来说,它所交链的磁通是互感磁通和漏感磁通之和,因此,定子各相自感为: L A A二L BB二L ee 二L ms ' L ls ( 1) 转子各相自感为: 2) L aa —L bb 一L CC一L mr L l^ _ L ms L lr ( 两相绕组之间只有互感,互感又分为两类: 1) 定子三相彼此之间和转子三相彼此之间位置都是固定的, 故互感为常值;

2) 定子任一相与转子任一相之间的位置是变化的,互感是角位移的函数。 由于三相绕组轴线彼此在空间的相位差为-120,因此互感为: 定转子绕组间的互感由于相互间的位置的变化,为: L Ab 二L bA = L BC = L CB = L ac = L C^ = L ms COs二'120( 7) L Ac 二L CA二 L Ba 二 L a^ L b^ L C^ L ms cO^ ~ 120( 8) 以上是针对旋转异步电机的参数的推到过程,而对于直线电机, 文献[3]中作者给出了圆筒形直线感应电动机的等效电路,如所示: 图2圆筒形直线电机的等效电路 图2中,R s和X s分别代表初级绕组的电阻和漏抗;R m代表励磁电阻;X m代表励磁电抗;r20代表次级表面电阻;X20代表次级表面电抗;R ed代表边端效应影响纵向边电功率产生的损耗折算成的等效电 于是: 1L_1L (3) (4) (5) L Aa = L aA = L Bb = LbB = L cC =L cc = L ms COs^ (6) 1 L ms COS 120 = L ms COS - 120 - - 2 L ms mr

基于区间理论的不确定性无功优化模型及算法

基于区间理论的不确定性无功优化模型及算法电力系统数据中存在很多不确定性,如新能源机组出力的间歇性、负荷的不确定性、网络参数测量误差等。这些不确定性因素会造成电网电压的波动、闪变和越限,对电网的安全运行构成威胁。 为保证不确定性环境下电网电压的安全,目前的解决方案主要有两类:1)基于随机规划的无功优化;2)鲁棒无功优化。但随机规划需要采集大量数据来构建不确定性因素的概率分布函数,在求解时需要花费大量时间进行蒙特卡洛模拟,且获得的电压控制策略无法在理论上保证电压在安全限内运行;鲁棒优化法需对潮流方程进行凸化等近似处理,所获得的控制策略并不能保证电压满足精确模型的安全约束。 为克服以上两类方案的缺点,本文提出了基于区间理论的区间无功优化模型及其求解算法。区间无功优化模型是将不确定性数据(如新能源机组出力和负荷的功率)表示成区间形式,状态变量(如负荷电压、相角、发电机无功出力)的值视为区间,控制变量(如变压器变比、无功补偿、发电机机端电压)的值当作一般的实数。 这个模型的含义是寻求一组最优的控制变量,使得状态变量的区间处于所设定的安全约束内,同时使得运行成本(如网损水平)最小。区间无功优化模型是一个离散非凸的带区间参数的非线性规划问题,目前尚未找到这一类规划问题有效的求解算法。 为求解区间无功优化模型,本文先采用区间潮流算法求解区间潮流方程,处理区间无功优化模型中的区间非线性方程。以此为基础,基于不同的思想提出了四种不同的区间无功化算法。

本文的研究内容主要包括:1)构建了区间(动态)无功优化模型。该模型将不确定性功率数据表示成区间形式,以网损为目标函数,考虑电网的运行约束和安全约束,目的是寻求不确定性环境下使状态变量满足安全约束的(动态)电压控制策略。 2)提出了精度更高的区间潮流算法。目前最有效的区间潮流算法是基于区间仿射理论,其本质是采用噪声元的线性组合形式逼近潮流区间,存在切比雪夫近似误差。 为提高区间潮流精度,一方面,提出了基于仿射算术的混合坐标区间潮流算法,该算法可以消减切比雪夫近似误差;另一方面,提出了基于优化场景法的区间潮流算法,该算法不存在近似误差,理论上可获得精确的区间潮流结果。3)提出了基于改进遗传算法的区间无功优化算法。 一方面,在遗传算法中引入带精英策略的快速非支配排序,用于获取多目标区间无功优化模型的Pareto前沿。另一方面,为提高遗传算法的计算效率和寻优能力,在算法中采用更精确的区间潮流算法,并将约束条件表示成罚函数的形式,采用自适应遗传算法求解区间无功优化模型。 4)提出了区间无功优化模型的线性化近似算法。利用区间函数的泰勒公式,将区间无功优化模型进行线性近似,构建了求解区间无功优化模型的线性化近似迭代算法。 同时,利用正曲率二次罚函数处理模型中的离散变量。采用基于仿射算术的区间潮流算法提高状态变量区间的估计精度,以提高线性化近似算法的精度和收敛性能。 5)提出了基于区间序列二次规划的区间无功优化算法。在线性化近似算法的

人工智能不确定性推理部分参考答案教学提纲

人工智能不确定性推理部分参考答案

不确定性推理部分参考答案 1.设有如下一组推理规则: r1: IF E1 THEN E2 (0.6) r2: IF E2 AND E3 THEN E4 (0.7) r3: IF E4 THEN H (0.8) r4: IF E5 THEN H (0.9) 且已知CF(E1)=0.5, CF(E3)=0.6, CF(E5)=0.7。求CF(H)=? 解:(1) 先由r1求CF(E2) CF(E2)=0.6 × max{0,CF(E1)} =0.6 × max{0,0.5}=0.3 (2) 再由r2求CF(E4) CF(E4)=0.7 × max{0, min{CF(E2 ), CF(E3 )}} =0.7 × max{0, min{0.3, 0.6}}=0.21 (3) 再由r3求CF1(H) CF1(H)= 0.8 × max{0,CF(E4)} =0.8 × max{0, 0.21)}=0.168 (4) 再由r4求CF2(H) CF2(H)= 0.9 ×max{0,CF(E5)} =0.9 ×max{0, 0.7)}=0.63 (5) 最后对CF1(H )和CF2(H)进行合成,求出CF(H) CF(H)= CF1(H)+CF2(H)+ CF1(H) × CF2(H) =0.692

2 设有如下推理规则 r1: IF E1 THEN (2, 0.00001) H1 r2: IF E2 THEN (100, 0.0001) H1 r3: IF E3 THEN (200, 0.001) H2 r4: IF H1 THEN (50, 0.1) H2 且已知P(E1)= P(E2)= P(H3)=0.6, P(H1)=0.091, P(H2)=0.01, 又由用户告知: P(E1| S1)=0.84, P(E2|S2)=0.68, P(E3|S3)=0.36 请用主观Bayes方法求P(H2|S1, S2, S3)=? 解:(1) 由r1计算O(H1| S1) 先把H1的先验概率更新为在E1下的后验概率P(H1| E1) P(H1| E1)=(LS1× P(H1)) / ((LS1-1) × P(H1)+1) =(2 × 0.091) / ((2 -1) × 0.091 +1) =0.16682 由于P(E1|S1)=0.84 > P(E1),使用P(H | S)公式的后半部分,得到在当前观察S1下的后验概率P(H1| S1)和后验几率O(H1| S1) P(H1| S1) = P(H1) + ((P(H1| E1) – P(H1)) / (1 - P(E1))) × (P(E1| S1) – P(E1)) = 0.091 + (0.16682 –0.091) / (1 – 0.6)) × (0.84 – 0.6) =0.091 + 0.18955 × 0.24 = 0.136492 O(H1| S1) = P(H1| S1) / (1 - P(H1| S1)) = 0.15807 (2) 由r2计算O(H1| S2) 先把H1的先验概率更新为在E2下的后验概率P(H1| E2) P(H1| E2)=(LS2×P(H1)) / ((LS2-1) × P(H1)+1)

人工智能复习试题和答案及解析

一、单选题 1. 人工智能的目的是让机器能够( D ),以实现某些脑力劳动的机械化。 A. 具有完全的智能 B. 和人脑一样考虑问题 C. 完全代替人 D. 模拟、延伸和扩展人的智能 2. 下列关于人工智能的叙述不正确的有( C )。 A. 人工智能技术它与其他科学技术相结合极大地提高了应用技术的智能化水平。 B. 人工智能是科学技术发展的趋势。 C. 因为人工智能的系统研究是从上世纪五十年代才开始的,非常新,所以十分重要。 D. 人工智能有力地促进了社会的发展。 3. 自然语言理解是人工智能的重要应用领域,下面列举中的( C)不是它要实现的目标。 A. 理解别人讲的话。 B. 对自然语言表示的信息进行分析概括或编辑。 C. 欣赏音乐。 D. 机器翻译。 4. 下列不是知识表示法的是()。 A. 计算机表示法 B. 谓词表示法 C. 框架表示法 D. 产生式规则表示法 5. 关于“与/或”图表示知识的叙述,错误的有( D )。 A. 用“与/或”图表示知识方便使用程序设计语言表达,也便于计算机存储处理。 B. “与/或”图表示知识时一定同时有“与节点”和“或节点”。 C. “与/或”图能方便地表示陈述性知识和过程性知识。 D. 能用“与/或”图表示的知识不适宜用其他方法表示。 6. 一般来讲,下列语言属于人工智能语言的是( D )。 A. VJ B. C# C. Foxpro D. LISP 7. 专家系统是一个复杂的智能软件,它处理的对象是用符号表示的知识,处理的过程是( C )的过程。 A. 思考 B. 回溯 C. 推理 D. 递归 8. 确定性知识是指(A )知识。 A. 可以精确表示的 B. 正确的 C. 在大学中学到的知识 D. 能够解决问题的 9. 下列关于不精确推理过程的叙述错误的是( B )。 A. 不精确推理过程是从不确定的事实出发 B. 不精确推理过程最终能够推出确定的结论 C. 不精确推理过程是运用不确定的知识 D. 不精确推理过程最终推出不确定性的结论

巨灾模型在巨灾风险分析中的不确定性

巨灾模型在巨灾风险分析中的不确定性 随着中国保险市场的发展,巨灾模型作为一种特定的巨灾风险管理工具和精算评估工具,已经越来越被人们所熟悉。目前,在巨灾模型行业有影响力的主要有三大模型公司,即:AIR环球公司、RMS风险管理公司和EQECAT公司。中国人保财险于2006年开始使用AIR环球公司的中国地震模型,标志着巨灾模型开始直接进入 中国保险行业;2010年,中国再保险集团引入RMS风险管理公司的中国地震模型,标志着巨灾模型领域在中国 保险行业进入多元化时代。 其实在业内,并没有一个针对巨灾模型的标准定义,但巨灾模型的功能可以普遍理解成是借助计算机技术以及现有的人口、地理及建筑等方面信息,来评估某种自然灾害或其他人为巨灾对于给定区域可能造成的损失。有些行外人误以为巨灾模型可以用来预测下一次地震或飓风,其实,巨灾模型并不能用来预测具体巨灾事件的发生,而是对给定区域或风险标的集合遭受巨灾打击的概率及受损程度进行估计。简单来说,模型可以告诉你某一地区发生里氏八级地震的几率是百年一遇,地震一旦发生造成的平均损失是150亿,但却无法预知这个百年一遇的地震是在 接下来的第一年还是第一百年发生。 三大巨灾建模公司的模型虽然在方法细节及参数假设方面不尽相同,但建模的基本原理和思路却已趋同。巨灾模型总体上分三个模块,分别是灾害模块(Hazard Module)、易损性模块(Vulnerability Module)和金融模块(Financial Module)。灾害模块,也称自然科学模块(Science Module),是由地质、地理、水文、气象等方面 的科学家对自然灾害本身的研究,此模块的成果为“事件集(Event Set)”,即在给定区域可能发生的所有巨灾事件 的集合。易损性模块,也称工程模块(Engineering Module),融合工程、建筑等方面专家的知识,研究在给定区 域某一灾害事件发生时对于特定风险标的(比如建筑物)的破坏情况。金融模块,由精算师等保险领域专家负责,将前两个模块的结果转化为保险损失,并应用于不同保险条款。可以说,前两个模块是个体巨灾事件对于个体标的造成损失的研究,再由金融模块转化为若干风险标的集合面对某种巨灾所产生损失的统计量。 随着巨灾模型在中国保险和再保险市场应用的逐渐广泛,人们对巨灾模型的输出结果及其在保险和再保险定价中的应用越发关注。很多人对一些有趣的现象提出疑问:为什么不同的巨灾模型对同样的风险组合的评估结果存在差异?为什么不同的再保险人依靠相同的巨灾模型给出的风险评估结果也会存在差异?这里我们就分析一下这些有趣的问题。 首先,为什么不同的巨灾模型对同样的巨灾风险组合会产生不同的评估结果?针对巨灾模型构成的三大模块,即灾害模块、易损性模块和金融模块,不同的巨灾模型公司在建模方法和技术处理上均存在着差异,尤其是在灾害模块和易损性模块上,这是导致不同的巨灾模型产生不同的评估结果的重要原因之一。 在这些方面的差异,反映了不同巨灾模型公司在自然科学和工程力学等方面的不同研究成果。自巨灾模型于上世纪80年代末被首次开发出来至今已有20余年的历史,其间随着自然科学理论的突破、计算机技术的进步、保 险市场需求的刺激,巨灾模型一直处于不断的发展过程中,不仅涉及的巨灾险种越来越丰富,涵盖的地区越来越全,模型本身也做得越来越精细,能够考虑进的细节也越来越多。然而,人类对大自然的认识毕竟是有限的,加上计算力瓶颈的限制,模型在很多方面都需要对实际情况进行假设和简化。公平地讲,巨灾模型的前两个模块是极大受制于基础科学的水平的,而自然灾害和工程力学方面的基础科学研究也在不断进步与自我超越,因此,巨灾模型由这方面引入不确定性几乎是不可避免的。 另一方面,为什么不同的再保险人依靠相同的巨灾模型对同样的巨灾风险组合也会给出不同的评估结果呢?这

概念模型建模方法研究_刘洁

概念模型建模方法研究 摘要:随着仿真规模的不断扩大,仿真系统复杂性不断提高,由此对概念模型建模的要求也不断提高。本文总结 了现有的概念模型抽象方法,提出了六元抽象方法,分析了这种方法的相似性原理,设计了相关的建模视图。关键词:概念模型;六元建模;相似性中图分类号:TP391 文献标识码:A 文章编号:1672-9870(2007)03-0126-04 收稿日期:2007-03-15 作者简介:刘洁(1981-),女,博士研究生,主要从事系统建模与仿真的研究,E-mail :wuqiong1205@https://www.doczj.com/doc/753145061.html, 。 刘洁1,柏彦奇1,孙海涛2 (1.河北石家庄军械工程学院 装备指挥与管理系,石家庄 050003; 2.河北石家庄军械工程学院 火炮工程系,石家庄050003) Research on Conceptual Modeling LIU Jie 1,BAI Yanqi 1,SUN Haitao 2 (1.Department of Equipment Command &Management ,Ordnance Engineering College ,Shijiazhuang Hebei ,050003; 2.Department of Artillery Engineering ,Ordnance Engineering College ,Shijiazhuang Hebei ,050003)Abstract:With the extending of simulation scope ,the system complexity is becoming larger and larger.Therefore ,the re-quest for the conceptual modeling is enhancing rapidly.This paper summarizes the existing methods of conceptual model-ing ,puts forward the six-element modeling method ,analyzes the inner comparability principle ,and designs the corre-sponding modeling view. Key words:conceptual model ;six-element modeling ;comparability 计算机仿真是复杂系统开发与集成的重要支撑 手段,在系统全寿命管理中发挥着不可替代的作用。为满足军用大规模复杂系统仿真的迫切需要,美国国防部仿真与建模办公室(DMSO )提出了美军的建模与仿真主计划。在该计划的通用技术框架中提出要开展“任务空间概念模型(Conceptual Models of the Mission Space ,CMMS )、高层体系结构(High Level Architecture ,HLA )及一系列数据标准”。目前,这一技术框架已成为指导仿真建设的基本依据,并在各国的作战仿真领域得到广泛的应用。然而,任务空间概念模型和HLA 中的对象模型(OM )面临着一系列问题和挑战,主要表现在:CMMS 规范中,EATI 的四元抽象描述对问题域的定义和描述并不完整。EATI 的四元抽象的方法用实体(Entity )、行为(Action )、任务(Tas- k )、交互(Interaction )来描述问题域的问题空间,但是忽略了内涵(Inclusion )、结构(Struc-ture )。因此,EATI 四元抽象产生的CCMS 描述问题域的准确性和完整性将受到质疑,由此建立的仿真系统与客观实际也不相符。 因此,有必要研究一种新的概念模型建模方法,扩展原有建模方法的描述方式,增强仿真模型描述的完整性,使得仿真应用更符合客观实际。 1概念模型建模方法现状分析 概念模型是对真实世界的第一次抽象,是连接真实世界与仿真世界的桥梁。对概念模型的深入研究始于美国国防部建模与仿真办公室(DMSO ),在1995年10月,DMSO 发布的“建模与仿真主计划”[1,2]中就把任务空间概念模型(CMMS )作为 第30卷第3期2007年9月 长春理工大学学报(自然科学版) Journal of Changchun University of Science and Technology (Natural Science Edition )Vol.30No.3 Sep.2007

1 产品建模方法通常有哪几种

1 产品建模方法通常有哪几种?举例说明三种以上的建模语言!简单描述! (1)机理分析法.从产品的基本定律及系统的物理结构出发从而得到建模模型。 (2)仿真法。通过计算机对产品进行仿真模拟,得出一定规律的建模方法。 (3)数据分析法。通过对产品的数据进行分析研究从而得到理论建模模型的方法. (4)基础建模,二维建模,复合对象建模。 UML基础: 统一建模语言 回顾20世纪晚期--准确地说是1997年,OMG组织(Object Management Group对象管理组织)发布了统一建模语言(Unified Modeling Language,UML)。UML的目标之一就是为开发团队提供标准通用的设计语言来开发和构建计算机应用。UML提出了一套IT 专业人员期待多年的统一的标准建模符号。通过使用UML,这些人员能够阅读和交流系统架构和设计规划--就像建筑工人多年来所使用的建筑设计图一样 AML:一种面向需求的多Agent建模语言 定义一种多Agent系统建模语言AML.该语言基于议会制的多 Agent协同构架,融合多种先进方法,采用目标分解的方式从需 求获取、系统分析到最后的系统设计,共涉及8种模型:用例模型、目标模型、组织模型、角色模型(任务模型)、交互模型、

本体模型、Agent类模型(包括Agent结构模型)、系统配置模 型.该语言还给出构造不同模型的工作流,以及不同模型之间相 互关联的方式.为了和UML保持一致,AML采用与UML一致的符 号系统,对于需要扩展的部分,制定专门的符号来表示.为了验 证AML的可行性,在开发一个AML的支撑环境AML-Tools的同时,使用该语言描述一个实例--智能仓库系统的设计和实现. 虚拟原型建模语言VPML 现有的模型描述语言难以满足基于虚拟原型的概念设计中产品 模型描述的需求.基于扩充连接图思想,以基于虚拟原型的概念 设计产品描述模型V-desModel为核心,提出了一种虚拟原型建 模语言VPML,VPML是一种独立于领域与过程的面向机电产品概 念设计的虚拟原型模型描述语言,具有较强的几何和行为建模 能力,为多领域系统在概念设计阶段的协同设计、并行设计及联 合仿真过程提供一致的模型描述.VPML模型内嵌的虚拟特征生 成算法,使得在概念设计阶段建立真实感很强的产品虚拟原型 时设计信息不完备问题得到有效解决. 2 产品主生命周期不同阶段的建模特点是什么? 1)产品的开发设计阶段(介绍期)。指企业研究开发新产品、新技术、新工艺所发生的新产品设计费、工艺规程制定费、设备调试费、原材料和半成品试验费等。建模特点,通过对新产品的机理分析进行初步建模。 (2)产品生产制造阶段(成长期)。指企业在生产采购过程中所

本体元建模理论与方法及其应用

本书简介 本书针对面向服务的软件工程中语义互操作性问题,体系化融合本体论和软件工程中元建模研究的最新成果,系统地介绍了本体元建模理论与方法、核心技术标准、实际应用和互操作性测评。 本书共分三个部分:第一部分(第1~3章)介绍了研究背景和现状,本体元建模理论和方法,语义互操作性含义;第二部分(第4~7章)介绍了语义互操作性管理的ISO标准核心技术;第三部分(第8~10章)介绍了方法和技术标准在几个典型领域中的应用,以及互操作性测评等。最后(第11章 )对今后的研究工作进行了展望。 本书可供从事软件工作的科研及技术人员阅读,亦可作为计算机软件与理论专业的研究生教材或参考书。 目录 序

前言 第1章 绪论 1.1 信息系统的互操作问题 1.2 从含意三角形到语义三角形 1.3 本体元建模理论与方法 1.4 复杂信息资源的管理与服务模型SSCI 1.5 本书的组织结构 参考文献 第2章 本体元模型 2.1 什么是本体 2.1.1 本体的定义 2.1.2 本体的结构与描述语言 2.1.3 本体的分类 2.1.4 本体在软件工程中的应用 2.2 什么是元模型 2.2.1 元模型的定义 2.2.2 元模型的创建 2.2.3 元模型在软件工程中的应用 2.3 什么是本体元模型 2.3.1 用本体语言描述的元模型 2.3.2 用本体语义标识的元模型 2.4 本体元模型与语义互操作 2.4.1 什么是语义互操作 2.4.2 语义Web服务 2.4.3 基于本体元模型的语义互操作 2.5 小结 参考文献 第3章 本体元建模 3.1 本体元建模理论 3.1.1 基本思想 3.1.2 本体的UML承诺与表达 3.1.3 本体元建模 3.2 复杂信息资源管理技术 3.2.1 ISO/IEC 11179 3.2.2 OASIS ebXML 3.2.3 UDDI 3.3 基于本体的软件工程 3.3.1 语义中间件技术 3.3.2 本体定义元模型ODM 3.4 SSOA相关研究 3.4.1 DII 3.4.2 SEKT 3.4.3 DERI 3.5 小结 参考文献 第4章 支持语义互操作的复杂信息资源管理框架 4.1 现状分析 4.2 国际标准IS0/IEC 19763综述

人工智能不确定性推理部分参考答案

不确定性推理部分参考答案 1.设有如下一组推理规则: r1: IF E1THEN E2 (0.6) r2: IF E2AND E3THEN E4 (0.7) r3: IF E4THEN H (0.8) r4: IF E5THEN H (0.9) 且已知CF(E1)=0.5, CF(E3)=0.6, CF(E5)=0.7。求CF(H)=? 解:(1) 先由r1求CF(E2) CF(E2)=0.6 × max{0,CF(E1)} =0.6 × max{0,0.5}=0.3 (2) 再由r2求CF(E4) CF(E4)=0.7 × max{0, min{CF(E2 ), CF(E3 )}} =0.7 × max{0, min{0.3, 0.6}}=0.21 (3) 再由r3求CF1(H) CF1(H)= 0.8 × max{0,CF(E4)} =0.8 × max{0, 0.21)}=0.168 (4) 再由r4求CF2(H) CF2(H)= 0.9 ×max{0,CF(E5)} =0.9 ×max{0, 0.7)}=0.63 (5) 最后对CF1(H )和CF2(H)进行合成,求出CF(H) CF(H)= CF1(H)+CF2(H)+ CF1(H) × CF2(H) =0.692 2 设有如下推理规则 r1: IF E1THEN (2, 0.00001) H1 r2: IF E2THEN (100, 0.0001) H1 r3: IF E3THEN (200, 0.001) H2 r4: IF H1THEN (50, 0.1) H2 且已知P(E1)= P(E2)= P(H3)=0.6, P(H1)=0.091, P(H2)=0.01, 又由用户告知: P(E1| S1)=0.84, P(E2|S2)=0.68, P(E3|S3)=0.36 请用主观Bayes方法求P(H2|S1, S2, S3)=? 解:(1) 由r1计算O(H1| S1) 先把H1的先验概率更新为在E1下的后验概率P(H1| E1) P(H1| E1)=(LS1× P(H1)) / ((LS1-1) × P(H1)+1) =(2 × 0.091) / ((2 -1) × 0.091 +1) =0.16682 由于P(E1|S1)=0.84 > P(E1),使用P(H | S)公式的后半部分,得到在当前观察S1下的后验概率P(H1| S1)和后验几率O(H1| S1) P(H1| S1) = P(H1) + ((P(H1| E1) – P(H1)) / (1 - P(E1))) × (P(E1| S1) – P(E1))

电力系统中不确定性潮流分析的研究展望

电力系统中不确定性潮流分析的研究展望 环境保护是世界上最具有挑战性的难题之一。为了应对这个全人类的挑战,世界各国争相制定出一系列的能源计划。分布式能源的渗透主要集中在可再生能源的飞速发展,新能源以一种空前的发展速度,使人类快速步入新能源时代。然而,这对电力系统产生了新的不确定性,因此对电力系统性能的不确定性分析十分必要。通常,工程系统的不确定性研究可以表示为基于概率理论或基于可能性理论。在不确定性潮流分析中,同时考虑概率性和可能性的不确定性是现今不确定性潮流分析研究中的发展方向。 标签:不确定性;潮流分析;概率性;可能性 引言 对于电力系统的调度和规划来说,潮流评估是一个强大且重要的工具和手段。确定性潮流分析需要系统提供各方面条件的精确数值才能保证分析结果的准确性,比如需求、发电量、网络情况等。然而,随着新能源时代的到来,世界上的电力系统中出现越来越多的不确定性情況,特别是分布式能源比如新能源的并网,将导致许多难以预测的副作用。所谓分布式发电,即将电力能源互相连接到分布式网络中。虽然分布式发电在技术、社会经济和环境保护等方面带来了许多无与伦比的优势,但是我们深知任何事物都有其两面性,这项技术也拥有消极的一面。分布式发电,特别是飞速发展的新能源,在对系统性能的不确定性方面的理论研究尚未成熟,需要一代又一代的中国优秀电气工程师投入大量精力研究。 1 不确定性潮流分析研究方法概论 在这样一种不确定的情况下,确定性潮流计算无法准确深刻地揭示电力系统运行的状态。因此,在如今的潮流计算研究中,基于不确定性观点下的潮流分析与计算受到广泛研究者的关注。当今的研究中,概率潮流分析通常认为是系统调度与规划的理想助力。概率潮流分析方法致力于模拟母线电压和线电流随不确定性系统中的参数改变而变化的状态分析,帮助电力系统工程师分析系统未来的状态变化趋势,这样在发生系统发生重大变化时可以提前作出相关的决策。如果这些系统中具有不确定性的状态量拥有充足的历史数据,现行的研究中主要采用基于概率论观点下的数学工具和模型来处理这类不确定性。然而,在电力系统实际运行中,很多不确定性的系统变量的历史数据往往不完整,或者变量的取值是通过经验推测的等。这些情况的存在将严重影响基于概率论建立的系统概率潮流分析模型的精确度。在电力系统实际运行中,对不确定变量的状态分析更加困难,一些不确定变量是概率性的,一些是可能性的,并且这两类不确定变量时常出现交叉耦合的情况。因此在这种情况下,同时考虑概率性和可能性的不确定性变量的影响是现行的研究方向,这也就是我们所谓的不确定性潮流分析问题。 至今为止,许多杰出的研究者和工程师提出了大量针对实际工程系统中不确定性现象分析方法,并且很多已经在研究中广泛应用。从上世纪70年代开始,

Generalized Load Modeling and Application Considering the Uncertainty of Wind Power Integration

DOI: 10.7500/AEPS20131219001

Generalized Load Modeling and Application Considering the Uncertainty of Wind Power Integration

张一旭, 梁一军, 贠志皓, 王洪涛, 牛一睿, 石国萍
(Key Laboratory of Power System Intelligent Dispatch and Control, Ministry of Education (Shandong University), Jinan 250061, Shandong Province, China)

Abstract: To address the uncertainty in node characteristics caused by wind power connected at an existing load node, a new probability-statistics-based method for learning and modeling the steady-state characteristics of generalized load nodes is proposed. To analyze the change in power flow direction after wind power integration, node characteristics are divided into source characteristics and load characteristics. To handle the uncertain variation of node characteristics, the active power sample space is adaptively segmented and refined on the basis of historical measurements, and the probability distribution of each segment is computed. The Levenberg-Marquardt neural network method is used to learn and extract the node features of each segment and to build a unified node characteristic model; risk analysis is taken as an example of the new model's application. Simulation results show that the proposed method not only models accurately but, by introducing probability information through statistical data samples, also supports probability-weighted, scenario-based analysis of uncertainty. It remedies the limited ability of traditional methods to describe stochastic features and extends traditional modeling to uncertain scenarios, thereby providing an auxiliary reference for simulation analysis and dispatch control after wind power integration.

Keywords: wind power generation; uncertainty analysis; generalized load modeling; probability statistics; Levenberg-Marquardt neural network

Received 2013-12-19; revised 2014-06-04. Supported by the National Natural Science Foundation of China (51177091) and the National High Technology Research and Development Program of China (863 Program) (2011AA05A101).

0 Introduction

Traditional load modeling achieves high accuracy in many application scenarios [1-4]. In recent years, however, with the large-scale integration of wind power and other renewables, their impact on the grid has become widely recognized. From the node-characteristic viewpoint, wind power integration changes both the load composition and the direction of power flow, and, with the randomness of wind power and the time-varying nature of the base load, the uncertainty of the generalized load intensifies. This significantly affects system power flow and reliability analysis and poses new challenges for load modeling.

Wind power integration can reverse power flow, so its injected power characteristics must be considered; traditional modeling methods are hard to apply in this case. Moreover, lacking any description of stochastic features, traditional methods cannot be used for analysis and computation in uncertain scenarios, so new approaches are needed to solve the stochastic analysis problems brought by wind power integration. To represent load time-variance under traditional steady-state conditions, many researchers have proposed classified modeling [5], with good fitting results; but traditional steady-state models cannot describe stochastic features, which hampers modeling and steady-state analysis of uncertainty. Existing load modeling methods that consider wind power integration have contributed greatly to grid-connection research, but their settings either allow the reasonable assumption of constant wind turbine output, as in electromechanical-transient studies [6-9], or obtain detailed wind turbine models through parameter identification [10-11]; none addresses the fundamental problem of the randomness introduced by wind power over long time scales.

Node-characteristic modeling that accounts for the uncertainty of wind power integration has seen little study at home or abroad. Starting from the steady state, this paper therefore proposes a new node-characteristic modeling method that considers stochastic wind power output, together with a corresponding unified generalized load model. The first task is to analyze node characteristics, so the characteristics of a node containing a wind farm are divided into source characteristics and load characteristics. To resolve the uncertainty of node characteristics, probability information is used: for the purpose of estimating probability distributions, the data are segmented, each segment being one power scenario, and the original voltage and power within each scenario are used to learn and extract the corresponding node features. This not only determines qualitatively whether a node exhibits load or source characteristics, but also quantifies that behavior to specific power ranges, giving a finer treatment of node characteristics. The method lays a foundation for subsequent steady-state analysis and can be applied to analyses of uncertain scenarios, such as per-scenario power flow calculation, stability calculation, and risk assessment that consider the stochastic features of wind power, thus providing an auxiliary reference for simulation analysis and dispatch control after wind power integration. Finally, risk analysis is used as an example of the model's application.

1 Node characteristics and the modeling problem

A schematic of the root bus node composition is shown in Fig. 1.
Fig. 1 Schematic diagram of the root bus composition

The root bus node in Fig. 1 is, as seen from the system side, the composite equivalent of the wind power and nearby base load connected to that bus. Its external characteristic depends on the relative magnitude of the load and the wind power at each moment, and it varies with load time-variance and wind power fluctuation. For ease of description, the external characteristic of a node that consumes power overall is called its load characteristic, and that of a node that supplies power overall its source characteristic.

Vol. 38 No. 20, Oct. 25, 2014
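The segmentation step in the abstract, partitioning historical active-power samples into power scenarios and attaching an empirical probability to each, can be sketched as follows; the toy data and the fixed segment edges stand in for the paper's adaptive, data-driven segmentation:

```python
import random

random.seed(0)
# Toy history of net active power at the node: negative values mean the node
# injects power (source characteristic), positive means it consumes (load).
samples = [random.gauss(20.0, 30.0) for _ in range(10_000)]

def segment_probabilities(samples, edges):
    """Empirical probability mass of each power segment [edges[i], edges[i+1])."""
    counts = [0] * (len(edges) - 1)
    for p in samples:
        for i in range(len(edges) - 1):
            if edges[i] <= p < edges[i + 1]:
                counts[i] += 1
                break
    return [c / len(samples) for c in counts]

# Illustrative edges: source behavior / light load / heavy load
edges = [float("-inf"), 0.0, 20.0, float("inf")]
probs = segment_probabilities(samples, edges)
```

Per-scenario quantities (for instance a per-segment risk index) can then be combined as a probability-weighted sum over the segments, which is the scenario-based analysis the paper advocates.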

The "Uncertainty" Problem in Statistical Models and Propensity Score Methods

胡安宁

Abstract: Quantitative sociological research is typically built on specific statistical models, and the propensity score method, increasingly popular over the past decade or so, is no exception: applying it requires fitting both a "propensity model" that estimates propensity scores and an "outcome model" that estimates the causal effect. Yet a statistical model carries non-negligible "uncertainty", in its model form as well as in its coefficient estimates. Within the framework of propensity score analysis, this study systematically reviews and explains what model-form uncertainty and model-coefficient uncertainty mean and how to handle them. Analyzing Monte Carlo simulation data and empirical survey data, the paper shows how researchers estimating causal effects with propensity scores can use "Bayesian model averaging" to choose among multiple candidate propensity models, and how joint estimation can resolve coefficient uncertainty in the propensity and outcome models. The study also shows that once the uncertainty of the propensity score estimation step is taken into account, the causal estimates in the outcome model have narrower confidence intervals and higher statistical efficiency.

Keywords: model-form uncertainty; model-coefficient uncertainty; Bayesian averaging; propensity score method; statistical efficiency

Essentially, all models are wrong, but some are useful. (George E. P. Box, Norman R. Draper)

1. Introduction

A great deal of quantitative sociological research is built on specific statistical models (Raftery, 2001). Through these models researchers can identify probabilistic relationships among variables and, following the basic principles of statistical inference, generalize those relationships from a random sample to the study population. With the development and spread of various causal models over the past decade or so, this quantitative research paradigm has become ever more influential (Morgan, 2014). Among such causal models, the propensity score method, being convenient and easy to apply, has won favor with many sociologists at home and abroad (Rosenbaum and Rubin, 1983; Rubin, 1997; 胡安宁, 2012; Imbens and Rubin, 2015). In essence, the relationships a statistical model estimates between variables are probabilistic rather than deterministic, a point that quantitative sociology has not yet taken seriously enough. When interpreting model results, many scholars adopt a "deterministic" attitude: a linear model E(Y) = βX, for example, is usually read as "a one-unit change in X changes the expected value E(Y) by β units". The reading is not wrong, but it fixates on the point estimate and ignores the fact that the coefficient β itself has variation; in other words, the "uncertainties" of β go unconsidered.

Notes: 1. For example, when the sample mean of income is used to estimate the population mean, we cannot know the population mean exactly; we can only estimate an interval of plausible values, whose width depends on the statistical efficiency we wish to achieve. 2. In general, all the candidate models together constitute a model space.
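The two-model workflow in the abstract, a propensity model feeding an outcome estimate, can be illustrated with a minimal inverse-probability-weighting sketch; the simulated data, the use of known rather than estimated propensities, and the Horvitz-Thompson estimator are all illustrative assumptions, not the paper's own procedure:

```python
import random

random.seed(1)
n = 20_000
rows = []
for _ in range(n):
    x = random.random() < 0.5  # a binary covariate
    e = 0.7 if x else 0.3      # true propensity score P(T = 1 | x)
    t = random.random() < e    # treatment assignment
    y = 2.0 * t + 1.0 * x      # outcome; the true treatment effect is 2
    rows.append((e, int(t), y))

# Horvitz-Thompson inverse-probability-weighted estimate of the average
# treatment effect: weight treated units by 1/e and controls by 1/(1 - e).
ate = (sum(t * y / e for e, t, y in rows)
       - sum((1 - t) * y / (1 - e) for e, t, y in rows)) / n
```

With 20,000 simulated units the estimate lands close to the true effect of 2; in real analyses the propensity e must itself be estimated, which is exactly where the model-form and coefficient uncertainty discussed above enters.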

Project title: Modeling and Analysis of Uncertain Systems

Institutions: 1. Peking University; 2. Shaanxi Normal University; 3. Tsinghua University
Contributors: 1. 曹永知; 2. 李永明; 3. 陈国青

Project summary:
Owing to the limitations of sensors or observers themselves and the imperfection of information acquisition techniques and methods, the states and behaviors of complex systems often exhibit several kinds of uncertainty: randomness, fuzziness, imprecision, and incompleteness. The prevalence of uncertainty poses great challenges to system modeling and analysis. Uncertainty is an essential characteristic of intelligence problems and a core topic in intelligent systems research. Over the past decade or so, research on uncertainty has grown into a vibrant field; it was also the main area of the work for which Judea Pearl received the 2011 Turing Award.

By introducing new viewpoints and concepts, this project conducted in-depth, systematic research on several key problems of uncertainty handling in intelligent systems, covering the representation of uncertain systems, reasoning methods, and knowledge discovery with uncertain features. The main contributions are as follows:

1. Modeled the fault tolerance of mobile communication systems and general discrete event systems from a fresh perspective, quantitatively studied the reliability of system behavior, and gave methods for optimizing it. This work was the first to combine classical Shannon communication theory with the study of value-passing process algebra, and it introduced the idea of behavioral similarity into the modeling and behavior supervision of discrete event systems.

2. Modeled discrete event systems with fuzzy uncertainty; introduced the basic notions of event-based and state-feedback controllability, observability, and decentralized supervisory control for fuzzy discrete event systems; systematically developed an event- and state-based supervisory control framework; and applied these theoretical results to the modeling and analysis of medical diagnosis and treatment.

3. Established a theory of lattice-valued automata taking values in lattice-ordered semigroups, laying a foundation for modeling and analyzing lattice-valued uncertain systems; modeled fuzzy dynamical systems with natural language as input, opening a new line of studying fuzzy automata from the methodology of fuzzy control.

4. Pioneered the design theory of fuzzy databases, created fuzzy data models, and proposed a widely applicable framework for fuzzy data representation; introduced fuzzy association rules and gave efficient mining algorithms; proposed an automatable approximate-space reasoning mechanism that unifies existing forms of approximate-space reasoning.

The project team has published more than 60 SCI-indexed papers in well-known international journals including IEEE TC, IEEE TAC, IEEE TFS, IEEE TSMC-B, JCSS, and TCS, 11 of them as first and corresponding author in IEEE Transactions. The team's results have been widely cited and reviewed by well-known scholars at home and abroad in international journals and monographs including AIJ, IEEE TEC, IEEE TFS, IEEE TRO, IEEE TSMC, and JCSS.
