Robust Solutions for Constraint Satisfaction and Optimization

Emmanuel Hebrard and Brahim Hnich and Toby Walsh
Cork Constraint Computation Centre, University College Cork, Ireland.
email: {e.hebrard,b.hnich,tw}@4c.ucc.ie

Abstract
Super solutions are solutions in which, if a small number of variables lose their values, we are guaranteed to be able to repair the solution with only a few changes. In this paper, we stress the need to extend the super-solution framework in several dimensions to make it more useful practically. We demonstrate the usefulness of those extensions on an example from jobshop scheduling, an optimization problem solved through constraint satisfaction. In such a case there is indeed a trade-off between optimality and robustness; however, robustness may be increased without sacrificing optimality.

Introduction
Many AI problems may be modeled as constraint satisfaction and optimization problems. However, the real world is subject to change: machines may break, drivers may get sick, stock prices increase or decrease, etc. In such cases, our solutions to the problems may "break". In this context, one may want a solution to be robust, that is, able to remain valid despite changes. Supermodels were introduced in (Ginsberg, Parkes, & Roy 1998) as a framework to measure inherent degrees of solution stability. They are propositional satisfiability models with good properties of the neighborhood, ensuring a close alternative in case of breakage. This notion extends to constraint satisfaction: an (a,b)-super-solution (Hebrard, Hnich, & Walsh 2004) is a solution such that the loss of the values of at most a variables can be repaired by assigning other values to these variables, and modifying the assignment of at most b other variables. We call this new solution (with at most a+b variables reassigned) a repair solution. For example, a solution is a (1,1)-super-solution if, for the loss of the value of any single variable, another solution may be found by changing at most one other variable; a break on a variable A is then repaired by reassigning A and at most one other variable. The problem of determining the existence of a super-solution has been proved to be NP-complete for a fixed a.

In a jobshop scheduling problem (jsp), each job is a sequence of activities, where each activity has a duration for its execution on one of the machines. The objective is to schedule the activities such that their order is respected, no machine is required by two activities that overlap, and the makespan, i.e., the interval from the starting point of the first activity to the ending point of the last, is minimized. A jsp can be formulated as a CSP, with one variable for each activity, and a domain size equal to the makespan minus the activity's duration. To find the minimal makespan, one starts with the makespan equal to a lower bound and increases it until a solution exists. The following figure represents a jsp with 4 jobs and 4 machines. Bars are activities, organized in jobs. Their lengths are proportional to their durations. Each job's activities have the same border (respectively solid, dashed, dotted, dash and two dots), whilst colors correspond to machines.

[Figure: the jsp instance, four rows of bars labeled Job 1 to Job 4.]

The second figure shows an optimal solution.

[Figure: an optimal schedule, activities laid out per machine M1 to M4.]

Activities are now scheduled on each machine. One may argue that this solution is not robust. Indeed, activities are tightly grouped, and a "break" on a machine, and the subsequent delay, may trigger further delays in the schedule. Now, consider a super-solution for this instance:

[Figure: a super-solution schedule on machines M1 to M4.]

It is important to say that until now no assumptions were made about the problem solved, or about what might be a robust solution to that problem. Nevertheless, with a small increase in the makespan, we observe that a super-solution for a jsp contains more slack between activities, making it more robust. However, this is not really satisfying. Indeed, the values are time points, and breakages are known only when they occur; thus we might want to change only events in the future. The next figure shows a super-solution with this (alternative value) restriction:

[Figure: a super-solution restricted to future repairs, on machines M1 to M4.]

This solution seems clearly more robust than the first one. However, the makespan increase is relatively important. One alternative, which ensures a small makespan, is to optimize the repairability, i.e., the proportion of repairable variables. For instance, suppose that we do not want the makespan to increase by more than two time units. The next figure shows a solution with optimal repairability (i.e., more robust) whilst the makespan length is restricted:

[Figure: a solution with optimal repairability under a bounded makespan, on machines M1 to M4.]

There are a number of such improvements upon classical super-solutions that we can gather from our knowledge of the problem. For instance, some machines may be reliable; one would then restrict the break-set accordingly. Moreover, the slack should be linked to the duration for repairing a particular machine; this can be done by restricting the alternative values. Another example is the instruction scheduling problem, where primitive instructions of a program are to be scheduled in order to minimize the time complexity of the program while keeping its semantics. When we model this problem, a variable can take the value "NOP", which stands for "no operation". Whilst a breakage may happen on any variable, one assigned to "NOP" cannot break. When some variables are set to such an assignment, no witness of repairability is required. Note that super-solutions offer no real difficulties to adapt to the optimization framework. However, it is not surprising that one has to sacrifice optimality in order to achieve robustness. An alternative is to optimize the repairability of a solution, that is, the proportion of repairable breaks, without giving up optimality, or at least while bounding the loss. More generally, we now have a multi-criteria optimization problem.

Conclusion
We claim that super-solutions offer a nice framework to obtain more robust solutions to constraint satisfaction or optimization problems. They are often a good starting point for characterizing robustness, without requiring any knowledge of the problem. However, we showed that integrating specific information about a problem can lead to much better results. In our future work we plan to define and implement a library of useful predicates and optimization criteria that can be used and combined in order to easily post robustness constraints. Our algorithm would simply use the appropriate predicates to check the validity of any repair. Then we would like to use this framework to obtain robust solutions to real problems subject to uncertainty.

References
Bessiere, C., and Regin, J.-C. 1996. MAC and combined heuristics: Two reasons to forsake FC (and CBJ?) on hard problems. In Proceedings CP'96, 61-75.
Ginsberg, M.; Parkes, A.; and Roy, A. 1998. Supermodels and robustness. In Proceedings AAAI'98, 334-339.
Hebrard, E.; Hnich, B.; and Walsh, T. 2004. Super solutions in constraint programming. In Proceedings CPAIOR'04 (to appear).
Sabin, D., and Freuder, E. 1994. Contradicting Conventional Wisdom in Constraint Satisfaction. In Borning, A., ed., Proceedings CP'94, volume 874, 10-20.
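To make the definition concrete, here is a small brute-force sketch of the (1,b)-super-solution check (our own illustration, not the authors' algorithm; the helper names and the all-different toy CSP are assumptions):

```python
from itertools import product

def solutions(domains, ok):
    """Enumerate all assignments (as dicts) that satisfy the check `ok`."""
    names = list(domains)
    for values in product(*(domains[n] for n in names)):
        assignment = dict(zip(names, values))
        if ok(assignment):
            yield assignment

def is_super_solution(sol, domains, ok, b=1):
    """Check that `sol` is a (1,b)-super-solution: whichever single variable
    loses its value, some repair solution reassigns that variable while
    changing at most b other variables."""
    sols = list(solutions(domains, ok))
    for v in sol:
        repairs = [s for s in sols if s[v] != sol[v]]  # v must get a new value
        if not any(sum(s[u] != sol[u] for u in sol if u != v) <= b
                   for s in repairs):
            return False
    return True

# Toy CSP: three variables over {1, 2, 3}, all different.
domains = {'A': [1, 2, 3], 'B': [1, 2, 3], 'C': [1, 2, 3]}
alldiff = lambda a: len(set(a.values())) == len(a)
print(is_super_solution({'A': 1, 'B': 2, 'C': 3}, domains, alldiff, b=1))  # True
```

In this toy problem every solution is a (1,1)-super-solution: a break on any variable can be repaired by swapping its value with one other variable's. A real solver would of course search for super-solutions directly rather than enumerating all solutions first.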
A Constraint Satisfaction Framework for Decision under Uncertainty

Hélène Fargier, Jérôme Lang
IRIT - Université Paul Sabatier, 31062 Toulouse Cedex (France)
{fargier,lang}@irit.fr

Roger Martin-Clouaire, Thomas Schiex
INRA, BP 27, 31326 Castanet Cedex (France)
{rmc,tschiex}@toulouse.inra.fr

Abstract
The Constraint Satisfaction Problem (CSP) framework offers a simple and sound basis for representing and solving simple decision problems, without uncertainty. This paper is devoted to an extension of the CSP framework enabling us to deal with some decision problems under uncertainty. This extension relies on a differentiation between the agent-controllable decision variables and the uncontrollable parameters whose values depend on the occurrence of uncertain events. The uncertainty on the values of the parameters is assumed to be given under the form of a probability distribution. Two algorithms are given, for computing respectively decisions solving the problem with a maximal probability, and conditional decisions mapping the largest possible amount of possible cases to actual decisions.

1 Introduction
Decision making is primarily a matter of choosing between alternatives that most commonly are expressed implicitly. Thus solving a decision problem amounts to generating the option(s) that is (are) most appropriate with respect to the specification of the decision problem at hand. Different decision problems can be characterized along two discriminating features:
- decisions are made at a single time point (although they may have a temporal structure, i.e., form a plan) or are elaborated in a sequence of steps, each being enriched by information resulting from the previous ones;
- the requirements (constraints, criteria) that implicitly define the alternatives are uncertainty-free or not. Requirements are uncertain if they vary depending on the circumstances, the occurrence of which is incompletely known (uncertain).

Sequential decisions and uncertain requirements increase the complexity of the decision problem, the most complex case being the joint situation. This paper is devoted to the case of single-instant decisions and uncertain requirements, and proposes an approach to the problem in the framework of constraint satisfaction. So far the only decision problems tackled within CSP (Constraint Satisfaction Problem) approaches concern the simplest case: single-instant decision problems with no uncertainty at all. In order to cast the addressed decision problem in a CSP framework it is essential to distinguish between two types of unknowns, called parameters and decision variables respectively. Parameters are uncontrollable unknowns, i.e., the value taken by a parameter is a matter of occurrence of an event that cannot be controlled (not even influenced) by the decision maker (also referred to as the agent). The value of a parameter is imposed by the external world, and the agent may have only partial knowledge about what this value might be. By contrast, the assignment of the decision variables is what the agent wants to decide upon. Failing to differentiate between parameters and decision variables may yield nonsensical results in a classical CSP approach, because a particular value of a parameter can restrict the range of allowed (satisfactory) values for the decision variables, whereas the converse is physically impossible. The key issue is therefore to develop a CSP framework and resolution algorithms that provide for the uncontrollable/controllable dichotomy in the set of unknowns.

This paper addresses an extension of the CSP framework, namely probabilistic CSP, involving both parameters and decision variables, the uncertainty on the values of the parameters being represented by a probability distribution. A formal representation framework is defined and two algorithms relying on an assumption of independence of the parameters are described. In this paper we consider successively two assumptions concerning the agent's awareness of the parameter values (the state of the world) at the time the decision must imperatively be made (deadline for acting):
- (NK) ("no more knowledge"): the agent will never learn anything new about the actual state of the world before the deadline; all it will ever know is already encoded by the probability distribution. For this case we propose an algorithm which gives an actual, unconditional decision that is most likely to be suitable.
- (CK) ("complete knowledge"): the actual world will be completely revealed before the deadline is reached (possibly just before), so that it is useful to the agent to compute "off-line" a ready-to-use conditional decision, which the agent will be able to instantiate "on-line", as soon as it knows what the actual world is. For this purpose we have developed an "anytime" resolution algorithm that provides a set of decisions with their conditions of applicability, together with the likelihood of occurrence of these conditions.

The intermediate case where some partial knowledge about the state of the world may be learned is not considered in this paper. See [5] for further discussion on this point. In Section 2 we define probabilistic CSPs and several types of solutions (corresponding to decisions); then we propose algorithms dealing with the problem of computing optimal decisions under the assumptions (NK) (Section 3) and (CK) (Section 4).

2 Probabilistic constraint satisfaction problems

2.1 Preliminary definitions and notations
A classical constraint satisfaction problem is a triple (X, D, C), where X = {x1, ..., xn} is a set of variables, each of which has its possible values in a domain Di (supposed here finite), D = D1 × ... × Dn, and C is a set of constraints. Each constraint c involves a set of variables noted V(c) and is defined by a subset of the Cartesian product of the domains of the variables in V(c). This subset, noted as the constraint itself, gives the set of all the possible assignments of the variables in V(c): the constraint is satisfied by an assignment of the variables in V(c) iff this assignment belongs to the set of admissible tuples of c. A solution of the CSP is an assignment of values to all the variables such that all the constraints are satisfied. The set of all the solutions of a CSP P is noted Sol(P). In the rest of the paper, assignments of values to a set of variables Y are also considered as tuples of values of the variables of Y, i.e., elements of the Cartesian product of the domains of the variables in Y. The concatenation of two assignments t and t' of variables in Y and Z such that Y ∩ Z = ∅ is noted t.t'; it is an assignment of Y ∪ Z.
The projection of an assignment t of a set of variables Y on a set of variables Z ⊆ Y, noted t↓Z, is simply the tuple of the values of the variables of Z in t. This notion (and notation) is extended to a set of tuples: the projection on Z of a subset of the Cartesian product of the domains of the variables of Y is the set of the projections on Z of all the assignments in it.

2.2 Probabilistic CSPs: definitions
Roughly speaking, a probabilistic CSP is a CSP equipped with a partition between (controllable) decision variables and (uncontrollable) parameters, and a probability distribution over the possible values of the parameters.

Definition 1: A probabilistic CSP is a 6-tuple (Λ, L, X, D, C, pr) where:
- Λ = {λ1, ..., λp} is a set of parameters;
- L = L1 × ... × Lp, where Li is the domain of λi;
- X = {x1, ..., xn} is a set of decision variables;
- D = D1 × ... × Dn, where Di is the domain of xi;
- C is a set of constraints, each of them involving at least one decision variable;
- pr is a probability distribution over the parameter assignments.

Constraints are defined in the same way as in classical CSPs. We note X(c) (resp. Λ(c)) the set of variables (resp. parameters) involved in a constraint c. A complete assignment of the parameters (resp. of the decision variables) will be called a world (resp. a decision) and will be generally denoted by w (resp. by d). A partial world (resp. a partial decision) will be an assignment of a subset of the parameters (resp. of the decision variables). The subset of C containing the constraints involving no parameters will be denoted by C_X; the other constraints in C (involving at least a parameter) restrict the allowed values of some decision variables, depending on the values of some parameters: they will be called parameterized constraints. Obviously, if Λ = ∅, the problem is a classical CSP. We assume that the constraints of C involve at least a decision variable, since the available information about the actual values of parameters is completely encoded by pr: if a tuple of parameter values is impossible, then the probability of any world extending this tuple is 0.

We have not yet discussed how the probability distribution pr on worlds is specified. Clearly, it is not reasonable to assume that the input contains the explicit specification of pr(w) for each world w. Hence, generally pr will be given in a much more concise way. The simplest case occurs when parameters are mutually independent (pr is then specified only by an individual probability distribution for each parameter). A more complex case consists in structuring the parameters in a Bayesian network: the computation of pr(w) then requires the propagation of probabilities through the network. Thus, pr will generally be given implicitly rather than explicitly, and will generally require some computation in order to be available. From now on we choose to ignore this step: the computation of pr is taken for granted.

Definition 2 (possible worlds): A world w such that pr(w) > 0 is a possible world. The set of all possible worlds is denoted by W.

Let t be an assignment of a subset of the variables (where "variables" is here the general terminology for both parameters and decision variables). We define the reduction of a constraint c by an assignment t as the set of assignments of the unassigned variables compatible with t according to c:

Definition 3: Let c be a constraint of C involving a set S of variables and parameters. The reduction of c by an assignment t of T ⊆ S is the constraint on S \ T defined by the tuples t' such that t.t' belongs to c. We note c[t] the reduction of c by t, and C[t] the set of the reductions by t of the constraints of C.

This notion of reduction is especially meaningful when t is either a world or a decision. The reduction of C by a world w is a set of classical decision constraints (involving decision variables only); it defines the decisions which are suitable if the world is w.
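As an illustration of Definition 3, for constraints listed extensionally the reduction can be sketched as follows (a minimal sketch; the (scope, allowed-tuples) representation and the function name are our assumptions, not the paper's code):

```python
# A constraint is a pair (scope, allowed): `scope` is a tuple of variable or
# parameter names, `allowed` a set of value tuples over that scope.

def reduce_constraint(constraint, assignment):
    """Reduction c[t] (Definition 3): keep, for the unassigned part of the
    scope, exactly the value combinations compatible with `assignment`."""
    scope, allowed = constraint
    keep = [i for i, name in enumerate(scope) if name not in assignment]
    new_scope = tuple(scope[i] for i in keep)
    new_allowed = {
        tuple(tup[i] for i in keep)
        for tup in allowed
        if all(tup[i] == assignment[scope[i]]
               for i in range(len(scope)) if scope[i] in assignment)
    }
    return new_scope, new_allowed

# Reducing a parameterized constraint over (x1, l1) by the world l1 = 'a'
# leaves a classical decision constraint on x1 alone:
c = (('x1', 'l1'), {(1, 'a'), (2, 'b')})
print(reduce_constraint(c, {'l1': 'a'}))  # (('x1',), {(1,)})
```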
Namely, to each world w we associate the uncertainty-free decision problem P_w = (X, D, C[w]). Each of these classical CSPs will be called a candidate problem. One of them (and only one) is the actual problem, corresponding to the actual world (but since there is generally more than one possible world, the agent does not know which one is the actual one).

Definition 4: The set of candidate problems induced by a probabilistic CSP is defined by {P_w | w ∈ W}.

Note that among the candidate problems, some may be inconsistent, which means that the actual problem may be inconsistent. Dually, the reduction of C by a decision d yields a CSP involving parameters only, whose solutions are the worlds for which d is a suitable decision. These worlds are said to be covered by d. Note that, obviously, w is a solution of C[d] iff d is a solution of C[w] iff w.d satisfies C. Among the possible worlds, those which are covered by at least one decision are called good worlds, and the others are called bad worlds. Equivalently, w is good iff P_w is consistent.

Definition 5 (covering): A decision d covers a world w iff w.d satisfies C.

Now, for each possible decision d we can compute the probability that it is a suitable decision, i.e., that it covers the actual world.

Definition 6 (probability that a decision is a solution): The probability that a given decision d is a solution of the actual problem is the probability of the set of worlds it covers, i.e., P(d) = pr({w | d covers w}).

Notice that P(·) is not a probability distribution. Now, we can compute the probability that the actual world can be covered: this is the probability of consistency of the actual problem.

Definition 7 (probability of consistency): To a probabilistic CSP we associate the probability that the actual problem is consistent, i.e., P(cons) = pr({w | P_w is consistent}). If P(cons) = 1, the probabilistic CSP will be said consistent.

[Example (figure residue): a menu-planning probabilistic CSP involving the guests Gargantua (white wine) and Pantagruel (vegetarian); the constraint tables were lost in extraction.]

Hence, the problem can be stated in such a way that impossible values are simply removed from the domains. An important particular case arises when uncertainty bears on the relevance of constraints: each uncertain constraint is attached to one parameter whose domain has two values (in our example, "comes" and "does not come"), corresponding respectively to the relevance and the irrelevance of the constraint to the actual problem. If C_u denotes the set of these uncertain constraints, then it can be shown that the set of candidate problems is a relaxation lattice in the sense of Freuder [6], equipped with the probability distribution obviously induced by the distribution on worlds: namely, the candidate problems are obtained by taking the union of C_X and of any subset of C_u. Moreover, when the constraints are independently relevant, the problem is strongly consistent (see Definition 13) iff the top of the lattice, namely (X, D, C_X ∪ C_u), is consistent in the classical sense. A more general class of probabilistic CSPs occurs when each parameterized constraint involves exactly one parameter. Each parameterized constraint corresponds thus to a disjunctive family of classical decision constraints (one for each possible value of the involved parameter). We denote this simplifying assumption by (F).

2.3 Conditional decisions as solutions
A conditional decision will ideally associate to each possible world a decision satisfying the corresponding decision problem. Since some possible worlds may induce an inconsistent CSP, a weaker request consists in giving a partial conditional decision, i.e., one defined on a subset of W (ideally, on the set of good worlds).

Definition 8 (conditional decisions): A complete conditional decision is a map δ from W to the set of decisions. A partial conditional decision is a map from a subset of W to the set of decisions. A (partial or complete) conditional decision δ is sound iff, for every world w on which it is defined, δ(w) covers w.

In the rest of the paper, we refer to "conditional decisions" as to partial or complete conditional decisions. Clearly, complete conditional decisions generalize decisions as defined in Section 2.2 (the latter are called pure decisions as opposed to conditional decisions).
Namely, a conditional decision which is constant over W corresponds to a (pure) decision. Conditional decisions will play the role of solutions for probabilistic CSPs (ideally, a solution is a complete sound conditional decision). As for (pure) decisions, a conditional decision has a probability of covering the actual world:

Definition 9: The probability that a conditional decision δ will yield a solution to the actual problem is defined by P(δ) = pr({w | δ(w) covers w}).

The following simple results are easy to prove:

Proposition 1: (i) for any conditional decision δ, P(δ) ≤ P(cons); (ii) there exists a δ such that P(δ) = P(cons); (iii) the problem is consistent iff there exists a δ such that P(δ) = 1.

Definition 10 (optimal conditional decisions): A conditional decision δ (partial or complete) is optimal iff P(δ) is maximum (or, equivalently, iff P(δ) = P(cons)).

Thus, one may compute an optimal conditional decision off-line and use it later on, in real time, when the actual world becomes known. This assumes that the actual world will be known before the decision has to be taken ((CK)). Clearly, under the other extreme assumption ((NK)), it is useless to look for a conditional decision; consequently, in this case one has to look for a pure decision rather than for a conditional one. A reasonable strategy is then to maximize the probability that this pure decision will work:

Definition 11 (optimal pure decisions): A pure decision d* is optimal iff P(d*) is maximum. We let p* = P(d*), where d* is an optimal pure decision; p* is the maximum probability of Success of a Pure Decision. Obviously, p* ≤ P(cons).

The ideal case is when there is a pure decision covering all good worlds:

Definition 12 (universal decisions): A pure decision d is a universal decision iff P(d) = P(cons).

Thus, a universal decision will work for any possible world whose associated candidate problem is consistent, and thus it is an optimal decision. Clearly, when there exists a universal decision, it is useless to look for a conditional decision.

Definition 13 (strong universal decisions): A pure decision d is a strong universal decision iff P(d) = 1. A probabilistic CSP is strongly consistent iff it has a strong universal decision.

A strong universal decision is a universal decision covering all possible worlds. The case where the problem is strongly consistent is the ideal one. Obviously, strong consistency implies consistency.

Example: using the same probabilistic CSP, we may consider a conditional decision which is optimal, since its probability equals P(cons). [Table residue: the conditional decision table was lost in extraction.]

3 Computing an optimal pure decision
In this section parameters are assumed to be mutually independent; this assumption could be relaxed provided that the computation of the probability of a set of worlds is taken for granted. Furthermore, in this section (only) we make the assumption (NK) that the agent will never have any further knowledge about the actual world before acting (all it knows is already encoded in pr); consequently, all it can execute is a pure decision. Now, the best the agent can require from the solver is an optimal pure decision. As it may be computationally costly to compute an optimal one, especially if the agent has a deadline, we propose an "anytime" algorithm which computes an approximately optimal pure decision if stopped before its natural stop (the longer the algorithm runs, the better the pure decision), and eventually gives an optimal one if it runs until its natural stop.
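Before turning to the search algorithm, note that P(d) (Definition 6) and an optimal pure decision (Definition 11) can be computed naively by enumeration; the sketch below (representation and names are ours, and it is exponential in the numbers of parameters and variables) just makes the optimization target explicit:

```python
from itertools import product

def coverage_probability(decision, params, pr, covers):
    """P(d): the probability of the set of worlds covered by `decision`,
    with mutually independent parameters. `params` maps each parameter to
    its domain, pr[name][value] is its probability, and covers(d, w) tests
    whether the complete assignment d.w satisfies all constraints."""
    names = list(params)
    total = 0.0
    for values in product(*(params[n] for n in names)):
        world = dict(zip(names, values))
        if covers(decision, world):
            p = 1.0
            for n in names:
                p *= pr[n][world[n]]
            total += p
    return total

def best_pure_decision(dec_domains, params, pr, covers):
    """Enumerate all pure decisions and return one maximizing P(d)."""
    names = list(dec_domains)
    best, best_p = None, -1.0
    for values in product(*(dec_domains[n] for n in names)):
        d = dict(zip(names, values))
        p = coverage_probability(d, params, pr, covers)
        if p > best_p:
            best, best_p = d, p
    return best, best_p

# One decision variable, one parameter: x1 = 1 works in world 'a' (pr 0.7),
# x1 = 2 works in world 'b' (pr 0.3), so the optimal pure decision is x1 = 1.
params, pr = {'l1': ['a', 'b']}, {'l1': {'a': 0.7, 'b': 0.3}}
covers = lambda d, w: (d['x1'], w['l1']) in {(1, 'a'), (2, 'b')}
print(best_pure_decision({'x1': [1, 2]}, params, pr, covers))  # ({'x1': 1}, 0.7)
```

The branch and bound described next improves on this by pruning partial decisions whose upper bound falls below the best P(d) found so far.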
Our approach to the problem of maximizing P(d) consists in using a Depth-First Branch and Bound algorithm. For the sake of simplicity, consider that variables are instantiated in a prescribed order, say x1, ..., xn, and that all the constraints are binary. The root of the search tree is the empty instantiation. Intermediate nodes denote partial decisions. Leaves represent complete instantiations of X, i.e., pure decisions. In a depth-first exploration of the tree, we keep track of the leaves maximizing P(d).

Each time a new decision value is assigned to a variable xi, all the constraints connecting this variable to an unassigned decision variable or a parameter are used, as in the classical Forward-Checking algorithm [8], to remove all the values incompatible with the assignment from the domains of the remaining decision variables and parameters: each domain is simply replaced by its reduction by the current assignment. Updating the domain of a variable using a constraint c after the assignment of a decision value to xi is performed by the procedure FORWARDCHECK in Algorithm 1, line 2.

After forward-checking, the Cartesian product of the remaining values in the domains of all the parameters defines the worlds which are compatible with the current partial decision: all the worlds that are obviously incompatible have been removed. The cumulated probability of the remaining worlds gives an upper bound on the probability of the best complete decision among the descendants of the current node, since only incompatible worlds have been removed. This bound is easily computed if independence is assumed: it is simply the product, over all parameter domains, of the sums of the probabilities of the remaining values of each domain. When a complete decision is reached, since all the constraints have been propagated, the remaining worlds are exactly the worlds covered by the decision.

When does backtracking occur? First, if the domain of a decision variable or a parameter becomes empty, then we know that no world is covered by our current partial decision; we may therefore backtrack, reconsider the value of the last decision variable assigned, and restore the domains.

[Algorithm 1: the depth-first branch and bound search (procedures SEARCH and FORWARDCHECK); the pseudocode was garbled in extraction.]

4 Computing an optimal conditional decision
We now turn to the assumption (CK) and to the computation of conditional decisions; the algorithm maintains lower bounds on the probability associated to the current conditional decision. Intuitively, the algorithm incrementally builds a conditional decision which eventually covers a superset of all good worlds. Repeatedly, (i) we pick a new pure decision (to be added to the current partial conditional decision) that covers at least one possible world among the worlds which are not covered yet; (ii) we compute the set of worlds that this decision covers; and (iii) we subtract this set from the set of worlds which have not yet been covered (initially, this set contains all worlds). In order to easily compute the worlds covered by a decision and their probabilities, we make the following two assumptions: first, parameters are mutually independent (again), and second, any constraint of C involves at most one parameter (F). Due to assumption (F), the worlds covered by a decision form a Cartesian product of subsets of the parameter domains. The successive subtractions of these Cartesian products are performed using a technique recently proposed by Freuder and Hubbe [7], called subdomain subproblem extraction. Before we describe our algorithm, we first recall this technique of subdomain extraction.

4.1 Subdomain extraction
We define an environment as a set of worlds of the form E = E1 × ... × Ep, with Ei ⊆ Li. The set of all possible environments is obviously a lattice (equipped with the inclusion order), whose top is the set of all possible parameter assignments and whose bottom is the empty set. The probability of an environment, under the independence assumption, is the product over the parameters of the summed probabilities of the retained values. Given two environments E and E', the subdomain subproblem extraction technique [7] decomposes E into a set of disjoint sub-environments such that all worlds of E belong either to E ∩ E' or to one of the sub-environments of the decomposition.
The decomposition is unique if an ordering on the variables is fixed. We give here a modification of Freuder and Hubbe's decomposition algorithm, in which we also compute incrementally the probability of each environment generated by the decomposition. This function, DEC, returns the decomposition of an environment E by an environment E', and the probability of the set of worlds of E that are already in E' (used in Algorithm 3). When actually used, the probability of the environment needed on entry has already been computed: if the procedure DEC is called, either E is the top of the lattice or E comes from a previous decomposition. Note that when E ∩ E' = ∅ we get DEC(E, E') = ({E}, 0), and when E ⊆ E' we get DEC(E, E') = (∅, pr(E)).

[Algorithm 2: the decomposition algorithm (function DEC), together with a worked example of successive decompositions; both were garbled in extraction.]

4.2 An algorithm for searching for a conditional decision
The algorithm is sketched as Algorithm 3. Two steps of the algorithm deserve some comments. First (line 1), the sequences of CSPs repeatedly solved by the algorithm define dynamic constraint satisfaction problems [4], and their resolution may be improved by any technique developed to solve such problems. Second (line 2), once a new decision is built, it is subtracted from all the awaiting environments to avoid some redundant computations. This furthermore guarantees that the computation will stop when the current list Decisions defines a solution.

[Algorithm 3: computing an optimal conditional decision. The algorithm maintains Decisions (the decisions found so far, with the environments they cover), Env (the environments not yet covered), Bad (the environments that cannot be covered), and lower bounds on the probabilities of interest; it loops, extracting an environment, solving the corresponding CSP and subtracting the newly covered worlds with DEC (line 2), until Env is empty or the user interrupts it. The pseudocode and the accompanying worked example were garbled in extraction; at the end of the example run, the final value of Decisions defines a conditional decision and the uncoverable worlds are in Bad.]

4.3 Correctness of the algorithm
Proposition 2: At any point of the algorithm, the following properties hold: the worlds of the environments of Env are not yet covered; the worlds of the environments of Bad cannot be covered; and, for each pair of Decisions, the recorded decision covers all the worlds of the associated environment.

Proposition 3 (Correctness of the algorithm): If run quietly, the algorithm stops and: the final value of Decisions defines an optimal conditional decision; the final values of the maintained lower bounds are exact (they reach the probabilities they bound).

Thus, the set of worlds covered by Decisions grows monotonically (and so does Bad), and the interval defined by the bounds shrinks monotonically from its initial value to its final singleton value. Therefore, it is possible to use the algorithm as an "anytime" algorithm: Decisions tends to a cover of the good worlds and Bad tends to the set of bad worlds.
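For concreteness, the decomposition of Section 4.1, together with the incremental probability bookkeeping that Algorithm 3 relies on, might be sketched as follows (a sketch under the independence assumption; the tuple-of-sets representation, the names, and the early-return convention are ours):

```python
def env_prob(env, pr):
    """Probability of an environment E = E1 x ... x Ep under independence:
    the product over parameters of the summed value probabilities."""
    p = 1.0
    for i, values in enumerate(env):
        p *= sum(pr[i][v] for v in values)
    return p

def dec(E, F, pr):
    """Decompose environment E by F (after Freuder and Hubbe's subdomain
    extraction): return disjoint environments whose union is E minus F,
    plus the probability of the intersection of E and F.
    Environments are tuples of sets, one set per parameter."""
    pieces, prefix = [], []
    for i, (Ei, Fi) in enumerate(zip(E, F)):
        out, inter = Ei - Fi, Ei & Fi
        if out:
            pieces.append(tuple(prefix) + (out,) + E[i + 1:])
        if not inter:          # E and F are disjoint: pr(E ∩ F) = 0
            return pieces, 0.0
        prefix.append(inter)
    return pieces, env_prob(tuple(prefix), pr)

# Two parameters; E is the top of the lattice, F = {a} x {c}:
pr = [{'a': 0.3, 'b': 0.7}, {'c': 0.5, 'd': 0.5}]
E = ({'a', 'b'}, {'c', 'd'})
F = ({'a'}, {'c'})
print(dec(E, F, pr))  # [({'b'}, {'c','d'}), ({'a'}, {'d'})] and 0.15
```

The two returned pieces are disjoint and, together with E ∩ F = {a} × {c}, exactly cover E, which is the invariant Algorithm 3 needs when subtracting newly covered worlds.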
So far we have assumed that the applicability of a conditional decision is taken for granted (since, according to assumption (CK), the actual world will be revealed before the deadline for acting). We could consider more complex situations where further knowledge about the world can be learned by means of knowledge-gathering actions; another problem consists then in finding a relevant set of knowledge-gathering actions, sufficient to make the agent able to act properly. This is left for further research (see [5] for further considerations on this point).

5 Related work
In the field of constraint satisfaction and automated reasoning: partial and valued CSPs [6, 15] define general frameworks that extend the traditional CSP framework in a similar direction, but without any distinction between decision variables and parameters. Another related area of research is dynamic constraint satisfaction [4], where several techniques are proposed in order to solve a sequence of CSPs that differ only in some constraints, more efficiently than by naively solving each CSP one after the other. Such improvements could immediately be incorporated in the algorithm proposed. The computation of a compact representation of (partial) solutions could probably be performed using Finite Automata or Ordered Binary Decision Diagrams [2]. OBDDs are used to solve problems issued from propositional logic, closely related to CSPs. In fact, an OBDD could represent (using exponential memory and time in the worst case) all the partial solutions of a probabilistic CSP by shifting to propositional logic.

In the field of decision analysis: deciding under uncertainty has been studied for long in decision theory and has recently been applied to planning. There are a number of computational approaches to decision analysis, which all distinguish in one way or another parameters (subject to probability distributions) and decision variables; the most common of them include influence diagrams [9], Markov decision processes, and also valuation-based systems [16]. On the one hand, these computational frameworks tackle much more general decision problems, since they consider sequences of decisions (and also utility values), while ours tackles only one-step decisions. On the other hand, our framework describes the constraints between dependent variables by means of an elaborate representation language equipped with elaborate computational tools (namely constraint satisfaction problems), while influence diagrams and other approaches do not focus on this representation issue and represent these relationships more explicitly. Both kinds of approaches are however somewhat complementary, and one could think of extending our framework so as to integrate for instance influence diagrams (for the nice representation of the dependency structure of the decision problem) and constraint satisfaction (for the representation of the constraints between dependent variables). An interesting particular problem related to our framework is in Qi and Poole [13], who discuss a navigation problem in U-graphs, i.e., graphs where some of the edges are uncertain (the probability associated to an uncertain edge is the probability that the connection between two vertices is traversable); the static version of this problem may be encoded by a probabilistic CSP where a parameter is associated to each uncertain edge.

In the field of knowledge representation: recently, Boutilier [1] has proposed a logical, qualitative basis for decision theory, distinguishing as we did between controllable and uncontrollable propositions; however, while we look for conditional decisions, Boutilier's very cautious strategy looks only for a pure decision (the best one maximizing the worst possible outcome). [12] also discusses controllability issues in an abduction-based framework for planning.

6 Concluding remarks
Our contribution was mainly in extending the CSP framework in order to deal with decision problems under uncertainty, by distinguishing between controllable variables and uncontrollable parameters and by representing the knowledge of the world by a probability distribution on the parameters. This framework can be considered as a first step to embed decision theory into constraint satisfaction; two other important steps in this direction will consist in considering utilities and sequences of decisions. Utility functions would enable us to represent flexibility, and it should not be significantly harder to embed them in our framework: in the constraints of C, an extra field is added to each tuple, namely the utility of the corresponding variable assignment. Forbidden tuples get the lowest utility, and non-flexible constraints allow only two utility degrees for the tuples. Extending our framework to sequences of decisions is significantly harder. The partition of the variables is not sufficient: not only are decisions influenced by parameters, but the value of some parameters may also be influenced by some earlier decisions; for this we think of reusing ideas from influence diagrams [9] or causal networks (e.g., [3]) by structuring decision variables and parameters in a directed acyclic network (not to be confused with the undirected constraint graph of the CSP), a link from a parameter to a decision variable meaning that the allowed values of the decision variable depend on the parameter, and a link from a decision variable to a parameter meaning that the decision of assigning a value to the variable has, or may have, some effects on the parameter.

An interesting potential application of probabilistic CSPs (and their possible extensions) is planning under uncertainty. A preliminary version of probabilistic CSP has been applied to an agricultural planning problem [11]. Clearly, this problematic needs the notion of controllability, and would certainly gain a lot in being encoded in an extension of the CSP framework such as the one presented here.
A Constraint Satisfaction Model of Causal Learning and Reasoning

York Hagmayer (york.hagmayer@bio.uni-goettingen.de)
Department of Psychology, University of Göttingen, Gosslerstr. 14, 37073 Göttingen, Germany

Michael R. Waldmann (michael.waldmann@bio.uni-goettingen.de)
Department of Psychology, University of Göttingen, Gosslerstr. 14, 37073 Göttingen, Germany

Abstract
Following up on previous work by Thagard (1989, 2000) we have developed a connectionist constraint satisfaction model which aims at capturing a wide variety of tasks involving causal cognitions, including causal reasoning, learning, hypothesis testing, and prediction. We will show that this model predicts a number of recent findings, including asymmetries of blocking, and asymmetries of sensitivity to structural implications of causal models in explicit versus implicit tasks.

Introduction
Causal reasoning has been widely investigated during the last decade, which has led to a number of interesting novel findings (see Shanks, Holyoak, & Medin, 1996; Hagmayer & Waldmann, 2001, for overviews). For example, it has been shown that participants' causal judgments are sensitive to the contingency between the cause and the effect, and that people's judgments reflect the causal models underlying the observed learning events (see Hagmayer & Waldmann, 2001; Waldmann, 1996). Moreover, causal reasoning has been studied in the context of a number of different tasks, such as learning, reasoning, categorization, or hypothesis testing.

Most psychological theories and computational models of causal learning and reasoning are rooted in two traditions. They are either based on associationistic or on probabilistic or Bayesian models (see Shanks et al., 1996; Thagard, 2000). Both kinds of models have been criticized. Associationistic learning networks have proven unable to capture the fundamental semantics of causal models because they are insensitive to the differences between learning events that represent causes versus effects (see Waldmann, 1996). By contrast, Bayesian networks are perfectly capable of representing causal models with links directed from causes to effects (see Pearl, 2000). However, although the goal of these networks is to reduce the complexity of purely probabilistic reasoning, realistic Bayesian models still require fairly complex computations, and they presuppose competencies in reasoning with numerical probabilities which seem unrealistic for untutored people (see Thagard, 2000, for a detailed critique of these models).

The aim of this paper is to introduce a more qualitatively oriented, connectionist constraint satisfaction model of causal reasoning and learning. Our model is inspired by Thagard's (2000) suggestion that constraint satisfaction models may qualitatively capture many insights underlying normative Bayesian network models, in spite of the fact that constraint satisfaction models use computationally far simpler, and therefore psychologically more realistic, processes.
The model differs from standard associationist learning models (e.g., Rescorla & Wagner, 1972) in that it is capable of expressing basic differences between causal models. Our model embodies a uniform mechanism of learning and reasoning, which assesses the fit between data and causal models. This architecture allows us to model a wide range of different tasks within a unified model, tasks which in the literature have so far been treated as separate, such as learning and hypothesis testing.

Constraint Satisfaction Models
Constraint satisfaction models (Thagard, 1989, 2000) aim at capturing qualitative aspects of reasoning. Their basic assumption is that people hold a set of interconnected beliefs. The beliefs pose constraints on each other: they either support each other, contradict each other, or are unrelated. Coherence between the beliefs can be achieved by processes which attempt to honor these constraints.

Within a constraint satisfaction model, beliefs are represented as nodes which represent propositions (e.g., "A causes B"). The nodes are connected by symmetric relations. The numerical activation of a node indicates the strength of the belief in the proposition. A belief that is highly activated is held strongly; a belief that is negatively activated is rejected. The activation of a node depends on the activation of all other nodes with which it is connected. More precisely, the net input to a single node j from all other nodes i is defined as the weighted sum of the activations a of all related nodes (following Thagard, 1989, p. 466, eq. 5):

net_j = Σ_i w_ij · a_i(t)    (1)

The weights w represent the strength of the connection between the beliefs. In our simulations, they are generally pre-set to default values which are either positive or negative and remain constant throughout the simulation. At the beginning of the simulations, the activations of the nodes representing hypotheses are set to a low default value. However, nodes representing empirical evidence are connected to a special activation node whose activation remains constant at 1.0. This architecture allows us to capture the intuition that more faith is put into empirical evidence than into theoretical hypotheses (see Thagard, 1989). To update the activations in each cycle of the simulation, first the net input net_j to each node is computed using Equation 1. Second, the activation of all nodes is updated using the following equation (Thagard, 1989, p. 446, eq. 4):

a_j(t+1) = a_j(t)·(1 − θ) + net_j·(max − a_j(t))   if net_j > 0
a_j(t+1) = a_j(t)·(1 − θ) + net_j·(a_j(t) − min)   otherwise    (2)

In Equation 2, θ is a decay parameter that decrements the activity of each node in every cycle, min represents the minimum activation (-1) and max the maximum activation (+1). The activations of all nodes are updated until a stable equilibrium is reached, which means that the activations of all nodes no longer substantially change. To derive quantitative predictions it would be necessary to specify rules that map the final activations to different types of responses.
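For concreteness, Equations 1 and 2 can be turned into a small settling routine; this is our own sketch (the function name, the θ value, the convergence test, and the safety clipping are our assumptions; the ±1 activation bounds, the special activation node, and the 0.01 initial activation follow the defaults reported below):

```python
import numpy as np

def settle(W, evidence, theta=0.05, min_a=-1.0, max_a=1.0,
           init=0.01, tol=1e-4, max_cycles=1000):
    """Iterate Equations 1 and 2 until the activations stabilize.
    W[i, j] is the symmetric weight between nodes i and j; evidence[j] is
    the weight from the special activation node (clamped at 1.0) to node j."""
    n = W.shape[0]
    a = (np.full(n, float(init)) if np.isscalar(init)
         else np.asarray(init, dtype=float).copy())
    for _ in range(max_cycles):
        net = W @ a + evidence                 # Equation 1, plus evidence input
        new_a = np.where(net > 0,
                         a * (1 - theta) + net * (max_a - a),  # Equation 2
                         a * (1 - theta) + net * (a - min_a))
        new_a = np.clip(new_a, min_a, max_a)   # keep activations in [min, max]
        if np.max(np.abs(new_a - a)) < tol:    # stable equilibrium reached
            return new_a
        a = new_a
    return a
```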
This is an important goal which should be addressed in future research. In the present article we only derive ordinal, qualitative predictions from the model.

The Model
Following causal-model theory (Waldmann, 1996) we assume that people typically enter causal tasks with initial assumptions about the causal structure they are going to observe. Even though specific knowledge about causal relations may not always be available, people often bring to bear knowledge about abstract features of the models, such as the distinction between events that refer to potential causes and events that refer to potential effects. In virtually all psychological studies this information can be gleaned from the initial instructions and the materials (see Waldmann, 1996).

Figure 1 displays an example of how the model represents a causal model. The nodes represent either causal hypotheses or observable events. The causal hypothesis node at the top represents a structural causal hypothesis (H1), in this case the hypothesis that the three events e1, e2, x form a common-effect structure with e1 and e2 as the two alternative causes and x as the common effect. The two nodes on the middle level refer to the two causal relations H2 and H3 that are part of the common-effect model with two causes and a single effect. The nodes on the lowest level refer to all patterns of events that can be observed with three events (a dot represents "and"). On the left side, the nodes represent patterns of three events, in the middle pairs, and on the right side single events. Not only the present but also the corresponding absent events are represented within this model (for example ~x). The links connecting the nodes represent belief relations. Thus, they do not represent probabilities or causal relations as in Bayesian models. There are two different kinds of connections between the nodes: solid lines indicate excitatory links, dashed lines inhibitory links.

How are the connections defined? A connection is positive if the propositions support each other. For example, if all three events are present, the observation is in accordance with both hypotheses H2 and H3. This pattern might be observed if both e1 and e2 cause x. Therefore the evidence node e1.e2.x is positively connected to H2 and H3. In general, a hypothesis is positively connected to an evidence node if the events mentioned in the hypothesis are either all present or all absent. If this is not the case, that is, if one of the relevant events specified in the hypothesis is absent, the link is assigned the negative default value. Exploratory studies have shown that participants share a common intuition about whether a certain pattern of events supports or contradicts a hypothesis (Hagmayer & Waldmann, 2001). The assigned weights mirror these general intuitions. The weights of the links remain the same throughout the simulations. Figure 1 does not display the special activation node. This node was pre-set to 1.0 and attached to event nodes describing present events in the respective experiment.

[Figure 1: The constraint satisfaction model of causal learning and reasoning. See text for further explanations.]
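Under these conventions, the Figure 1 fragment for a common-effect model might be assembled as follows, building on the settle() sketch above (the node selection, the two example patterns, and the frequency split are illustrative; the ±0.05 and -0.20 weights are the defaults reported below):

```python
import numpy as np

# Nodes: 0 = H1 (common-effect structure), 1 = H2: e1 -> x, 2 = H3: e2 -> x,
#        3 = evidence pattern e1.e2.x,     4 = evidence pattern e1.~e2.x
W = np.zeros((5, 5))
def link(i, j, w):
    W[i, j] = W[j, i] = w                # belief relations are symmetric

link(0, 1, 0.05); link(0, 2, 0.05)       # structure hypothesis supports H2, H3
link(1, 2, -0.20)                        # inhibitory link: cue competition
link(3, 1, 0.05); link(3, 2, 0.05)       # e1.e2.x: all relevant events present
link(4, 1, 0.05); link(4, 2, -0.05)      # e1.~e2.x: supports H2, contradicts H3

# The special node's constant activation (1.0) is divided among the evidence
# nodes according to the relative frequencies of the observed patterns.
evidence = np.array([0.0, 0.0, 0.0, 0.7, 0.3])
print(settle(W, evidence))               # final activations of all five nodes
```

For a common-cause model the only change would be to drop the inhibitory link between nodes 1 and 2.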
In Figure 1, the dashed line between the hypotheses H2 and H3, which signifies an inhibitory link, is of special interest. The network represents a common-effect structure. This means that there are two causes e1 and e2 which compete in explaining the occurrence of effect x. Therefore the two hypotheses referring to the individual causal relations have to be connected by an inhibitory link (see also Thagard, 2000). However, both hypotheses H2 and H3 are positively connected to the structural hypothesis H1. By contrast, a common-cause structure is represented slightly differently. In such a structure, event x would be the common cause of the two effects e1 and e2 (i.e., H1: x → e1.e2). A model of this structure looks almost identical to the one for the common-effect structure in Figure 1. There is only one very important difference. Because there is no competition between the effects of a common cause, a common-cause model has no inhibitory link between H2 and H3. All other nodes and links in the two models are identical.

Both the common-effect and the common-cause model were implemented using Microsoft Excel. Default values were adopted from the literature if not indicated otherwise (Thagard, 1989). Initial activations were set to 0.01, inhibitory links between nodes to -0.05, and excitatory links to +0.05. The inhibitory link between H2 and H3 within the common-effect model was pre-set to a value of -0.20. The special activation node was attached to all evidence nodes. The additional activation was divided among the evidence nodes according to the relative frequency of the evidence in the learning input. This principle captures the intuition that more faith is put into evidence that is observed more frequently.

Evaluation
In order to evaluate the proposed constraint satisfaction model, different tasks and paradigms from the literature on causal learning and reasoning were modeled. One of our main goals was to show that the same architecture can be used to simulate different types of tasks. However, different tasks required different sections of the model depicted in Figure 1. We used two principles for the construction of task-specific networks. The first principle is that we only included the event nodes that corresponded to the event patterns observed in the learning phase, or that corresponded to events that have to be evaluated or predicted in the test phase. For example, to model a task in which only event triples were shown, only the event nodes on the left side of the event layer in Figure 1 would be incorporated in the model. However, if the task following the learning phase required the prediction of single events, the corresponding nodes for single events would have to be added to the event layer. The second principle is that only the hypothesis nodes were included that represent hypotheses that are given or suggested to participants. These two principles ensure that for each paradigm a minimally sufficient sub-model of the complete model is instantiated.

Test 1: Asymmetries of Blocking
Blocking belongs to the central phenomena observed in associative learning which, among other findings, have motivated learning rules that embody cue competition (e.g., Rescorla & Wagner, 1972). A typical blocking experiment consists of two learning phases. In Phase 1 participants learn that two events e1 and x are either both present or absent. In Phase 2 a third event e2 is introduced. Now all three events are either present or absent. In both phases, events e1 and e2 represent cues and x the outcome to be predicted. Associative theories generally predict a blocking effect, which means that participants should be reluctant about the causal status of the redundant event e2 that has been
constantly paired with the predictive event e1 from Phase 1. This prediction has come under attack by recent findings showing that the blocking effect depends on the causal model learners bring to bear on the task (see Waldmann, 1996, 2000). If participants assume that e1 and e2 are the causes of x (common-effect structure), a blocking effect can be seen. In contrast, if participants assume that e1 and e2 are the collateral effects of the common cause x (common-cause structure), no blocking of e2 is observed. In this condition, learners tend to view both e1 and e2 as equally valid diagnostic cues of x.

To model blocking, we used a network that was extended after Phase 1. In Phase 1, the net consisted of a hypothesis node (H2) and the nodes for patterns of two events (e1, x). After Phase 1, the final activation of the hypothesis node was transferred to Phase 2. In Phase 2, the network consisted of two nodes for the two causal hypotheses (H2 and H3), and nodes that represented patterns of three events, the patterns participants observed within the learning phase. Furthermore, the node H1 was included, which, depending on the condition, either coded a common-cause or a common-effect hypothesis. The nodes for the event pairs from Phase 1 were removed.

Figure 2 shows the activation of the two hypotheses referring to the causal relations in Phases 1 and 2. Figure 2A depicts the activation for the common-cause model and Figure 2B for the common-effect model.

[Figure 2A: Simulation of a blocking paradigm (Test 1). Activation of hypothesis nodes for a common-cause model. The solid line represents the activation of H2: x→e1, the dotted line of H3: x→e2. Phase 2 started at the 101st cycle.]

The model shows no blocking for event e2 in the context of the common-cause model. It quickly acquires the belief that there is a causal connection between x and e2.

[Figure 2B: Simulation of a blocking paradigm (Test 1). Activation of hypothesis nodes for a common-effect structure. The upper line represents the activation of H2: e1→x, the lower line of H3: e2→x. Phase 2 started at the 101st cycle.]

For the common-effect model the simulation shows blocking of the second cause; that is, the second hypothesis is believed to be wrong. Thus, the simulations closely correspond to the empirical finding that blocking interacts with the structure of the causal model used to interpret the learning data.
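A two-phase run in the spirit of Test 1 can be sketched with the same helpers (again our own illustration with reduced node sets; as in the paper, the final Phase 1 activation of H2 seeds Phase 2, here via the init argument):

```python
import numpy as np

# Phase 1: nodes 0 = H2 (e1 -> x), 1 = evidence pattern e1.x
W1 = np.array([[0.0, 0.05],
               [0.05, 0.0]])
a1 = settle(W1, evidence=np.array([0.0, 1.0]))

# Phase 2, common-effect condition: 0 = H1, 1 = H2, 2 = H3, 3 = e1.e2.x
W2 = np.zeros((4, 4))
for i, j, w in [(0, 1, 0.05), (0, 2, 0.05),   # H1 supports H2 and H3
                (1, 2, -0.20),                # cue competition
                (3, 1, 0.05), (3, 2, 0.05)]:  # the triple supports both
    W2[i, j] = W2[j, i] = w
init2 = np.array([0.01, a1[0], 0.01, 0.01])   # carry H2's activation over
a2 = settle(W2, evidence=np.array([0.0, 0.0, 0.0, 1.0]), init=init2)
print("H2:", a2[1], "H3:", a2[2])  # H3 should settle lower: blocking
# Dropping the (1, 2) link simulates the common-cause condition, where H3
# catches up with H2 and no blocking is expected.
```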
Test 2: Testing Complex Causal Hypotheses
The first test of the model used a phenomenon from the literature on causal learning. We now want to turn to a completely different paradigm: hypothesis testing. In experiments on causal learning participants are typically instructed about a causal structure, and the task is to learn about the causal relations within the structure. They are not asked whether they believe that the structure is supported by the learning data or not. In recent experiments (Hagmayer, 2001; Hagmayer & Waldmann, 2001) we gave participants the task to test a complex causal model hypothesis. For example, we asked them whether three observed events support a common-cause hypothesis or not. Normatively this task should be solved by testing the implications of the given structural hypothesis. For example, a common-cause model implies a (spurious) correlation of the effects of the single common cause. In contrast, a common-effect structure does not imply a correlation of the different causes of the joint effect. Unless there is an additional hidden event that causes a correlation among the causes, they should be uncorrelated.

In the experiment, participants were given data which either displayed a correlation between all three events (data set 1) or correlations between e1-x and e2-x only, that is, e1 and e2 were marginally independent in this data (data set 2). Data set 1 was consistent with a common-cause hypothesis, which implies correlations between all three events. In contrast, data set 2 favors the common-effect hypothesis with x as the effect and e1 and e2 as independent causes. However, in a series of experiments we found that participants were not aware of these differential structural implications when testing the two hypotheses. Instead they checked whether the individual causal relations within the complex structures held (e.g., e1-x). Thus, participants dismissed a hypothesis if one of the assumed causal links was missing. However, they proved unable to distinguish between the common-cause and the common-effect structure when both structures specified causal connections between the same events (regardless of the direction).

To model this task we used the model without the nodes for event pairs and individual events. The special activation node was connected to the patterns of three events. As before, the activation of the individual event patterns was proportional to the frequency of the respective pattern in the data. To test the model, we used three sets of data: either all three events were correlated (data set 1), or e1 and x, and e2 and x, were correlated while e1 and e2 were marginally independent (data set 2), or e1 and x, and e1 and e2, were correlated while e2 and x were uncorrelated (data set 3). As competing hypotheses we either used a common-cause model with x as the common cause, or a common-effect model with x as the common effect. Figure 3 shows the activation of the node H1, which represents the hypothesis that the respective causal model underlies the observed data. Figure 3A shows the results for the common-cause hypothesis, Figure 3B for the common-effect hypothesis. The results clearly mirror the judgments of our participants.
Whenever the two assumed causal relations within either causal model were represented in the data,the structural hypothesis was accepted(solid lines),if one link was miss-ing the hypothesis was rejected(dotted line).One slight deviation from our empirical findings was ob-served.In early cycles there seems to be an effect favoring the common-effect hypothesis with data consistent with this hypothesis.However,the difference between the hypotheses is relatively small and further decreases after100updating cycles.Thus,the results are consistent with participants’insensitivity to structural implications of causal models in hypothesis testing tasks.Figure3A:Activation of hypothesis node H1for a com-mon-cause model(Test2).The solid lines represent the activations for data set1and2,the dotted line the activa-tions for data set3.Figure3B:Activation of hypothesis node H1for a com-mon-effect model(Test2).The solid lines represent the activations for data set1and2,the dashed line at thebottom the activations for data set3Why does the model not differentiate between the two causal structures?The reason is that it is assumed that com-plex structural hypotheses are not directly linked to empiri-cal evidence.In our model empirical evidence is connected to the hypotheses that represent individual causal links which in turn are linked to more complex model-related hypotheses.This architecture allows it to model learning and hypothesis testing within the same model.It also seems to capture the empirical finding that participants can easily decide whether a certain pattern of events supports a simple causal hypothesis,but have a hard time to relate event pat-terns to complex causal hypotheses.Test3:Causal InferencesIn the previous section we have mentioned studies showing insensitivity to spurious relations implied by causal models.A last test for our model is a task in which participants have to predict other events under the assumption that a certain causal model holds.Interestingly we have empirically dem-onstrated sensitivity to structural implications of causal models in this more implicit task(Hagmayer&Waldmann, 2000).In this task participants do not have to evaluate the validity of a causal model in light of observed evidence but rather are instructed to use causal models when predicting individual events.In our experiments we presented partici-pants with two learning phases in which they learned about two causal relations one at a time.Thus,in each phase par-ticipants only received information about the presence and absence of two events(x and e1,or x and e2).They never saw patterns of all three events during the experiment.The initial instructions described the two causal relations,which were identically presented across conditions,either as parts of a common-cause model with x as the cause or as part of a common-effect model with x as the effect.After participants had learned about the two causal relations we asked them to predict whether e1and e2were present given that x was present.We found that participants were more likely to pre-dict that both e1and e2would co-occur when x was viewed as the common cause than when it was seen as a common effect.Thus,in this more implicit task the predictions ex-pressed knowledge about structural implications of causal models.In particular,the patterns the participants predicted embodied a spurious correlation among the effects of a com-mon cause,whereas the causes of a common effect tended to be marginally uncorrelated in the predicted patterns.By contrast,in a 
By contrast, in a more direct task which required explicit judgments about correlations, no such sensitivity was observed, which is consistent with the results reported in the previous section.

To model this experiment we eventually used the complete network depicted in Figure 1, which was successively augmented according to our two principles. In Phase 1, the learning phase, patterns of two events were connected to the hypotheses H2 and H3. Depending on the learning condition, these two hypotheses were either linked to a common-cause or a common-effect hypothesis (H1). The activations of the hypothesis nodes at the end of Phase 1 were used as initial activation values in Phase 2. In Phase 2 the model consisted of the three hypothesis nodes, the nodes for patterns of three events, and the nodes representing single events. The single-event nodes were included because the task required the prediction of individual events. The special activation node was now attached to event x. The model then predicted the other two individual events and patterns of all three events.

The model quickly learned the causal relations during Phase 1 of the experiment. Figure 4 depicts the results of Phase 2. Figure 4A shows the predictions of the model for the condition in which participants assumed a common-cause model; Figure 4B shows the results for the common-effect condition. The results of the simulations are consistent with the behavior we observed in our participants. When the model assumes a common-cause model, the presence of x leads to a high positive activation of the two effects e1 and e2. This means that the model tends to prefer the prediction that the two effects of a common cause co-occur. In contrast, for the common-effect structure the model does not show such a preference. In this condition, both causes or either one of them equally qualify as possible explanations of the observed effect. This means that our model, similar to the one Thagard (2000) has proposed, tends to "explain away" the second cause when one of the competing causes is present. This is a consequence of the competition between the two causal hypotheses H2 and H3.

Figure 4A: Implicit causal inferences (Test 3). Activation of single-event nodes for the common-cause model: event x (top), events e1 and e2 (bottom).

Figure 4B: Implicit causal inferences (Test 3). Activation of single-event nodes for the common-effect model: event x (top), event e1 (middle), event e2 (bottom).
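The Phase 2 simulation is, at bottom, spreading activation through a small constraint network until it settles. The sketch below illustrates the idea with a standard interactive-activation update of the kind used in Thagard's (2000) coherence models; the node names (the special activation node, x, H1–H3, e1, e2) follow the text, but the wiring, weights, decay, and update rule are our illustrative assumptions, not the paper's actual parameters.

```python
# Minimal sketch of interactive-activation settling for the Phase 2
# simulation.  Weights and parameters are illustrative assumptions.

DECAY, A_MAX, A_MIN = 0.05, 1.0, -1.0

# Symmetric links: positive weights support, negative weights compete.
links = {
    ("special", "x"): 0.5,   # the special activation node drives event x
    ("x", "H2"): 0.3,        # observed x supports both causal-link hypotheses
    ("x", "H3"): 0.3,
    ("H1", "H2"): 0.4,       # the model-level hypothesis supports its links
    ("H1", "H3"): 0.4,
    ("H2", "e1"): 0.4,       # link hypotheses drive the single-event nodes
    ("H3", "e2"): 0.4,
    ("H2", "H3"): -0.2,      # competition between the two causal hypotheses
}

def settle(activations, links, cycles=100):
    """Repeatedly decay each node and push it toward +/-1 by its net input."""
    a = dict(activations)
    for _ in range(cycles):
        new = {}
        for node in a:
            if node == "special":            # the special node stays clamped
                new[node] = 1.0
                continue
            net = 0.0
            for (i, j), w in links.items():  # links act in both directions
                if i == node:
                    net += w * a[j]
                elif j == node:
                    net += w * a[i]
            if net > 0:
                new[node] = a[node] * (1 - DECAY) + net * (A_MAX - a[node])
            else:
                new[node] = a[node] * (1 - DECAY) + net * (a[node] - A_MIN)
        a = new
    return a

# Common-cause condition: H1..H3 start from their (assumed) Phase 1 values.
start = {"special": 1.0, "x": 0.0, "H1": 0.6, "H2": 0.6, "H3": 0.6,
         "e1": 0.0, "e2": 0.0}
result = settle(start, links)
print({k: round(v, 2) for k, v in result.items()})  # e1 and e2 settle positive
```

In this common-cause wiring, clamping x pushes e1 and e2 jointly positive; strengthening the inhibitory H2–H3 link relative to the H1-mediated support would instead reproduce the "explaining away" pattern described for the common-effect condition.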
Discussion

A constraint satisfaction model of causal learning and reasoning was presented in this paper that extends the architecture and scope of the model proposed by Thagard (2000). Thagard's model focuses upon causal explanations of singular events and belief updating. Our aim was to create a model that makes it possible to model both learning and reasoning within causal models. The model was successfully applied to three different tasks. It modeled people's sensitivity to structural implications of causal models in tasks involving learning and predictions, whereas the same model also predicted that people would fail in tasks which require explicit knowledge of the statistical implications of causal models.

One question that might be raised is whether the proposed model really captures learning or just models causal judgment. In our view, the concept of learning does not necessarily imply incremental updating of associative weights. Our model embodies a hypothesis-testing approach to learning which assumes that learners modify the strength of belief in deterministic causal hypotheses based on probabilistic learning input. This view also underlies recent Bayesian models of causality (Pearl, 2000). In the model, the activation (i.e., degree of belief) of the hypothesis nodes is modified based on the learning input. This way the model is capable of modeling trial-by-trial learning as well as learning based on summary data within the same architecture.

Thus far we have pre-set the weights connecting evidence and hypotheses. In our view, the assigned values reflect everyday qualitative intuitions about whether an event pattern supports or contradicts a hypothesized causal relation. These weights remained constant throughout the simulations. Despite this restriction the model successfully predicted empirical phenomena in learning and reasoning. However, pre-setting these weights is not a necessary feature of the model. It is possible to add a learning component that acquires knowledge about the relation between event patterns and hypotheses based on feedback in a prior learning phase (see Wang et al., 1998, for a model adding associative learning to Echo).

In summary, our constraint satisfaction model seems to offer a promising new way to model causal learning and reasoning. It is capable of modeling phenomena in a wide range of different tasks, which thus far have been treated as separate in the literature. Relative to normative Bayesian models, our connectionist model makes it possible to simulate a large number of different tasks and phenomena while using fairly simple computational routines. It proved capable of capturing a number of recent phenomena that have presented problems for extant models of causal cognition. More tests of the model clearly seem warranted.

References

Hagmayer, Y. (2001). Denken mit und über Kausalmodelle. Unpublished doctoral dissertation, University of Göttingen.

Hagmayer, Y., & Waldmann, M. R. (2000). Simulating causal models: The way to structural sensitivity. In L. R. Gleitman & A. K. Joshi (Eds.), Proceedings of the Twenty-Second Annual Conference of the Cognitive Science Society (pp. 214-219). Mahwah, NJ: Erlbaum.

Hagmayer, Y., & Waldmann, M. R. (2001). Testing complex causal hypotheses. In M. May & U. Oestermeier (Eds.), Interdisciplinary perspectives on causation (pp. 59-80). Bern: Books on Demand.

Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge: Cambridge University Press.

Rescorla, R. A., & Wagner, A. R. (1972). A theory of Pavlovian conditioning: Variations in the effectiveness of reinforcement and non-reinforcement. In A. H. Black & W. F.
Prokasy (Eds.), Classical conditioning II: Current research and theory (pp. 64-99). New York: Appleton-Century-Crofts.

Shanks, D. R., Holyoak, K. J., & Medin, D. L. (Eds.) (1996). The psychology of learning and motivation, Vol. 34: Causal learning. San Diego: Academic Press.

Thagard, P. (1989). Explanatory coherence. Behavioral and Brain Sciences, 12, 435-467.

Thagard, P. (2000). Coherence in thought and action. Cambridge, MA: MIT Press.

Waldmann, M. R. (1996). Knowledge-based causal induction. In D. R. Shanks, K. J. Holyoak & D. L. Medin (Eds.), The psychology of learning and motivation, Vol. 34: Causal learning (pp. 47-88). San Diego: Academic Press.

Waldmann, M. R. (2000). Competition among causes but not effects in predictive and diagnostic learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 53-76.

Wang, H., Johnson, T. R., & Zhang, J. (1998). UEcho: A model of uncertainty management in human abductive reasoning. In M. A. Gernsbacher & S. R. Derry (Eds.), Proceedings of the Twentieth Annual Conference of the Cognitive Science Society (pp. 1113-1118). Mahwah, NJ: Erlbaum.
A Generalized Framework for Constraint Planning

Roman Barták
Department of Theoretical Computer Science
Charles University
Malostranské náměstí 2/25, Praha 1, Czech Republic
e-mail: bartak@kti.mff.cuni.cz
URL: http://kti.ms.mff.cuni.cz/~bartak/
phone: +420-2 21 91 42 42
fax: +420-2 53 27 42

Abstract

Constraint hierarchies have been proposed to solve over-constrained systems of constraints by specifying constraints with hierarchical preferences. They are widely used in HCLP, CIP and graphical user interfaces. A declarative expression of preferred constraints and the existence of efficient satisfaction algorithms are the advantages of constraint hierarchies. At present, there exist many relatively independent constraint hierarchy solvers/satisfaction algorithms that can be classified into two categories, refining and local propagation algorithms. While the local propagation algorithms are fast but limited to equality (functional) constraints, the more general refining algorithms are not incremental.

In this paper we propose a generalized framework for solving constraint hierarchies, in particular for the constraint planning stage. Our approach is based on ideas of local propagation; however, it suppresses the limits of local propagation, i.e., the restriction to the LPB comparator and to functional constraints. We introduce a new notion of a constraint cell and we generalize the concept of a constraint network. We also show that the proposed framework covers the refining method as well. Finally, we informally present an algorithm for constraint planning that is an instance of our framework. The proposed algorithm fits in our other concept of a plug-in architecture of constraint hierarchy solvers.

Keywords: constraint hierarchy, constraint solvers, constraint planning, constraint network.

(The research was partially supported by the Grant Agency of the Czech Republic under contract No. 201/96/0197.)

1. INTRODUCTION

Constraint hierarchies were introduced for describing over-constrained systems of constraints by specifying constraints with hierarchical strengths or preferences. (Another method for describing over-constrained systems is PCSP, Partial Constraint Satisfaction Problems.) This allows one to specify declaratively not only the constraints that are required to hold, but also weaker, so-called soft constraints at an arbitrary number of strengths. Weakening the strength of constraints helps to find a solution of a previously over-constrained system of constraints. Intuitively, the stronger a constraint is, the more it influences the solution of the hierarchy. Moreover, constraint hierarchies allow "relaxing" of constraints with the same strength by applying, e.g., weighted-sum, least-squares or similar methods.

This constraint hierarchy scheme can be parameterized by a comparator C that allows one to compare different possible solutions to a single hierarchy and to select the best ones. Currently, there are two widely used groups of comparators, namely locally-better and globally-better comparators. (In [23], the authors also introduce regionally-better comparators.) While the locally-better comparators consider each constraint individually, the globally-better comparators combine the errors of all constraints at a given level using some combining function. Thus, the globally-better comparators can be used for inter-hierarchy comparison [22], i.e., comparison of solutions to two or more constraint hierarchies.

The theory of constraint hierarchies was developed in [6] and it is also described in [8,22,23].

The existence of efficient satisfaction algorithms is another important aspect of constraint hierarchies.
Most current satisfaction algorithms, in other words constraint hierarchy solvers, can be classified into two groups: algorithms based on the refining method and local propagation algorithms. However, there are also other algorithms, e.g., IHCS [16], which do not fit into either of the above mentioned groups.

The refining algorithms solve a constraint hierarchy in a straightforward manner by completely satisfying the strongest level first and then the weaker levels successively. Thus, the refining method can be used for solving all constraint hierarchies using an arbitrary comparator. The refining method was first used in a simple interpreter for HCLP programs [8] and it is also employed in the DeltaStar algorithm [23] and in the hierarchical constraint logic programming language CHAL. We show later (Section 3.3) that our generalized framework for solving constraint hierarchies covers the refining method as well.

The local propagation algorithms take advantage of the potential locality of typical constraint networks, e.g., in graphical user interfaces. These algorithms gradually solve constraint hierarchies by repeatedly selecting uniquely satisfiable constraints. Many local propagation algorithms were developed for special purposes (DeltaBlue [18], SkyBlue [17], QuickPlan [20], DETAIL [10], Houria [9], Indigo [5]). Our generalized approach to solving constraint hierarchies is also primarily based on ideas of local propagation, which are further generalized to suppress the limits of local propagation.

The paper is organized as follows. After a brief introduction to constraint hierarchies and an overview of algorithms for solving hierarchies in Section 1, we identify the limits of current approaches to solving constraint hierarchies in Section 2. In Section 3, we propose a new framework for solving hierarchies; in particular, we concentrate on the planning stage of this framework. We introduce a new notion of a constraint cell and we compare it with the traditional approach to local propagation in Section 3.1. In Section 3.2, we generalize the concept of a constraint network and we justify the soundness of this generalization in Section 3.2.1. Section 3.3 is dedicated to planning algorithms for the construction of constraint networks, and we describe there a simple planning algorithm that corresponds to the traditional refining method. The experimental implementation of the planning and executing algorithms is described in Section 4. The paper is concluded with some final remarks on the proposed framework and with an Appendix containing a formal description of the simple planning algorithm.

2. LIMITS OF CURRENT ALGORITHMS

As we mentioned in the introduction, most current algorithms for solving constraint hierarchies can be classified into two groups: the refining algorithms and the local propagation algorithms.

The great advantage of the refining method is its generality, as it allows one to solve all types of constraints using an arbitrary comparator. We proved this assumption in our plug-in architecture for solving constraint hierarchies [1]. However, the generality of the refining method is paid for by losing effectiveness, as each constraint level has to be solved at a clap.
In particular, the refining method does not support incremental update of the solution after adding or retracting a constraint.

On the contrary, local propagation is a fast incremental method for resatisfying constraints in hierarchies. Basically, it is efficient because it solves one uniquely selected constraint in every step (the executing phase). In addition, when a variable is repeatedly updated, e.g., by a user operation, it can easily evaluate only the constraints necessary to get a new solution. This straightforward execution phase is paid for by a foregoing planning phase that chooses the order of constraints to satisfy. When we studied the current local propagation algorithms, we identified the following limits:

• solving conflicts among constraints is sometimes inappropriate
• local propagation cannot handle cycles of constraints
• local propagation works only with equality (functional) constraints
• local propagation supports only locally-predicate-better comparators
• local propagation cannot find multiple solutions.

As the execution phase of local propagation requires every variable to be computed by just one constraint, the foregoing planning phase has to choose among conflicting constraints which bind the variable. However, solving this conflict is sometimes impossible, e.g., when constraints have the same strength (x=1@strong, x=2@strong), and sometimes it is too restrictive, i.e., a weaker constraint (y=1@weak) is disabled (assumed unsatisfied) to enable satisfying the stronger constraint (y=1@strong) even if the weaker constraint is also satisfied.

The executing phase of local propagation is a linear process. It means that when a constraint computes the value of one of its variables, the values of all other variables in the constraint are required to be known, i.e., the values of these variables must have been computed by other constraints before. This feature disables solving a set of constraints containing the same variables, e.g., the system of equations (x+y=3, x-y=1). Such a system of constraints corresponds to a cycle in the constraint graph, hence we speak about cycles of constraints. Some local propagation algorithms solve constraint cycles by evoking an external solver [7].

We mentioned above the way a constraint is used to compute the value of one of its variables. The constraint is assumed there to be a total function that uniquely computes the value of the output variable from the values of the input variables. However, this approach disables other types of constraints like inequalities.

Every constraint which is used in the executing phase is completely satisfied, while other constraints are entirely disabled during the planning phase. This implies the application of the predicate type of comparator in classical local propagation. As every constraint is considered individually in the constraint graph, this indicates the usage of the locally-better comparator.

Local propagation also cannot find multiple solutions due to the uniqueness of satisfying functional constraints.

3. A NEW APPROACH TO CONSTRAINT PLANNING

By addressing the limits of current algorithms we made the first step to improve the generality of local propagation as well as to gain efficiency for the refining method by encapsulating both approaches into one unified framework. Our framework for solving constraint hierarchies is based on dividing the constraint hierarchy into constraint cells as much as possible.
The cells are partially ordered in a constraint network that reflects the relationships among constraints. Finally, the constraint network is traversed by an executing algorithm that propagates valuations through the constraint cells according to the partial order of cells given by the network.

While the propagation of valuations and the partial order of cells defined by the constraint network correspond to the local propagation concept, solving all constraints in a cell at a clap matches the refining method. Hence, the proposed concept of solving constraint hierarchies is sufficiently efficient and satisfactorily general at the same time.

Here, we concentrate on the construction of constraint networks, which we call the planning stage.

3.1 CONSTRAINT CELLS

One of the basic parts of our generalized framework for constraint planning is the notion of a constraint cell. Briefly speaking, a constraint cell is a set of equally preferred constraints which are solved in one step during the executing phase. Additionally, we associate a set of output variables with each constraint cell. The constraints in a constraint cell then determine the output variables of the cell in the obvious manner of local propagation.

DEFINITION 1: (constraint cell)
1) Let C be a finite non-empty set of labeled constraints with the same preference and V be the set of all variables in constraints from C. Let In, Out ⊆ V be arbitrary sets of variables such that In ∪ Out = V and In ∩ Out = ∅. We define a constraint cell as a triple (C, In, Out). For an arbitrary variable v we define a constraint cell ({}, {}, {v}) containing only the output variable v.
2) We call the sets In and Out from the constraint cell (C, In, Out) the input and output variables respectively.
3) We also say that the constraint cell (C, In, Out) determines each variable from the set Out.

In the optimum case, each constraint cell contains just one functional constraint. In that case, it is possible to exploit local propagation as much as possible and the executing phase is as effective as traditional local propagation (see Section 4). However, by enabling more constraints in the cell one can also easily handle conflicts between constraints with the same strength, and the constraint cell can naturally manage constraint cycles. In addition, by encapsulating constraints into a constraint cell we also enable the usage of more types of comparators, including globally-better ones. Finally, while traditional local propagation requires all constraints to be functions, i.e., to be able to uniquely compute the values of the output variable(s) when the values of the input variables are given, the proposed constraint cells can contain other types of constraints, e.g., inequalities. Figure 1 demonstrates the above mentioned advantages of constraint cells over traditional local propagation.

Figure 1 (traditional local propagation vs. constraint cells): conflict, constraint cycle, and a forbidden constraint type, as handled by traditional local propagation and by constraint cells.

The role of a constraint cell in the constraint network, which we formally introduce in the following section, depends on the constraints inside the cell and on the distribution of variables into the input and output sets of variables.
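To make Definition 1 concrete, here is one possible rendering of constraint cells as a data structure. The representation of individual constraints (a name, a variable set, a satisfaction predicate, and an integer strength) is our own illustrative assumption; the paper does not prescribe one. The later sketches in this section reuse these two classes.

```python
from dataclasses import dataclass, field
from typing import Callable, FrozenSet, Set

@dataclass(frozen=True)
class Constraint:
    """A labeled constraint: a predicate over valuations plus a strength label."""
    name: str
    variables: FrozenSet[str]
    check: Callable[[dict], bool]   # valuation {var: value} -> satisfied?
    strength: int                   # smaller = stronger

@dataclass(eq=False)                # eq=False keeps identity hashing, so cells
class ConstraintCell:               # can later serve as graph nodes
    """A triple (C, In, Out) as in Definition 1."""
    constraints: Set[Constraint] = field(default_factory=set)
    inputs: FrozenSet[str] = frozenset()
    outputs: FrozenSet[str] = frozenset()

    def __post_init__(self):
        if self.constraints:
            v = frozenset().union(*(c.variables for c in self.constraints))
        else:
            v = self.outputs        # the free variable cell ({}, {}, {v})
        assert self.inputs | self.outputs == v, "In ∪ Out = V must hold"
        assert not (self.inputs & self.outputs), "In ∩ Out = ∅ must hold"
        assert len({c.strength for c in self.constraints}) <= 1, \
            "all constraints in a cell share one preference"

def free_variable_cell(v: str) -> ConstraintCell:
    """The cell ({}, {}, {v}) that merely introduces variable v."""
    return ConstraintCell(set(), frozenset(), frozenset({v}))
```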
To grasp formally the differences among constraint cells, we classify the cells in the following manner.

DEFINITION 2: (classification of constraint cells)
We classify constraint cells into the following groups:
• free variable cell: ({}, {}, {v})
• functional cell: ({c@l}, In, Out) such that Out ≠ ∅ and for an arbitrary valuation θ of the variables from In there exists a unique valuation σ of the variable(s) from Out such that cθσ holds
• generative cell: (C, In, Out) such that C ≠ ∅ and Out ≠ ∅ and (C, In, Out) is not functional
• test cell: (C, In, ∅)
• potentially unsatisfied cell: a generative cell or a test cell

Traditional local propagation enables only free variable cells and functional cells. In fact, instead of the free variable x, traditional local propagation usually uses a special type of constraint of the form x=user_input that uniquely determines the value of the variable x. Therefore, there are no free variables in traditional constraint graphs, i.e., each variable is uniquely determined by some constraint/functional cell. Thus the propagation of valuations is also unique.

The planning phase of traditional local propagation decides which constraints are satisfied, and thus are used for value propagation, and which ones are disabled. The disabled constraints are assumed unsatisfied there. In our generalized approach we use a different mechanism for marking disabled, i.e., potentially unsatisfied, constraints, namely test cells. Thus, in our approach the constraints from functional cells can always be satisfied by binding the output variable(s) (see Definition 2), while the constraints from test cells can still be satisfied (this is the difference from traditional local propagation) but their satisfiability is not guaranteed. Namely, there is no degree of freedom to bind output variables in a test cell, as there are just no output variables. In other words, all variables of the constraints in a test cell are determined by other constraint cells. Note also that the result of the satisfiability test can be reflected in the value propagation (see Figure 2).

The contribution of this paper is the introduction of generative and test cells. The generative cell is a generalization of the functional cell in that it can contain a non-functional constraint, e.g., an inequality, and even a set of constraints. Let us recall that, in general, the difference between functional and generative cells depends not only on the constraint itself but also on the distribution of the constraint's variables into the sets of input and output variables. Thus, the cell consisting of the single constraint x²=y@strong is a functional cell if x is an input variable and y is an output variable, while the similar cell with output variable x and input variable y is a generative cell (see Definition 2).

Note also that while each single constraint in a functional cell can always be satisfied, the same does not hold for a generative cell. Assume again the constraint x²=y@strong. Let x be the input variable; then for an arbitrary value of x it is possible to find a unique value of y such that the constraint is satisfied (i.e., the constraint forms a functional cell). However, if y is the input variable (i.e., it is a generative cell), one cannot guarantee the satisfiability of the given constraint: for negative values of y the constraint is not satisfied, while for non-negative values of y one can find a value of x such that the constraint holds. Note also that for positive values of y, it is possible to find two values of x which satisfy the constraint. Thus, value propagation is not unique in a generative cell. A short worked version of this example follows.
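The x²=y example can be spelled out in a few lines. The solve-style interface below is our illustrative assumption; it just makes the functional/generative distinction executable.

```python
import math

def solve_square_functional(x):
    """Functional cell ({x²=y@strong}, In={x}, Out={y}): always one solution."""
    return {x * x}                      # a unique y for every x

def solve_square_generative(y):
    """Generative cell ({x²=y@strong}, In={y}, Out={x}): 0, 1 or 2 solutions."""
    if y < 0:
        return set()                    # the constraint cannot be satisfied
    if y == 0:
        return {0.0}
    r = math.sqrt(y)
    return {r, -r}                      # value propagation is not unique

print(solve_square_functional(3))       # {9}
print(solve_square_generative(4))       # {2.0, -2.0}
print(solve_square_generative(-1))      # set() -> potentially unsatisfied
```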
Figure 2 shows the differences among the above mentioned types of constraint cells. It also demonstrates value propagation through the cells.

Figure 2 (differences among various constraint cells): value propagation through functional, generative, and test cells (panel annotations include y/{1} and x/{0,1}).

Recall again that a functional cell propagates values uniquely and the constraint in a functional cell can always be satisfied, while the constraints in potentially unsatisfied cells, i.e., generative and test cells, are not guaranteed to be satisfied, value propagation is not unique there, and the potentially unsatisfied cells can even influence the values of input variables (see Figure 2). The impact of potentially unsatisfied cells on input variables develops only when the values of the input variables are not unique.

In Figure 2 we omit the strengths of constraints for clarity. Because all constraints in a constraint cell have the same strength (see Definition 1), we can bind this strength to the constraint cell. This so-called internal strength of the constraint cell, together with the cell type, will be the key for the partial ordering of cells that is discussed in the following section.

DEFINITION 3: (internal strength)
The internal strength of the constraint cell (C, In, Out) is the strength of any constraint in C. The internal strength of the constraint cell ({}, {}, {v}) is free, which is a strength that is weaker than any other strength of constraints. We denote the internal strength of Cell as i_strength(Cell) and we say that Cell is stronger than Cell' if and only if i_strength(Cell) < i_strength(Cell').

3.2 CONSTRAINT NETWORKS

In this section we concentrate on reasonable decompositions of constraint hierarchies into constraint cells. We also define a partial ordering of cells, represented by the constraint network. This partial ordering of cells guides the sound propagation of valuations through the constraint cells (see Section 3.2.1).

Before the formal definition of a hierarchy decomposition, we can identify some decompositions in current approaches to solving hierarchies. While the refining method decomposes a given hierarchy into levels of equally preferred constraints, local propagation atomizes the hierarchy into individual constraints. Both these decompositions, and many others, are covered by our approach.

DEFINITION 4: (conflict-free hierarchy decomposition)
Let H be a constraint hierarchy, i.e., a finite set of labeled constraints, and V be the set of all variables in constraints from H. We call the finite set CC of constraint cells a conflict-free decomposition of the hierarchy H if and only if the following conditions hold:
1) the constraint cells from CC consist only of constraints from H, i.e.,
   ∀Cell∈CC: Cell=(C,In,Out) & C⊆H & In⊆V & Out⊆V
2) every constraint from H is located in just one constraint cell, i.e.,
   ∀c∈H ∃!Cell∈CC: Cell=(C,In,Out) & c∈C
3) every variable from V is determined by just one constraint cell, i.e.,
   ∀v∈V ∃!Cell∈CC: Cell=(C,In,Out) & v∈Out.

The satisfaction of conditions 1) and 2) of Definition 4 guarantees that the set CC of cells is really a decomposition of the given hierarchy H. In particular, the satisfaction of the conditions In⊆V and Out⊆V implies that no new variables, i.e., variables outside the hierarchy, appear in the decomposition. The satisfaction of condition 3) of Definition 4 ensures that each variable is determined by some cell and that there is no conflict among cells in the decomposition, i.e., there is no variable that is determined by more than one cell. These three conditions translate directly into the checker sketched below.
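A sketch of such a checker, reusing the illustrative Constraint and ConstraintCell classes from the Section 3.1 sketch (again an assumption of ours, not the paper's code):

```python
def is_conflict_free_decomposition(hierarchy: set, cells: list) -> bool:
    """Check the three conditions of Definition 4."""
    V = frozenset().union(*(c.variables for c in hierarchy)) \
        if hierarchy else frozenset()

    for cell in cells:
        # 1) only constraints from H, with In and Out drawn from V
        if not (cell.constraints <= hierarchy
                and cell.inputs <= V and cell.outputs <= V):
            return False

    # 2) every constraint sits in exactly one cell
    if any(sum(c in cell.constraints for cell in cells) != 1
           for c in hierarchy):
        return False

    # 3) every variable is determined (is an output) by exactly one cell
    if any(sum(v in cell.outputs for cell in cells) != 1 for v in V):
        return False
    return True
```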
The following corollary describes this feature formally.

COROLLARY: (no conflicts)
Let CC be a conflict-free decomposition of a constraint hierarchy H. Then the following implication holds:
∀Cell,Cell'∈CC: (Cell=(C,In,Out) & Cell'=(C',In',Out') & Cell≠Cell') ⇒ Out∩Out'=∅

Nevertheless, the conflict-free feature of a decomposition does not imply that the decomposition gives an effective solution of the hierarchy by propagation of valuations. Namely, the decomposition where each variable is determined by a free variable cell and all constraints are in test cells is a conflict-free decomposition according to Definition 4, but this decomposition is of little help for effective constraint hierarchy solving.

In the following paragraphs we concentrate on partial orderings of cells from conflict-free decompositions that enable one to solve the constraint hierarchy effectively. Note that some conflict-free decompositions, like the one mentioned in the previous paragraph, cannot be partially ordered according to our requirements; however, for every constraint hierarchy there exists at least one conflict-free decomposition that can be represented by a constraint network, i.e., it is possible to partially order the cells in the decomposition.

In what follows, we identify the partial ordering "<" of constraint cells with a directed acyclic graph that we call a constraint network. Thus, we write Cell<Cell' if and only if there exists a directed path in the constraint network from Cell to Cell'. Also, we use the denotation G/T for potentially unsatisfied cells in Definition 5.

DEFINITION 5: (constraint network)
Let CC be a conflict-free decomposition of the hierarchy H. We call the directed acyclic graph (CC,E) with nodes CC and edges E a constraint network of hierarchy H if and only if the following conditions hold:
1) for every constraint cell Cell there exist edges in E directed from the constraint cells determining the input variables of Cell, i.e.,
   ∀Cell,Cell'∈CC: Cell=(C,In,Out) & Cell'=(C',In',Out') & In∩Out'≠∅ ⇒ (Cell',Cell)∈E
2) for every potentially unsatisfied cell there does not exist an upstream constraint cell which has the same or weaker internal strength, i.e.,
   ∀Cell,Cell'∈CC: (Cell is G/T & Cell'<Cell) ⇒ i_strength(Cell')<i_strength(Cell)
3) there does not exist "downstream forking" from a potentially unsatisfied cell to other potentially unsatisfied cells, i.e.,
   ∀Cell,Cell',Cell''∈CC: Cell,Cell',Cell'' are G/T & Cell<Cell' & Cell<Cell'' ⇒ (Cell'=Cell'' ∨ Cell'<Cell'' ∨ Cell'>Cell'')

Definition 5 describes a constraint network that can be used for value propagation during the executing phase of constraint hierarchy solving. The meaning of condition 1) of Definition 5 is clear, as one requires the input variables of a cell to be computed before value propagation can go through the cell and compute the values of its output variables. Conditions 2) and 3) of Definition 5 ensure the correctness of the propagation, as one requires the stronger constraints to be preferred to the weaker constraints. The meaning of these conditions is discussed in more detail in the following section. Note that these two conditions are a new contribution of this paper; a checker for them is sketched below.
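Conditions 2) and 3) can be checked mechanically as well. In the sketch below, edges is the edge list E of the DAG, i_strength maps a cell to its internal strength (smaller = stronger, per Definition 3), and is_gt tells whether a cell is generative or test; all three are assumed supplied by the caller.

```python
def downstream(edges, src):
    """All cells X with src < X, i.e., reachable from src via directed edges."""
    out = {}
    for a, b in edges:
        out.setdefault(a, set()).add(b)
    seen, stack = set(), list(out.get(src, ()))
    while stack:
        n = stack.pop()
        if n not in seen:
            seen.add(n)
            stack.extend(out.get(n, ()))
    return seen

def strength_conditions_hold(cells, edges, i_strength, is_gt):
    """Check conditions 2) and 3) of Definition 5 on the DAG (cells, edges)."""
    down = {cell: downstream(edges, cell) for cell in cells}
    for cell in cells:
        if not is_gt(cell):
            continue
        # 2) everything upstream of a G/T cell must be strictly stronger
        for other in cells:
            if cell in down[other] and not i_strength(other) < i_strength(cell):
                return False
        # 3) G/T cells below a G/T cell must form a chain (no forking)
        below = [d for d in down[cell] if is_gt(d)]
        for a in below:
            for b in below:
                if a is not b and a not in down[b] and b not in down[a]:
                    return False
    return True
```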
When the constraint network is defined, one may ask whether it is possible to represent an arbitrary constraint hierarchy as a corresponding constraint network. The answer is yes, which is not really surprising as we present our approach as a generalization. We justify this claim in Section 3.3, where we describe a simple algorithm for constraint planning. This algorithm decomposes each hierarchy into levels of equally preferred constraints, which corresponds to the refining method.

It is also not really surprising that every constraint graph used by traditional local propagation can be represented as a constraint network according to Definition 5. (We use the term "constraint graph" for graphs from traditional local propagation, while "constraint network" is the generalized notion according to Definition 5.) A satisfied constraint from the constraint graph forms a functional cell in the constraint network, while a disabled constraint from the constraint graph is represented as a test cell in the constraint network. Note also that there exist constraint hierarchies which cannot be represented by a constraint graph but can be decomposed into a constraint network.

Our approach to constraint networks allows various constraint networks to be constructed for a given constraint hierarchy, from a basic one that decomposes the hierarchy into levels, to a more structured network that keeps the cells as small as possible. Which constraint network is chosen depends on the types of constraints and comparators used and on the quality of the planning and executing algorithms. Figure 3 shows two different constraint networks corresponding to one constraint hierarchy.

Figure 3 (constraint networks)

3.2.1 A THEORETICAL JUSTIFICATION

A theoretical foundation of our approach originates from ideas described in [11]. In [3] we tuned this theory to the framework proposed in this paper and we proved the soundness and completeness theorems there. A complete formal justification of the proposed framework for solving constraint hierarchies will also be the subject of a separate paper. In the meantime, we content ourselves with the following justification.

To keep our framework as general as possible, we proceed with the following rule for solving a constraint hierarchy: the satisfaction of a stronger constraint is strictly preferred to the satisfaction of an arbitrary number of weaker constraints.

Thus, what we want to show is that the satisfaction of a constraint does not cause the dissatisfaction of a stronger or equally preferred constraint later in the value propagation. (Equally preferred constraints which cannot be satisfied at once should be in one constraint cell, so the propagation algorithm can relax them by applying weighted-sum or similar methods. Thus, the globally-better comparators are supported.) Recall that unsatisfied constraints can be present only in test or generative cells, while the constraints in functional cells can always be satisfied independently of the valuation of the input variables.

The executing algorithm, which computes the solution to the hierarchy via value propagation, follows the partial order of cells defined by the corresponding constraint network; however, it totally orders the cells first to preserve the linearity of the algorithm. Condition 2) of Definition 5 ensures that there are only stronger cells on an upstream path from a potentially unsatisfied cell.
So the only way a weaker or equally preferred cell can get before a potentially unsatisfied cell in the total order chosen by the executing algorithm is that the cell originates from a parallel leaf of the constraint network (leaf 2a in Figure 4). Figure 4 describes all such situations.

Figure 4 (arrangement of potentially unsatisfied cells in the constraint network)

First, note that case D of Figure 4 is prohibited in constraint networks by condition 3) of Definition 5. In the remaining cases A, B and C, it is easy to show that the constraints/cells in leaf 2a do not influence the satisfaction of the constraints/cells in leaf 2b, i.e., the satisfaction of any constraint from leaf 2a does not cause the dissatisfaction of the G/T constraint in leaf 2b. To confirm this claim one needs the following features of constraint cells:
1) the functional cells propagate values uniquely
2) the functional cells do not change the values of input variables
3) the G/T cells can change the values of input variables only when the values of the input variables are not unique.

Note that when leaf 1 is empty, the proposition is trivially satisfied. A more detailed analysis of the soundness of constraint networks can be found in [3].

3.3 CONSTRAINT PLANNERS

In this section we briefly describe a simple planning algorithm for the construction of constraint networks. This algorithm converts an arbitrary constraint hierarchy into a corresponding constraint network by decomposing the hierarchy into levels of equally preferred constraints. Thus, the existence of the simple planning algorithm confirms our claim that every constraint hierarchy can be represented by a constraint network.

The simple planning algorithm constructs the constraint network by adding labeled constraints incrementally. If there exists a cell with an internal strength equal to the strength of the added constraint, then the algorithm adds the constraint into this cell; otherwise, it creates a new cell containing this constraint. In the second phase the algorithm decides which variables of the added constraint are input and output. A sketch of this level-based planner follows.
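As an illustration of this level decomposition, here is a hedged sketch over the same illustrative classes used earlier; the second phase (choosing In/Out) is approximated by treating every variable already determined upstream as an input, which is one simple way to keep Definition 4 intact, not necessarily the paper's exact rule.

```python
def plan_by_levels(hierarchy):
    """Sketch of the simple planner: one cell per strength level.

    Returns the cells ordered strongest-first together with the network
    edges from each cell to the weaker cells that consume its outputs.
    """
    levels = {}
    for c in hierarchy:                 # constraints added incrementally: join
        levels.setdefault(c.strength, set()).add(c)   # an existing level's cell
                                                      # or open a new one
    cells = []
    for strength in sorted(levels):     # strongest (smallest) level first
        group = levels[strength]
        variables = frozenset().union(*(c.variables for c in group))
        determined = frozenset().union(*(cell.outputs for cell in cells)) \
                     if cells else frozenset()
        # variables already determined upstream become this cell's inputs;
        # the remaining variables are determined (output) here
        cells.append(ConstraintCell(group,
                                    inputs=variables & determined,
                                    outputs=variables - determined))
    edges = [(up, down) for i, up in enumerate(cells)
             for down in cells[i + 1:] if up.outputs & down.inputs]
    return cells, edges
```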