Some classes of completely monotonic functions
- Format: PDF
- Size: 308.32 KB
- Pages: 12
Complete monotonicity of functions related to the Gamma function and their q-analogues
Li Jingjing; Sun Mei

Abstract: This paper investigates the complete monotonicity of several functions involving the Gamma function, together with their q-analogues (here ψ denotes the digamma function). First, using integral representations and monotonicity arguments, the functions F_α(x) = ψ'(x) + α/x − 1/(x+1), f_α(x) = ln x − ψ(x) − α/x and f(x) = ln x − ψ(x) + 1/x² are shown to be completely monotonic on (0, ∞). Second, based on the properties of q-analogues, the complete monotonicity of the q-analogues of these three functions is discussed by means of series expansions, monotonicity arguments and induction, and two inequalities are obtained from this property.
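The abstract takes the notion of complete monotonicity as given; for reference, the following block states the standard textbook definition (it is not text quoted from the paper above).

```latex
% A function f is completely monotonic on an interval I if it has
% derivatives of all orders on I and these derivatives alternate in sign:
\[
  (-1)^{n} f^{(n)}(x) \;\ge\; 0
  \qquad \text{for all } x \in I \text{ and all integers } n \ge 0 .
\]
% Example: f(x) = 1/x on (0,\infty), since f^{(n)}(x) = (-1)^n \, n! \, x^{-n-1}.
```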
Locating Dependence Clusters and Dependence Pollution David Binkley Mark HarmanLoyola College King’s College LondonBaltimore MD Strand,London21210-2699,USA WC2R2LS,UK.binkley@ Mark@Keywords:ripple effect,dependence analysis,system dependence graphAbstractA dependence cluster is a set of program statements all of which are mutually inter–dependent.Such clusters can cause problems for maintenance,because a change to any statement in the cluster will have a potential impact on all statements in the cluster.This paper introduces the concept of dependence clusters and dependence pollution and shows how a simple visual-isation can be used to quickly and effectively locate them. The paper presents the results of two empirical studies and several case studies which evaluate the approach.The results indicate the importance of dependence clus-ter analysis:for a set of20programs,ranging in size from 1,170LoC to179,623LoC,99.6%of clusters identified were within1%tolerance of being identical,while depen-dence clusters were found to be surprisingly common:80% of the programs studied contained clusters of10%or more of the program.1IntroductionThe impact of change is one of the most pressing prob-lems facing software maintainers at the source code level. It is well known that a simple source code change can have far-reaching and unexpected consequences.The problem of tracking the impact of such source-code-level changes is one of the important cost drivers behind remedial main-tenance activities such as Y2K remediation,Euro currency conversion,zip code,telephone and bank account number-ing changes[22,23].From the maintainer’s point of view,the less dependence there is in a system,the lower the chances of some unex-pected knock-on effect(or‘ripple effect’[9]).Therefore,a set of statements in a program that are all mutually inter–dependent should be viewed with a certain amount of cau-tion and concern;a change to one is a change to all.In this paper,such sets of mutually inter–dependent statements are called‘dependence clusters’.Where a pro-gram contains a large dependence cluster,software modi-fication may cause significant ripple effects and,as a re-sult,problems for maintainers.Furthermore,it may turn out that the dependence that binds together the statements in the dependence cluster is avoidable.In this paper,this phenomenon is called dependence‘pollution’because it is dependence which has‘leaked out’from one part of the pro-gram to influence another part,with potentially harmful ef-fects on maintainability.Dependence clusters are formally defined as the solution to a reachability problem over a program’s System Depen-dence Graph(SDG)[21].However,as with other forms of pollution(like noise pollution[30]),what constitutes de-pendence pollution is an inherently subjective matter,de-termined by whether dependence is avoidable.The paper gives case study examples of what might(and might not)be deemed to be dependence pollution,illustrating the way in which dependence cluster analysis can be used to support and inform maintenance activities.The paper introduces a simple approach forfinding de-pendence clusters in terms of slice sizes and a visualisation (the Monotonic Slice-size Graph,MSG).The approach ap-proximates whether statements(or,more precisely,nodes of the dependence graph)are in a dependence cluster,by checking to see if the size of their slice is identical.This ‘same size slice’approach is a conservative(and therefore safe)approximation to the true dependence cluster relation;it may produce false 
positives,but never false negatives.Two empirical studies are presented.These are designed to evaluate the concept of a dependence cluster.One of these studies provides verification,while the other is con-cerned with validation.The verification question addressed is:1“How precise is the approximation which under-pins the MSG?”Verification is concerned with whether the approach works.Since the approach is a conservative approximation, capable of yielding false positives,it is import to gauge how often these false positives turn up in practice.If they are too common then the approach is not viable.For very small slices,it is expected that two slices could have the same size and yet be different.However,theoretically at least,it seems likely that as slice size increases,two slices which happen to have the same size are likely to share a great deal of similar content.It is just too much of a co-incidence to find two large slices of the same size but entirely different content.The verification study bears out this claim,show-ing that for99.6%of clusters,the slices in these clusters are within1%of being completely identical.The validation question is:“How common are large dependence clusters?”Validation is concerned with whether large dependence clusters exist in real programs(making dependence cluster analysis a valid course of action).Of course,what consti-tutes a large dependence cluster depends upon the definition of‘large’.The validation study adopts a cautious approach: a dependence cluster is deemed to be large if it contains 10%or more of the program’s slices.Even with this high threshold,the validation study reveals that large dependence clusters are surprisingly common:In real code,14out of20 programs studied contained at least one dependence cluster of at least10%of the total number of slices of the program.The programs studied were20programs,all written in C, mostly open source,with some industrial programs from the European Space Agency.A summary of information about the programs studied is contained in Figure1.In total,the set of programs represent just over450,000lines of code.Overall,thefindings of the paper suggest that depen-dence clusters and dependence pollution are worthy of fur-ther study.The paper shows that dependence clusters are easy to define,to locate and to investigate and provides evidence to suggest that the MSG visualisation is helpful in analysing them.It also shows that dependence clusters occur with surprising frequency and size in real programs. 
Despite inherent subjectivity,the paper demonstrates that it is possible to identify some of these clusters as pollutants, which can be removed by refactoring.This indicates indi-cating that the study of dependence pollution can act as a supporting mechanism for software maintenance.The primary contributions of the paper are as follows: 1.The concept of a dependence cluster is introducedtogether with a simple visualisation(the Monotone Slice–size Graph)to help with identification of large dependence clusters.Program Brief Description10,1825,92616,7636,62612,9301,17019,81113,5799,59722,05018,55829,2466,72447,93614,86414,8149,564179,6238,0095,407TotalFigure1.The subject programs studied.2.The paper contains two empirical studies,which giveevidence to support the central claims that:•The MSG represents a precise approximation toallow identification of dependence clusters.•Large dependence clusters are prevalent.3.Case study evidence is presented to demonstrate thatdependence clusters can be used to identify and helpremove possible sources of dependence pollution.The remainder of the paper is organized as follows:Sec-tions2and3introduce dependence clusters and Monotone Slice-size Graphs.Sections4and5present the results of the two empirical studies concerned with verification and validation.Section6presents case studies which illustrate the use of dependence pollution as part of the maintenance process.Section7considers the threats to validity of the results.Section8briefly describes related work on depen-dence and clustering and Section9concludes.2Dependence ClustersA dependence cluster is a set of program points whichmutually depend upon one another.In this paper‘program points’will be taken to mean nodes of the Control Flow Graph(CFG).Any change to the computation represented at one point in a dependence cluster potentially affects the computations represented all other points.Definition1be-low,defines the concept of a dependence cluster more for-mally.2Figure2.Agreement Levels for up to1%of ToleranceDefinition1Dependence ClusterA dependence cluster is a set of nodes,{N1,...,N m} (m>1),of the Control Flow Graph(CFG)such that for all i,1≤i≤m and for all j,1≤j≤m N i depends on N j.Dependence clusters can be identified using program slicing.A static backward program slice is a semantically meaningful portion of a program that captures a subset of the program’s computation[34].A slice is computed from a‘slicing criterion’(a program point and variable of inter-est)and contains those parts of the program which poten-tially affect the slicing criterion.The results presented in this paper use the System Dependence Graph(SDG)[21]to compute program slicesTwo nodes which depend upon each other must have the same slice.Furthermore,since a slice contains the node for which it is constructed,where two nodes have identical slices,then each node must be in the slice of the other and therefore,each node must depend upon the other.Conse-quently,it is possible to use the fact that two nodes have the same slice as a way of determining whether they depend upon each other.That is,according to Definition1,two nodes are in a dependence cluster iff they have the same slice.3Monotone Slice-size GraphsIt would be possible to locate dependence clusters by finding slices and checking to see which slices are identi-cal.However,in this paper,an approximation is used for ‘same slice’which is not only more efficient,but also leads to a useful visualisation for identifying clusters:the Mono-tone Slice-size Graph(MSG).Rather 
than checking to see if two nodes of the SDG yield identical slices,the approach is to simply check whether the two nodes yield slices of the same size.The conjecture which underpins this approximation is that two slices which are the same size are likely to be the same slice.Clearly,this approximation is conservative be-cause any cluster identified may contain real clusters and no real clusters will fail to be identified in this way.That is, two slices may differ yet,coincidentally,may have identical sizes.However,two slices which are identical must clearly have the same size.The verification study in Section4di-rectly addresses the question of approximation quality.Definition2Monotone Slice-size Graph(MSG)A Monotone Slice-size Graph(MSG)is a graph of the func-tion of slice size,plotted for monotonically increasing size.That is,slices are sorted according to increasing slice size and the sizes are plotted on the vertical axis against slice number in order on the horizontal axis.The MSG visualisation plots a landscape of monotoni-cally increasing slice sizes,in which dependence clusters correspond to sheer–drop cliff faces.The goal of the visu-alisation is to assist with the inherently subjective task of deciding whether a cluster is large(how long is the plateau at the top of the cliff face relative to the surrounding land-scape?)and whether it denotes a discontinuity in the de-pendence profile(how steep is the cliff face relative to the surrounding landscape?).For example,looking ahead to Figure6which contains the MSGs of10programs.Con-sider the MSG of the program userv at the bottom of the figure.The graph shows(reading along the horizontal axis) that thefirst38%or so of slices are extremely small,but then reveals a sharp increase at about44%,where the re-maining slices are all comfortably over50%of the program in size.The slice size approach is inherently more efficient than comparing slice content:Computing and comparing all slices’sizes for a given program is O(n2),while computing and comparing all slices’content is O(n3),where n is the number of edges in the SDG.3The MSG is not only efficient to compute,it also helps with the essentially subjective task of determining whether a cluster is large,relative to the code which contains it.As an example,consider the MSG of the program userv in Fig-ure6once more.The sharp increase in slice size,which oc-curs after about44%of slices have been considered,is fol-lowed by a long plateau in which slice size does not change. The length of the plateau indicates a large cluster of slices of identical size;in other words,a large dependence cluster. 4Empirical Verification:How precise is the Dependence Cluster Detection?This section presents the results of an experiment into whether similarity in slice size can be used as a close ap-proximation to similarity of slice content.The experiment seeks to provide evidence to support the claim that MSGs are a suitable and reliable technique forfinding dependence clusters.That is,the research question to be answered is whether a set of slices that have the same size will tend to have nearly the same vertices.Of course,the answer will depend upon the interpretation given to‘nearly the same’. 
This will be referred to as the‘tolerance’;the degree to which two slices can differ in content while being deemed to be essentially the same.Figure2plots the tolerance(on the horizontal axis) against the agreement between slice size and slice contents (on the vertical axis)for the20programs studied.Both axes are represented as percentages.A tolerance of x%means that,of the total number of nodes in the two slices,the per-centage of nodes upon which they differ is x%,thus it is possible to speak of slices being‘identical within a certain tolerance’.For a given value of tolerance,x%,an agree-ment of y%means that y%of the total number of slices are identical(within x%tolerance).As can be seen from thefigure,almost total agreement is reached for most programs with a very small tolerance.The horizontal axis is cut at1%tolerance,so all the data shown in Figure2concern slices which are within1%of contain-ing identical sets of nodes.In total,99.6%of the clusters are represented on this graph.That is,99.6%of clusters are within1%tolerance of total agreement.If thefigure were to be redrawn,with a horizontal axis extended out to100%, then the detail would be completely lost,because almost all programs would appear to almost immediately reach100% agreement on the vertical axis.Of course,there are a few programs where there are some slices that simply happen to be the same size,but which contain completely different sets of nodes.This should be expected to occur in a suitably large data set. Since the data presented in this paper comes from the anal-ysis of almost half a million lines of code,it is likely that this mayoccur.Figure3.Sparsity of high tolerance To get a sense for how common this occurrence is,con-sider the data presented in Figure3.Thisfigure shows all the data for which a tolerance of more than1%is required to reach100%agreement.The horizontal axis shows each program studied.The vertical axis shows the percentage of same-size slices which require more than1%tolerance in order to agree100%.As thefigure shows,for almost half of these programs,there are simply no slices of the same size which require more than1%tolerance in order to agree 100%.However,there are a few which do;these constitute false positives.For the20programs studied,only0.4%of the clusters required a tolerance of more than1%in order to achieve100%agreement.Furthermore,even this lowfigure of0.4%is perhaps unduly pessimistic because it records the number of clus-ters which require more than1%tolerance in order to fully agree.However,even in a cluster which requires more than 1%tolerance for full agreement,many of the individual slices in the cluster may fully agree,with only a few dis-agreeing.Thefigure for the number of pairwise slice com-parisons which fail to agree within1%tolerance is only0.00533%.These results provide strong evidence for theclaim that‘size agreement’is a good approximation for ‘slice content agreement’and thus for locating dependence clusters using MSGs.5Empirical Validation:Do Dependence Clusters Occur in Practice?In the empirical study reported in this section,the con-cern is to validate the approach.In determining whether or not a program has a large dependence cluster or not,there is a value judgment to be made concerning the size of a cluster.Clearly,most programs will have small clusters of slices.For the empirical study,the threshold above which a cluster is considered to be‘large’was set to10%.In other words,a cluster is large if10%or more of its slices are in the cluster.This relatively large 
threshold was chosen in order to provide a conservative answer to the validation question.That is,in a medium size program of(perhaps) 450,000slices,a cluster of5,000slices of identical size is important.It would not be likely to arise by chance.How-ever,smaller clusters(of a few thousand slices)may also be interesting and worthy of further investigation.Therefore, the results presented in this section can be thought of as a lower bound estimate of the frequency of large dependence clusters in the programs studied.In total,only6of the20programs were found to contain no large clusters according to the10%threshold.Of the re-maining programs,4were found to have spectacularly large clusters which encompass most of the program.These4are explored in more detail in Section6.The20programs were thus divided onto three categories:those which contained no large clusters,those which contained large clusters and those with spectacularly large clusters.Figure4shows the data for each of these three categories as a distribution of slice sizes for each program.Figure4 contains one three dimensional plot for each of the three categories.One of the two horizontal axes is used to set out the data for each(named)program.The other refers to the size of the slice(expressed as a percentage of the program which contains it).Tofit the data into a singlefigure,the sizes of slices are grouped into ranges for each10%.The vertical axis shows the proportion of slices having that size. As can be seen,for Figure4(a)slice sizes fall fairly evenly over the10%ranges when compared to Figure4(c),which shows how dramatically the presence of exceptionally large clusters biases the distribution of slice sizes.Figure5shows the MSGs of the category of6programs for which there were no large clusters.These programs show only small‘cliff drops’in the landscape of their MSG. However,most of the programs studied were found to con-tain large dependence clusters,some were so large that they suggest possibly severe problems for continued software maintenance.Figure6shows the MSGs for the category of10programs which were found to contain evidence for the presence of large clusters(more than10%of the slices in a single cluster).Figure7shows the MSGs of the cate-gory of4programs where these clusters were particularly pronounced.Visually,the MSGs clearly help identify large depen-dence clusters:compare the MSGs in Figures6and7 (which clearly show large,tell-tale,cliff faces)with those in Figure5(which have comparatively smooth landscapes). 
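The same-size approximation and the 10% "large cluster" threshold used in Sections 3–5 reduce to a few lines once per-criterion slice sizes are available. The sketch below is illustrative only: the slice_sizes input is a hypothetical list of slice sizes (one entry per slicing criterion), not output obtained from CodeSurfer or any other slicing tool, and the sample data is invented.

```python
from collections import Counter

def msg_points(slice_sizes):
    """Monotone Slice-size Graph data: slice sizes sorted into
    monotonically increasing order (cf. Definition 2)."""
    return sorted(slice_sizes)

def largest_same_size_cluster(slice_sizes):
    """Size of the biggest group of slices sharing one size -- the
    conservative 'same size slice' approximation of a dependence cluster."""
    return max(Counter(slice_sizes).values())

def has_large_cluster(slice_sizes, threshold=0.10):
    """True if some same-size group holds at least `threshold` of all slices."""
    return largest_same_size_cluster(slice_sizes) >= threshold * len(slice_sizes)

# Invented data: 60 small, varied slices followed by a plateau of 40 identical
# sizes (the cliff-face-then-plateau shape discussed for userv).
sizes = [5, 7, 9, 12, 15, 20] * 10 + [900] * 40
print(msg_points(sizes)[:8], "...")
print(largest_same_size_cluster(sizes), has_large_cluster(sizes))   # 40 True
```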
6Case Study ResultsThis section defines dependence pollution and uses this concept to investigate,in more detail,the four programs where extremely large dependence clusters were found(see Figure7).These are indicative examples only;any of the programs for which the MSGs are depicted in Figures6 and7could have been chosen,as they all show signs of large dependence clusters.Unfortunately,space doesnot(a)Absence of ClustersSee Figure5for the correspondingMSGs.(b)Clusters PresentSee Figure6for the correspondingMSGs.(c)Enormous Clusters PresentSee Figure7for the corresponding MSGs.Figure4.Size/Frequency for each of the threecategoriesallow a detailed treatment of all four,so two(the industrial program,copia and the open source program bc)are de-scribed in more detail as case studies,while only brief(but indicative)observations are made about the remaining two 5Figure 5.MSGs for Programs with an Absence of Large Dependence Clusters (No Cliff Faces in the MSGs)(the editor,ed and the board game go ).The purpose of this section is to give a flavour for the kind of maintenance investigation which could take place supported by depen-dence cluster analysis.6.1Dependence PollutionLarge dependence clusters are inhibitors to successful software maintenance because of the way in which a change to one element of the cluster can ripple to all other mem-bers.Therefore,this paper adopts the term ‘pollution’for unwanted and avoidable dependence clusters.As with other forms of pollution,such as ‘noise pollution’[30],what con-stitutes pollution is inherently subjective.In the case of de-pendence pollution,the judgment is determined by whether dependence is large and whether it is deemed to arise as the result of avoidable coding styles.In practice,it may well be a matter of degree:how problematic a dependence clus-Figure 6.MSGs of Programs with Large De-pendence Clusters (Denoted by Cliff Faces in the MSGs)6Figure7.MSGs of Programs with PotentiallySevere Dependence Pollutionter is,weighed against the ease with which its dependence structure can be broken up and refactored.Definition3Dependence PollutionDependence pollution occurs when a program contains a large dependence cluster which arises because of the use of some avoidable programming construction or feature.Two possible sources of dependence pollution are Mutu-ally Recursive Clusters(MRCs)and Capillary Data Flows (CDFs).Both of these are likely to lead to large dependence clusters and might be avoided by refactoring or otherwise transforming the program in which they occur.Mutual recursion naturally leads to large dependence clusters because each function calls all the others,making the outcome of each function dependent upon the outcome of some call to each of the others.A Capillary Data Flow is a dataflow which occurs between two large and other-wise unconnected clusters through a single variable.The variable acts as a small‘capillary vessel’along which the dependence‘flows’creating one large cluster from two or more,otherwise unconnected,sub-clusters.6.2Case Study:Dependence Pollution FoundThe program copia implements a collection of anal-yses on an input table.As can be seen from Figure7,the MSG contains a large plateau of slices which appear tohaveBeforerefactoringAfter RefactoringFigure8.Removing Dependence Pollution the same size;certainly a large dependence cluster and a candidate for dependence pollution.However,zooming in on the plateau in the MSG reveals that this single plateau actually consists of15smaller plateaus.Thefirst5of these summarize 
over99%of the slices that make up the‘single’plateau and differ by no more than4vertices(about0.27% of the program).This observation provides evidence for the robustness of the MSG visualisation;although the slices are not of identical size,they are all closely related.The in-terpretation of the visualisation is correct;there is a large dependence cluster.Inspection of the code reveals that copia has234small functions that call one large function,seleziona,which in turn,calls the smaller functions.This mutual recursion produces the large dependence cluster.Therefore,this is an example of a Mutually Recursive Cluster(MRC).The large function calls the smaller functions using a234-case switch statement which selects the function to be called based upon the value of an integer,derived from the param-eter passed to it.This use of a numeric function index playsa role very similar to that of a function pointer(functionpointers are known to cause large static dependence[6,31]).This is a clear case of dependence pollution,because the use of a numeric function index and mutual recursion was entirely avoidable in this case.A simple refactoring was performed(by hand)to remove the need for the large switch statement and the associated calls through the nu-meric function index.Following this refactoring,the av-erage size of slices of the program dropped from13,033 nodes to149nodes,indicating a massive drop in overall de-pendence levels.Figure8shows the two MSGs before and after the refactoring.These two MSGs reveal the extent of dependence pollution in the original program.Not only is dependence significantly reduced by refactoring,but a more detailed landscape of dependence emerges,free from the ‘tell-tale’cliff drop dependence cluster.7BeforerefactoringAfter RefactoringFigure 9.Dogged Dependence Pollution6.3Case Study:No Pollution FoundThe program bc is a calculator program in which most of the code depends on the accumulator and the accumu-lator’s value depends on most of the code.While this is clearly a property which emerges because of the nature of the application,it points to significant problems for main-tenance because of the large-scale impact of almost any change.However,there will be little hope of refactoring to avoid the large dependence cluster as it is the nature of an accumulator–based calculator that much of the code de-pends upon one single variable.It is an example of a ‘Capil-lary Data Flow (CDF)’with the accumulator and associated variables acting as the capillary variable that binds together the computations.An attempt was made to refactor the program by find-ing the variable which contributed most to an increase in dependence.This analysis was performed by forming ver-sions of the program,each of which was missing one global variable,to see which contributed most to dependence.The analysis revealed a global variable,cmpres .6.4Observations about ed and goThe program ed is a text editor.Many of the opera-tions in the editor depend upon and affect the contents of the current document state.The program contains a set of operations,such as cut and paste,insert and delete,mark,etc.which all affect a common data structure.The data structure could be refactored to separate out the cursor lo-cation,the currently marked block and the text itself.This would allow commands to be grouped according to the parts of the (previously homogeneous)data structure which theyaffect and upon which they depend.This would have the ef-fect of breaking the dependence cluster into several smaller 
clusters.As such,this observation suggests that there is de-pendence pollution due to Capillary Data Flow (CDF).The game program,go is a strategy game in which most of the code is concerned with the state of the board.A re-lated study found that only 13%of the slices were unique [15],confirming the finding here that this program contains a large dependence cluster.Like the program ed ,the pro-gram go has a large dependence cluster as a result of CDF.In this case,the board state plays the role of the channel for capillary data flow,whereas in the editor it is the state of the document.However,unlike the text editor,this state cannot easily be refactored into sub-structures,so the large dependence cluster in the program go may be unavoidable and not,therefore,a case of dependence pollution.7Threats to ValidityThis section considers the threats to the internal and ex-ternal validity of the results presented in the two empiri-cal studies.In the experiments,the primary external threat arises from the possibility that the selected programs are not representative of programs in general,with the result that the findings of the experiments do not apply to ‘typi-cal’programs.This is a reasonable concern,but it applies to any study of properties of programs.Future work will be required to see if these results are replicated.Fortu-nately,however,the study did consider a large code base of twenty programs,covering a wide variety of different tasks including,applications,utilities,games,operating system code etc.The code base also contained both commercial and open source programs.There is,therefore,reasonable cause for confidence in the results obtained and the conclu-sions drawn from them.Internal validity is the degree to which conclusions can be drawn about the causal effect of the independent variable on the dependent variable.In this experiment,the possible threats come from the potential for faults in the slicer and the values chosen of acceptable tolerance (which affects the verification study)and cluster size (which affects the vali-dation study).A mature and widely used slicing tool (CodeSurfer)was used to mitigate the first concern.For tolerance,the re-sults showed that an overwhelming proportion (99.6%)of clusters of same size slices are within a tolerance of 1%of having the same content.For the applications of depen-dence clustering,this value of tolerance is well within ac-ceptable limits.For cluster size,the value of 10%of the slices was chosen.That is,a cluster was deemed to be ‘large’if it contained more than 10%of the slices of the pro-gram.Once again,this was a conservative choice of thresh-old,well within that which would be considered important in the application of dependence cluster analysis to mainte-8。
Pseudo-Boolean Optimization

Yves Crama*   Peter L. Hammer†

December 20, 2000

* École d'Administration des Affaires, University of Liège, Boulevard du Rectorat 7 (B31), 4000 Liège, Belgium, Y.Crama@ulg.ac.be
† RUTCOR, Rutgers University, 640 Bartholomew Road, Piscataway, NJ 08854-8003, U.S.A., Hammer@

Abstract

This article briefly surveys some fundamental results concerning pseudo-Boolean functions, i.e. real-valued functions of 0-1 variables. Hundreds of papers have been devoted to the investigation of pseudo-Boolean functions, and a rich and diversified theory has now emerged from this literature. The article states local optimality conditions, outlines basic techniques for global pseudo-Boolean optimization, and shows connections between best linear approximations of pseudo-Boolean functions and game theory. It also describes models and applications arising in various fields, ranging from combinatorics to management science, and points to several classes of functions of special interest.

Keywords: integer programming, nonlinear 0-1 optimization, max-sat, max-cut, graph stability, game theory.

1 Pseudo-Boolean functions

1.1 Definitions and representations

A pseudo-Boolean function is a mapping from $\{0,1\}^n$ to $\mathbb{R}$, i.e. a real-valued function of a finite number of 0-1 variables. Pseudo-Boolean functions were introduced in [15], and extensively studied in [16] and in numerous subsequent publications; a detailed survey appears in [4].

Pseudo-Boolean functions generalize Boolean functions, which are exactly those pseudo-Boolean functions whose values are in $\{0,1\}$, i.e. those $f(X)$ for which $f^2(X) - f(X) \equiv 0$ on $\{0,1\}^n$. Since the elements of $\{0,1\}^n$ are in one-to-one correspondence with the subsets of $N = \{1,2,\ldots,n\}$, every pseudo-Boolean function can be interpreted as a real-valued set function defined on $\mathcal{P}(N)$, the power set of $N$. Viewing pseudo-Boolean functions as defined on $\{0,1\}^n$, rather than on $\mathcal{P}(N)$, provides an algebraic viewpoint which sometimes carries clear advantages. It is easy to see, for instance, that the set of all pseudo-Boolean functions in $n$ variables forms a vector space over $\mathbb{R}$, and that the elementary monomials $\prod_{i\in A} x_i$ ($A \in \mathcal{P}(N)$) define a basis of this space. In particular, every pseudo-Boolean function $f(x_1,x_2,\ldots,x_n)$ can be uniquely represented as a multilinear polynomial of the form

$$f(x_1,x_2,\ldots,x_n) = c_0 + \sum_{k=1}^{m} c_k \prod_{i\in A_k} x_i, \qquad (1)$$

where $c_0,c_1,\ldots,c_m$ are real coefficients, and $A_1,A_2,\ldots,A_m$ are nonempty subsets of $N$. When viewed as a function on $[0,1]^n$, the right-hand side of (1) defines a continuous extension of the pseudo-Boolean function $f$, to be denoted $f^c$.

Note that every pseudo-Boolean function also admits (many) representations of the form

$$f(x_1,x_2,\ldots,x_n) = b_0 + \sum_{k=1}^{m} b_k \Big(\prod_{i\in A_k} x_i \prod_{j\in B_k} \bar{x}_j\Big), \qquad (2)$$

where $b_0,b_1,\ldots,b_m$ are real coefficients, and $\bar{x}_j = 1 - x_j$ for $j = 1,2,\ldots,n$. If $b_k \ge 0$ for all $k = 1,2,\ldots,m$, then we say that the expression (2) is a posiform of $f$. It is easy to see that every pseudo-Boolean function can be expressed as a posiform. Other representations of pseudo-Boolean functions have recently been investigated in [12].

1.2 Representative models

Besides nonlinear binary optimization, pseudo-Boolean functions can also be used to model a wide variety of problems in different fields of application.
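As a concrete illustration of the representation (1) and of the continuous extension $f^c$, the following sketch stores a pseudo-Boolean function as a constant plus a map from subsets $A_k$ to coefficients $c_k$, and evaluates it on $\{0,1\}^n$ and $[0,1]^n$. The function chosen is arbitrary; nothing here is taken from the article itself.

```python
from itertools import product

# Multilinear representation (1): constant c0 plus a map  frozenset A_k -> c_k.
# This particular f is invented for illustration.
c0 = 2.0
terms = {frozenset({0}): 3.0,          # 3*x0
         frozenset({1, 2}): -4.0,      # -4*x1*x2
         frozenset({0, 1, 2}): 1.5}    # 1.5*x0*x1*x2

def prod_over(A, x):
    p = 1.0
    for i in A:
        p *= x[i]
    return p

def f(x):
    """Evaluate the multilinear polynomial at x in {0,1}^n; the same expression
    evaluated on [0,1]^n is the continuous extension f^c."""
    return c0 + sum(c * prod_over(A, x) for A, c in terms.items())

n = 3
# Enumerate {0,1}^n (fine for tiny n); the maximum of f^c over [0,1]^n is
# attained at one of these vertices, which is the point made in the text.
best = max(product((0, 1), repeat=n), key=f)
print(best, f(best))
print(f((0.5, 0.5, 0.5)))   # value of the continuous extension at the centre
```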
Maximum satis¯ability.Consider a collection of Boolean clauses³W i2A k x i_W j2B k x j: k=1;2;:::;m´.The maximum satis¯ability problem is to¯nd a vector(x1;x2;:::;x n) in f0;1g n which satis¯es the largest possible number of clauses in the collection.This problem,which generalizes the NP-complete satis¯ability problem,is equivalent to that of minimizing the posiform(2)with b k=1for k=1;2;:::;m;see e.g.[17].Graph theory.Consider a graph G=(N;E)with positive weights w:N!<on its vertices,and capacities c:E!<on its(undirected)edges.For every SµN,the cut (S;N n S)is the set of edges having exactly one endpoint in S;the capacity of this cut is P f i;j g2(S;N n S)c(i;j).The max-cut problem is to¯nd a cut of maximum capacity in G. This problem is equivalent to maximizing the quadratic pseudo-Boolean functionf(x1;x2;:::;x n)=X1·i<j·n c(i;j)(x i x j+x i x j);(3)under the interpretation that(x1;x2;:::;x n)is the characteristic vector of S.A stable set in G is a set SµN such that no edge has both of its endpoints in S; the weight of S is P i2S w(i).The weighted stability problem is to¯nd a stable set of maximum weight in G.This can be seen to be equivalent to maximizing the quadratic pseudo-Boolean functionf(x1;x2;:::;x n)=nX i=1w(i)x i¡(1+min1·i·n w(i))X1·i<j·n x i x j;(4)where(x1;x2;:::;x n)is the characteristic vector of S.Other connections between the weighted stability problem and pseudo-Boolean optimization(in particular,posiforms) have been exploited in[11].Linear0-1programming.Consider the linear0-1programmaximize z(x1;x2;:::;x n)=nX j=1c j x j(5)subject tonX j=1a ij x j=b i;i=1;2;:::;m(6) (x1;x2;:::;x n)2f0;1g n:(7)This problem is equivalent to the quadratic pseudo-Boolean optimization problemmaximize f(x1;x2;:::;x n)=nX j=1c j x j¡M m X i=1(n X j=1a ij x j¡b i)2(8)subject to(x1;x2;:::;x n)2f0;1g n;(9) for a su±ciently large M.Management applications.Constraints of the form:x i=1if and only if y=1;i2Aare encountered in many management applications(e.g.,capital budgeting,plant loca-tion,tool management problems,etc.;see[7,16,22]).Since such constraints simply express that y=Q i2A x i,pseudo-Boolean formulations of such problems often arise quite naturally by elimination of the y-variables.Game theory.A game in characteristic function form is nothing but a pseudo-Boolean function f.If(x1;x2;:::;x n)is the characteristic vector of the set S,then f(x1;x2;:::;x n) is interpreted as the payo®that the players indexed in S can secure by acting together. The multilinear representation of f and its continuous extension f c play an interesting role in this context;see e.g.[2]and Section3below.1.3Special classes of pseudo-Boolean functionsSeveral authors have investigated special classes of functions with\nice"properties.Let us simply mention here monotonic functions(whose¯rst derivatives{see below{haveconstant sign on f0;1g n),supermodular functions(whose second derivatives are nonnega-tive on f0;1g n),polar functions(which have a posiform(2)such that,for every k,either A k=;or B k=;),unimodular functions(which are polar up to a switch x i$x i on a subset of variables),(completely)unimodal functions(which have a unique localmaximizer in(every face of)f0;1g n),etc.Strongly polynomial combinatorial algorithms for the maximization of supermodular functions have been recently proposed in[20,21]. 
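Returning to the graph-theoretic models above, the max-cut encoding (3) can be exercised directly on a toy instance. The graph and capacities below are invented for illustration, and the brute-force enumeration is only meant to make the encoding tangible, not to suggest a practical algorithm.

```python
from itertools import product

# Max-cut encoding (3): f(x) = sum_{i<j} c(i,j) * (x_i*(1-x_j) + (1-x_i)*x_j),
# where x is the characteristic vector of one side S of the cut.
# The weighted 4-vertex graph below is made up.
capacity = {(0, 1): 3.0, (0, 2): 2.0, (1, 2): 1.0, (2, 3): 4.0}

def cut_value(x):
    return sum(c * (x[i] * (1 - x[j]) + (1 - x[i]) * x[j])
               for (i, j), c in capacity.items())

n = 4
best = max(product((0, 1), repeat=n), key=cut_value)
print(best, cut_value(best))        # a maximum cut and its capacity

# The observation made after (3) in Section 2.2: the continuous extension at
# x = (1/2,...,1/2) equals half the total capacity, so a cut at least that
# large always exists.
print(cut_value((0.5,) * n), 0.5 * sum(capacity.values()))
```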
Unimodular functions can be maximized by max-flow algorithms [19]. Unimodal and related classes of functions were introduced in [10], where the applicability of greedy algorithms for their optimization has also been investigated. The recognition problem for all these classes of functions is examined in [5].

2 Optimization

Optimization of pseudo-Boolean functions over subsets of $\{0,1\}^n$ is also known as nonlinear 0-1 optimization. A survey of this field is presented in [18]. We shall only mention a few fundamental facts, restricting ourselves mostly to the unconstrained case.

2.1 Local optima

If $f(x_1,x_2,\ldots,x_n)$ is a pseudo-Boolean function, let us define its $i$-th derivative $\Delta_i$ to be the pseudo-Boolean function

$$\Delta_i = f(x_1,\ldots,x_{i-1},1,x_{i+1},\ldots,x_n) - f(x_1,\ldots,x_{i-1},0,x_{i+1},\ldots,x_n).$$

It was shown in [16] that all the local maxima of the function $f$ are characterized by the system of implications:

if $\Delta_i > 0$ then $x_i = 1$; if $\Delta_i < 0$ then $x_i = 0$; for $i = 1,2,\ldots,n$.

Let now $m_i$ and $M_i$ be arbitrary lower and upper bounds on $\Delta_i$, e.g. the sums of the negative, respectively the positive, coefficients in the polynomial representation of $\Delta_i$. Then it is clear that an equivalent characterization of the local maxima of the pseudo-Boolean function $f$ is given by the system of inequalities

$$\Delta_i - M_i x_i \le 0, \qquad \Delta_i - m_i \bar{x}_i \ge 0, \qquad \text{for } i = 1,2,\ldots,n.$$

2.2 Global optima

The continuous extension $f^c$ has the attractive feature that its global maximizers are at the vertices of the hypercube $[0,1]^n$ and hence coincide with those of $f$ (this is because (1) is linear in every variable). This implies that continuous global optimization techniques can be applied to $f^c$ in order to compute the maximum of $f$. This approach did not prove computationally efficient in past experiments, but remains conceptually valuable. As an amusing corollary, one may note for instance that the optimum of the max-cut function (3) is at least $f^c(\tfrac{1}{2},\ldots,\tfrac{1}{2}) = \tfrac{1}{2}\sum_{1\le i<j\le n} c(i,j)$. Thus, we find that every graph contains a cut of capacity at least equal to one-half the total edge capacity, a well-known result of graph theory. Moreover, such a cut can be found efficiently.

A combinatorial variable elimination algorithm for pseudo-Boolean optimization was proposed by Hammer, Rosenberg and Rudeanu [15, 16]. The following streamlined version and an efficient implementation of this algorithm are described in [8]. Let $f(x_1,x_2,\ldots,x_n)$ be the function to be maximized. We can write

$$f(x_1,x_2,\ldots,x_n) = x_1\,\Delta_1(x_2,x_3,\ldots,x_n) + h(x_2,x_3,\ldots,x_n),$$

where $h$ and the first derivative $\Delta_1$ do not depend on $x_1$. Clearly, there exists a maximizer of $f$, say $(x_1^*,x_2^*,\ldots,x_n^*)$, with the property that $x_1^* = 1$ if and only if $\Delta_1(x_2^*,x_3^*,\ldots,x_n^*) > 0$. This suggests introducing a function $t(x_2,x_3,\ldots,x_n)$ such that $t(x_2,x_3,\ldots,x_n) = \Delta_1(x_2,x_3,\ldots,x_n)$ if $\Delta_1(x_2,x_3,\ldots,x_n) > 0$ and $t(x_2,x_3,\ldots,x_n) = 0$ otherwise. Letting $f_1 = t + h$, we have reduced the maximization of the original function $f$ in $n$ variables to the maximization of $f_1$, which only depends on $n-1$ variables. Repeating this elimination process $n$ times eventually allows a maximizer of $f$ to be determined. An efficient implementation of this algorithm is proposed in [8], where it is also proved that the algorithm runs in polynomial time on pseudo-Boolean functions with bounded tree-width.
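The local-optimality conditions of Section 2.1 translate into a simple local-improvement loop: flip any variable whose derivative has the "wrong" sign until none remains. The sketch below does exactly that for a small invented function; it finds a local maximum in the sense of the characterization above, not necessarily a global one.

```python
# Local search driven by the derivatives of Section 2.1:
#   Delta_i(x) = f(x with x_i = 1) - f(x with x_i = 0).
# At a local maximum, Delta_i > 0 forces x_i = 1 and Delta_i < 0 forces x_i = 0.
# The function f below is an arbitrary small example, not one from the text.

def f(x):
    x0, x1, x2 = x
    return 2 + 3 * x0 - 4 * x1 * x2 + 1.5 * x0 * x1 * x2

def derivative(f, x, i):
    hi = list(x); hi[i] = 1
    lo = list(x); lo[i] = 0
    return f(hi) - f(lo)

def local_maximum(f, x, n):
    """Flip variables violating the local-optimality conditions until none do.
    Each flip increases f by |Delta_i| > 0, so the loop terminates."""
    x = list(x)
    improved = True
    while improved:
        improved = False
        for i in range(n):
            d = derivative(f, x, i)
            if d > 0 and x[i] == 0:
                x[i], improved = 1, True
            elif d < 0 and x[i] == 1:
                x[i], improved = 0, True
    return x, f(x)

print(local_maximum(f, (0, 1, 1), n=3))   # reaches the local maximum [1, 0, 1]
```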
Another classical approach consists in transforming the problem $\max\{f(X) : X \in \{0,1\}^n\}$ into an equivalent linear 0-1 programming problem by substituting a variable $y_k$ for the $k$-th monomial $T_k$ of (1) (or (2)) and setting up a collection of linear constraints which enforce the equality $y_k = T_k$. The continuous relaxation of this linear formulation yields an easily computable upper bound on the maximum of $f$. Properties of this upper bound and of related formulations of $f$ have been investigated in [1, 13] and in a series of subsequent papers; see [6] for a brief account and Section 2.3 for related considerations.

2.3 Quadratic 0-1 optimization

Quadratic 0-1 optimization is an important special case of pseudo-Boolean optimization, both because numerous applications appear in this form, and because the more general case is easily reduced to it. Indeed, consider a function $f$ of the form (1), assume that $|A_1| > 2$ and select $j, l \in A_1$. Then the function

$$g(x_1,x_2,\ldots,x_n,y) = c_0 + c_1\Big(\prod_{i\in A_1\setminus\{j,l\}} x_i\Big)\,y + \sum_{k=2}^{m} c_k \prod_{i\in A_k} x_i - M(x_j x_l - 2x_j y - 2x_l y + 3y),$$

where $y$ is a new variable and $M$ is large enough, has the same maximum value as $f$ ($y = x_j x_l$ in every maximizer of $g$). Applying this procedure recursively eventually yields a function of degree 2.

Best linear majorants and minorants of pseudo-Boolean functions can provide important information on the function. It was shown in [13] that for any quadratic pseudo-Boolean function $f$, one can construct a linear function

$$l(x_1,x_2,\ldots,x_n) = l_0 + \sum_{j=1}^{n} l_j x_j,$$

called the roof dual of $f$, majorizing $f(x_1,x_2,\ldots,x_n)$ in every binary point $(x_1,x_2,\ldots,x_n)$ and having the following property of strong persistency: if $l_j$ is strictly positive (resp. negative), then $x_j$ must be equal to 1 (resp. 0) in every maximizer of $f$. In other words, roof duality allows the determination of the optimal values of a subset of variables. Moreover, $\max\{f(X) : X \in \{0,1\}^n\} = \max\{l(X) : X \in \{0,1\}^n\}$ if and only if an associated 2-SAT problem is satisfiable. While the determination of the roof dual in [13] was accomplished via linear programming, it was shown later [3] that this problem can be reduced to a max-flow problem in an associated network.

3 Linear approximations

In order to find the best linear approximation $L(f)$ of a pseudo-Boolean function $f$ in the norm $L_2$, it is sufficient to know how to determine the best linear $L_2$-approximation of a monomial. Indeed, considering the polynomial representation (1), it is clear that

$$L(f) = c_0 + \sum_{k=1}^{m} c_k\, L\Big(\prod_{i\in A_k} x_i\Big).$$

On the other hand, it was shown in [14] that

$$L\Big(\prod_{i\in A} x_i\Big) = \frac{1}{2^{|A|}}\Big(1 - |A| + 2\sum_{i\in A} x_i\Big) \qquad \text{for all } A \subseteq N.$$

It was shown in the same paper that the best quadratic, cubic, ... $L_2$-approximations can also be obtained by similarly simple closed formulas.

Important game-theoretical applications of best $L_2$-approximations consist in finding the Banzhaf indices of the players of a simple game, or the Shapley values of the players of an $n$-person characteristic function game. Indeed, as shown in [14], these indices are simply the coefficients of best (weighted) linear $L_2$-approximations of pseudo-Boolean functions describing these games. Another important application of these results allows the efficient determination of excellent heuristic solutions of unconstrained nonlinear binary optimization problems [9].

Acknowledgements. This work was partially supported by ONR grants N00014-92-J-1375 and N00014-92-J-4083, by NSF grant DMS-9806389, and by research grants from the Natural Sciences and Engineering Research Council of Canada (NSERC).
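Before the reference list, the closed formula of Section 3 for the best linear $L_2$-approximation can be checked numerically. The sketch below applies the per-monomial formula by linearity and compares the result with an explicit least-squares fit over $\{0,1\}^n$ using NumPy; the example function is invented.

```python
from itertools import product
import numpy as np

# Closed-form best linear L2-approximation of a single monomial (Section 3):
#   L(prod_{i in A} x_i) = 2^{-|A|} * (1 - |A| + 2 * sum_{i in A} x_i),
# extended by linearity over the multilinear representation (1).
# The coefficients below describe an arbitrary illustrative function.
n = 3
c0 = 1.0
terms = {frozenset({0, 1}): 2.0, frozenset({0, 1, 2}): -3.0}

def best_linear(c0, terms, n):
    """Return (l0, [l_1..l_n]) for the best linear L2-approximation of f."""
    l0, lin = c0, [0.0] * n
    for A, c in terms.items():
        scale = c / (2 ** len(A))
        l0 += scale * (1 - len(A))
        for i in A:
            lin[i] += 2 * scale
    return l0, lin

l0, lin = best_linear(c0, terms, n)
print(l0, lin)

# Cross-check against an explicit least-squares fit over all of {0,1}^n.
X = np.array([(1,) + p for p in product((0, 1), repeat=n)], dtype=float)
y = np.array([c0 + sum(c * all(p[i] for i in A) for A, c in terms.items())
              for p in product((0, 1), repeat=n)], dtype=float)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)   # should match (l0, lin) up to rounding
```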
References[1]E.Balas and J.B.Mazzola,Nonlinear0-1programming.I.Linearization techniques,Mathematical Programming30(1984)1-22.[2]J.M.Bilbao,Cooperative Games on Combinatorial Structures,Kluwer AcademicPublishers,Boston/Dordrecht/London,2000.[3]E.Boros and P.L.Hammer,A max-°ow appraoch to improved roof duality inquadratic0-1minimization,RUTCOR-Rutgers University Center for Operations Research,Research Report RRR15-1989,Piscataway,NJ,1989.[4]E.Boros and P.L.Hammer,Pseudo-Boolean optimization,Discrete Applied Mathe-matics(2001),forthcoming.[5]Y.Crama,Recognition problems for special classes of polynomials in0-1variables,Mathematical Programming44(1989)139-155.[6]Y.Crama,Concave extensions for nonlinear0-1maximization problems,Mathemat-ical Programming61(1993)53-60.[7]Y.Crama,Combinatorial optimization models for production scheduling in auto-mated manufacturing systems,European Journal of Operational Research99(1997) 136-153.[8]Y.Crama,P.Hansen and B.Jaumard,The basic algorithm for pseudo-Booleanprogramming revisited,Discrete Applied Mathematics29(1990)171-185.[9]T.Davoine,P.L.Hammer and B.Vizvari,A heuristic for Boolean optimization prob-lems,RUTCOR-Rutgers University Center for Operations Research,Research Re-port RR42-2000,Piscataway,NJ,2000.[10]D.De Werra,P.L.Hammer,T.Liebling and B.Simeone,From linear separability tounimodality:A hierarchy of pseudo-Boolean functions,SIAM Journal on Discrete Mathematics1(1988)174-184.[11]Ch.Ebenegger,P.L.Hammer and D.de Werra,Pseudo-Boolean functions and sta-bility of graphs,Annals of Discrete Mathematics19(1984)83-97.[12]S.Foldes and P.L.Hammer,Disjunctive and conjunctive normal forms of pseudo-Boolean functions,RUTCOR-Rutgers University Center for Operations Research, Research Report RRR1-2000,Piscataway,NJ,2000.[13]P.L.Hammer,P.Hansen and B.Simeone,Roof duality,complementation and persis-tency in quadratic0-1optimization,Mathematical Programming28(1984)121-155.[14]P.L.Hammer and R.Holzman,Approximations of pseudo-Boolean functions:Ap-plications to game theory,ZOR-Methods and Models of Operations Research36 (1992)3-21.[15]P.L.Hammer,I.Rosenberg and S.Rudeanu,On the determination of the minima ofpseudo-Boolean functions(in Romanian),Stud.Cerc.Mat.14(1963)359-364. [16]P.L.Hammer and S.Rudeanu,Boolean Methods in Operations Research and RelatedAreas,Springer,Berlin,1968.[17]P.Hansen and B.Jaumard,Algorithms for the maximum satis¯ability problem,Computing44(1990)279-303.[18]P.Hansen,B.Jaumard and V.Mathon,Constrained nonlinear0-1programming,ORSA Journal on Computing5(1993)97-119.[19]P.Hansen and B.Simeone,Unimodular functions,Discrete Applied Mathematics14(1986)269-281.[20]S.Iwata,L.Fleischer and S.Fujishige,A combinatorial,strongly polynomial-timealgorithm for minimizing submodular functions,in:Proceedings of the32nd ACM Symposium on Theory of Computing,2000,pp.97-106.[21]A.Schrijver,A combinatorial algorithm minimizing submodular functions in stronglypolynomial time,Journal of Combinatorial Theory(Series B)80(2000)346-355. [22]K.E.Stecke,Formulation and solution of nonlinear integer production planning prob-lems for°exible manufacturing sytems,Management Science29(1983)273-288.。
DOV M.GABBAY INTUITIONISTIC BASIS FOR NON-MONOTONICLOGIC1INTRODUCTIONClassical provability,which we denote by ,as intuitionistic provability,which we denote by|=,are monotonic notions.This means they satisfy the rule:1.1A YA,B YIn other words whatever can be obtained from assumption A can still be obtained if we add to A more assumptions.In many cases,reasoning is non-monotonic.Consider the story below of the World-Trave-Agency(WTA).The story shall serve as the basis for introducing a system of non-monotonic logic.WTA offers package tours from New York to Paris and Stuttgart.Every Sunday, a group of New Yorkers board a WTA plane in Now York,some get off at Paris and the rest continue to Stuttgart.The Stuttgart office arranges for hotels and tours in Europe.Our non-monotonic reasoning will be done by the the Stuttgart office.At12:00Stuttgart time,3hours after the scheduled New York take-off of the WTA group,the Stuttgart office learns that a terrorist attack was launched at Paris Airport,2planes were hijacked and that allflights nearing France are rerouted.Stuttgart has to decide what to do with the WTA-group.All telephone lines to New York are busy.The only established fact is that the WTA-plane did take off (some passenger’s wife happened to leave a message).Stuttgart reasons that in this case the WTA-plane will be rerouted to London, the Paris passengers will stay in London and the Stuttgart passengers will arrive tonight.This reasoning is performed on the basis of knowledge of WTA proce-dures and connections.1.2Thus the reasoning at12:00Stuttgart time is as follows(a)Established as true at12:00hours:p=Groupflight too off.q=Terrorist attack on Paris airport.(b)Assumed true at12:00hours on basis of company procedures,and no evi-dence to the contrary:r=Flight rerouted through London.2DOV M.GABBAY(c)Conclusion at12:00:C=The Stuttgart passengers are arriving tonight from London.Two hours later,at14:00,Stuttgart manages to establish connection with New York.They learn that the plane was two and a half hours late in take-off,mainly because a new large group of Paris passengers was added/The New York office did not know what the captain would decide to do in midair,when hearing of the terrorist attack.Again,based on company policy and without evidence to the contrary,that the Stuttgart office assumes the captain would return the short distance to New York and let the passengers spend the night at home.1.3Thus at14:00we have(a)Established as true at14:00:p,q andp =Take-off delayed in New York.A large number of Paris passengers is onflight.(b)Assumed true on the basis of company procedures,and no evidence to thecontrary:r =Flight returning to New York.(c)Conclusion at14:00:¬C:Stuttgart passengers not arriving from London tonight.1.4To formalise the above Stuttgart reasoning,we need three no-tions,besides the usual connectives¬,∧,∨,→(a)Notion ∗of deducibility,the nature of which to be determine.(b)A notion of Having established A,we further assume B,on the basis ofour knowledge of the way our world runs under normal conditions and no evidence to the contrary.(c)A notion ∗for the non-monotonic reasoning involved,which is somehowdefined using(a)(b)above and possibly other considerations.The cain of reasoning in this particular story can be represented as follows:INTUITIONISTIC BASIS FOR NON-MONOTONIC LOGIC 31.5At 12:00(a)We know p ∧q ,we assume or expect r .(b)(p ∧q ∧r ) ∗C .(c)p ∧q ∗ C .1.6At 14:00(a)We know (p ∧q ∧p ),we assume or expect r .(b)p ∧q ∧p ∧r ∗ ¬C .(c)p ∧q ∧p ∗ ¬C .It is clear from 
5.(c)and 6.(c)that ∗ is non-monotonic.McDermott and Doyle [1980]introduce the extra connective M into the lan-guage and read MC as it is consistent to assume C .The main formal equation for M ,besides properties following from its intuitive meaning is:McDermott and Doyle take ∗to be classical logic provability .They define |∼form using M in a certain way.Thus 5.(a)above is formalised in their system as p ∧q Mr where is classical provability.Our approach,to be introduced in detail in Section 8,is to introduce a binary relation A >B reading B is expected on the basis of A ,and some intuitive axioms for >.We take ∗ to be intuitionistic provability |=and define ||∼using |=and >.Thus for example 4(a)is represented as (p ∧q )>r in our system with >.The exact details and the justification for our approach are given in Section 8.The essential difference between us and McDermott and Doyle is in basing the non-monotonic system on intuitionistic logic.Whether one uses M or >is not soimportant,as differences can be compensated in the manner of defining ∗ from ∗and M or from ∗and >.We are going to introduce two systems of non-monotonic logic.One,called µwhich is essentially McDermott and Doyle type (using M )based on intuitionistic logic and another,called γ,is also based on intuitionistic logic but with >as primitive.We proceed as follows.In Section 2we study the difficulties in McDermott and Doyle’s system.Section 3is general discussion;Sections ??,5,6,7study µ.Section 8introduces the system γwith >;and Section 9discusses the connection with McCarthy’s circumscription,ant other general remarks.4DOV M.GABBAY2THE NEED FOR AN INTUITIONISTIC BASIS McDermott and Doyle formally add to the language of classical logic the addi-tional operator M.They use to define the notion|∼of non-monotonic provabil-ity in a certain way.2.1Here are some examples of their deductions,given in their pa-per.PC stands for the classical predicate calculus.(a)The theory T1=PC∪(MC→¬D,MD→¬C)non-monotonicallyproves MC∨MD.(b)The theory T2=PC∪(MC→¬C)is non-monotonically inconsistent.(c)The deduction theorem does not hold for non-monotonic logic.(d)The theory T3=PC∪(MC→C)proves non-monotonically exactly C(which is not classically provable).(e)The theory T4=PC∪(∀x[MP(x)→[P(x)∨(∀y=x)−P(y)]])provesnon-monotonically∃xMP(x).(f)The theory T6=PC∪(MC→¬C,¬C)is non-monotonically consistentand proves¬MC but its subtheory{MC→¬C}is not consistent.McDermott and Doyle[1980],page69,list the following difficulties with their logic as evidence that it“fails to capture a coherent notion of consistency”.Difficulty A:The theory T13=P C∪(MC→D,¬D)is inconsistent in their logic mainly because¬C does not follow from T13,thus allowing MC to be assumed.They say in[McDermott and Doyle,1980]that“this can be remedied by extending the theory to include¬C,the approach taken by the our system of non-monotonic logic will prove:¬D,MC→D|∼C.Difficulty B:They continue to say:“Another incoherence of our(i.e.,McDermott and Doyle) logic is that consistency is not distributive;MC does not follow from M(C∧D)”.INTUITIONISTIC BASIS FOR NON-MONOTONIC LOGIC5Difficulty C:Another difficulty is that the logic“tolerates axioms which force an incoherent notion of consistency,as in T14=PC∪(MC,¬C).Let us see what is required to remedy the above difficulties.¿From the elimination of Difficulty A we want the validity in non-monotonic logic of the following inferenceMC→D,¬D¬Cor equivalently:¬D,¬D→¬MC¬CSince D is arbitrary,the rule really means:¬MC¬C(One can obtain this also by taking 
The second difficulty leads to the rule

M(A∧B) / MA

This rule seems intuitive. If we are given ¬C it is indeed not consistent to add C. In fact, one of the goals of the TMS is to prevent both ¬C and MC being "in".

The above already shows that classical reasoning does not agree with the intuitions behind M. MC and C are equivalent, according to the above rules, if understood classically, and it is not surprising that they are not valid in [McDermott and Doyle, 1980].

Furthermore, let us see what the TMS does in case of contradiction. We quote page 67 of [McDermott and Doyle, 1980]:

"For example, the existing theory may be (MC→E), in which both MC and E are believed. Adding the axiom MD→¬E leads to an inconsistent theory, as MD is assumed (there being no proof of ¬D), which leads to proving ¬E. The dependency-directed backtracking process would trace the proofs of E and ¬E, find that two assumptions, MC and MD, were responsible. Just concluding ¬MC∨¬MD does no good, since this does not rule out any assumptions, so the TMS adds the new axiom E→¬D, which invalidates the assumption MD and so restores consistency."

We thus have the rule:

MC→E, MD→¬E / E→¬D

This deduction is intuitionistic. We think that at this stage it is safe to conclude that there is enough evidence for trying to base non-monotonic logic on intuitionistic logic and seeing whether we can get a satisfactory system!

3 CAUTION

The construction of McDermott and Doyle's system, as indeed the construction of any logical system, stems from two basic considerations.

3.1 The intuitive interpretation and intended meaning for the notions to be formalised.

3.2 A partial list of theorems and non-theorems that the expected logic must have.

We have the above for non-monotonic logic. We have argued that it is worthwhile checking whether non-monotonic logic can be based on intuitionistic logic. We have to be careful in our examination of the new candidate. It is not enough to say: here is a logic that can derive what we want to have as theorems and keeps out what we don't want to get (i.e., a system for M compatible with 3.2). It must also be the case that a natural interpretation for the new system is available, which is compatible with 3.1.

To give an example from tense logics (see [Gabbay, 1976]), suppose we introduce the operator Nq for the English progressive, reading: "q is true now and has been true a little bit before now and will continue to be true a little bit in the future of now". This is the main property of the progressive in English. For example:

Nq = "He is writing now"

has this reading. If we write down the formal properties of the progressive N, we obtain:

Nq → q
Nq → NNq
N(p∧q) ↔ Np ∧ Nq.

We can come to you and say: "We know of such a system". It is one of Lewis' modal systems, called S4 (for the necessity operator □). Its interpretation is: □q is true at a certain situation iff q is true in all resembling situations. So □ may have all the properties of N and hence satisfy 3.2, but it is wrong to say that the progressive is really a sort of modal necessity operator and that its logic is the logic of necessity. The flaw is that it may not satisfy 3.1. To show its weakness, suppose we want to talk about N′q, which is not "He is writing" but "He has just started writing". N′ is very similar to N. Can you find a □′ very similar to □ to match N′? The notion of "beginning" does not enter at all into the interpretation of □.

So when we present our system µ we must watch that both 3.1 and 3.2 above are well matched. For this reason we start by presenting µ through its semantical interpretation.
4 SEMANTICAL PRESENTATION OF µ

We are looking for a logical system in the language of ¬ (negation), ∧ (conjunction), ∨ (disjunction), → (if-then), ∀ (universal quantifier), ∃ (existential quantifier) and the special additional unary connective M. We want the following to hold for |=, the provability of our system.

4.1 If A ⊬ ¬p then A ⊢ Mp.

Mp reads "p is consistent".

Consider a flow of time (T, ≤), where T is the set of moments of time and ≤ is the earlier-later relation; it is transitive and reflexive. We write t ≤ s to mean: s is in the future of t, or s is t itself. Atomic propositions represent unit statements. At each moment of time there are those statements which are known to hold true. As time goes on we learn more and more about the world and more and more statements become true. So the advance of time does not bring new events but more knowledge! Mp reads: it is consistent to assume at this stage that p is true. So we know it is possible that sometime in the future we establish p. But we are not sure; it may turn out that ¬p is established. Thus we have the following properties:

4.2 If A is established now then A will always be established.

4.3 If ¬A is established now then ¬A will always hold.

4.4 A → B is established now iff it is already clear now that if we establish A then we must establish B. We may not know yet that A is established, but whenever it is, B will also follow.

4.5 Mp holds now if it is consistent with what we know now that p is true.

4.6 ∃xA(x) holds now iff for some element a, A(a) holds now.

4.7 ∀xA(x) is established now iff it is clear now that for any later moment and any new or old element a, A(a) will be established.

Notice that 4.1 above holds in this interpretation. In fact, ¬p ∨ Mp is valid.

4.8 Formal description of the model µ in the propositional case.

The language contains ¬, ∧, ∨, →, M. The structures are of the form (T, ≤, h). The function h is the assignment. For an atom q and each t ∈ T, h(t, q) gives a truth value. If h(t, q) = truth and t ≤ s then h(s, q) = truth. The function h can be extended to all wffs as follows:

1. h(t, A∧B) = truth iff h(t, A) = h(t, B) = truth.
2. h(t, A∨B) = truth iff h(t, A) = truth or h(t, B) = truth.
3. h(t, ¬A) = truth iff ∀s ≥ t (h(s, A) = false).
4. h(t, A→B) = truth iff ∀s ≥ t (if h(s, A) = truth then h(s, B) = truth).
5. h(t, MA) = truth iff ∃s ≥ t (h(s, A) = truth).
6. We say A proves B (intuitionistically) (notation A ⊢ B) iff for all t and h, h(t, A) = truth implies h(t, B) = truth.

The non-monotonic provability notion ||∼ will be based on ⊢. It will be defined at the end of this section, in Definition 3.
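To make 4.8 concrete, here is a small evaluator for these clauses over a finite frame. This is an illustrative aside, not part of the paper: the tuple-based formula encoding is an assumption of the sketch, and clause 6 quantifies over all models and assignments, whereas the code checks validity in a single given model only.

def holds(model, t, A):
    # Truth of formula A at point t of model = (T, leq, h); leq contains (t, s) iff t <= s
    # and must be reflexive and transitive; h is a persistent valuation {(point, atom): True}.
    T, leq, h = model
    future = [s for s in T if (t, s) in leq]
    if isinstance(A, str):                                   # atom
        return h.get((t, A), False)
    op = A[0]
    if op == 'and': return holds(model, t, A[1]) and holds(model, t, A[2])
    if op == 'or':  return holds(model, t, A[1]) or holds(model, t, A[2])
    if op == 'not': return all(not holds(model, s, A[1]) for s in future)
    if op == '->':  return all(not holds(model, s, A[1]) or holds(model, s, A[2]) for s in future)
    if op == 'M':   return any(holds(model, s, A[1]) for s in future)
    raise ValueError('unknown connective: %r' % (op,))

def proves(model, A, B):
    # Clause 6 of 4.8 restricted to one model: B holds wherever A holds.
    T, _, _ = model
    return all(holds(model, t, B) for t in T if holds(model, t, A))

M1 = (['t0', 't1'],
      {('t0', 't0'), ('t0', 't1'), ('t1', 't1')},            # a reflexive chain t0 <= t1
      {('t1', 'p'): True, ('t1', 'q'): True})                # p, q become established only at t1

print(holds(M1, 't0', ('M', 'p')))                           # True: p is still consistent at t0
print(holds(M1, 't0', ('or', ('M', 'p'), ('not', 'p'))))     # True, cf. 4.1
print(holds(M1, 't0', ('or', 'p', ('not', 'p'))))            # False: p or not-p is not valid here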
4.9 Example

Let us continue to get the feel of the model. The following are valid rules. Remember that the logic is based on intuitionistic logic, and so many classical equivalences may not hold!

1. MA ∨ ¬A is valid.
2. ¬MA is equivalent to ¬A.
3. MA → B is equivalent to ¬A ∨ B. (That is, either ¬A; if not, then MA can be assumed and hence B.)
4. A particular case of (3) is: MC → C is the same as ¬C ∨ C.
5. C ∨ ¬C is not a theorem of the logic, because it is equivalent to MC → C and that is not a theorem. It says that if C is not proved now then it is inconsistent to assume it!
6. MC → ¬C is equivalent to ¬C.
7. If A ∧ Mq ⊢ B, we can still find out that ¬Mq (i.e. ¬q) holds, and so it is quite possible that also A ∧ ¬q ⊢ B.

REMARK 1. We have to be careful how to translate from McDermott and Doyle's logic into our logic. Since their logic is based on classical logic and ours on intuitionistic logic, different formulations that are equivalent over classical logic may not be equivalent in our logic. We must therefore translate the meaning into our system, which amounts to choosing one of the equivalent classical versions.

A simple example is A ∨ ¬A. It is equivalent to A → A in classical logic but not in intuitionistic logic.

The logic µ is not really weaker than classical logic. It just affords more opportunities for formulation and therefore is much richer. For example, for any A, B of classical propositional logic we have

A ⊢ ¬B classically iff A ⊢ ¬B intuitionistically.

This shows that intuitionistic logic is really richer, allowing for more play. The reader can continue to check that many desirable properties of non-monotonic reasoning are available. Note further that the rule A / MA is valid. Note also that the interpretation which was used to introduce the system is quite compatible with the intuitive meaning behind M.

Let us see whether difficulties A, B, C are resolved.

Difficulty A. The rule ¬MC / ¬C is indeed valid.

Difficulty B. The rule M(A∧B) / MA is also valid.

Difficulty C. (MC, ¬C) is indeed inconsistent.
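As a quick spot-check (again an aside, not the paper's text), the two rules cited for Difficulties A and B can be tested on the two-point model M1 from the sketch above; this verifies them in that one model only, not provability in µ.

# Difficulty A's rule: from not(M r) infer not r  (r is never established in M1)
print(proves(M1, ('not', ('M', 'r')), ('not', 'r')))         # True
# Difficulty B's rule: from M(p and q) infer M p
print(proves(M1, ('M', ('and', 'p', 'q')), ('M', 'p')))      # True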
Let us see now what the TMS would do in the case of MC→E and MD→¬E. According to our logic µ, this means we cannot have both MC and MD, and hence we can add either ¬C or ¬D immediately. If we add ¬D, then we can still hold MC and hence obtain E (compare with the description in Section 2, where the TMS takes E→¬D). Or we can add ¬C, in which case we can still hold MD and hence get ¬E. Notice that in our logic MC→E is the same as ¬C∨E and MD→¬E is the same as ¬D∨¬E, and so we can also let the TMS take over from here.

The logic µ is not exactly the same as McDermott and Doyle's logic. We pay the price for resolving difficulties A, B, C. For example, the theory (MC→C) does not prove C as in McDermott and Doyle. It proves ¬C∨C, which says that either C is determined now or ¬C is determined now. It does not say which one. McDermott and Doyle's logic would say: since neither is determined, assume MC, therefore C. This is what the TMS would do. But this is unjustified, since (MC→C, ¬C) is also consistent and is equivalent to ¬C.

REMARK 2 (Further Remarks). Observe that in proposing the logic µ we have only proposed a new basis for non-monotonic logic. We can further strengthen the logic by allowing for a procedure for finding fixed points, as proposed by McDermott and Doyle. If we do that, we would get (MC→C) ⊢ C as they do. The difficulties A, B, C will still be removed, since they are removed by certain rules, and these rules will still be available after the fixed-point procedure is added.

Another difference between the two logics is in the case (MC→¬C). This theory is consistent in our case and proves ¬C. According to McDermott and Doyle, this theory is non-monotonically inconsistent. They also get in their logic that (MC→¬C) is inconsistent, but if they add ¬C, i.e. have (MC→¬C, ¬C), then what they get is consistent! We regard this as an undesirable feature; in our logic both theories are consistent and are equivalent to ¬C.

Consider also the theory T4 of 2.1(e). As we shall see in Section 9, the theory ∀x(P(x)∨¬P(x)) is McCarthy's circumscription on P as represented in our system.

We are now in a position to define our notion of non-monotonic provability, based on the intuitionistic system µ. We denote this provability by ||∼. It is different from McDermott and Doyle's and is more like Reiter's [Reiter, 1980] default reasoning.

DEFINITION 3. We say that A ||∼ B iff there exist formulas C0 = A, C1, ..., Cn = B, called the intermediate stages of the non-monotonic deduction from A to B, and, for each i < n, a finite set of formulas

MX^i_1, ..., MX^i_{k(i)}

called the extra assumptions (similar to defaults), such that for all 0 ≤ i < n:

C_i ∧ MX^i_1 ∧ ... ∧ MX^i_{k(i)} ⊢ C_{i+1}.

EXAMPLE 4.

1. (MC→C) ∧ MC ⊢ C. Thus (MC→C) ||∼ C, using MC as a default.
2. If A ∧ Mq ⊢ C1 and C1 ∧ Mr ⊢ B, then by definition A ||∼ B. We reason as follows: with q consistent, we get C1 from A. Assume Mq, get C1. Having C1, r is possible. Believe r. So since C1 ∧ Mr ⊢ B, we get B. Hence A ||∼ B non-monotonically.
3. (MC→C) ∧ M¬C ⊢ ¬C. Hence (MC→C) ||∼ ¬C.

We thus get that (MC→C) can non-monotonically prove either ¬C or C. This means we can take either alternative and go on from there.

REMARK 5. We can take McDermott and Doyle's construction of fixed points and obtain, by basing their construction on µ, another non-monotonic provability compatible with theirs. We can denote it by |∼MD. Personally we prefer the default-type ||∼ of Definition 3.
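The one-step case of Definition 3 can be prototyped on top of the evaluator given earlier. This is an aside: the finite candidate pool, the bound on the number of defaults, and the two-model stand-in for ⊢ are all assumptions of the sketch, and chaining through intermediate stages C1, ..., C_{n-1} is omitted.

from itertools import combinations

def nm_proves(entails, A, B, candidates, max_defaults=2):
    # One-step instance of Definition 3 (n = 1): A ||~ B if, for some X1, ..., Xk drawn
    # from the candidate pool, A & M X1 & ... & M Xk entails B.
    for k in range(max_defaults + 1):
        for Xs in combinations(candidates, k):
            premise = A
            for X in Xs:
                premise = ('and', premise, ('M', X))
            if entails(premise, B):
                return True
    return False

# Example 4.1: (M p -> p) ||~ p with M p as the default.  As a crude stand-in for |- we
# test validity over two small models; this over-approximates provability, but it is
# enough to see that the default M p, not intuitionistic reasoning alone, yields p.
M2 = (['s0'], {('s0', 's0')}, {})                            # a model where nothing is ever established
entails = lambda X, Y: all(proves(M, X, Y) for M in (M1, M2))
goal = ('->', ('M', 'p'), 'p')
print(entails(goal, 'p'))                                    # False: p does not follow monotonically
print(nm_proves(entails, goal, 'p', ['p', ('not', 'p')]))    # True: reached via the extra assumption M p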
5 FURTHER PROPERTIES OF THE SYSTEM µ

1. Let α be a wff built up from ¬, ∨, ∧, →, M. Then there exist βi built up without M and an α* built up from the βi using ∧, ∨, M only, with the property ⊢ α* ↔ α. α* is said to be in normal form.

To prove this claim we observe that M can be pulled out from under the scope of → and ¬. The following are valid rules to be used:

(a) (x∨y → z) = (x→z) ∧ (y→z)
    (z → x∧y) = (z→x) ∧ (z→y)
    ¬(x∨y) = (¬x ∧ ¬y).

We can thus consider only nested occurrences of the following forms:

    i. x ∧ My → z ∨ Mν
    ii. ¬(x ∧ MA).

Notice that the following are further valid rules to be used:

(b) ⊢ (x ∧ My → z ∨ Mν) ↔ [(x ∧ ¬ν) → (¬y ∨ z)].

Also

(c) ⊢ ¬(x ∧ MA) ↔ (x → ¬A).

2. Let α be in normal form. Then there exists a wff γ without M satisfying ⊢ ¬α ↔ γ. The proof is long. We do not give it here.

3. For any α, β without M: α ⊢ ¬β in µ iff α ⊢ ¬β classically (where ⊢ is classical provability).

4. Rule 3 is an important one, connecting the logic µ with classical logic. It says that without M we still have classical logic if we want.

6 AN AXIOM SYSTEM FOR QUANTIFIED µ WITH ∀ AND ∃

The language contains M, ¬, ∧, →, ∀, ∃, ∨.

1. Axioms for wffs without M:
(a) A → (B → A)
(b) [A → (B → C)] → [(A → B) → (A → C)]
(c) A → (¬A → B)
(d) (A → ¬B) → (B → ¬A)
(e) A → (B → A∧B)
(f) A∧B → A
(g) A∧B → B
(h) ∀xA(x) → A(y);  A(y) → ∃xA(x).

2. Axioms for M:
(a) A → MA
(b) MMA → MA
(c) ¬(x ∧ MA) ↔ (x → ¬A)
(d) (x ∧ My → z ∨ Mν) ↔ (x ∧ ¬ν → ¬y ∨ z)
(e) ∀x[A ∨ MB(x)] ↔ ∀x[¬B(x) → A]
(f) ¬∃y[A ∧ MB(y)] ↔ ∀y(A → ¬B(y))

3. Rules:

A, A → B / B

A → B(x) / A → ∀xB(x)   (x not free in A)

B(x) → A / ∃xB(x) → A   (x not free in A)

The completeness theorem can be proved for this axiom system and the semantics given earlier.

7 FURTHER EVIDENCE FOR µ

We can give further evidence that µ is a good basis for non-monotonic logic by showing how the rule

A ⊬ ¬B iff A ⊢ MB

is represented in our system µ. We have seen that ¬B ∨ MB is valid, but that is not exactly the same as the rule above.

Consider the fragment with ¬, ∧, ∀ only. Let T be the set of wffs of this fragment. Thus the moments, or states of knowledge, in this case are the formulas themselves. Let A ≤ B mean B → A. So if B proves A, it represents a state of knowledge greater than and stronger than A. For any atom q let:

h(A, q) = truth iff A ⊢ q.

I.e. an atom q is true at a state of knowledge A iff A proves q. It can be shown that for any B,

h(A, B) = truth iff A ⊢ B.

This means that for any B, B is true at the state of knowledge A iff A proves B. The above is exactly the interpretation we want for M. Since

A ⊬ ¬B iff A ∧ B is consistent, i.e. iff the state of knowledge represented by A ∧ B is consistent,

and of course A ≤ A ∧ B, it follows that A ⊢ MB holds, since, as we have seen, there is a future state (namely A ∧ B) in which B is established. Under this very natural interpretation,

A ⊬ ¬B iff A ⊢ MB

holds.

8 THE SYSTEM γ WITH > (PRELIMINARY VERSION)

The language of our system contains the connectives ¬, ∧, ∨, →, > and the quantifiers ∀ and ∃. The system is based on intuitionistic provability and has all the axioms of intuitionistic logic together with the additional defining axioms for >. The following is an axiom system for γ.

1. Axioms and rules for intuitionistic logic, namely the group 1 axioms and the group 3 rules of Section 6.

2. Axioms for >:
(a) ⊢ (A>B) ∧ (A>C) ↔ (A > B∧C)
(b) the rules: from ⊢ A→B infer A>B; and from ⊢ A↔A′ infer (A>B) ↔ (A′>B)
(c) the rule: from A > falsity infer ¬A
(d) ⊢ (A>B) ∧ (A∧B > C) → (A>C)
(e) ⊢ (A>B) ∨ ¬(A>B)

3. The following features are not taken as axioms because they are not wanted:
(a) (A>B) ∧ (B>C) → (A>C)
(b) (A>B) ∧ (B>C) → (A>C)
(c) (A>B) → (¬B>¬A)
(d) (A>B) → (A∧A′ > B)
(e) B → (A>B)
To discuss the axioms for >, note that axioms 2.a and 2.b follow intuitively from the meaning of A > B as "B is expected on the basis of A and world knowledge". 2.c says that one cannot expect falsity. This axiom makes > more restrictive than M, because A ⊢ MB says only that B is consistent, so it is possible that both B and ¬B are consistent with A; A > B chooses only one of the two. Axiom 2.d is a form of transitivity. We reject full transitivity 3.a because it is not intuitive. For example, on the basis of a total stock-market collapse (¬A) we may expect Jones to lose his savings (¬B), and on the basis of Jones losing his savings we may expect his wife to give him hell (¬C), but we may not expect Jones' wife to give him hell on the basis of a total stock-market collapse and his losing his savings.

The meaning of 2.e is that A > B is independent of whether we know A is true or not; we can expect a drunken driver to cause an accident independently of whether there are such drivers today. For a similar reason 3.e is rejected: the fact that B is true does not imply that it was expected. The form in which 2.e is written has to do with the fact that our base is intuitionistic logic. It says that right at the beginning it is established whether A > B or ¬(A > B). The rejection of 3.c has to do with non-monotonicity: 3.c is an axiom for monotonicity. 3.b is rejected for the same reason; 3.b does not allow for our expectations to go wrong.

We now define non-monotonic provability for γ. It should be compared with the non-monotonic provability for µ of Definition 3.

DEFINITION 6. Let A ||∼ B iff for some X such that A > X we have A ∧ X ⊢ B.

LEMMA 7. If A ||∼ B and A ∧ B ||∼ C, then A ||∼ C.

Proof. If A ||∼ B then for some X, (A > X) ∧ (A ∧ X ⊢ B). By Axiom 2.b we get (A ∧ X) > B, and from 2.d we get A > B. From A ∧ B ||∼ C we get that for some Y, A ∧ B > Y and A ∧ B ∧ Y ⊢ C. We use 2.d again to get A > Y. But since also A > X, we get by Axiom 2.a that A > (X ∧ Y). Thus we have that for some X ∧ Y, A > (X ∧ Y) and A ∧ X ∧ Y ⊢ C (since A ∧ X ⊢ B and A ∧ Y ∧ B ⊢ C), and hence A ||∼ C.

REMARK 8. The previous lemma allows us to define the γ provability without resorting to a sequence of intermediaries X1, X2, ... as we did in Definition 3 for µ.

9 FURTHER REMARKS

1. Propositional µ is decidable. The result follows from the decidability of the intuitionistic propositional calculus (see [Gabbay, 1981]). We do not know whether propositional γ is decidable; probably it is.

2. µ, as well as McDermott and Doyle's system, cannot be based on classical logic: M will collapse (MC = C) if we want to have reasonable rules (no difficulties). The system γ can be based on classical logic. It would not be satisfactory, however, but > would not collapse.

3. We can represent some cases of McCarthy's circumscription in our system. Take a wff A(P), where the notation signifies that A "talks" about the property P. Then circumscription on P says that P holds of those elements of which A forces it to hold, and no more. Thus if we are at a stage of knowledge t (which can be taken to be A itself, according to Section 7), then if we do not know P(a), we assume by circumscription that ¬P(a). So what we are saying in this case is that

P(a) ∨ ¬P(a)

holds. Remember that in our logic z ∨ ¬z is not a theorem. We have an intuitionistic basis! Thus

A + McCarthy's circumscription ⊢ B

is the same as

A + (∀x[P(x) ∨ ¬P(x)], for all P) ⊢ B.

We cannot deal with other forms of circumscription, e.g. those giving induction.

BIBLIOGRAPHY

[Gabbay, 1976] D. M. Gabbay. Investigations in Modal and Tense Logic with Applications. D. Reidel, 1976.
[Gabbay, 1981] D. M. Gabbay. Semantical Investigations in Heyting's Intuitionistic Logic. D. Reidel, 1981.
[McCarthy, 1980] J. McCarthy. Circumscription, a form of non-monotonic reasoning. Artificial Intelligence, 13, pp. 27–39, 1980.
[McDermott and Doyle, 1980] D. McDermott and J. Doyle. Non-monotonic logic I. Artificial Intelligence, 13, pp. 41–72, 1980.
[Reiter, 1980] R. Reiter. A logic of default reasoning. Artificial Intelligence, 13, pp. 81–132, 1980.
Graded DominanceLibor Bˇe hounek1,Ulrich Bodenhofer2,Petr Cintula1,and Susanne Saminger-Platz31Institute of Computer Science,Academy of Sciences of the Czech RepublicPod V od´a renskou vˇeˇz´ı2,18207Prague,Czech Republic{behounek|cintula}@cs.cas.cz2Institute of Bioinformatics,Johannes Kepler University LinzAltenberger Str.69,A-4040Linz,Austriabodenhofer@bioinf.jku.at3Dept.of Knowledge-Based Math.Systems,Johannes Kepler University LinzAltenberger Str.69,A-4040Linz,Austriasusanne.saminger-platz@jku.at1IntroductionThe relation of dominance between aggregation operators has recently been studied quite inten-sively[9,12,10,11,13,14].We propose to study its‘graded’generalization in the foundational frame-work of higher-order fuzzy logic,also known as Fuzzy Class Theory(FCT)introduced in[1].FCT is specially designed to allow a quick and sound development of graded,lattice-valued generalizations of the notions of traditional‘fuzzy mathematics’and is a backbone of a broader program of logic-based foundations for fuzzy mathematics,described in[2].This short abstract is to be understood as just a‘teaser’of the broad and potentially very interest-ing area of graded dominance.We sketch basic definitions and properties related to this notion and present a few examples of results in the area of equivalence and order relations(in particular,we show interesting graded generalization of basic results from[6,12]).Also some of our theorems are,for expository purposes,stated in a less general form here and can be further generalized substantively.In this paper,we work in Fuzzy Class Theory over the logic MTL∆of all left-continuous t-norms [7].The apparatus of FCT and its standard notation is explained in detail in the primer[3],which is freely available online.Furthermore we use X Y for∆(X⊆Y).2Inner Truth Values and Truth-Value OperatorsAn important feature of FCT is the absence of variables for truth values.However,many theorems of traditional fuzzy mathematics do speak about truth values or quantify over operators on truth values like aggregation operators,copulas,t-norms,etc.In order to be able to speak of truth values within FCT,truth values need be internalized in the theory.This is done in[4]by a rather standard technique, by representing truth values by subclasses of a crisp singleton.4Thus we can assume that we do have variablesα,β,...for truth values in FCT;the class of the inner truth values is denoted by L.Binary operators on truth values(including propositional connectives&,¬,...)can then be re-garded as functions c:L×L→L or as fuzzy relations c L×L.Consequently,graded class relations can be applied to such operators,e.g.,fuzzy inclusion c⊆d≡(∀α,β)(αcβ→αdβ).Many crisp classes of truth-value operators(e.g.,t-norms,continuous t-norms,copulas,etc.)can be defined by formulae of FCT.The apparatus,however,enables also partial satisfaction of such conditions.In the 4Cf.[15]for an analogous construction in a set theory over a variant of G¨o del logic.See[4]for details of the construction and certain metamathematical qualifications regarding the representation.Observe also a parallel with the power-object of1in topos theory.following,we therefore give several fuzzy conditions on truth-value operators and use them as graded preconditions of theorems which need not be satisfied to the full degree.This yields a completely new graded theory of truth-value operators and allows non-trivial generalizations of well-known theorems on such operators,including their consequences for properties of fuzzy 
relations.Definition1.In FCT,we define the following graded properties of a truth-value operator c L×L:Com(c)≡df(∀α,β)(αcβ→βcα)Ass(c)≡df(∀α,β,γ)((αcβ)cγ)↔(αc(βcα))MonL(c)≡df(∀α,β,γ)(∆(α→β)→(αcγ→βcγ))MonR(c)≡df(∀α,β,γ)(∆(α→β)→(γcα→γcβ))UnL(c)≡df(∀α)(1cα↔α)UnR(c)≡df(∀α)(αc1↔α)For convenience,we also defineMon(c)≡df MonL(c)&MonR(c)wMon(c)≡df MonL(c)∧MonR(c)and analogously for Un.The following theorem provides us with samples of basic graded results.Theorem1.FCT proves the following graded properties of truth-value operators:1.Mon(c)&Un(c)→(c⊆∧)2.wMon(c)&(∀α)(αcα↔α)→(∧⊆c)3.Mon(c)&Un(c)→[(αcα↔α)↔(∀β)((αcβ)↔(α∧β))]The three assertions above are generalizations of well-known basic properties of t-norms.Theo-rem1.1corresponds to the fact that the minimum is the greatest(so-called strongest)t-norm.Theorem 1.2generalizes the basic fact that the minimum is the only idempotent t-norm,while1.3is a graded characterization of the idempotents of c.[8].3Graded DominanceDefinition2.The graded relation of dominance between truth-value operators is defined as fol-lows:c d≡df(∀α,β,γ,δ)((αdγ)c(βdδ)→(αcβ)d(γcδ))Theorem2.FCT proves the following graded properties of dominance:1.∆Com(c)&Ass4(c)&Mon(c)→(c c)2.Un(c)&Un(d)&(c d)→(c⊆d)3.∆Com(c)&Ass4(c)&Mon2(c)&(d c)&(c⊆d)→(c d)4.∆Com(d)&Ass4(d)&Mon2(d)&(d c)&(c⊆d)→(c d)5.Mon(c)&(& c)&((α→β)c(γ→δ))→((αcγ)→(βcδ))6.Mon(c)&(& c)&((α↔β)c(γ↔δ))→((αcγ)↔(βcδ))2Theorems2.1and2.2are generalizations of two basic facts,namely that every t-norm dominates itself and that dominance implies inclusion/pointwise order.Theorems2.3and2.4have no corre-spondences among known results;they provide us with bounds for the degree to which(c d)holds, where the assumption(d c)&(c⊆d)would be obviously useless in the crisp non-graded frame-work(as it necessitates that c and d coincide anyway).Theorem2.5provides us with strengthened monotonicity of an aggregation operator c provided that c fulfills Mon(c)and dominates the conjunc-tion of the underlying logic.Theorem2.6is then a kind of“Lipschitz property”of c(if we view↔as a kind of generalized closeness measure).Theorem3.FCT proves the following graded properties of dominance w.r.t.∧:1.Mon(c)→(c ∧)2.∆Mon(c)&∆Un(c)→((∧ c)=(∧⊆c))3.wMon2(c)→((∧ c)↔(∀α,β)((αc1)∧(1cβ)↔(αcβ)))Theorem3.1is a graded generalization of the well-known fact that the minimum dominates any aggregation operator[12].Theorem3.2demonstrates a rather surprising fact:that the degree to which a monotonic binary operation with neutral element1dominates the minimum is nothing else but the degree to which it is larger.Theorem3.3is an alternative characterization of operators dominating the minimum;for its non-graded version see[12,Prop.5.1].Example1.Assertion2.of Theorem3can easily be utilized to compute degrees to which standard t-norms on the unit interval dominate the minimum.It can be shown easily that(∧⊆c)=inf(x⇒c(x,x))x∈[0,1]holds,i.e.the largest“difference”of a t-norm c from the minimum can always be found on the diag-onal.In standardŁukasiewicz logic,this is,for instance,0.75for the product t-norm and0.5for the Łukasiewicz t-norm itself.So we can infer that the product t-norm dominates the minimum with a de-gree of0.75(assuming that the underlying logic is standardŁukasiewicz!);with the same assumption, theŁukasiewicz t-norm dominates the minimum to a degree of0.5.4Graded Dominance and Properties of Fuzzy RelationsThe following theorems show the importance of graded dominance for graded properties of fuzzy relations.Theorem4is a graded generalization of the well-known theorem that 
uses dominance to characterize preservation of transitivity by aggregation[12,Th.3.1](compare also[6]).Theorem4.FCT proves:Mon(c)→((∀E,F)(∆Trans(E)&∆Trans(F)→Trans(Op c(E,F))↔(& c)))where Op c is the class operation given by c,i.e., x,y ∈Op c(E,F)≡Exy c Fxy.The following theorem provides us with results on the preservation of various properties by sym-metrizations of fuzzy relations.3Theorem5.FCT proves the following properties of the symmetrization of relations:(c)→(Sym(Op c(R,R−1)))2.(&⊆c)&Refl2R→(Refl(Op c(R,R−1)))3.(&⊆c)→AntiSym(Opc (R,R−1))R4.Mon(c)&(& c)&∆Trans R→(Trans(Op c(R,R−1)))In the crisp case,the commutativity of an operator trivially implies the symmetry of symmetriza-tions by this operator.In the graded case,Theorem5.1above states that the degree to which a sym-metrization is actually symmetric is bounded below by the degree to which the aggregation operatorc is commutative.Theorems5.2–4are also well-known in the non-graded case[5,6,16].Obviously,5.4is a simple corollary of Theorem4.AcknowledgmentsLibor Bˇe hounek was supported by the program Information Society project No.1ET100300517. Petr Cintula was supported by grant No.A100300503of GA A VˇCR and Institutional Research Plan A V0Z10300504.The cooperation of the team was enabled by Program Kontakt/WTZ Czech Republic–Austria project No.6–07–17/2–2007“Formal foundations of preference modeling”. References1.Libor Bˇe hounek and Petr Cintula.Fuzzy class theory.Fuzzy Sets and Systems,154(1):34–55,2005.2.Libor Bˇe hounek and Petr Cintula.From fuzzy logic to fuzzy mathematics:A methodological manifesto.Fuzzy Setsand Systems,157(5):642–646,2006.3.Libor Bˇe hounek and Petr Cintula.Fuzzy Class Theory:A primer v1.0.Technical Report V-939,In-stitute of Computer Science,Academy of Sciences of the Czech Republic,Prague,2006.Available at www.cs.cas.cz/research/library/reports900.shtml.4.Libor Bˇe hounek and Martina Daˇn kov´a.Relational compositions in Fuzzy Class Theory.Submitted to Fuzzy Sets andSystems,2007.5.Ulrich Bodenhofer.A similarity-based generalization of fuzzy orderings preserving the classical axioms.InternationalJournal of Uncertainty,Fuzziness and Knowledge-Based Systems,8(5):593–610,2000.6.Bernard De Baets and Radko Mesiar.T-partitions.Fuzzy Sets and Systems,97(2):211–223,1998.7.Francesc Esteva and Llu´ıs Godo.Monoidal t-norm based logic:Towards a logic for left-continuous t-norms.FuzzySets and Systems,124(3):271–288,2001.8.Erich Petr Klement,Radko Mesiar,and Endre Pap.Triangular Norms,volume8of Trends in Logic.Kluwer,Dordrecht,2000.9.Radko Mesiar and Susanne Saminger.Domination of ordered weighted averaging operators over t-norms.Soft Com-puting,8(8):562–570,2004.10.S.Saminger,B.De Baets,and H.De Meyer.On the dominance relation between ordinal sums of conjunctors.Kyber-netika,42(3):337–350,2006.11.S.Saminger,P.Sarkoci,and B.De Baets.The dominance relation on the class of continuous t-norms from an ordinalsum point of view.In H.de Swart,E.Orlowska,M.Roubens,and G.Schmidt,editors,Theory and Applications of Relational Structures as Knowledge Instruments II,volume4342of Lecture Notes in Artificial Intelligence,pages 334–354.Springer,Heidelberg,2006.12.Susanne Saminger,Radko Mesiar,and Ulrich Bodenhofer.Domination of aggregation operators and preservation oftransitivity.International Journal of Uncertainty,Fuzziness and Knowledge-Based Systems,10(Suppl.):11–35,2002.13.P.Sarkoci.Domination in the families of Frank and Hamacher t-norms.Kybernetika,41:345–356,2005.14.P.Sarkoci.Dominance is not transitive on continuous 
triangular norms. Aequationes Mathematicae, 2007. Accepted.
15. Gaisi Takeuti and Satoko Titani. Fuzzy logic and fuzzy set theory. Archive for Mathematical Logic, 32:1–32, 1992.
16. Llorenç Valverde. On the structure of F-indistinguishability operators. Fuzzy Sets and Systems, 17(3):313–328, 1985.
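As a numerical aside (not part of the abstract above), the degrees claimed in Example 1 of Section 3 for standard Łukasiewicz semantics can be checked on a grid; the helper names and the grid resolution are choices of this sketch.

def dominance_degree_over_min(c, steps=100001):
    # Degree to which a t-norm c dominates the minimum, per Example 1: inf_x (x => c(x, x)),
    # where x => y = min(1, 1 - x + y) is the standard Lukasiewicz residuum (grid approximation).
    imp = lambda x, y: min(1.0, 1.0 - x + y)
    return min(imp(x, c(x, x)) for x in (i / (steps - 1) for i in range(steps)))

product = lambda x, y: x * y
lukasiewicz = lambda x, y: max(0.0, x + y - 1.0)
print(round(dominance_degree_over_min(product), 4))      # 0.75, as stated in Example 1
print(round(dominance_degree_over_min(lukasiewicz), 4))  # 0.5,  as stated in Example 1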