Reviving functional decomposition in object-oriented design
Software Design
- Theoretical principles
- Design methods

The Programmer's Approach to Software Engineering
- Skip the requirements engineering and design phases, start writing code

Why this Programmer's Approach?
- Requirements engineering and design are a waste of time
- The wish (or need) to show something as soon as possible
- Assessment on the basis of LOC/month
- Real or perceived time pressure

What is design?
- Decomposition of the system into a set of interacting components
- Question: How to do it?

Preliminary remarks
- Design is an iterative process
- There often is interaction between requirements engineering, design and (formal) specification

System-oriented approach
- The central question: how to decompose a system into parts such that each part has lower complexity than the system as a whole, while the parts together solve the user's problem?
- In addition, the interactions between the components should not be too complicated

Design criteria
Properties of a system which make it flexible, maintainable, ...
- Abstraction
- Modularity
- Proper system structure
- Information hiding
- Complexity

Abstraction
- Procedural abstraction: a natural consequence of stepwise refinement; the name of a procedure denotes a sequence of actions
- (Figure: procedural abstraction and problem decomposition evolving over time)

Abstraction (cont.)
- Data abstraction: aimed at finding a hierarchy in the data
  - Application-oriented data structures
  - Simpler data structures
  - General data structures

Modularity
- Structural criteria which tell something about individual modules and their interconnections
- Cohesion and coupling
  - Cohesion: the glue inside a module
  - Coupling: the strength of the connections between modules

Cohesion levels (from weakest to strongest)
- Coincidental cohesion
- Logical cohesion
- Temporal cohesion
- Procedural cohesion
- Communicational cohesion
- Sequential cohesion
- Functional cohesion
- Data cohesion: modules that encapsulate an ADT (OO)

How to determine the cohesion type?
- Describe the purpose of the module in one sentence
- If the sentence is compound, contains a comma, or has more than one verb → it probably has more than one
function: logical or communicational cohesion
- If the sentence contains time-related words like "first", "then", "after" → temporal cohesion
- If the verb is not followed by a specific object → probably logical cohesion (example: edit all data)
- Words like "startup" and "initialize" imply temporal cohesion

Coupling levels (from worst to best)
- Content coupling
- Common coupling
- Control coupling
- Stamp coupling
- Data coupling

Coupling levels are technology (data type) dependent
- Data coupling assumes scalars or arrays, no records
- Control coupling assumes passing of scalar data

Nowadays
- Modules may pass complex data structures
- Modules may allow some modules access to their data, and deny it to others (so there are many levels of visibility)
- Coupling need not be commutative (A may be data coupled to B, while B is control coupled to A)

Strong cohesion, weak coupling → simple interfaces →
- Simpler communication
- Simpler correctness proofs
- Changes influence other modules less often
- Reusability increases
- Comprehensibility improves

Information hiding
- Each module has a secret
- Design involves a series of decisions. For each such decision, the questions are: who needs to know about these decisions?
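To make the coupling levels above concrete, here is a minimal Python sketch (the function and parameter names are illustrative, not taken from the slides) contrasting control coupling, where the caller passes a flag that steers the callee's logic, with data coupling, where only the needed data crosses the interface:

```python
# Control coupling: the caller passes a flag that selects the callee's behavior,
# so the caller must know about the callee's internal logic.
def format_record_controlled(record, as_header):
    if as_header:
        return record.upper()
    return record.lower()

# Data coupling: each function receives only the data it needs and does one job,
# so the modules communicate through a simple, flag-free interface.
def format_header(record):
    return record.upper()

def format_body(record):
    return record.lower()

print(format_record_controlled("Title", True))   # caller steers the callee's internals
print(format_header("Title"))                    # caller just passes data
```

The second style also tends to score better on cohesion: each function's purpose fits in one simple sentence with a single verb.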
And who can be kept in the dark?
- Information hiding is strongly related to:
  - Abstraction: if you hide something, the user may abstract from that fact
  - Coupling: the secret decreases the coupling between a module and its environment
  - Cohesion: the secret is what binds the parts of a module

Complexity
- Measure certain aspects of the software
- Use these numbers as a criterion to assess a decomposition, or to guide the decomposition
- Interpretation: higher value → higher complexity → more effort required
- Two kinds:
  - intra-modular: inside one module
  - inter-modular: between modules

Complexity measures: summary
- For small programs, the various measures correlate well with programming time
- However, a simple length measure such as LOC does equally well
- Complexity measures are not very context sensitive
- Complexity measures take into account few aspects
- Look at complexity density

System structure
- Look at the complexity of the dependencies between modules
- Draw modules and their dependencies in a graph
- The arrows may then denote several things: A contains B, A precedes B, A uses B
- We are mostly interested in the last type of relation

System structure (cont.)
- This is the uses relation
- In a well-structured piece of software, the dependencies show up as procedure calls.
Therefore this graph is known as the call graph
- Possible shapes:
  - Chaos (directed graph)
  - Hierarchy (acyclic graph)
  - Strict hierarchy
  - Tree
- (Figure: example call graphs for chaos, hierarchy, strict hierarchy and tree)

Design methods
- Functional decomposition
- Data flow design (SA/SD)
- Design based on data structures (JSD/JSP)*
- Finite state machines (FSM) and Statecharts
- Object-oriented design
(*) Jackson System Development / Jackson Structured Programming

Functional decomposition
- Two extremes: top-down and bottom-up
- In their pure form, neither is used (design is not a rational process):
  - Clients do not know what they want, and they are not able to tell everything they know
  - Changes influence earlier decisions
  - People make errors
  - People assume certain knowledge to be present
  - Projects do not start from scratch
- Rather, design exhibits a yoyo character

Data flow design
- Yourdon and Constantine (early 70s)
- Nowadays a two-step process:
  - Structured Analysis (SA), resulting in a logical design drawn as a set of data flow diagrams
  - Structured Design (SD), transforming the logical design into a program structure drawn as a set of structure charts

Top-level DFD: context diagram
(Figure: the library system exchanges requests and acknowledgements with the client, and direction reports with management)

First-level decomposition
(Figure: processes for preliminary processing, borrow title, return title, manage reports and catalog administration, connected to the client, management, the log file and the title catalog)

Second-level DFD for "preliminary processing"
(Figure: "check client data" validates a request against the client data base; OK borrow/return requests are passed on by "process request", which writes log data to the log file)

Example minispec
Identification: process request
Description:
1. Enter type of request
   1.1. If invalid, issue warning and repeat step 1
   1.2. If step 1 repeated 5 times, terminate transaction
2. Enter book identification
   2.1. If invalid, issue warning and repeat step 2
   2.2. If step 2 repeated 5 times, terminate transaction
3. Log client identifier, request type and book identification
4.
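The call-graph shapes named above can be distinguished mechanically. The following Python sketch (the graph encoding and function names are my own, not from the slides) classifies a call graph as chaos when it contains a cycle, as a tree when it is acyclic and every module is used by at most one other module, and as a hierarchy otherwise; the strict-hierarchy case, which additionally restricts calls to adjacent layers, is left out for brevity:

```python
def classify_call_graph(graph):
    """Classify a call graph given as {caller: [callees]}."""
    # Detect a cycle with a depth-first search (grey = on the current path).
    WHITE, GREY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def has_cycle(node):
        color[node] = GREY
        for callee in graph.get(node, []):
            if color.get(callee, WHITE) == GREY:
                return True          # back edge: the graph is cyclic
            if color.get(callee, WHITE) == WHITE:
                color.setdefault(callee, WHITE)
                if has_cycle(callee):
                    return True
        color[node] = BLACK
        return False

    if any(color[n] == WHITE and has_cycle(n) for n in list(graph)):
        return "chaos"
    # Count how many distinct callers use each module.
    callers = {}
    for caller, callees in graph.items():
        for callee in set(callees):
            callers[callee] = callers.get(callee, 0) + 1
    if all(count <= 1 for count in callers.values()):
        return "tree"
    return "hierarchy"

print(classify_call_graph({"a": ["b"], "b": ["a"]}))                  # chaos
print(classify_call_graph({"main": ["io", "calc"], "calc": []}))      # tree
print(classify_call_graph({"main": ["io", "calc"], "calc": ["io"]}))  # hierarchy
```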
...

Data dictionary entries
- borrow-request = client-id + book-id
- return-request = client-id + book-id
- log-data = client-id + [borrow | return] + book-id
- book-id = author-name + title + (isbn) + [proc | series | other]
Conventions:
- [ ] : include one of the enclosed options
- | : separates options
- + : AND
- ( ) : enclosed items are optional

Data flow diagrams → structure charts
- Result of SA: a logical model consisting of a set of DFDs, augmented by minispecs, a data dictionary, etc.
- Structured Design: transition from DFDs to structure charts
- How to carry out this transition? The heuristics for it are based on the notions of coupling and cohesion
- The major heuristic concerns the choice of the top-level structure chart (as most systems are transform-centered)

Transformation guidelines
For the top-level structure: as most systems are transformation-based (i.e. they take some input, apply an elaboration that transforms this input, and produce some output), one way to define the top-level structure is:
1. Trace the input through the data flow diagram until it cannot be considered input anymore
2. The same applies in the other direction for the output
3. Every process between input and output then belongs to the top-level process
4. Processes in the DFD become modules in the structure chart
5. Data flows become module calls (in the opposite direction)
6. If a central transformation is not identifiable, a dummy root module is defined

Transform-centered design
(Figure: the input branch, the central transform and the output branch of the DFD become the input, "do job" and output subtrees of the structure chart)

Summary
- Essence of the design process: decompose the system into parts
- Desirable properties of a decomposition: coupling/cohesion, information hiding, (layers of) abstraction
- There are attempts to express these properties in numbers (metrics)
- Design methods: many different techniques available ...
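The transformation guidelines can be illustrated with a toy Python sketch (all names here are invented for illustration, not from the slides): the input branch reads and cleans the data, the central transform does the actual job, the output branch formats the result, and a root module calls the three in order, mirroring the top-level structure chart of a transform-centered design:

```python
# Input branch: trace the input until it can no longer be considered input.
def get_validated_input(raw):
    return [token.strip() for token in raw.split(",") if token.strip()]

# Central transform: the "do job" module of the top-level structure chart.
def transform(tokens):
    return sorted(set(tokens))

# Output branch: turn the transformed data back into external output.
def put_output(items):
    return "; ".join(items)

# Root module (cf. guideline 6): it only coordinates the three branches.
def do_job(raw):
    return put_output(transform(get_validated_input(raw)))

print(do_job("b, a, b , c"))   # a; b; c
```

Note how the data flows of the DFD have become calls: each arrow in the diagram corresponds to a parameter or return value between modules.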
An Analysis of the Antecedents and Consequences of Taking Charge in Organizations

Abstract: Taking charge refers to voluntary, constructive efforts by employees to initiate organizationally functional change, so that work can be carried out more effectively within their jobs, work units, or organizations.
This article introduces the concept and measurement of taking charge, as well as its antecedents and consequences.
The antecedents fall into two broad categories: individual factors (e.g. proactive personality, perceived organizational support, and positive emotions) and contextual factors (e.g. job autonomy, management openness, and innovative climate); the main consequences include job performance evaluations, work attitudes, and perceptions of transformational leadership.
Future research should further refine the measurement instruments, examine the influence of factors outside the organization, test other moderators of the consequences, and explore taking-charge behavior by leaders.
Keywords: organizational citizenship behavior; challenging behavior; taking charge; antecedents; consequences

Abstract: In recent years, change-oriented organizational citizenship behaviors (OCBs) have received a great deal of attention from scholars in the field of managerial psychology. There has been growing emphasis in the organizational behavior literature on extra-role behavior, i.e. employee behavior that goes beyond role expectations. Scholars have argued that this phenomenon is critical for organizational effectiveness because managers cannot fully anticipate the activities that they may desire or need employees to perform. Although these extra-role activities are important, they are not sufficient for ensuring the continued viability of an organization; organizations also need employees who are willing to challenge the present state of operations to bring about constructive change. Hence, in this study, we focus on a form of extra-role behavior that has been largely neglected, namely taking charge. Taking charge refers to voluntary and constructive efforts by individual employees to effect organizationally functional change with respect to how work is executed within the contexts of their jobs, work units, or organizations. This paper introduces the definition and measurement of taking charge and its relationships with relevant variables, and then summarizes the antecedents and consequences of such behavior. Because taking charge is conceptually distinct from more traditional forms of extra-role behavior, such as OCB, the models that have been advanced to explain those behaviors are inappropriate for explaining taking charge, and scholars suggest that it is motivated by factors that have not previously been studied in the context of those more traditional forms of extra-role behavior. Taking charge may be viewed as threatening by peers or supervisors. Thus, an employee who is trying to bring about improvement may actually incite disharmony and tension that detract from performance.

The factors that positively affect taking charge can be classified into two categories: (1) individual-level factors, such as proactive personality, perceived organizational support, and positive emotions; and (2) contextual factors, such as job autonomy, management openness, and innovative climate. The consequences of taking charge that past research has examined include in-role performance evaluations, job satisfaction, affective commitment, and perceptions of transformational leadership. Finally, the paper recommends that future research focus on the following four aspects: (1) improving the measurement of taking charge; (2) examining the impact of factors outside the organization (e.g., environmental dynamism and industry competition); (3) investigating more contingencies that moderate the consequences of taking charge; and (4) exploring leader taking charge in the Chinese organizational context. This study expands the current understanding of extra-role behavior and suggests ways in which organizations can motivate employees to go beyond the boundaries of their jobs to bring about positive change. Despite a growing body of work in this area, existing research has provided a limited view of extra-role behavior by neglecting activities aimed at changing the status quo. We provide insight into more challenging, risky, and effortful forms of discretionary employee behavior, thereby broadening current conceptualizations of extra-role behavior within organizations and going beyond the more mundane cooperative and helping behaviors that have been the focus of existing research.

1. Introduction
For a long time, scholars in the field of organizational behavior have maintained great enthusiasm for research on organizational citizenship behavior and its antecedents and consequences.
Primary School English (Second Semester), Unit 2 Test Paper
Time: 80 minutes (Total: 100 points)
Paper A
Part I. Comprehensive questions (100 questions, 100 points in total)

1. Listening: The Earth is located in the Orion ______.
2. Listening: The chemical symbol for selenium is ______.
3. Multiple choice: What do we call the force that pulls objects toward each other? A. Magnetism B. Gravity C. Friction D. Tension (Answer: B)
4. Fill in the blank: I can ______ (adapt flexibly) to new challenges.
5. Listening: Chemical reactions can be classified as synthesis, decomposition, and _____.
6. Multiple choice: What is the main ingredient in oatmeal? A. Rice B. Wheat C. Oats D. Corn (Answer: C. Oats)
7. Multiple choice: What do we call the time when schools are closed for summer? A. Holiday B. Vacation C. Break D. Session
8. Listening: Endothermic reactions require energy, usually in the form of ______.
9. Multiple choice: What do you call a person who helps in an emergency? A. Teacher B. Paramedic C. Chef D. Driver (Answer: B)
10. Fill in the blank: A _____ (octopus) can squirt ink to escape danger.
11. Listening: The Battle of Hastings took place in the year ________.
12. Multiple choice: What do you call the part of the plant that absorbs water? A. Leaf B. Stem C. Root D. Flower (Answer: C)
13. Fill in the blank: A ______ (ecosystem service) is crucial for human well-being.
14. Multiple choice: What do we call a person who plays a musical instrument? A. Musician B. Composer C. Conductor D. Singer (Answer: A. Musician)
15. Fill in the blank: The _____ (outdoors) is a perfect place for a picnic among the flowers.
16. Fill in the blank: The kangaroo uses its powerful legs to ______ (jump).
17. Fill in the blank: A monkey can _______ (climb) trees easily.
18. Listening: My favorite subject is _____ (math/science).
19. Listening: A ____ is a gentle giant that can be very friendly.
20. Listening: Planets are classified as terrestrial or ______.
21. Fill in the blank: A ____ (green certification) recognizes sustainable practices.
22. Fill in the blank: The capital of the Gambia is ________ (Banjul).
23. Listening: The __________ is a region known for its unique languages.
24. Fill in the blank: My dad enjoys cooking on the ____.
25. Multiple choice: Which animal is known for its ability to swim fast? A. Dog B. Dolphin C. Cat D. Elephant (Answer: B)
26. Listening: A prism can separate light into different ______.
27. Fill in the blank: Many plants are _____ (edible) and nutritious.
28. Fill in the blank: I enjoy _______ (drinking juice) in the summer.
29. Multiple choice: What is the opposite of 'fast'? A. Quick B. Slow C. Speedy D. Rapid
30. Fill in the blank: The ________ (airport) is busy with travelers.
31. Multiple choice: What do bees make? A. Milk B. Honey C. Butter D. Jam (Answer: B)
32. Listening: I like to _____ in the garden. (play)
33. Listening: Matter is anything that has ______.
34. Listening: The chemical formula for beryllium oxide is ______.
35. Multiple choice: What is the opposite of 'hot'? A. Cold B. Warm C. Cool D. Boiling (Answer: A)
36. Multiple choice: What is the main ingredient in a salad? A. Meat B. Vegetables C. Fruit D. Bread (Answer: B)
37. Listening: The Titanic sank on its maiden _______.
38. Listening: Some _______ are grown for their beauty.
39. Multiple choice: What is the name of the popular game where you try to guess words based on clues? A. Pictionary B. Charades C. Taboo D. Scattergories (Answer: A)
40. Multiple choice: What do you call a young female goat? A. Kid B. Calf C. Lamb D. Pup (Answer: A)
41. Fill in the blank: The ________ is small and cute.
42. Fill in the blank: The ________ (historical sites) tell stories of the past.
43. Fill in the blank: I saw a __________ form in the sky before the storm. (dark cloud)
44. Fill in the blank: I hear birds singing when it's ______ (sunny).
45. Multiple choice: What is the capital city of Russia? A. Moscow B. St. Petersburg C. Kazan D. Novosibirsk (Answer: A)
46. Multiple choice: What do we call a computer program that helps us to create documents? A. Browser B. Word processor C. Spreadsheet D. Game (Answer: B)
47. Fill in the blank: A ______ (diverse ecosystem) promotes resilience.
48. Fill in the blank: My favorite number is ______.
49. Fill in the blank: The __________ (landform) shapes the landscape.
50. Listening: Organic chemistry is the study of ______ compounds.
51. Listening: Some _______ can be climbing or trailing.
52. Multiple choice: Which item is used to tell time? A. Calendar B. Clock C. Map D. Book (Answer: B)
53. Listening: The park is ___ (full/empty) of children.
54. Listening: __________ is the process of separating a mixture into its components.
55. Listening: The process of ______ can lead to the discovery of fossils.
56. Fill in the blank: A _____ (baboon) can be quite mischievous.
57. Multiple choice: What is the primary function of the lungs? A. To pump blood B. To digest food C. To absorb oxygen D. To filter waste (Answer: C)
58. Fill in the blank: The owl has _______ (big eyes) for night vision.
59. Electric fields can exert ______ (forces) on charged particles.
60. Fill in the blank: The ________ (nutrients) in the soil is important for growth.
61. Listening: She likes to wear ________ shoes.
62. Listening: They are going to ________ a concert.
63. Fill in the blank: The ancient Greeks believed in many _____ and myths.
64. Fill in the blank: My brother loves to __________ (take part in) sports tournaments.
65. Fill in the blank: A __________ (ecotourism) promotes awareness of environmental issues.
66. Multiple choice: What is the capital of Vietnam? A. Ho Chi Minh City B. Hanoi C. Da Nang D. Can Tho (Answer: B. Hanoi)
67. Listening: The wind is ______ (blowing) gently today.
68. Fill in the blank: A horse is used for riding and ________________ (work).
69. Multiple choice: How many months are there in a year? A. Ten B. Twelve C. Eleven D. Nine
70. Fill in the blank: The musician plays the _____ (trumpet) in the band.
71. Multiple choice: What do you call a story with animals that talk? A. Fairy tale B. Fable C. Biography D. Novel
72. Multiple choice: What is the process of changing from liquid to gas called? A. Freezing B. Melting C. Evaporation D. Condensation (Answer: C)
73. Multiple choice: How do you say "再见" in English? A. Hello B. Goodbye C. Please D. Thank you
74. Listening: This is my best ____ (friend). We go to school together.
75. Multiple choice: How many days are in a week? A. Five B. Six C. Seven D. Eight
76. Fill in the blank: The ______ (growth rate of plants) can vary based on conditions.
77. Fill in the blank: On a sunny day, I decided to visit the ______ (1) with my family. We packed a picnic basket filled with sandwiches, fruits, and drinks. When we arrived, the park was filled with ______ (2) enjoying the beautiful weather. My little brother ran stra
78. Fill in the blank: The scientist discovered a new _____ (species).
79. Fill in the blank: I have a new _______ (backpack).
80. We are having a ___. (picnic) this weekend.
81. Fill in the blank: The goldfish can live for several _________ (years).
82. Listening: The ______ teaches us about different countries.
83. Multiple choice: What is the capital of Australia? A. Sydney B. Canberra C. Melbourne D. Brisbane (Answer: B)
84. Listening: Birds have ______ to help them fly.
85. Fill in the blank: My __________ (toy name) is very __________ (adjective) to play with.
86. Fill in the blank: I enjoy reading ______ (stories) before going to sleep.
87. Fill in the blank: My family loves to __________ on holiday. (go on vacation)
88. Multiple choice: How many sides does a pentagon have? A. 4 B. 5 C. 6 D. 7 (Answer: B)
89. Listening: He is _____ (running) very fast.
90. Multiple choice: What do we call the process of a caterpillar turning into a butterfly? A. Evolution B. Metamorphosis C. Development D. Transformation (Answer: B)
91. Fill in the blank: Ancient Egyptians believed in many ________.
92. Listening: The first man to reach the South Pole was _______ Amundsen.
93. Fill in the blank: My teacher encouraged us to create our own ________ (comics). I drew a funny ________ (character).
94. Multiple choice: Which one is a popular sport? A. Swimming B. Eating C. Sleeping D. Writing
95. Multiple choice: What do you call a person who writes music? A. Composer B. Musician C. Lyricist D. All of the above (Answer: D)
96. Listening: Vinegar is an example of an _______.
97. Listening: Reactions that happen quickly release more ______.
98. Listening: The first human to fly in space was _______ Gagarin.
99. Listening: I like to play ______ (video games) on weekends.
100. Fill in the blank: A _______ (lizard) can be found on warm rocks.
Chapter 1: Invitation to Linguistics

Linguistics is nowadays coming into wide use, combining theory with practice as well as linguistics with other disciplines. Linguistics is of great use and has very wide application: artificial intelligence, human-computer dialogue, machine translation. The research of linguistics has already gone beyond language itself.

Definition of Linguistics
How do you define linguistics? What is linguistics? Linguistics can be defined as the scientific or systematic study of language. It is a science in the sense that it scientifically studies the rules, systems and principles of human language.

What are we going to learn about linguistics?
1. It is generally agreed that linguistics should include at least five parameters, namely phonological, morphological, syntactic, semantic and pragmatic. These can be called microlinguistics: phonetics, phonology, morphology, syntax (see also the schools of modern linguistics), semantics and pragmatics (Chapters 2-6).
2. Macrolinguistics: interdisciplinary learning.

Saussure, the father of modern linguistics, intended to establish the autonomy of linguistics, giving it a well-defined subject of study and freeing it from reliance on other disciplines. However, the interactive links between linguistics and other sciences are developing fast.
Declarative Semantics for Contradictory Modular Logic Programs

Anastasia Analyti & Sakti Pramanik
Department of Computer Science, Michigan State University, E. Lansing, MI 48824
E-mail: analyti@, pramanik@

Abstract. A complex reasoning system can be designed as an interaction between reasoning modules. A module consists of a declaration of exported/imported predicates and a set of rules containing both negation by default and classical negation. A prioritized modular logic program (PMP) consists of a set of modules and a partial order <_def on the predicate definitions (M, p), where M is a module and p is a predicate exported by M. Because of the classical negation, conflicts may arise within and among modules. The partial order <_def denotes the relative reliability of the predicate definitions contributing to the conflict. We present the reliable semantics for PMPs. The goal of the reliable semantics is to draw reliable conclusions from possibly contradictory PMPs.

1 Introduction

Modules in a reasoning system arise as a result of a functional decomposition of a complex reasoning task into a number of simpler subtasks. Each module is an interactive reasoning subsystem that is used for the (often partial) definition of its exported predicates. Each module contains a set of rules viewed as an open logic theory [1] with a set of input literals. A module represents an incomplete specification of some domain because its input literals are defined in other modules. However, a module becomes a standard extended logic program (closed module) when the truth values of its input literals are known.

A prioritized modular logic program (PMP) consists of a set of modules and a partial order <_def on the predicate definitions (M, p), where M is a module and p is a predicate exported by M. We assume that modules are internally consistent. However, a PMP is possibly not globally consistent.
When a conflict occurs, <_def expresses our relative confidence in the predicate definitions contributing to the conflict. Each module has a set of local literals that are inaccessible to other modules. Literals that can be accessed (imported) by any module have the form {M1,...,Mn}:A, {M1,...,Mn}:¬A or {M1,...,Mn}:~A, where the Mi are module names and A is a conventional atom whose predicate is exported by all Mi. Intuitively, a literal {M1,...,Mn}:A is evaluated as true if (i) A is derived from a module Mi, and (ii) if ¬A is derived from a module Mj ≠ Mi, then result A has higher priority than result ¬A. A literal {M1,...,Mn}:~A is evaluated as true if {M1,...,Mn}:¬A is true or ~A is true in all modules Mi.

We present a semantics for PMPs, called reliable semantics (RS), which assigns a truth value true, false or unknown to every literal. Every PMP has at least one stable m-model. The RS of a program P is the least fixpoint of a monotonic operator and the least (w.r.t. ⊆) stable m-model of P. When a PMP is contradictory, exported results (represented by indexed literals) are considered unreliable if: (i) they contribute to a contradiction, and (ii) they do not have higher priority than the other contributing results. The RS of a PMP P represents the skeptical meaning of P and thus none of its conclusions is based on unreliable exported results. Credulous conclusions are obtained by isolating the conflicting results in the multiple stable m-models of P.

2 Informal Presentation and Intuitions

Our framework can be used for the representation of result-sharing cooperating agents [14]. A complex task is statically decomposed into a set of simpler subtasks, each assigned to an agent. Agents may have overlapping or even identical capabilities. Therefore, it is possible that they export agreeing or contradictory results. When agents M1, M2 export contradictory conclusions about a literal L, the truth value of L w.r.t.
M1, M2 (expressed by the truth value of the literal {M1, M2}:L) is unknown. Yet, agents M1, M2 maintain their individual beliefs about L, expressed by the truth value of the literals {M1}:L and {M2}:L, respectively.

Example 2.1: Sensors S1, S2 are gathering information from two different angles about the persons entering a building. Modules M1, M2 are assigned the identification of terrorists based on the information collected from sensors S1, S2, respectively. Each module M1, M2 exports the result entered(terrorist) (resp. ¬entered(terrorist)) iff it reaches the conclusion that a (resp. no) terrorist has entered the building. It is possible that M1, M2 disagree, i.e., M1 exports entered(terrorist) whereas M2 exports ¬entered(terrorist). The results exported by M1, M2 can be queried by other modules in various ways. For example:

• Query1 ← {M1}:entered(terrorist), {M2}:entered(terrorist).
Query1 is true if entered(terrorist) is true in both M1 and M2. Query1 is false by default if entered(terrorist) is false (by default or classically) in at least one of M1, M2.
• Query2 ← {M1, M2}:entered(terrorist).
Query2 is true if entered(terrorist) is true in at least one of M1, M2 and M1, M2 do not disagree. Query2 is false by default if entered(terrorist) is false in both M1 and M2.
• Query3 ← {M1}:~entered(terrorist).
Query3 is true if entered(terrorist) is false in M1 (even if entered(terrorist) is true in M2).

Individual agent theories are assumed to be consistent. Yet, the consistency of the union of agent theories is not assured. As we saw in Example 2.1, one case of contradiction is when independent modules export contradictory results. In this case, the contradiction depends only on the independent modules and is relatively easy to resolve. In general, however, contradictions may involve several module interactions. For example, an agent exports a faulty result, and this result is imported by another agent, which then exports a faulty result based on the imported one.
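The query styles of Example 2.1 can be mimicked in a small Python sketch. The dictionary-based encoding of module conclusions and the helper names are my own simplification, not the paper's machinery (in particular, priorities are ignored here): each module maps a classical literal to True (derived), False (its complement derived), or absent (unknown), and an indexed query combines the modules' verdicts:

```python
# Each module's conclusions: literal -> True (L derived) or False (complement derived).
m1 = {"entered(terrorist)": True}
m2 = {"entered(terrorist)": False}

def query_all(literal, *modules):
    # Conjunction {M1}:L, {M2}:L: true only if every module derives L.
    return all(m.get(literal) is True for m in modules)

def query_joint(literal, *modules):
    # {M1,...,Mn}:L: true if some module derives L and no module derives its complement.
    votes = [m.get(literal) for m in modules]
    if True in votes and False in votes:
        return None            # the modules disagree: unknown
    return True in votes

print(query_all("entered(terrorist)", m1, m2))    # False
print(query_joint("entered(terrorist)", m1, m2))  # None (disagreement)
```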
After a few module interactions, contradiction may arise in two ways: (i) complementary literals are derived inside a module; (ii) complementary literals are exported from two different modules. When contradiction appears, the sources of the contradiction are traced back to the contributing exported results. Domain-specific information might indicate that some exported results are more reliable (have higher priority) than others. Let res1 and res2 be two exported results contributing to the contradiction. If res1 has higher priority than res2 and no contradiction arises without res2, then only res1 is taken into account. If the priorities of res1 and res2 cannot be compared, then both are eliminated from the RS (skeptical approach).

3 m-models for Prioritized Modular Logic Programs

Our alphabet contains a finite set of constant, predicate and variable symbols from which ordinary atoms are constructed in the usual way. It also contains a finite set of module names. An indexed atom has the form {M1,...,Mn}:A, where A is an ordinary atom and the Mi are module names. A classical literal is either an atom A or its classical negation ¬A. The classical negation of a literal L is denoted by ¬L, where ¬(¬L) = L. A default literal is the default negation ~L of a classical literal L, where ~(~L) = L. A literal is called indexed when its corresponding atom is indexed. We define {M1,...,Mn}:¬A = ¬{M1,...,Mn}:A and {M1,...,Mn}:~A = ~{M1,...,Mn}:A, where the Mi are module names and A is an ordinary atom. The classical literals L, ¬L are called complementary to each other. The predicate of a literal L is denoted by pred_L.

A module with name M is a tuple <Exp_M, Imp_M, R_M>. The set Exp_M contains the predicates that are exported (defined) by M. The set Imp_M contains the predicates imported by M. The set R_M contains rules of the form L ← L1,...,Lm,~Lm+1,...,~Ln, where L is a non-indexed classical literal and the Li are classical literals.
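A module tuple <Exp_M, Imp_M, R_M> and its rules can be represented concretely. The following minimal Python encoding is my own illustration (the class and field names are invented), using a fragment-overlap module in the style of the paper's later overlap example:

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    head: str                                     # non-indexed classical literal
    body: list = field(default_factory=list)      # classical literals L1,...,Lm
    neg_body: list = field(default_factory=list)  # default-negated literals Lm+1,...,Ln

@dataclass
class Module:
    name: str
    exports: set    # Exp_M: predicates defined by the module
    imports: set    # Imp_M: predicates imported from other modules
    rules: list     # R_M

# A module deriving overlap of two fragments from a shared feature.
of_module = Module(
    name="OF",
    exports={"overlap"},
    imports={"feat"},
    rules=[Rule(head="overlap(F1,F2)",
                body=["feat(F1,Feat)", "feat(F2,Feat)"])],
)
print(of_module.exports)   # {'overlap'}
```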
If an indexed literal {M1,...,Mn}:L is in the body of a rule, then M imports L from the modules M1,...,Mn. If a non-indexed literal L is in the body of a rule and pred_L ∈ Imp_M, then M imports L from all modules that export pred_L.

A prioritized modular logic program (PMP), P, is a pair <Mod_P, <_def>. Mod_P is a set of modules and <_def is a partial order on Def_P, where Def_P = {(M,p) | M ∈ Mod_P and p ∈ Exp_M}. Each pair (M,p) ∈ Def_P represents the definition of predicate p in module M. If {M1,...,Mn}:L is an indexed literal appearing in P, then (Mi, pred_L) ∈ Def_P, ∀i ≤ n.

Individual modules are assumed to be consistent but their union may be inconsistent. Thus, when complementary literals are derived within a module M, it is because of unreliable information imported by M. When literals L, ¬L are derived from different modules M, M', it is because the definition of pred_L in M or M' is unreliable, or the information imported by M or M' is unreliable. When a conflict occurs, <_def expresses our relative confidence in the predicate definitions contributing to the conflict. Let (M,p) and (M',p') be in Def_P. The notation (M,p) <_def (M',p') (resp. (M,p) ≮_def (M',p')) means that the definition of p in M is less (resp. not less) trusted than that of p' in M'. Intuitively, a literal L exported by a module M is reliable if it does not contribute to any derivation of complementary literals caused by definitions (M',p') ≮_def (M, pred_L). Note that the reliability of L is not affected by predicate definitions less trusted than the definition of pred_L in M.

We define S_L = {M ∈ Mod_P | pred_L ∈ Exp_M}, where L is a literal. The indexed literals S_L:L are called globally indexed. To simplify the presentation of the semantics, a renaming mechanism is employed. Let r be a rule in a module M. Then the head L of r is replaced by a new literal M#L. Every non-indexed literal L in the body of r with pred_L ∉ Imp_M is replaced by M#L.
Every non-indexed literal L in the body of r with pred_L ∈ Imp_M is replaced by the globally indexed literal S_L:L. Literals M#L are called local to M and are not accessed by other modules. In contrast to local literals, indexed literals are accessible to all modules. When we refer to a PMP, we assume that this renaming has already been done. Note that after renaming, only local and indexed literals appear in a module M. We define M#¬A = ¬M#A and M#~A = ~M#A, where M is a module name and A is an ordinary atom.

The set of all instantiations of the classical literals appearing in P and of the globally indexed classical literals S_L:L, where L appears in P, is called the Herbrand base (HB_P) of P. Let M be a module. We define close_u(M) as the extended program that results if we replace every indexed literal in M with the special proposition u (meaning undefined). We say that M is internally consistent iff the extended well-founded semantics [PeAl92] of close_u(M) is defined (the WFM of close_u(M) is not contradictory). All modules of a PMP are assumed to be internally consistent.

If S is a set of literals, then ~S = {~L | L ∈ S} and ¬S = {¬L | L ∈ S}. The notations Head_r and Body_r denote the head and body of a rule r.

Example 3.1: A contig is a set of overlapping DNA fragments that span some region of a genome. One method to detect overlaps uses common features. The idea is that if the same feature is found in two DNA fragments, then they probably overlap. The overlap is not certain because of unreliable experimental results and feature repetition in the genome.
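The renaming step just described (heads become local literals M#L, imported body literals become globally indexed literals S_L:L, other body literals become local) can be sketched in Python. The string-based encoding and helper names below are my own, assumed for illustration only:

```python
def rename_module(module_name, imports, exporters, rules):
    """Apply the renaming mechanism to a module's rules.

    rules: list of (head, body) pairs, body a list of non-indexed literals.
    exporters: maps a predicate p to the set S of module names exporting p.
    """
    def predicate(literal):
        # Strip negation signs and arguments to recover the predicate name.
        return literal.lstrip("¬~").split("(")[0]

    renamed = []
    for head, body in rules:
        new_head = f"{module_name}#{head}"          # heads become local literals
        new_body = []
        for lit in body:
            if predicate(lit) in imports:
                s = sorted(exporters[predicate(lit)])
                new_body.append("{" + ",".join(s) + "}:" + lit)  # globally indexed
            else:
                new_body.append(f"{module_name}#{lit}")          # stays local
        renamed.append((new_head, new_body))
    return renamed

# A module "OF" importing feat, which only module "F" exports:
print(rename_module("OF", {"feat"}, {"feat": {"F"}},
                    [("overlap(F1,F2)", ["feat(F1,X)", "feat(F2,X)"])]))
```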
Consider the following PMP (before renaming) with modules OF (OverlapFeature), OD (OverlapData) and F (Feature); variables start with capitals. (Figure: the same feature occurring in DNA fragment 1 and DNA fragment 2.)

module OF exports overlap, imports feat
/* If the same feature Feat is found in Frag1 and Frag2, then the fragments overlap */
rules: overlap(Frag1,Frag2) ← feat(Frag1,Feat), feat(Frag2,Feat).

module OD exports overlap
/* Very reliable overlap data */
rules: ¬overlap(frag1,frag2).

module F exports feat
/* A feature x is found in frag1 and frag2 */
rules: feat(frag1,x). feat(frag2,x).

(F,feat) <_def (OD,overlap) /* overlap data in OD are more reliable than the data in F */

Even though the modules OD, OF and F are internally consistent, their union is inconsistent because both overlap(frag1,frag2) and ¬overlap(frag1,frag2) can be derived from the above rules. Note that ¬overlap(frag1,frag2) is derived from module OD and that the derivation of overlap(frag1,frag2) from module OF depends on feat(frag1,x) and feat(frag2,x), exported by module F. Since (F,feat) <_def (OD,overlap), i.e., the definition of feat in F is less reliable than that of overlap in OD, the results feat(frag1,x) and feat(frag2,x) are considered unreliable. Thus, the truth value of the literals feat(frag1,x) and feat(frag2,x) is considered unknown and the literal ¬overlap(frag1,frag2) is evaluated as true. After the renaming mechanism is employed, the modules become:

module OF exports overlap, imports feat
rules: OF#overlap(Frag1,Frag2) ← {F}:feat(Frag1,Feat), {F}:feat(Frag2,Feat).

module OD exports overlap
rules: ¬OD#overlap(frag1,frag2).

module F exports feat
rules: F#feat(frag1,x). F#feat(frag2,x).

Definition 3.1 (interpretation): Let P be a PMP. An interpretation I is a set of literals T ∪ ~F, where T and F are disjoint sets of classical literals. I is consistent iff T ∩ ¬T = Ø.
I is coherent iff it satisfies the coherence property: ¬T ⊆ F.

In the above definition, T contains the classically true literals, ¬T contains the classically false literals and the set F contains the literals false by default. The coherence operator (coh) [PeAl92] is used to transform an interpretation into a coherent one. If I is a set of literals then coh(I) = I ∪ {~L | ¬L ∈ I}.

Definition 3.2 (truth valuation of a literal): A literal L is true (resp. false) w.r.t. an interpretation I iff L ∈ I (resp. ~L ∈ I). A literal that is neither true nor false w.r.t. I is undefined w.r.t. I.

An interpretation I can equivalently be seen as a function from the set of classical literals to {0, 1/2, 1}, where I(L)=1 when L is true w.r.t. I, I(L)=0 when L is false w.r.t. I and I(L)=1/2 when L is undefined w.r.t. I. We define I(Ø)=1 and I(S) = min{I(L) | L ∈ S}, where S is a non-empty set of literals.

The truth value of a literal M#L represents the truth value of L in module M. A classical literal {M_1,...,M_n}:L is true iff M_i#L is true for some i ≤ n and {M_1,...,M_n}:L is reliable. A default literal ~{M_1,...,M_n}:L is true iff ~M_i#L is true for all i ≤ n and ~{M_1,...,M_n}:L is reliable. Let I be a set of literals known to be true. In Def. 3.5, the concept of a reliable indexed literal is defined, which is used in defining the m-models and in the fixpoint computation of the reliable semantics. To decide if an indexed literal S:L is reliable w.r.t. a definition (M,p), all possible derivations of complementary literals caused by definitions (M',p') ≮ (M,p) should be considered. S:L is reliable if it does not contribute to any such derivation of complementary literals. Intuitively, Pos[M,p],I contains the literals that can possibly be derived when results exported by modules M' with (M',pred_L) < (M,p) are ignored.

Definition 3.3 (possible literal set): Let P be a PMP, I, J sets of literals and (M,p) ∈ Def_P. The possible literal set w.r.t.
(M,p) and I, denoted by Pos[M,p],I, is defined as follows:

• PT[M,p],I(J) = {M'#L | ∃ M'#L ← L_1,...,L_n in module M' s.t. L_i ∈ J, ∀i ≤ n} ∪ {S:L ∈ HB_P | ¬S:L ∉ I and ∃M' ∈ S s.t. M'#L ∈ J and (M',pred_L) ≮ (M,p)}

• PF[M,p],I(J) is the greatest set U ⊆ HB_P s.t.
(i) if M'#L ∈ U and r is a rule s.t. Head_r = M'#L then ∃K ∈ Body_r s.t. K ∈ U or ~K ∈ J,
(ii) if S:L ∈ U then ∀M' ∈ S, M'#L ∈ U and (M',pred_L) ≮ (M,p).

• PW[M,p],I(J) = coh(PT[M,p],I(J) ∪ ~PF[M,p],I(J)).

• Pos[M,p],I is the least (w.r.t. ⊆) fixpoint of the operator PW[M,p],I.

Intuitively, a literal S:L is reliable w.r.t. (M,p) and I iff there are no K, ¬K in Pos[M,p],I s.t. the derivation of K depends on S:L. The dependency set of K w.r.t. (M,p) and I is the set of literals in Pos[M,p],I that the derivation of K depends on. Since I is a set of literals known to be true, the dependency set of a literal K ∈ I equals {}.

Definition 3.4 (dependency set): Let P be a PMP, I a set of literals and (M,p) ∈ Def_P. The dependency set of a literal K w.r.t. (M,p) and I, denoted by Dep[M,p],I(K), is the least (w.r.t. ⊆) set D(K) s.t. if K ∈ I then D(K) = {} else:
(i) If K = ~M'#L is a default literal then
(a) D(¬M'#L) ⊆ D(~M'#L) and
(b) ∀ M'#L ← L_1,...,L_n in M', if ~L_i ∈ Pos[M,p],I, i ≤ n, then D(~L_i) ⊆ D(~M'#L).
(ii) If K = M'#L is a classical literal then: if M'#L ← L_1,...,L_n in M' s.t. {L_1,...,L_n} ⊆ Pos[M,p],I then D(L_i) ⊆ D(M'#L), ∀i ≤ n.
(iii) If K = ~S:L is a default literal then
(a) D(¬S:L) ⊆ D(~S:L) and
(b) if ∀M' ∈ S, (M',pred_L) ≮ (M,p) and ~M'#L ∈ Pos[M,p],I, then D(~M'#L) ⊆ D(~S:L), ∀M' ∈ S, and ~S':L ∈ D(~S:L), ∀S' ⊆ S.
(iv) If K = S:L is a classical literal then D(M'#L) ⊆ D(S:L), ∀M' ∈ S s.t. (M',pred_L) ≮ (M,p), and S':L ∈ D(S:L), ∀S' ⊆ S.

Definition 3.5 (reliable indexed literal): Let P be a PMP, I a set of literals, S:L a literal, (M,pred_L) ∈ Def_P, M ∈ S and let p be equal to pred_L.

• The literal S:L is unreliable w.r.t. (M,p) and I iff ∃ ¬K ∈ Pos[M,p],I s.t.
if K = S':L with S' ⊃ S then S:L ∈ Dep[M,p],I(M'#L) for some M' ∈ S' s.t. (M',p) ≮ (M,p), else S:L ∈ Dep[M,p],I(K).

• The literal S:L is reliable w.r.t. (M,p) and I iff it is not unreliable w.r.t. (M,p) and I.

Assume that S:L is unreliable w.r.t. (M,p) and I. If the K of Definition 3.5 is a local literal M'#L' then (i) the literals K, ¬K are derived inside module M' and (ii) S:L contributes to the derivation of K. If K = S':L' ≠ S:L is an indexed literal then (i) there are literals M'#L', ¬M''#L' derived in modules M' ∈ S' and M'' ∈ S' s.t. (M',p) ≮ (M,p) and (M'',p) ≮ (M,p), and (ii) S:L contributes to the derivation of M'#L'. If K = S:L then there are literals M'#L, ¬M''#L derived in modules M' ∈ S and M'' ∈ S with (M',p) ≮ (M,p) and (M'',p) ≮ (M,p).

Example 3.2: Let P be the PMP of Example 3.1 and I = Ø.

Pos[OD,overlap],I = coh({¬OD#overlap(frag1,frag2), ¬{OD}:overlap(frag1,frag2), ¬{OD,OF}:overlap(frag1,frag2)})

The literal ¬{OD}:overlap(frag1,frag2) is reliable w.r.t. (OD,overlap) and I because ¬{OD}:overlap(frag1,frag2) ∉ Dep[OD,overlap],I(¬K), ∀K ∈ Pos[OD,overlap],I. Similarly, ¬{OD,OF}:overlap(frag1,frag2) is reliable w.r.t. (OD,overlap) and I.

Pos[F,feat],I = coh({F#feat(frag1,x), {F}:feat(frag1,x), F#feat(frag2,x), {F}:feat(frag2,x), OF#overlap(frag1,frag2), {OF}:overlap(frag1,frag2), {OD,OF}:overlap(frag1,frag2), ¬OD#overlap(frag1,frag2), ¬{OD}:overlap(frag1,frag2), ¬{OD,OF}:overlap(frag1,frag2)})

The literal {F}:feat(frag1,x) is unreliable w.r.t. (F,feat) and I because (i) ¬{OD,OF}:overlap(frag1,frag2) ∈ Pos[F,feat],I and (ii) {F}:feat(frag1,x) ∈ Dep[F,feat],I({OD,OF}:overlap(frag1,frag2)). Similarly, {F}:feat(frag2,x) is unreliable w.r.t. (F,feat) and I.

Pos[OF,overlap],I = Pos[F,feat],I. The literal {OF}:overlap(frag1,frag2) is reliable w.r.t. (OF,overlap) and I, whereas the literal {OD,OF}:overlap(frag1,frag2) is not.

Definition 3.6 (m-model): Let P be a PMP. A consistent, coherent interpretation I is an m-model of P iff (i) ∀ rule r, I(Head_r) ≥ I(Body_r) and (ii) ∀ classical literal S:L, both of the following hold:
− if I(S:L) ≠ 1 then ∀M ∈ S s.t. I(M#L) = 1, S:L is unreliable w.r.t.
(M,pred_L) and I,
− if I(S:L) = 0 then I(¬S:L) = 1 or ∀M ∈ S, I(M#L) = 0.

Since condition (i) defines 3-valued models [9], an m-model of P is a 3-valued model of every module of P. In condition (ii), the first subcondition expresses that if S:L is a classical literal, M ∈ S and I(M#L) = 1, then I(S:L) can be ≠ 1 only if S:L is unreliable w.r.t. (M,pred_L) and I. The purpose of the second subcondition in condition (ii) is to allow S:L to be false when ¬S:L holds, even if ∃M ∈ S s.t. I(M#L) > 0.

Example 3.3: Let P be as in Example 3.1. Then I is an m-model of P, where

I = coh({F#feat(frag1,x), F#feat(frag2,x), ¬OD#overlap(frag1,frag2), ¬{OD}:overlap(frag1,frag2), ¬{OD,OF}:overlap(frag1,frag2)}).

4. Reliable Semantics for Prioritized Modular Logic Programs

In this section, we define the reliable model, the stable m-models and the reliable semantics of a PMP P. We show that the reliable model of P is the least (w.r.t. ⊆) stable m-model of P.

Definition 4.1 (m-unfounded set): Let P be a PMP and J a set of literals. A literal set U ⊆ HB_P is m-unfounded w.r.t. J iff
(i) if M#L ∈ U and r is a rule with Head_r = M#L then ∃K ∈ Body_r s.t. K ∈ U or ~K ∈ J,
(ii) if S:L ∈ U then ∀M ∈ S, M#L ∈ U and ~S:L is reliable w.r.t. (M,pred_L) and J.

The W_P operator below extends the W_P operator of the WFS [11] to PMPs.

Definition 4.2 (W_P operator): Let P be a PMP and J a set of literals. We define:
• T(J) = {M#L | ∃ M#L ← L_1,...,L_n in module M s.t. L_i ∈ J, ∀i ≤ n} ∪ {S:L ∈ HB_P | M#L ∈ J, M ∈ S and S:L is reliable w.r.t. (M,pred_L) and J}
• F(J) is the greatest m-unfounded set w.r.t. J.
• W_P(J) = coh(T(J) ∪ ~F(J)).

The union of two m-unfounded sets w.r.t. J is an m-unfounded set w.r.t. J. So, F(J) is the union of all m-unfounded sets w.r.t. J. We define the transfinite sequence {I_a} as follows: I_0 = {}, I_{a+1} = W_P(I_a) and I_a = ∪{I_b | b < a} if a is a limit ordinal.

Proposition 4.1: Let P be a PMP. {I_a} is a monotonically increasing (w.r.t. ⊆) sequence of consistent, coherent interpretations of P.

Since {I_a} is monotonically increasing, there is a smallest ordinal d s.t.
I_d = I_{d+1}.

Proposition 4.2: Let P be a PMP. Then I_d is an m-model of P.

Definition 4.3 (reliable semantics): Let P be a PMP. The reliable model of P, RM_P, is the m-model I_d. The reliable semantics of P is the "meaning" represented by RM_P.

It is possible that a local literal M#L ∈ RM_P but {M}:L ∉ RM_P. Intuitively, this means that module M concludes L but that conclusion may be erroneous.

Example 4.1: Let P be the program of Example 3.1 and I be the m-model of Example 3.3. Then I is the reliable model of P. When <_def = {}, RM_P = coh({F#feat(frag1,x), F#feat(frag2,x), ¬OD#overlap(frag1,frag2)}).

Proposition 4.3: Let P be a PMP. The complexity of computing RM_P is polynomial w.r.t. |P|.

The reliable model of a PMP corresponds to its skeptical meaning. Credulous meanings can be obtained using the transformation P/m I, where I is an interpretation of P. The transformation P/I is defined in [5, 9] for a normal program P.

Definition 4.4 (transformation P/m I): Let P be a PMP and I an interpretation. The program P/m I is obtained from P as follows:
(i) Remove all rules that contain in their body a default literal ~L s.t. I(L) = 1.
(ii) Remove any rule r s.t. I(¬Head_r) = 1.
(iii) Remove from the body of the remaining rules any default literal ~L s.t. I(L) = 0.
(iv) Replace all remaining default literals ~L with u.
(v) For all S:L ∈ HB_P s.t. I(S:L) = 1/2:
− for all M ∈ S, if I(M#L) = 1 then add S:L ← u, else add S:L ← M#L;
− if ∃M ∈ S s.t. ~S:L is unreliable w.r.t. (M,pred_L) and I then add S:L ← u.
(vi) For all S:L ∈ HB_P s.t. I(S:L) ≠ 1/2 and I(¬S:L) ≠ 1, add S:L ← M#L, ∀M ∈ S.

We say that a stable m-model I of P is the least_v model of P iff I(L) ≤ I'(L) for any stable m-model I' and classical literal L of P.

Definition 4.5 (stable m-model): Let P be a PMP and I an m-model of P. I is a stable m-model of P iff least_v(P/m I) = I.

Let I be a stable m-model of P. If S:L is unreliable w.r.t. (M,pred_L) and I for some M ∈ S, then the truth value of S:L can be unknown w.r.t. I even if I(M#L) = 1. If ~S:L is unreliable w.r.t.
(M,pred_L) and I for some M ∈ S, then the truth value of S:L can be unknown w.r.t. I even if I(M#L) = 0 for all M ∈ S.

The export rule set of P is defined as ER_P = {S:L ← M#L | S:L ∈ HB_P and M ∈ S} ∪ {~S:L ← ~M_1#L,...,~M_n#L | S:L ∈ HB_P and S = {M_1,...,M_n}}. An interpretation I of P satisfies r ∈ ER_P iff I(Body_r) ≠ 1 or I(Head_r) = 1. Let I_1, I_2 be two stable m-models of P. We say that I_1 ≤_sat I_2 iff (i) ∀r ∈ ER_P, if I_1 satisfies r then I_2 satisfies r, and (ii) I_1 ⊆ I_2 or ∃r ∈ ER_P s.t. I_2 satisfies r and I_1 does not satisfy r. In other words, I_1 ≤_sat I_2 iff I_2 satisfies more export rules than I_1, or I_1 ⊆ I_2 and I_2 satisfies the same export rules as I_1. Maximal (w.r.t. ≤_sat) stable m-models can be seen as the credulous meanings of P.

Example 4.2: Let P be the program of Example 3.1. Then P has four stable m-models:

I_1 = RM_P,
I_2 = RM_P ∪ coh({{F}:feat(frag1,x)}),
I_3 = RM_P ∪ coh({{F}:feat(frag2,x)}) and
I_4 = RM_P ∪ coh({{F}:feat(frag1,x), {F}:feat(frag2,x), OF#overlap(frag1,frag2), {OF}:overlap(frag1,frag2)}).

I_2, I_3 and I_4 are maximal (w.r.t. ≤_sat) stable m-models of P. Note that
model I_2 does not satisfy {F}:feat(frag2,x) ← F#feat(frag2,x),
model I_3 does not satisfy {F}:feat(frag1,x) ← F#feat(frag1,x), and
model I_4 does not satisfy {OD,OF}:overlap(frag1,frag2) ← OF#overlap(frag1,frag2).

Proposition 4.4: Let P be a PMP. The reliable model of P is a stable m-model of P.

Proposition 4.5: The reliable model of a PMP is its least (w.r.t. ⊆) stable m-model.

According to the RS presented in the previous sections, the confidence in a globally indexed default literal ~L, derived by the default rule for negation, depends on the minimal priorities of (M,pred_L) ∈ Def_P. Thus, in case of conflict, ~L may not be considered less reliable than literals whose derivation is not based on closed-world assumptions. When this is undesirable for a set of predicates Pred~, a new module M~ can be added which has no rules but exports all predicates in Pred~.
Moreover, (M~,p') < (M,p) for all p' ∈ Pred~ and all definitions (M,p) other than (M~,p).

An extended program with rule prioritization (EPP) is naturally translated into a PMP by considering each rule as a module that imports the predicates appearing in its body and exports its head predicate. Thus, we have also defined the reliable semantics for EPPs. The RS for EPPs extends the well-founded semantics for normal programs [11] and the extended well-founded semantics for extended programs [7].

6. Related Work

The contradiction removal semantics (CRS) for extended programs [8] avoids contradictions brought about by closed world assumptions. For example, the CRS of P = {¬p ← ~a. p←. b←.} is {p, b}, which is non-contradictory. Yet, contradictions not based on closed world assumptions cannot be resolved. For example, nothing is concluded from P' = {¬p←. p←. b←.}, though b should be true. The same is true for the semantics in [3, 12]. However, RM_P' = {}.

The conservative reasoning for extended programs, presented in [13], is as follows: if r is a rule and Body_r is true then Head_r is true iff for every rule r' s.t. Head_r' = ¬Head_r, Body_r' cannot be derived. For example, the conservative semantics of P = {r1: ¬a ← b. r2: a. r3: b.} is {b}. In RS, r3 is considered unreliable and RM_P = {}.

Prioritization of rules is investigated in the various variations of ordered logic [4, 6]. Even though these semantics are defined for all ordered logic programs, negation by default is not supported. Moreover, the rule ordering in ordered logic represents exceptions and not reliability. For example, the ordered logic semantics of P = {r1: ¬a ← b. r2: a. r3: b.} with r3 < r2 < r1 is {b}, whereas RM_P = {a, ~¬a} since r3 < r2. When the prioritization of the rules is ignored, ordered logic and conservative reasoning [13] behave similarly. Prioritization of rules is also supported in [2]. However, there rules are considered to be clauses, i.e., there is no distinction between the head and the body of a rule.
Thus, in [2], the program P' = {p ← ~p.} is considered equivalent to {p.}, whereas
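The basic set operations used throughout this section, the coherence operator coh of [PeAl92] and the consistency test of Definition 3.1, are straightforward to prototype. The sketch below is our own code, not the paper's: it uses an ad-hoc string encoding of literals ('-' for classical negation ¬, '~' for default negation) and does not implement the full reliable-semantics fixpoint.

```python
# Minimal sketch of coh and the consistency test of Definition 3.1.
# Encoding (ours, hypothetical): a classical literal is a string with a
# leading '-' for classical negation; a leading '~' marks a default literal.

def complement(lit):
    """Classical complement: maps L to -L and -L back to L."""
    return lit[1:] if lit.startswith('-') else '-' + lit

def coh(interp):
    """Coherence operator: coh(I) = I ∪ {~L | ¬L ∈ I}."""
    classical = {l for l in interp if not l.startswith('~')}
    return interp | {'~' + complement(l) for l in classical}

def consistent(interp):
    """I = T ∪ ~F is consistent iff T ∩ ¬T = ∅."""
    T = {l for l in interp if not l.startswith('~')}
    return all(complement(l) not in T for l in T)

# The reliable model of Example 3.1 when all priorities are dropped:
RM = coh({'F#feat(frag1,x)', 'F#feat(frag2,x)',
          '-OD#overlap(frag1,frag2)'})
print(sorted(RM))
print(consistent(RM))  # True: no literal occurs together with its complement
```

Note how coh adds the three default literals matching the complements of the classical literals, yielding a six-element coherent interpretation.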
ity to air pollution among different population subgroups. (Am. J. Respir. Crit. Care Med. 1997, 155, 68-76)

Dioxin trends in fish
Concentrations of PCBs, polychlorinated dibenzo-p-dioxins, and polychlorinated dibenzofurans are often gauged in terms of toxic equivalence factors. S. Y. Huestis and co-workers reported temporal (1977-93) and age-related trends in concentration and toxic equivalencies of these compounds in Lake Ontario lake trout. Analysis of the stored frozen fish tissue used a single analysis protocol, which allowed improved comparison of data from different time periods. Results showed that contaminant levels have declined substantially since 1977 but that concentrations have stabilized, approaching a steady state or a very slow decline. The proportion of the total toxic equivalency ascribed to each compound has changed little in each of the three sets examined. (Environ. Toxicol. Chem. 1997, 16(2), 154-64)

Herbicide extraction
Efficient extraction and analysis of phenoxyacid herbicides are difficult because of their high polarity and low volatility. T. S. Reighard and S. V. Olesik reported the use of methanol/CO2 mixtures at elevated temperatures and pressures to extract these herbicides from household dust. The experiments were done at conditions covering both subcritical and supercritical regimes. They found that the highest recoveries (between 83% and 95% for the four herbicides studied) were observed at 20 mol % methanol and at temperatures of 100 °C or 150 °C. In addition, when a 200-µL volume of hexane was added to the 1-g dust sample, a preextraction with CO2 and no methanol removed much of the extraneous matrix. These matrix compounds, when present, create a more complex chromatogram and require more reagent. (Anal. Chem. 1997, 69, 566-74)

Overcoming NIMBY
Public participation programs can help citizens get past "not-in-my-backyard" (NIMBY) responses to the siting of hazardous waste facilities. J. J. Duffield and S. E. Depoe described the effects of citizen participation in the storage of 2.4 million cubic yards of low-level radioactive waste from the Fernald, Ohio, nuclear weapons complex. Among the participants were labor representatives, academicians, residents, and activists. Because the task force had the opportunity to question technical experts and dispute evidence, a democratic format facilitated two-way communication between officials and citizens. (Risk Pol. Rev. 1997, 3(2), 31-34)

RISK

Probabilistic techniques
Efforts to improve risk characterization emphasize the use of uncertainty analyses and probabilistic techniques. K. M. Thompson and J. D. Graham describe how Monte Carlo analysis and other probabilistic techniques can be used to improve risk assessments. A probabilistic approach to risk assessment incorporates uncertainty and variability, calculating risk with variables that include resources expended and policy mandates. Despite these advantages, there are significant barriers to its widespread use, including risk managers' inexperience with probabilistic risk assessment results and general suspicion of the method. The authors describe ways to promote the proper use of probabilistic risk assessment. (Hum. Ecol. Risk Assess. 1996, 2(4), 1008-34)

Uncertainty modeling
Monte Carlo modeling is a powerful mathematical tool with many advantages over traditional point estimates for assessing uncertainty and variability with hazardous waste site exposure and risk. However, a lack of variability information hinders Monte Carlo modeling. As a solution, W. J. Brattin and colleagues proposed running repeated Monte Carlo simulations using different combinations of uncertainty parameters.
The amount of variation among the different simulations shows how certain or uncertain any individual estimate of exposure or risk may be. An example of this approach is provided, including an estimation of the average exposure to radon daughter products in indoor air. (Hum. Ecol. Risk Assess. 1996, 2(4), 820-40)

SOIL

Decomposition model
Clay has a stabilizing effect on organic matter in soil and thus reduces the rate of decomposition. Current computer simulation models, however, do not adequately describe the protection of organic matter by clay. J. Hassink and A. P. Whitmore developed and tested a model that predicts the preservation of added organic matter as a function of the clay fraction and the degree of organic matter saturation of the soil. It closely followed the buildup and decline of organic matter in 10 soils to which organic matter was added. Better than conventional models, this model was able to predict the accumulation and degradation of organic matter in soils of different textures and the contents of initial organic matter. (Soil Sci. Soc. Am. J. 1997, 61, 131-39)

Tracing trace metal complexation
Chemical reactions determine the fate of trace metals released into aquatic environments. J. M. Garnier and colleagues investigated the kinetics of trace metal complexation by monitoring chemical reactions in suspended matter from a French river. Radiotracer experiments on Mn, Co, Fe, Cd, Zn, Ag, and Cs identified seasonal variations in the predominant species. In summer, Cd and Zn were complexed by specific natural organic matter ligands. Cs remained in an inorganic form, whereas Fe and Ag were either organic complexes or colloidal species. In winter, a two-step process occurred for Mn and Co. They were rapidly complexed by weak ligands, followed by slower complexation by stronger ligands. The authors conclude that low concentrations of natural ligands may control the speciation of trace elements. (Environ. Sci. Technol., this issue, pp. 1597-1606)

260 A • Vol. 31, No. 6, 1997 / Environmental Science & Technology / News
Building counterexamples to generalizations for rational functions of Ritt's decomposition Theorem

Jaime Gutierrez (a,1), David Sevilla (b,1)

a Dpto. de Matemáticas, Estadística y Computación, Universidad de Cantabria, E-39071 Santander, Spain
b Dpt. of Computer Science and Software Engineering, University of Concordia, Montreal, Canada

Abstract

The classical Ritt's Theorems state several properties of univariate polynomial decomposition. In this paper we present new counterexamples to the first Ritt theorem, which states the equality of length of decomposition chains of a polynomial, in the case of rational functions. Namely, we provide an explicit example of a rational function with coefficients in Q and two decompositions of different length. Another aspect is the use of some techniques that could allow for other counterexamples, namely, relating groups and decompositions and using the fact that the alternating group A4 has two subgroup chains of different lengths; and we provide more information about the generalizations of another property of polynomial decomposition: the stability of the base field. We also present an algorithm for computing the fixing group of a rational function, providing the complexity over the rational number field.

1 Introduction

The starting point is the decomposition of polynomials and rational functions in one variable. First we will define the basic concepts of this topic.

Definition 1: If f = g∘h with f, g, h ∈ K(x), we call this a decomposition of f in K(x) and say that g is a component on the left of f and h is a component on the right of f. We call a decomposition trivial if any of the components is a unit with respect to decomposition.

1 Partially supported by Spain Ministry of Science grant MTM2004-07086

Preprint submitted to Elsevier Science, 18 May 2006

Given two decompositions f = g1∘h1 = g2∘h2 of a rational function, we call them equivalent if there exists a unit u such that

h1 = u∘h2,  g1 = g2∘u^(-1),

where the inverse is taken with respect to composition. Given a non-constant f, we say that it is
indecomposable if it is not a unit and all its decompositions are trivial. We define a complete decomposition of f to be f = g1∘···∘gr where each gi is indecomposable. The notion of equivalent complete decompositions is straightforward from the previous concepts.

Given a non-constant rational function f(x) ∈ K(x), where f(x) = fN(x)/fD(x) with fN, fD ∈ K[x] and (fN, fD) = 1, we define the degree of f as deg f = max{deg fN, deg fD}. We also define deg a = 0 for each a ∈ K.

Remark 2: From now on, we will use the previous notation when we refer to the numerator and denominator of a rational function. Unless explicitly stated, we will take the numerator to be monic, even though multiplication by constants will not be relevant.

The first of Ritt's Theorems states that all the decomposition chains of a polynomial that satisfies a certain condition have the same length. It is well known that the result is not true for rational functions, see for example []. Here we explore new techniques related to this, and include a counterexample in Q(x).

Another result in this fashion states that if a polynomial is indecomposable in a certain coefficient field, then it is also indecomposable in any extension of that field. This is also false for rational functions, see [4] and [1]. We look for bounds for the degree of the extension in which we need to take the coefficients if a rational function with coefficients in Q has a decomposition in a larger field. In this paper we present a computational approach to this question and our conclusions.

In Section 2 we study how to compute bounds for the minimal field that contains all the decompositions of a given rational function. In Section 3 we introduce several definitions and properties of groups related to rational functions, which we use in Section 4 to discuss the number of components in the rational case. In particular, we present an algorithm for computing the fixing group of a rational function and we provide its complexity over the rational number field. Finally, in Section 4 we present an example of
a degree 12 rational function with coefficients in Q and two decompositions of different length; as far as we know this is the first example in Q of this kind.

2 Extension of the coefficient field

Several algorithms for decomposing univariate rational functions are known, see for instance [18] and [1]. In all cases, the complexity of the algorithm grows enormously when the coefficient field is extended. A natural question about decomposition is whether it depends on the coefficient field, that is, the existence of polynomials or rational functions that are indecomposable in K(x) but have a decomposition in F(x) for some extension F of K. Polynomials behave well under certain conditions; however, in the rational case this is not true. We will try to shed some light on the rational case.

Definition 3: f ∈ K[x] is tame when char K does not divide deg f.

The next theorem shows that tame polynomials behave well under extension of the coefficient field, see [8]. It is based on the concept of approximate root of a polynomial, which always exists for tame polynomials, and is also the key to some other structural results in the tame polynomial case.

Theorem 4: Let f ∈ K[x] be tame and F ⊇ K. Then f is indecomposable in K[x] if and only if it is indecomposable in F[x].

The next example, presented in [1], shows that the previous result is false for rational functions.

Example 5: Let

f = (ω^3 x^4 − ω^3 x^3 − 8x − 1) / (2ω^3 x^4 + ω^3 x^3 − 16x + 1)

where ω ∉ Q but ω^3 ∈ Q \ {1}. It is easy to check that f is indecomposable in Q(x). However, f = f1∘f2 where

f1 = (x^2 + (4−ω)x − ω) / (2x^2 + (8+ω)x + ω),  f2 = xω(xω−2) / (xω+1).

We can pose the following general problem:

Problem 6: Given a function f ∈ K(x), compute a minimal field F such that
decomposition f=g(h)of a rational function in K(x),we can write a polynomial system of equations in the coefficients of f,g and h by equating to zero the numerator of f−g(h).The system is linear in the coefficients of g.Therefore,all the coefficients of g and h lie in some algebraic extension of K.Our goal is tofind bounds for the degree of the extension[F:K]where F contains,in the sense explained above,all the decompositions of f.One way tofind a bound is by means of a result that relates decomposition and factorization.We state the main definition and theorems here,see[9]for proofs and other details.Definition7A rational function f∈K(x)is in normal form if deg f N> deg f D and f N(0)=0(thus f D(0)=0).Theorem8(i)Given f∈K(x),if deg f<|K|then there exist units u,v such that u◦f◦v is in normal form.(ii)If f∈K(x)is in normal form,every decomposition of f is equivalent to one where both components are in normal form.We will analyze the complexity offinding the units u and v later.Theorem9Let f=g(h)with f,g,h in normal form.Then h N divides f N and h D divides f D.This result provides the following bound.Theorem10Let f∈K(x)and u1,u2be two units in K(x)such that g= u1◦f◦u2is in normal form.Let F be the splittingfield of{g N,g D}.Then any decomposition of f in K (x),for any K ⊃K,is equivalent to a decomposition in F(x).PROOF.By Theorems8and9,every decomposition of g is equivalent to another one,g=h1◦h2,where the numerator and denominator of h2divide those of g,thus the coefficients of that component are in F.As the coefficients of h1are the solution of a linear system of equations whose coefficients are4polynomials in the coefficients of g and h2,they are also in F.We also have u1,u2∈K(x),therefore the corresponding decomposition of f lies in the same field.2This bound,despite being of some interest because its generality and simplic-ity,is far from optimal.For example,for degree4we obtain[F:K]≤3!·3!= 36.As we will show next,we can use computational algebra techniques,in 
particular Gr¨o bner bases,tofind good bounds for different degrees of f.The following theorem completes Example5.Theorem11Let f∈Q(x)of degree4.If f=g(h)with g,h∈Q(x),there exists afield K with Q⊂K⊂Q and a unit u∈K(x)such that g(u−1),u(h)∈K(x)and[K:Q]≤3.PROOF.Without loss of generality we assumef=x4+r3x3+r2x2+r1xs3x3+s2x2+s1x+1,g=x2+axbx+1,h=x2+cxdx+1.Then we have the following system of polynomial equations:ac−r1=0,2d+bc−s1=0,c2+cad+a−r2=0,d2+bcd+b−s2=0,2c+ad−r3=0,bd−s3=0.Let I⊂C[r1,r2,r3,s1,s2,s3,a,b,c,d]be the ideal generated by these ing elimination techniques by means of Gr¨o bner bases(see for example[3]),wefind polynomials in I involving r1,r2,r3,s1,s2,s3and each of the variables a,b,c,d:{a3−r2a2+r1r3a−r21,b3−s2b2+s1s3b−s23,2c3−2r3c2+12r23+12r1s1c−14r1r3s1+14r21s3,2d3−2s1d2+12s21+12r3s3d−14r3s1s3+14r1s23⊂I.Now it is clear that,given a degree3function,the coefficients of its components have degree at most3over Q each.But in fact there is afield of degree3that5contains all of them:ac −r 1=0⇒c ∈Q (a ),2c +ad −r 3=0⇒d ∈Q (a,c )=Q (a ),bd −s 3=0⇒b ∈Q (d )⊆Q (a ).The well–known Extension Theorem (see [3])may be used to extend the points in the variety defined by J =I ∩C [r 1,r 2,r 3,s 1,s 2,s 3].This can also be used to study other questions related to the variety,for example the existence of functions with one decomposition in Q (x )and other in a proper extension of Q .The ideal J determines a variety in C 6of dimension 4(in fact,the equations defining I provide a parametrization of the variety),which represents the set of decomposable functions of degree 4(in normal form).The four polynomials in the different variables showed above allow the successive application of Extension Theorem to prove that each point of J is the image of some point in the parametrization.That is,each point in the variety corresponds to a function of degree 4that is decomposable in C (x ).From Example 5we can deduce that,after normalization,not every point of J corresponds to a function 
that can be decomposed in Q(x).

Remark 12: In the polynomial case of degree 4 we have

f = x^4 + r3 x^3 + r2 x^2 + r1 x,  g = x^2 + ax,  h = x^2 + cx ∈ C(x).

The equations of the ideal I ⊂ C[r1,r2,r3,a,c] determined by this are

I = (2c − r3, c^2 + a − r2, ac − r1),  I ∩ C[r1,r2,r3] = (1/8 r3^3 − 1/2 r2 r3 + r1).

It is clear that, from the initial equations and by the Extension Theorem, for each point there exist corresponding values for a, c. However, if we choose the coefficient field to be Z2, the ideal is generated by I = (r3, c^2 + a + r2, ac + r1) and the elimination ideal is I ∩ Z2[r1,r2,r3] = (r3). We cannot apply the Extension Theorem with the generators we have for I; in fact, the polynomial x^4 + x^2 + x is in the variety, but the corresponding equations are c^2 + a + 1 = 0, ac + 1 = 0, which are false for any values a, c ∈ Z2, so this polynomial is indecomposable in Z2[x].

3 Fixing group and fixed field

In this section we introduce several simple notions from classical Galois theory. Let Γ(K) = Aut_K K(x) (we will write simply Γ if there can be no confusion about the field). The elements of Γ(K) can be identified with the images of x under the automorphisms, that is, with Möbius transformations (non-constant rational functions of the form (ax+b)/(cx+d)), which are also the units of K(x) under composition.

Definition 13: (i) Let f ∈ K(x). We define G(f) = {u ∈ Γ(K) : f∘u = f}.
(ii) Let H < Γ(K). We define Fix(H) = {f ∈ K(x) : f∘u = f ∀u ∈ H}.
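Definition 13(i) can be checked mechanically for small cases. The sketch below is our own code (using sympy, not part of the paper): it tests f∘u = f over a hand-picked candidate list of Möbius transformations, recovering the fixing group of f = x^2 + 1/x^2 that appears in the next example. A real algorithm must of course find the candidates rather than guess them.

```python
# Compute G(f) = {u : f∘u = f} over a finite candidate list (a sketch only).
from sympy import symbols, simplify

x = symbols('x')
f = x**2 + 1/x**2

# Candidate units; the last three are decoys that do not fix f.
candidates = [x, -x, 1/x, -1/x, x + 1, 2*x, (x + 1)/(x - 1)]
G = [u for u in candidates if simplify(f.subs(x, u) - f) == 0]
print(G)  # the four units x, -x, 1/x, -1/x fix f
```

Note that |G(f)| = 4 divides deg f = 4, as Theorem 17(i) below requires.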
Example 14: (i) Let f = x^2 + 1/x^2 ∈ K(x). Then G(f) = {x, −x, 1/x, −1/x}.
(ii) Let H = {x, ix, −x, −ix} ⊂ Γ(C). Then Fix(H) = C(x^4).

These definitions correspond to the classical Galois correspondences (not bijective in general) between the intermediate fields of an extension and the subgroups of its automorphism group, as the following diagram shows:

K(x)   ←→ {id}
  |         |
K(f)    −→ G(f)
  |         |
Fix(H)  ←− H
  |         |
K      ←→ Γ

Remark 15: As K(f) = K(f') if and only if f = u∘f' for some unit u, we have that the map K(f) → G(f) is well-defined.

Next, we state several interesting properties of the fixed field and the fixing group.

Theorem 16: Let H be a subgroup of Γ.
(i) H is infinite ⇒ Fix(H) = K.
(ii) H is finite ⇒ K ⊊ Fix(H), Fix(H) ⊂ K(x) is a normal extension, and in particular Fix(H) = K(f) with deg f = |H|.

PROOF. (i) It is clear that no non-constant function can be fixed by infinitely many units, as these must fix the roots of the numerator and denominator.

(ii) We will show constructively that there exists f such that Fix(H) = K(f) with deg f = |H|. Let H = {h1 = x, ..., hm}. Let

P(T) = ∏_{i=1}^{m} (T − hi) ∈ K(x)[T].

We will see that P(T) is the minimum polynomial of x over Fix(H) ⊂ K(x). A classical proof of Lüroth's Theorem (see for instance [17]) states that any non-constant coefficient of the minimum polynomial generates Fix(H), and we are done. It is obvious that P(x) = 0, as x is always in H. It is also clear that P(T) ∈ Fix(H)[T], as its coefficients are the elementary symmetric polynomials in h1,...,hm. The irreducibility is equivalent to the transitivity of the action of the group on itself by multiplication. □

Theorem 17: (i) For any non-constant f ∈ K(x), |G(f)| divides deg f. Moreover, for any field K there is a function f ∈ K(x) such that 1 < |G(f)| < deg f.
(ii) If |G(f)| = deg f then K(f) ⊆ K(x) is normal. Moreover, if the extension K(f) ⊆ K(x) is separable, then K(f) ⊆ K(x) is normal ⇒ |G(f)| = deg f.
(iii) Given a finite subgroup H of Γ, there is a bijection between the subgroups of H and the fields between Fix(H) and K(x). Also, if Fix(H) = K(f), there is a bijection between the right components of f (up to equivalence by units) and the subgroups of
H.PROOF.(i)Thefield Fix(G(f))is between K(f)and K(x),therefore the degree of any generator,which is the same as|G(f)|,divides deg f.For the second part,take8for example f=x2(x−1)2,which gives G(f)={x,1−x}in any coefficient field.(ii)The elements of G(f)are the roots of the minimum polynomial of x over K(f)that are in K(x).If there are deg f different roots,as this number equals the degree of the extension we conclude that it is normal.If K(f)⊂K(x)is separable,all the roots of the minimum polynomial of x over K(f)are different,thus if the extension is normal there are as many roots as the degree of the extension.(iii)Due to Theorem16,the extension Fix(H)⊂K(x)is normal,and the result is a consequence of the Fundamental Theorem of Galois.Remark18K(x)is Galois over K(that is,the only rational functionsfixed byΓ(K)are the constant ones)if and only if K is infinite.Indeed,if K is infinite,for each non–constant function f there exists a unit x+b with b∈K which does not leave itfixed.On the other hand,if K isfinite thenΓ(K) isfinite too,an the proof of Theorem16provides a non–constant rational function that generates Fix(Γ(K)).Algorithms for computing several aspects of Galois theory can be found in [16].Unfortunately,it is not true in general that[K(x):K(f)]=|G(f)|;there is no bijection between intermediatefields and subgroups of thefixing group of a given function.Anyway,we can obtain partial results on decomposability. 
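To make the failure of this equality concrete, the quartic from the proof of Theorem 17(i) can be checked directly; the following short computation is a routine verification added for illustration, not part of the original text:

```latex
% Verification for f = x^2(x-1)^2 over any field K:
f(1-x) = (1-x)^2\bigl((1-x)-1\bigr)^2 = (1-x)^2 x^2 = f(x)
  \;\Longrightarrow\; \{x,\ 1-x\} \subseteq G(f),
\qquad |G(f)| = 2 \ \text{divides}\ \deg f = 4,
\qquad [K(x):K(f)] = \deg f = 4 \neq 2 = |G(f)|.
```

So the fixing group can be strictly smaller than the degree of the extension, which is exactly the failure of bijectivity just mentioned.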
Theorem 19 Let $f$ be indecomposable.
(i) If $\deg f$ is prime, then either $G(f)$ is cyclic of order $\deg f$, or it is trivial.
(ii) If $\deg f$ is composite, then $G(f)$ is trivial.

PROOF.
(i) If $1 < |G(f)| < \deg f$, we have $K(f) \subsetneq \mathrm{Fix}(G(f)) \subsetneq K(x)$, and any generator of $\mathrm{Fix}(G(f))$ is a proper component of $f$ on the right. Therefore, $G(f)$ has order either $1$ or $\deg f$, and in the latter case, the order being prime, the group is cyclic.
(ii) Assume $G(f)$ is not trivial. If $|G(f)| < \deg f$, we have a contradiction as in (i). If $|G(f)| = \deg f$, as it is a composite number, there exists a non-trivial proper subgroup $H \lneq G(f)$, and again any generator of $\mathrm{Fix}(H)$ is a proper component of $f$ on the right. $\Box$

Corollary 20 If $f$ has composite degree and $G(f)$ is not trivial, $f$ is decomposable.

Now we present algorithms to efficiently compute fixed fields and fixing groups. The proof of Theorem 16 provides an algorithm to compute a generator of $\mathrm{Fix}(H)$ from its elements.

Algorithm 1
INPUT: $H = \{h_1, \ldots, h_m\} < \Gamma(K)$.
OUTPUT: $f \in K(x)$ such that $\mathrm{Fix}(H) = K(f)$.
A. Let $i = 1$.
B. Compute the $i$-th elementary symmetric function $\sigma_i(h_1, \ldots, h_m)$.
C. If $\sigma_i(h_1, \ldots, h_m) \notin K$, return $\sigma_i(h_1, \ldots, h_m)$. If it is constant, increase $i$ and return to B.

Analysis. The algorithm merely needs to compute the product $\prod_{i=1}^{m}(T - h_i)$ using $O(\log m)$ multiplications, so it is efficient both in theory and practice.

Example 21 Let
$$H = \left\{ \pm x,\ \pm\frac{1}{x},\ \pm\frac{i(x+1)}{x-1},\ \pm\frac{i(x-1)}{x+1},\ \pm\frac{x+i}{x-i},\ \pm\frac{x-i}{x+i} \right\} < \Gamma(\mathbb{C}).$$
Then, writing
$$q(x) = \frac{x^{12} - 33x^8 - 33x^4 + 1}{x^2(x-1)^2(x+1)^2(x^4+2x^2+1)},$$
we have
$$P(T) = T^{12} - q(x)\,T^{10} - 33T^8 + 2q(x)\,T^6 - 33T^4 - q(x)\,T^2 + 1.$$
Thus, $\mathrm{Fix}(H) = \mathbb{C}(q(x))$.

$H$ is isomorphic to $A_4$. It is known that $A_4$ has two complete subgroup chains of different lengths:
$$\{id\} \subset C_2 \subset V \subset A_4, \qquad \{id\} \subset C_3 \subset A_4.$$
In our case,
$$\{x\} \subset \{\pm x\} \subset \left\{\pm x,\ \pm\frac{1}{x}\right\} \subset H, \qquad \{x\} \subset \left\{x,\ \frac{x+i}{x-i},\ \frac{i(x+1)}{x-1}\right\} \subset H.$$
Applying our algorithm again we obtain the following field chains:
$$\mathbb{C}(f) \subset \mathbb{C}\!\left(x^2 + \frac{1}{x^2}\right) \subset \mathbb{C}(x^2) \subset \mathbb{C}(x), \qquad \mathbb{C}(f) \subset \mathbb{C}\!\left(\frac{-i(x+i)(1+x)}{x(-x+i)(-1+x)}\right) \subset \mathbb{C}(x).$$
As there is a bijection in this case, the corresponding two decompositions are complete.

In order to compute the fixing group of a function $f$ we can solve the system of polynomial equations obtained from
$$f\!\left(\frac{ax+b}{cx+d}\right) = f(x).$$
This can be reduced to solving two simpler systems, those given by
$$f(ax+b) = f(x) \quad\text{and}\quad f\!\left(\frac{ax+b}{x+d}\right) = f(x).$$
This method is simple but inefficient; we will describe another method that is faster in practice.

We need to assume that $K$ has sufficiently many elements. If not, we take an extension of $K$ and later check which of the computed elements are in $\Gamma(K)$ by solving simple systems of linear equations.

Theorem 22 Let $f \in K(x)$ of degree $m$ in normal form and $u = \frac{ax+b}{cx+d}$ such that $f \circ u = f$.
(i) $a \neq 0$ and $d \neq 0$.
(ii) $f_N(b/d) = 0$.
(iii) If $c = 0$ (that is, we take $u = ax+b$), then $f_N(b) = 0$ and $a^m = 1$.
(iv) If $c \neq 0$ then $f_D(a/c) = 0$.

PROOF.
(i) Suppose $a = 0$. We can assume $u = 1/(cx+d) = (1/x) \circ (cx+d)$. But if we consider $f(1/x)$, its numerator has smaller degree than its denominator. As composing on the right with $cx+d$ does not change those degrees, it is impossible that $f \circ u = f$. Also, as the inverse of $u$ is $\frac{dx-b}{-cx+a}$, we have $d \neq 0$.
(ii) Let
$$f = \frac{a_m x^m + \cdots + a_1 x}{b_{m-1} x^{m-1} + \cdots + b_0}.$$
The constant term of the numerator of $f \circ u$ is
$$a_m b^m + a_{m-1} b^{m-1} d + \cdots + a_1 b\, d^{m-1} = d^m f_N(b/d).$$
As $d \neq 0$ by (i), we have that $f_N(b/d) = 0$. Alternatively, $0 = f(0) = (f \circ u)(0) = f(u(0)) = f(b/d)$.
(iii), (iv) They are similar to the previous item. $\Box$

We can use this theorem to compute the polynomial and rational elements of $G(f)$ separately.

Algorithm 2
INPUT: $f \in K(x)$.
OUTPUT: $G(f) = \{w \in K(x) : f \circ w = f\}$.
A. Compute units $u, v$ such that $\bar{f} = u \circ f \circ v$ is in normal form. Let $m = \deg \bar{f}$. Let $L$ be an empty list.
B. Compute $A = \{\alpha \in K : \alpha^m = 1\}$, $B = \{\beta \in K : \bar{f}_N(\beta) = 0\}$ and $C = \{\gamma \in K : \bar{f}_D(\gamma) = 0\}$.
C. For each $(\alpha, \beta) \in A \times B$, check whether $\bar{f}(\alpha x + \beta) = \bar{f}(x)$. In that case add $\alpha x + \beta$ to $L$.
D. For each $(\beta, \gamma) \in B \times C$, let $w = \frac{c\gamma x + \beta}{cx + 1}$. Compute all values of $c$ for which $\bar{f} \circ w = \bar{f}$. For each solution, add the corresponding unit to $L$.
E. Let $L = \{w_1, \ldots, w_k\}$. Return $\{v \circ w_i \circ v^{-1} : i = 1, \ldots, k\}$.

Analysis. It is clear that the cost of the algorithm heavily depends on the complexity of the best algorithm to compute the roots of a univariate polynomial in the given field. We analyze the bit complexity when the ground field is the field $\mathbb{Q}$ of rational numbers. We will use several well-known results about complexity, which can be consulted in the book [7].

In the following, $M$ denotes a multiplication time, so that the product of two polynomials in $K[x]$ with degree at most $m$ can be computed with at most $M(m)$ arithmetic operations. If $K$ supports the Fast Fourier Transform, several known algorithms require $O(n \log n \log\log n)$ arithmetic operations. We denote by $l(f)$ the maximum norm of $f$, that is, $l(f) = \|f\|_\infty = \max |a_i|$ for a polynomial $f = \sum_i a_i x^i \in \mathbb{Z}[x]$. Polynomials $f, g \in \mathbb{Z}[x]$ of degree less than $m$ can be multiplied using $O(M(m(l + \log m)))$ bit operations, where $l = \log\max(l(f), l(g))$. Now, suppose that the given polynomial $f$ is squarefree and primitive; then we can compute all its rational roots with an expected number of $T(m, \log l(f))$ bit operations, where
$$T(m, \log l(f)) = O\big(m \log(m\, l(f))(\log^2\log\log m + (\log\log l(f))^2 \log\log\log l(f)) + m^2 M(\log(m\, l(f)))\big).$$

We discuss the algorithm steps separately. Let $f = f_N/f_D$, where $f_N, f_D \in \mathbb{Z}[x]$, and let $l = \log\max(l(f_N), l(f_D))$ and $m = \deg f$.

Step A. Let $u \in \mathbb{Q}(x)$ be a unit such that $g_N/g_D = u(f)$ with $\deg g_N > \deg g_D$. Such a unit always exists:
– If $\deg f_N = \deg f_D$, let $u = 1/(x-a)$, where $a \in \mathbb{Q}$ verifies $\deg(f_N - a f_D) < \deg f_N$.
– If $\deg f_N < \deg f_D$, let $u = 1/x$.
Now, let $b \in \mathbb{Z}$ be such that $g_D(b) \neq 0$. Then $h_N/h_D = g_N(x+b)/g_D(x+b)$ verifies $h_D(0) \neq 0$, and the rational function $(x - h(0)) \circ h_N/h_D$ is in normal form. Obviously, the complexity of this step is dominated by the choice of $b$. In the worst case, we have to evaluate $g_D$ at the integers $0, 1, \ldots, m$. Clearly, a complexity bound is $O(M(m^3 l))$.

Step B. Computing the set $A$ can be done in constant time. Now, in order to compute the complexity, we can suppose, without loss of generality, that $f_N$ and $f_D$ are squarefree and primitive. Then the bit complexity to compute both the set $B$ and the set $C$ is $T(m, ml)$.

Step C. A bound for the cardinal of $A$ is $4$, and $m$ for the cardinal of $B$. Then, we need to check $4m$ times whether $\bar{f}(\alpha x + \beta) = \bar{f}(x)$ for each $(\alpha, \beta) \in A \times B$.
So, the complexity of this step is bounded by $O(M(m^4 l))$.

Step D. In the worst case the cardinal of $B \times C$ is $m^2$. This step requires computing all rational roots of the $m^2$ polynomials $h(x)$ given by the equation $\bar{f} \circ w = \bar{f}$, for each $(\beta, \gamma) \in B \times C$, where $w = \frac{c\gamma x + \beta}{cx+1}$. A bound for the degree of $h(x)$ is $m^2$. The size of the coefficients is bounded by $ml$, so a bound for the total complexity of this step is $m^4 T(m^2, lm^2)$.

Step E. Finally, this step requires substituting at most $2m$ rational functions of degree $m$, and the coefficient size is bounded by $lm^3$. So, a bound for the complexity is $O(M(m^4 l))$.

We can conclude that the complexity of this algorithm is dominated by that of step D, that is, $m^4 T(m^2, lm^2)$. Of course, a coarser bound for this is $O(m^8 l^2)$. The following example illustrates the above algorithm:

Example 23 Let
$$f = \frac{(x^3 - 3x + 1)^2}{x\,(x^3 - x^2 - 2x + 1)(x-1)} \in \mathbb{Q}(x).$$
We normalize $f$: let $u = \frac{1}{x - 9/2}$ and $v = \frac{1}{x-1}$; then
$$\bar{f} = u \circ f \circ v = \frac{-4x^6 - 6x^5 + 32x^4 - 34x^3 + 14x^2 - 2x}{27x^5 - 108x^4 + 141x^3 - 81x^2 + 21x - 2}$$
is in normal form.

The roots in $\mathbb{Q}$ of the numerator and denominator of $\bar{f}$ are $\{0, 1, 1/2\}$ and $\{1/3, 2/3\}$ respectively. The only sixth roots of unity in $\mathbb{Q}$ are $1$ and $-1$; as $\mathrm{char}\,\mathbb{Q} = 0$ there cannot be elements of the form $x + b$ in $G(\bar{f})$. Thus, there are two polynomial candidates: $-x + 1/3$, $-x + 2/3$. A quick computation reveals that neither of them fixes $\bar{f}$.

Let $w = \frac{c\beta x + \alpha}{cx + 1}$. As $\alpha \in \{0, 1, 1/2\}$ and $\beta \in \{1/3, 2/3\}$, another quick computation shows that
$$G(\bar{f}) = \left\{ x,\ \frac{-x+1}{-3x+2},\ \frac{-2x+1}{-3x+1} \right\}$$
and
$$G(f) = v \circ G(\bar{f}) \circ v^{-1} = \left\{ x,\ \frac{1}{1-x},\ \frac{x-1}{x} \right\}.$$
From this group we can compute a proper component of $f$ as in the proof of Theorem 19, obtaining $f = g(h)$ with
$$h = \frac{x^3 - 3x + 1}{x(x-1)}, \qquad g = \frac{x^2}{x-1}.$$

In the next section we will use these tools to investigate the number of components of a rational function.

4 Ritt's Theorem and number of components

One of the classical Ritt Theorems (see [13]) describes the relation among the different decomposition chains of a tame polynomial. Essentially, all the decompositions have the same length and are related in a rather simple way.

Definition 24 A bidecomposition is a 4-tuple of polynomials $f_1, g_1, f_2, g_2$ such that $f_1 \circ g_1 = f_2 \circ g_2$, $\deg f_1 = \deg g_2$ and $(\deg f_1, \deg g_1) = 1$.

Theorem 25 (Ritt's First Theorem) Let $f \in K[x]$ be tame and
$$f = g_1 \circ \cdots \circ g_r = h_1 \circ \cdots \circ h_s$$
be two complete decomposition chains of $f$. Then $r = s$, and the sequences $(\deg g_1, \ldots, \deg g_r)$, $(\deg h_1, \ldots, \deg h_s)$ are permutations of each other. Moreover, there exists a finite chain of complete decompositions
$$f = f_1^{(j)} \circ \cdots \circ f_r^{(j)}, \quad j \in \{1, \ldots, k\},$$
such that
$$f_i^{(1)} = g_i, \quad f_i^{(k)} = h_i, \quad i = 1, \ldots, r,$$
and for each $j < k$ there exists $i_j$ such that the $j$-th and $(j+1)$-th decompositions differ only in one of these aspects:
(i) $f_{i_j}^{(j)} \circ f_{i_j+1}^{(j)}$ and $f_{i_j}^{(j+1)} \circ f_{i_j+1}^{(j+1)}$ are equivalent.
(ii) $f_{i_j}^{(j)} \circ f_{i_j+1}^{(j)} = f_{i_j}^{(j+1)} \circ f_{i_j+1}^{(j+1)}$ is a bidecomposition.

PROOF. See [13] for $K = \mathbb{C}$, [5] for characteristic zero fields and [6], [15] for the general case. $\Box$

Unlike for polynomials, it is not true that all complete decompositions of a rational function have the same length, as shown in Example 21. The paper [10] presents a detailed study of this problem for non-tame polynomials with coefficients over a finite field. The problem for rational functions is strongly related to the open problem of describing the classes of rational functions which commute with respect to composition, see [14]. In this section we will give some ideas about the relation between complete decompositions and the subgroup chains that appear by means of Galois theory.

Now we present another degree 12 function, this time with coefficients in $\mathbb{Q}$, that has two complete decomposition chains of different lengths. This function arises in the context of Monstrous Moonshine as a rational relationship between two modular functions (see for example the classical [2] for an overview of this broad topic, or the reference [12], in Spanish, for the computations in which this function appears).

Example 26 Let $f \in \mathbb{Q}(x)$ be the following degree 12 function:
$$f = \frac{x^3 (x+6)^3 (x^2 - 6x + 36)^3}{(x-3)^3 (x^2 + 3x + 9)^3}.$$
$f$ has two decompositions:
$$f = g_1 \circ g_2 \circ g_3 = x^3 \circ \frac{x(x-12)}{x-3} \circ \frac{x(x+6)}{x-3} = h_1 \circ h_2 = \frac{x^3(x+24)}{x-3} \circ \frac{x(x^2-6x+36)}{x^2+3x+9}.$$
All the components except one have prime degree, hence are indecomposable; the component of degree 4 cannot be written as a composition of two components of degree 2.

If we compute the groups for the components on the right in $\mathbb{Q}$ we have:
$$G_{\mathbb{Q}}(f) = G_{\mathbb{Q}}(g_2 \circ g_3) = G_{\mathbb{Q}}(g_3) = \left\{ \frac{3x+18}{x-3},\ x \right\}, \qquad G_{\mathbb{Q}}(h_2) = \{x\}.$$
However, in $\mathbb{C}$:
$$G_{\mathbb{C}}(f) = \left\{ \frac{3\alpha_i x + 18\alpha_i}{x-3},\ \frac{3\alpha_i x - 18 - 18\alpha_i}{x - 3\alpha_i},\ \frac{3\alpha_i x + 18}{x + 3\alpha_i + 3},\ \frac{3x + 18\alpha_i}{x - 3\alpha_i},\ \frac{3x+18}{x-3},\ \alpha_i x,\ x \right\},$$
$$G_{\mathbb{C}}(g_2 \circ g_3) = \left\{ \frac{3\alpha_i x - 18 - 18\alpha_i}{x - 3\alpha_i},\ \frac{3x+18}{x-3},\ x \right\}, \qquad G_{\mathbb{C}}(g_3) = \left\{ \frac{3x+18}{x-3},\ x \right\},$$
$$G_{\mathbb{C}}(h_2) = \left\{ \frac{3\alpha_i x + 18}{x + 3\alpha_i + 3},\ x \right\},$$
where $\alpha_i$, $i = 1, 2$, are the two non-trivial cubic roots of unity.

In order to obtain the function in Example 21, we used Theorem 17, and in particular the existence of a bijection between the subgroups of $A_4$ and the intermediate fields of a function that generates the corresponding field. The existence of functions with this property has been known for some time, as their construction from any group isomorphic to $A_4$ is straightforward. On the other hand, the example above is in $\mathbb{Q}(x)$, but there is no bijection between groups and intermediate fields.

In general, there are two main obstructions for this approach. On one hand, there is no bijection between groups and fields in general, as the previous example shows for $\mathbb{Q}$. On the other hand, only some finite groups can be subgroups of $PGL_2(K)$. The only finite subgroups of $PGL_2(\mathbb{C})$ are $C_n$, $D_n$, $A_4$, $S_4$ and $A_5$, see [11]. In fact, this is true for any algebraically closed field of characteristic zero (it suffices that it contains all roots of unity). Among these groups, only $A_4$ has subgroup chains of different length. The situation is even worse if we consider smaller fields, as the next known result shows:

Theorem 27 Every finite subgroup of $PGL_2(\mathbb{Q})$ is isomorphic to either $C_n$ or $D_n$ for some $n \in \{2, 3, 4, 6\}$.

Indeed all of these occur; unfortunately, none of them has two subgroup chains of different lengths, so no new functions can be found in this way.

5 Conclusions

In this paper we have presented several counterexamples to the generalization of the first Ritt theorem to rational functions. We also introduced and analyzed
Reviving Functional Decomposition in Object-Oriented Design

David Wolber
Department of Computer Science, University of San Francisco
2130 Fulton St., San Francisco, CA 94117-1080
wolber@

A new concept is born. If it is worthy, and if it has competent supporters, it can significantly change some pattern of life. Society reacts to the new concept by attempting to integrate it with the old ideas. The reactionary elements in the society are skeptical and cling to the "proven" ways. The over-zealous supporters seek to expand the new idea at all costs, and ignore the destruction of the "antiquated" theories. The reasonable seek to synthesize a new theory by merging the best ideas from the new concept with the vital and salvageable ideas of the old.

In the realm of software engineering, object-oriented concepts have caused a significant shift in the way software is developed. Few would contest that the software engineering field has benefited, most notably with an increased focus on code reuse through encapsulation, information hiding, and inheritance. But in its migration to objects, the field has neglected a vital concept from the structured programming world, namely, functional decomposition.

The lack of emphasis on functional decomposition is the basis for two problems in using object-oriented methodologies. First, developers tend to write long methods that would benefit from decomposition. This tendency can be reduced by emphasizing the importance of decomposing methods in design walk-throughs, but such emphasis is not supported by the notations of today's object-oriented methodologies. Second, though the methodologies provide ample techniques for modeling the static nature of objects, they do not provide sufficient techniques for modeling the dynamic behavior of objects [3,11]. Specifically, no sufficient notation exists for modeling the hierarchical manner in which objects interact.
This deficiency leads to important information about a system being undocumented or unemphasized in a design, and thus to a lack of direction in the implementation, testing, and maintenance phases.

Because of their nature, these problems provide "reactionary" computer scientists, i.e., proponents of structured methods, with their best arguments against object-oriented methods. The structure chart provides exactly what is missing in object-oriented methods: a design document that can drive a step-by-step implementation and testing process. For some, the lack of equivalent structure in object-oriented methods is reason enough to ignore objects altogether.

This paper argues that a synthesis is possible: that functional decomposition can be revived within an object-oriented development process, and that this integration can occur without reducing the benefits that an orientation to objects provides. A use case structure chart, oriented towards use cases, objects and events, is presented, along with a description of how a set of these charts can be used to specify the dynamic behavior of a system. This structure-chart-based model has been used in numerous small-to-medium sized projects performed in the Software Engineering and Senior Project courses at the University of San Francisco (USF). It has proven to significantly simplify the design, implementation, and maintenance phases compared to processes driven by "conventional" dynamic object models.

This paper is organized as follows: first, current object-oriented development methods are discussed, and the groundwork is laid for where functional decomposition can fit in the process. Next, the use case structure chart is introduced, along with examples of its use and a discussion of how it affects the development process. Finally, our results are summarized.

OBJECT-ORIENTED DEVELOPMENT

This section provides an overview of the object-oriented development process.
To avoid confusion, analysis and design are considered together as "the design": the bridge between system requirements and the final code implementation. The reader is referred to [1,2,3,4,6,7,10,12] for various definitions of where analysis ends and design begins in the development process.

The goal of the design is to provide documentation that can be used to code and maintain an application. Of particular importance is that there be a clear and direct mapping between the design and the system requirements, and between the design and the program code. Generally, two models of the system are generated as part of the design process: the static object model, and a model that describes the dynamic behavior of the objects in the system, which here is called the dynamic model.

Designing the Static Object Model

Given a minimal set of specifications for what the system should do, the object-oriented designer makes a first run at identifying abstractions (objects) that are inherent in the application. This initial object identification stage includes the introduction of new objects as well as decisions to use or extend (through inheritance) objects from existing class libraries.

In creating new objects, the responsible designer thinks not only of the present application, but also of the long term. At every opportunity, he designs classes and class hierarchies that can be easily reused in other applications. Such an approach is vital to the success of an object-oriented development enterprise.

The result of the initial stage is a description of the objects to be used in the application, and a depiction of the relationships between them. Object descriptions include attributes and services provided. Object relationships include aggregation and inheritance relationships, as well as more general relationships. In many methodologies, this initial object model focuses only on the problem domain, and thus only entity (subject) objects are specified.
In other methodologies, most notably Jacobson's [4], designers also begin modeling the interface (view) and control objects early in development.

The initial object analysis is given various degrees of importance in the different object methodologies. In use-case-driven [4] or responsibility-driven approaches [12], designers tend to swiftly identify an initial set of objects, then build up the objects by analyzing functional requirements and assigning services to objects so that those requirements are met. Data-centered approaches [1,6,10], on the other hand, emphasize that objects should be closely analyzed in the abstract prior to melding them to implement the requirements of the particular application. Such a "bottom-up" approach leads to more maintainable systems because objects are less likely to change than processing requirements [6]. Designing objects without focusing on requirements can also lead to the design of more reusable objects.

Use Case Walk-throughs

Even with a data-centered approach, the initial object analysis will rarely produce a complete and robust static object model. To complete the model, the system requirements must be analyzed.

Due to the work of Ivar Jacobson, use cases are often used to drive this process. Jacobson defines a use case in the following manner: "When a user uses the system, she or he will perform a behaviorally related sequence of transactions in a dialogue with the system. We call such a special sequence a use case."

Use cases are similar to system functions, but they emphasize not only functionality but also the different actors that use the system. It should be noted that the functionality described by a use case is end-user oriented; it is different from the functions or methods that appear in the program code.

Use case or scenario walk-throughs are a key step in object-oriented design. These walk-throughs take place when a set of use cases has been specified and an initial static object model created.
The goal is to determine the internal activity that must occur in order to implement each use case. A major by-product (deliverable) of this phase is a revised and refined object model.

One form of use case walk-through is the CRC-card walk-through. In this exercise, each person in the development team is given one or more index cards, each card representing a class. The cards contain the name of the class, its responsibilities (services), and the classes that it collaborates with. The team then steps through each use case, attempting to distribute the "tasks" amongst the various classes. If a particular task seems appropriate for a particular class, but no such service exists, a new service is added to the class. When there is no object to which a task can appropriately be assigned, a new object (and card) is created or retrieved from a class library. In general, the object model is refined. After a walk-through has been performed for all use cases, the team can be much more confident that the (refined) object model meets the system requirements.

Besides a refined static object model, another by-product of use case walk-throughs is the dynamic model. Rumbaugh calls the construction of the dynamic "message" model "the main activity of the design phase of development." The model provides direct documentation of a use case walk-through. It is at least as important as the static object model in the implementation phase, and because it maps directly to the behavior of a system, it is extremely vital to the testing and maintenance phases.

The dynamic model is generally depicted with some form of object interaction diagram (state transition diagrams are also used, but are not considered here; the reader is referred to [10] for a description of these diagrams).
Object interaction diagrams show the messages that are sent between various objects to implement each use case.

Although they document the messages sent between objects, interaction diagrams leave important information about use case implementation undocumented or unemphasized. Most importantly, they do not stress the hierarchical nature of method execution in an event-driven system.

Implementation

Generally, object-oriented implementation proceeds on a class-by-class basis; that is, each member of the development team is responsible for coding and testing some class or class cluster. The biggest difficulty with this scheme is that classes are not stand-alone entities. Each class is interrelated with other classes through message passing; that is, some methods of a class call methods of other classes. Thus, a programmer cannot completely implement and test a particular class until the related classes and methods are implemented. The order in which classes and methods are implemented is a major issue, and one that is hardly straightforward.

Of course, the fact that methods are dependent on other methods is not unique to object-oriented programming; such inter-relatedness exists in structured programs as well. However, structured development provides a notation, the structure chart, and a related process, bottom-up implementation and testing, that define an order for implementing functions. This order provides direction in the implementation phase, and it fosters step-by-step integration of code as opposed to a "big-bang" approach.

Most object-oriented development methodologies provide little support for a clearly-ordered implementation and testing process (one notable exception is Jorgensen and Erickson's work on object-oriented integration testing [5]). Object message diagrams do not emphasize method decomposition, so they cannot be used for bottom-up implementation.
The lack of an ordering principle often results in a more ad hoc implementation and testing phase, and in large-scale as opposed to incremental code integration.

Functional Decomposition

Though it is not emphasized, most of today's methodologies do include some form of functional decomposition in their dynamic model notations. Here, a distinction is made between requirements decomposition, which breaks requirements (use cases) into abstract components that do not map directly to program code, and method decomposition, which breaks a system into components that map directly to program code.

OMT [7], which along with the Booch method has been integrated into the Unified method [8], suggested the use of traditional data flow diagrams to decompose the requirements of a system. Because the diagrams provide a view that is completely orthogonal to the object model, this type of functional decomposition was not extremely useful, and it was dropped from the Unified method.

The research of [9] has focused on decomposing requirements and data in parallel, and on associating objects and requirements within the same diagrams. Instead of showing the object collaboration that occurs for an entire use case or system function, the diagrams break a system function into functional components (not program functions). The objects that collaborate for each sub-function are then depicted subordinate to each sub-function. Messages between objects and method decomposition are not depicted; all collaborating objects are represented on a single level below the sub-requirement.

Though not discussed in [9], an interaction diagram (or use case structure chart, introduced in the next section) could be designed under each appropriate sub-function.
These diagrams would be simpler than the diagrams required to show the object collaboration for an entire use case, and the level of granularity would reduce the redundancy within the diagrams.

Requirements decomposition is certainly useful, but it does not provide a model of the actual code that appears in an object-oriented program. Method decomposition, on the other hand, breaks an application down into components that map directly to program code. For this reason, such decomposition is very useful in the implementation, testing, and maintenance of an application.

The interaction diagrams of various methodologies emphasize method decomposition to various degrees. Figure 1 shows an object message diagram from the Unified methodology. This diagram is representative of those used in many methodologies, including those described in [3,11]. In the diagram, objects are represented as nodes and messages as labels on the connectors between nodes. Because message connectors emanate from nodes representing objects, the particular method from which a message is sent is not visually apparent. The hierarchy of method calls cannot be visually depicted either, since methods are not represented with nodes.

Clearly, the object message diagram is not structured to emphasize methods and method decomposition. However, because the importance of method decomposition was realized, the methodology suggests that message signatures be annotated with sequence numbers (e.g., 1.1.1) which provide the decomposition information.

Jacobson's interaction diagrams, such as the one shown in Figure 2, are less object-oriented and more method-oriented than the object message diagrams. Objects that take part in each use case are listed on the top row of the diagram. Object methods are represented with rectangles and are identified by the message signature on the arrow coming in from the left.
Messages are shown emanating from rectangles, so it is clear which method initiates each message. Time is represented top-to-bottom, and the message sequence is shown left-to-right. The method decomposition tree can be shown for some use cases (on its side), but the structure of the notation prevents it for most. The problem is that an object's methods are shown on the vertical line below it; this organization disallows the representation of the method decomposition tree.

Because of the existence of interactors and event-handling methods, the internal activity of an event-driven system cannot be represented with a traditional functional decomposition tree. Event-handling methods do not fit, because they are invoked by an input port event, not by another method. However, if viewed as connected to an interactor, which is itself invoked by a method, event-handling methods can take their rightful place in the method decomposition tree.

Given this view, the hierarchical nature of an event-driven application can be described by a set of use case threads. Each use case is invoked when some input port event occurs. This event causes an event-handling method of the root interactor (generally the main dialog in the application) to be called. This root method calls zero or more other methods, and invokes zero or more interactors. All other methods, like the root, can call other methods or invoke interactors. Each interactor defines one or more event-handling methods that are invoked by an input port event sometime after the interactor is invoked (activated). In this way, the activity is hierarchical in nature.

The Notation

The use case structure chart provides a notation for modeling a single use case thread, and a set of charts models the internal activity of an event-driven system. Because it depicts the hierarchy of methods, the chart can be used to drive a bottom-up implementation and testing process.
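As a concrete illustration of such a use case thread (input port event, root event-handler, called methods, invoked interactor), the following framework-independent C++ sketch records the call hierarchy in a trace. All names here (MainView, ReportDialog, OnReportMenuItem) are hypothetical and not taken from any particular chart in the paper:

```cpp
#include <string>
#include <vector>

// An interactor (dialog) node: invoked by a method, not by another call chain.
struct ReportDialog {
    std::string title;
    bool shown;
    explicit ReportDialog(const std::string& t) : title(t), shown(false) {}
    void DoModal() { shown = true; }   // stands in for activating the dialog
};

// The root interactor of the application.
class MainView {
public:
    std::vector<std::string> trace;    // records the hierarchical execution

    // Event-handler: invoked by an input port event (a menu selection),
    // never called by another method.
    void OnReportMenuItem() {
        trace.push_back("MainView::OnReportMenuItem");
        std::string title = BuildTitle();   // ordinary method call
        ReportDialog dlg(title);            // interactor invocation
        dlg.DoModal();
        trace.push_back("ReportDialog::DoModal");
    }
private:
    std::string BuildTitle() {
        trace.push_back("MainView::BuildTitle");
        return "Monthly Report";
    }
};
```

In chart terms, the call to BuildTitle would be a whole-line connector and the construction and activation of ReportDialog a broken-line connector; the trace makes the resulting hierarchy explicit.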
The chart's hierarchical structure also encourages the decomposition of methods into workable units.

Sample use case structure charts are shown in Figures 3 and 4. In the charts, methods and interactors are emphasized and represented as nodes. Non-interactor objects (e.g., Group in Figure 3) appear only indirectly, as prefixes to method names. Method nodes are represented as rectangles with whole lines, and interactor nodes as rectangles with broken lines. Connectors from one method to another represent method calls, and are drawn with whole lines. Connectors from a method to an interactor represent interactor invocation, i.e., the opening or activation of a dialog; these connectors are drawn with broken lines.

Note that a parameter (e.g., x) represented on a connecting line to an interactor is not exactly the same as a parameter sent to a method. Parameters sent to an interactor (e.g., x) are meant to be accessible not within a single method, but within all the methods of the interactor, including the event-handlers. The code corresponding to the connector is framework and language dependent, but similar to the following Visual C++ code:

void Object1::Method1()
{
    // declare view and call constructor
    SomeView aSomeView(x);
    // activate view
    aSomeView.DoModal();
    // other code ...
}

// the constructor
SomeView::SomeView(int x)
{
    m_x = x;  // set data member
}

In this way, all the methods of SomeView can access m_x.

The event-handling methods of an interactor are drawn overlapping the interactor node. This representation illustrates that these methods are only invoked after the interactor is invoked, i.e., a certain input port event will only cause a method to be called when the interactor is active.
Thus, by depicting interactors explicitly as nodes, the hierarchical nature of the diagram is preserved.

Unlike most other notations, a clear distinction is made between external messages (input port events) and messages sent between two internal objects. Input port events are not explicitly shown, but are represented as part of an event-handler's name. It is recommended that all event-handling methods be named with a prefix of "On" followed by the name of some event (e.g., OnOKButton).

Parameters are represented as labels on connectors. Arrows are used to signify whether each argument is in, out, or in-out. In object-oriented programming, one must also be aware of the implicit this parameter denoting the data of the object itself. For instance, consider the following code:

void Object1::MethodA()
{
    // some code ...
    Object2.MethodB(x);
}

The call to MethodB is treated by a compiler as:

MethodB(&Object2, x);

Conditional calls are specified by showing all of the possible method nodes that might be called. Note that, traditionally, iteration and selection are not depicted in a structure chart; thus, the chart only states that the functions may be called from the higher-level function.

Examples

Consider the following end-user oriented specification of one use case in a typical event-handling application:

Use Case: AddIndividual To Group
The end-user selects a group from the list box labeled Group, then selects Group | AddIndiv from the menu. A dialog appears displaying a list of all individuals in the system, and a list of the individuals in the group. To add an individual to the group, the end-user selects an individual from the list of all individuals, then clicks on the button labeled "Add". The selected individual is added to the list for the group, and disappears from the list of all individuals.

The structure chart in Figure 3 provides the corresponding programmer's view of this use case.
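Before walking through the chart, here is a minimal, framework-independent C++ sketch of the dialog side of this use case. The class and event-handler names follow the chart's conventions, but the bodies are illustrative assumptions only, with the two list boxes reduced to plain vectors:

```cpp
#include <algorithm>
#include <string>
#include <utility>
#include <vector>

// Hypothetical sketch of the AddIndividual dialog as an interactor.
class GroupView {
public:
    // Parameters passed at interactor invocation: the group's list
    // (by reference, so changes persist) and a copy of all individuals.
    GroupView(std::vector<std::string>& group,
              std::vector<std::string> allIndivs)
        : m_group(group), m_allIndivs(std::move(allIndivs)) {}

    // Event-handler: fires when the Add button is pressed while the
    // dialog is active.
    void OnAddButton(const std::string& selected) {
        auto it = std::find(m_allIndivs.begin(), m_allIndivs.end(), selected);
        if (it == m_allIndivs.end()) return;   // nothing valid selected
        m_group.push_back(*it);                // add to the group's list
        m_allIndivs.erase(it);                 // remove from the "all" list
    }

    const std::vector<std::string>& AllIndivs() const { return m_allIndivs; }

private:
    std::vector<std::string>& m_group;       // visible to every handler,
    std::vector<std::string> m_allIndivs;    // as the chart's x parameter is
};
```

The group list is taken by reference because, as in the chart, it is a parameter passed at interactor invocation and must remain visible to every event-handler of the dialog.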
When the AddIndiv menu item is selected, the event handler OnGroupAddIndivMenuItem of the main interactor (TaisView) is invoked. It calls the GetSelected function of the list box holding the groups in the system (GroupsView) to retrieve a pointer to the desired Group. The Group, along with the list of all individuals in the system, are sent as parameters when the GroupView interactor is invoked. The GroupView initializes its controls (OnInitialUpdate), then awaits an AddButton event. When it occurs, the OnAddButton method is invoked; it then calls a sequence of methods to manipulate the group and individual lists and to update the corresponding list boxes. When the OKButton is pressed, control returns to TaisView::OnGroupAddIndivMenuItem, which calls IndivsListBox::Update to update the list box in the main view that displays the individuals in the group.

Whereas Figure 3 illustrates the representation of interactors and event handlers, Figure 4 is provided for comparison with the equivalent Unify message diagram (Figure 1). With a quick glance at the use case structure chart, one can immediately visualize the hierarchical method execution of the program code. The hierarchy is nowhere near as apparent in the message diagram, where the hierarchical sequencing is specified only with annotation, not in the form of the diagram itself.

A REVISED DEVELOPMENT PROCESS

The development process now used in our projects at USF is similar to that used in Jacobson's Objectory [4], the major difference being the use of the use case structure chart instead of the object interaction diagram in the dynamic model. Early in the process a static object model is designed, as well as a set of use cases. To promote reuse, the static object model is well constructed before use case walk-throughs begin. The product of the use case walk-throughs is a refined static object model and a set of use case structure charts, one for each use case.
The structure charts provide a clear and detailed depiction of the internal activity, including the input port events and event handling, of each use case.

Though the use case structure charts have been found to be extremely useful, they are not used as the sole driving force for a bottom-up implementation, as in traditional structured development. Architecturally, this would be inconvenient if not impossible, since the code of an application is stored in a class-by-class fashion, and each structure chart refers to multiple classes spread over many files. On the other hand, implementation is not based solely on the static object model, because that model is not especially useful for integrating code (integration generally involves methods from separate classes), nor for integration or system testing. Instead, the use case structure charts, which provide a direct mapping to system requirements, are used in concert with the static object model, which maps directly to the code architecture.

Following is an overview of the implementation, testing, and maintenance phases used in USF projects:

Step 1. Distribute the code tasks class by class.

Step 2. Each programmer codes and unit tests all methods of a class that depend on no other classes, i.e., methods that do not call any methods of another class or refer to data members of another class. Such methods include most constructors and destructors, simple access ("Get") and modification ("Set") functions, and calculations involving only data members and non-object-typed parameters. For this task, the use case structure charts are used to determine which methods have dependencies, and either they or the static model are used to determine the methods and signatures of the class.

Step 3. Each programmer codes all methods of a class that are dependent on other classes. Unit testing cannot yet be performed, since the related classes have not yet been published.
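The Step 2 criterion can be made concrete with a small sketch. The Group class below is a hypothetical example (its members are assumptions, not the paper's code); every method depends only on Group's own data members, so it can be coded and unit tested before any other class has been published:

```cpp
#include <cassert>
#include <string>
#include <utility>
#include <vector>

// Hypothetical dependency-free class: no method calls into, or reads
// data members of, any other class, so it qualifies for Step 2.
class Group {
public:
    explicit Group(std::string name) : m_name(std::move(name)) {}

    const std::string& GetName() const { return m_name; }    // simple access
    void SetName(const std::string& n) { m_name = n; }       // simple modification
    void AddMemberId(int id) { m_memberIds.push_back(id); }  // non-object-typed parameter
    std::size_t Size() const { return m_memberIds.size(); }  // calculation on own members

private:
    std::string m_name;
    std::vector<int> m_memberIds;
};
```

A method taking a pointer or reference to another class, by contrast, would be deferred to Step 3.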
The use case structure charts are followed in determining the sub-methods called within each method. During this stage, the programmer may also realize the need to break a method down into components, and will modify the structure chart accordingly.

Step 4. Each programmer publishes his code, and then uses the code published by others to unit test his class(es). This is generally a stage where many changes occur, to code within methods as well as to the interface of each class. When changes are made, the structure charts are referred to and modified first and foremost; the static model is then updated for consistency. Programmers continue to unit test, modify code, and re-publish their classes until each method has been successfully unit tested. But successful unit tests do not mean that the system will behave correctly. As stated in [5], "the usual response from the object-oriented community is that if the units (objects) are carefully defined and tested, any composition will work. This was the hope of information hiding as a decomposition criterion in traditional software development. We know from experience that this fails. We know also that unit testing can never reveal integration-level problems."

Step 5. Integration testing is performed. The structure charts are followed in a bottom-up fashion to test whether the objects are collaborating correctly. When problems are found, notification is sent to the developer of the particular class that requires modification.

Step 6. Integration testing becomes system testing when the root event handler is reached. When a system test does not complete as expected, the structure chart for the use case is analyzed and modified so that it is equivalent to the use case specification.

Step 7. Maintenance: when a change is requested by an end-user, the appropriate use case specification is modified, then the corresponding structure chart is altered to match.
Finally, the object model is updated and a code modification is requested from the programmer(s) of the involved classes.

CONCLUSION

Functional decomposition was the key concept in software design prior to the object "revolution". With the advent of objects, the pendulum swung far to the side of object decomposition, and away from functional decomposition. The objective of this paper was to describe a notation and methodology that re-establishes the importance of decomposing functions.

The use case structure chart is the notation that drives the revised methodology. As an alternative to interaction diagrams, the use case structure chart provides 1) a more comprehensive documentation of a use case walk-through, 2) a clearer depiction of the input port events and object messages that occur for each use case, and 3) direct documentation of the method decomposition trees that exist in an event-driven system. The hierarchical nature of the notation has made it easier to stress and enforce the decomposition of methods into small units. More importantly, the charts have provided direction and order in the implementation and testing process, and they have proven invaluable in maintaining our systems.

The use case structure chart differs from Shlaer and Mellor's class structure chart in that it depicts the dynamic behavior of a system instead of a class, so it is more useful in integrating classes, testing, and maintenance. It also provides constructs for explicitly modeling external events and event-handling methods within the method decomposition tree.

The revised methodology, based on use case structure charts, has significantly improved the development process in various projects at USF, compared to earlier projects based on interaction diagrams.
The process has been used in year-long Senior Project courses, in which teams of 4-6 students develop a medium-sized commercial software system for a "real-world" client, in individual Software Engineering course projects, and in one medium-sized commercial project. The results and comments from students have been extremely encouraging. Though difficult to assess, it appears that the use of structure charts has not reduced the programmers' orientation to objects. This can be attributed to the fact that a rigorous "bottom-up" object analysis is performed early in the process, prior to use case walk-throughs. The fact that a more method-oriented document is produced from the walk-throughs simply has no effect on how objects are designed or reused.