Linear Algebra and its Applications 432 (2010) 2089–2099

Integrating learning theories and application-based modules in teaching linear algebra

William Martin a,∗, Sergio Loch b, Laurel Cooley c, Scott Dexter d, Draga Vidakovic e

a Department of Mathematics and School of Education, 210F Family Life Center, NDSU Department #2625, P.O. Box 6050, Fargo, ND 58105-6050, United States
b Department of Mathematics, Grand View University, 1200 Grandview Avenue, Des Moines, IA 50316, United States
c Department of Mathematics, CUNY Graduate Center and Brooklyn College, 2900 Bedford Avenue, Brooklyn, NY 11210, United States
d Department of Computer and Information Science, CUNY Brooklyn College, 2900 Bedford Avenue, Brooklyn, NY 11210, United States
e Department of Mathematics and Statistics, Georgia State University, University Plaza, Atlanta, GA 30303, United States

ARTICLE INFO

Article history: Received 2 October 2008; accepted 29 August 2009; available online 30 September 2009. Submitted by L. Verde-Star.
AMS classification: Primary 97H60; Secondary 97C30.
Keywords: Linear algebra; Learning theory; Curriculum; Pedagogy; Constructivist theories; APOS (Action–Process–Object–Schema) theoretical framework; Encapsulated process; Thematicized schema; Triad (intra, inter, trans); Genetic decomposition; Vector addition; Matrix; Matrix multiplication; Matrix representation; Basis; Column space; Row space; Null space; Eigenspace; Transformation.

ABSTRACT

The research team of The Linear Algebra Project developed and implemented a curriculum and a pedagogy for parallel courses in (a) linear algebra and (b) learning theory as applied to the study of mathematics with an emphasis on linear algebra. The purpose of the ongoing research, partially funded by the National Science Foundation, is to investigate how the parallel study of learning theories and advanced mathematics influences the development of thinking of individuals in both domains. The researchers found that the particular synergy afforded by the parallel study of math and learning theory promoted, in some students, a rich understanding of both domains that had a mutually reinforcing effect. Furthermore, there is evidence that the deeper insights will contribute to more effective instruction by those who become high school math teachers and, consequently, better learning by their students. The courses developed were appropriate for mathematics majors, pre-service secondary mathematics teachers, and practicing mathematics teachers. The learning seminar focused most heavily on constructivist theories, although it also examined socio-cultural and historical perspectives. A particular theory, Action–Process–Object–Schema (APOS) [10], was emphasized and examined through the lens of studying linear algebra. APOS has been used in a variety of studies focusing on student understanding of undergraduate mathematics. The linear algebra courses include the standard set of undergraduate topics. This paper reports the results of the learning theory seminar and its effects on students who were simultaneously enrolled in linear algebra and students who had previously completed linear algebra, and outlines how prior research has influenced the future direction of the project.

© 2009 Elsevier Inc. All rights reserved. doi:10.1016/j.laa.2009.08.030

The work reported in this paper was partially supported by funding from the National Science Foundation (DUE CCLI 0442574).
∗ Corresponding author. Address: NDSU School of Education, NDSU Department of Mathematics, 210F Family Life Center, NDSU Department #2625, P.O. Box 6050, Fargo, ND 58105-6050, United States. Tel.: +1 701 231 7104; fax: +1 701 231 7416.
E-mail addresses: william.martin@ (W. Martin), sloch@ (S. Loch), LCooley@ (L. Cooley), SDexter@ (S. Dexter), dvidakovic@ (D. Vidakovic).

1. Research rationale

The research team of the
Linear Algebra Project (LAP) developed and implemented a curriculum and a pedagogy for parallel courses in linear algebra and learning theory as applied to the study of mathematics with an emphasis on linear algebra. The purpose of the research, which was partially funded by the National Science Foundation (DUE CCLI 0442574), was to investigate how the parallel study of learning theories and advanced mathematics influences the development of thinking of high school mathematics teachers in both domains. The researchers found that the particular synergy afforded by the parallel study of math and learning theory promoted, in some teachers, a richer understanding of both domains that had a mutually reinforcing effect and affected their thinking about their identities and practices as teachers.

It has been observed that linear algebra courses often are viewed by students as a collection of definitions and procedures to be learned by rote. Scanning the table of contents of many commonly used undergraduate textbooks will provide a common list of terms such as listed here (based on linear algebra texts by Strang [1] and Lang [2]):

Vector space, independence, linear combination, span, basis, subspace, projection, matrix, dimension, linear transformation, rank; kernel, image, inverse, transpose, orthogonal, operator, diagonalization, normal form, eigenvalue, similarity, diagonalize; Gaussian, triangular, Gram–Schmidt, eigenvector, singular value decomposition, LU form, norm, condition, isomorphism, determinant.

This is not something unique to linear algebra – a similar situation holds for many undergraduate mathematics courses. Certainly the authors of undergraduate texts do not share this student view of mathematics. In fact, the variety of ways in which different authors organize their texts reflects the individual ways in which they have conceptualized introductory linear algebra courses. The wide variability that can be seen in a perusal of the many linear algebra texts that are used is a reflection of the many ways that mathematicians
think about linear algebra and their beliefs about how students can come to make sense of the content. Instruction in a course is based on considerations of content, pedagogy, resources (texts and other materials), and beliefs about teaching and learning of mathematics. The interplay of these ideas shaped our research project. We deliberately mention two authors with clearly differing perspectives on an undergraduate linear algebra course: Strang's organization of the material takes an applied or application perspective, while Lang views the material from more of a "pure mathematics" perspective. A review of the wide variety of textbooks to classify and categorize the different views of the subject would reveal a broad variety of perspectives on the teaching of the subject.

We have taken a view that seeks to go beyond the mathematical content to integrate current theoretical perspectives on the teaching and learning of undergraduate mathematics. Our project used integration of mathematical content, applications, and learning theories to provide enhanced learning experiences using rich content, student metacognition, and their own experience and intuition. The project also used co-teaching and collaboration among faculty with expertise in a variety of areas including mathematics, computer science, and mathematics education.

If one moves beyond the organization of the content of textbooks, we find that at their heart they do cover a common core of the key ideas of linear algebra – all including fundamental concepts such as vector space and linear transformation. These observations lead to our key question: "How is one to think about this task of organizing instruction to optimize learning?" In our work we focus on the conception of linear algebra that is developed by the student and its relationship with what we reveal about our own understanding of the subject. It seems that even in cases where researchers consciously
study the teaching and learning of linear algebra (or other mathematics topics), the questions are "What does it mean to understand linear algebra?" and "How do I organize instruction so that students develop that conception as fully as possible?" In broadest terms, our work involves (a) simultaneous study of linear algebra and learning theories, (b) having students connect learning theories to their study of linear algebra, and (c) the use of parallel mathematics and education courses and integrated workshops.

As students simultaneously study mathematics and learning theory related to the study of mathematics, we expect that reflection or metacognition on their own learning will enable them to construct deeper and more meaningful understanding in both domains. We chose linear algebra for several reasons: it has not been the focus of as much instructional research as calculus, it involves abstraction and proof, and it is taken by many students in different programs for a variety of reasons. It seems to us to involve important mathematical content along with rich applications, with abstraction that builds on experience and intuition.

In our pilot study we taught parallel courses: the regular upper division undergraduate linear algebra course and a seminar in learning theories in mathematics education. Early in the project we also organized an intensive three-day workshop for teachers and prospective teachers that included topics in linear algebra and examination of learning theory. In each case (two sets of parallel courses and the workshop) we had students reflect on their learning of linear algebra content and asked them to use their own learning experiences to reflect on the ideas about teaching and learning of mathematics. Students read articles – in the case of the workshop, this reading was in advance of the long weekend session – drawn from mathematics education sources including [3–10].

APOS (Action, Process, Object, Schema) is a theoretical framework that has been used by many researchers who
study the learning of undergraduate and graduate mathematics [10,11]. We include a sketch of the structure of this framework and refer the reader to the literature for more detailed descriptions. More detailed and specific illustrations of its use are widely available [12]. The APOS theoretical framework involves four levels of understanding that can be described for a wide variety of mathematical concepts such as function, vector space, and linear transformation: Action, Process, Object (either an encapsulated process or a thematicized schema), and Schema (intra, inter, trans – the triad stages of schema formation). Genetic decomposition is the analysis of a particular concept in which developing understanding is described as a dynamic process of mental constructions that continually develop, abstract, and enrich the structural organization of an individual's knowledge.

We believe that students' simultaneous study of linear algebra along with theoretical examination of teaching and learning – particularly of what it means to develop conceptual understanding in a domain – will promote learning and understanding in both domains. Fundamentally, this reflects our view that conceptual understanding in any domain involves rich mental connections that link important ideas or facts, increasing the individual's ability to relate new situations and problems to that existing cognitive framework. This view of conceptual understanding of mathematics has been described by various prominent math education researchers such as Hiebert and Carpenter [6] and Hiebert and Lefevre [7].

2. Action–Process–Object–Schema theory (APOS)

APOS theory is a theoretical perspective of learning based on an interpretation of Piaget's constructivism and poses descriptions of mental constructions that may occur in understanding a mathematical concept. These constructions are called Actions, Processes, Objects, and Schemas.

An action is a transformation of a mathematical object
according to an explicit algorithm seen as externally driven. It may be a manipulation of objects or acting upon a memorized fact. When one reflects upon an action, constructing an internal operation for a transformation, the action begins to be interiorized. A process is this internal transformation of an object. Each step may be described or reflected upon without actually performing it. Processes may be transformed through reversal or coordination with other processes.

There are two ways in which an individual may construct an object. A person may reflect on actions applied to a particular process and become aware of the process as a totality. One realizes that transformations (whether actions or processes) can act on the process, and is able to actually construct such transformations. At this point, the individual has reconstructed a process as a cognitive object. In this case we say that the process has been encapsulated into an object. One may also construct a cognitive object by reflecting on a schema, becoming aware of it as a totality. Thus, he or she is able to perform actions on it, and we say the individual has thematized the schema into an object. With an object conception one is able to de-encapsulate that object back into the process from which it came, or, in the case of a thematized schema, unpack it into its various components. Piaget and Garcia [13] indicate that thematization has occurred when there is a change from usage or implicit application to consequent use and conceptualization.

A schema is a collection of actions, processes, objects, and other previously constructed schemata which are coordinated and synthesized to form mathematical structures utilized in problem situations. Objects may be transformed by higher-level actions, leading to new processes, objects, and schemata.
Hence, reconstruction continues in evolving schemata. To illustrate different conceptions of APOS theory, imagine the following 'teaching' scenario. We give students multi-part activities in a technology-supported environment; in particular, we assume students are using Maple in the computer lab. The multi-part activities in Maple, focusing on vectors and operations, begin with a given Maple code and drawing. In the case of scalar multiplication of a vector, students are asked to substitute one parameter in the Maple code, execute the code, and observe what has happened. They are asked to repeat this activity with a different value of the parameter. Then students are asked to predict what will happen in a more general case and to explain their reasoning. Similarly, students may explore addition and subtraction of vectors. In the next part of the activity students might be asked to investigate the commutative property of vector addition.

Based on APOS theory, in the first part of the activity – in which students are asked to perform a certain operation and make observations – our intention is to induce each student's action conception of that concept. By asking students to imagine what will happen if they make a certain change – but do not physically perform that change – we are hoping to induce a somewhat higher level of students' thinking, the process level. In order to predict what will happen, students would have to imagine performing the action based on the actions they performed before (reflective abstraction). Activities designed to explore vector addition properties require students to encapsulate the process of addition of two vectors into an object on which some other action could be performed. For example, in order for a student to conclude that u + v = v + u, he/she must encapsulate a process of adding two vectors u + v into an object (the resulting vector) which can further be compared [action] with another vector representing the addition of v + u.

As with all theories of learning, APOS has a limitation
that researchers may only observe externally what one produces and discusses. While schemata are viewed as dynamic, the task is to attempt to take a snapshot of understanding at a point in time using a genetic decomposition. A genetic decomposition is a description by the researchers of specific mental constructions one may make in understanding a mathematical concept. As with most theories that have restrictions (in economics or physics, for example), it can still be very useful in describing what is observed.

3. Initial research

In our preliminary study we investigated three research questions:

• Do participants make connections between linear algebra content and learning theories?
• Do participants reflect upon their own learning in terms of studied learning theories?
• Do participants connect their study of linear algebra and learning theories to the mathematics content or pedagogy for their mathematics teaching?

In addition to linear algebra course activities designed to engage students in explorations of concepts and discussions about learning theories and connections between the two domains, we had students construct concept maps and describe how they viewed the connections between the two subjects.
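The Maple-based vector activities described in Section 2 are easy to mimic outside Maple. The sketch below is a hypothetical Python analogue (the project itself used Maple; all names here are illustrative), tracing the same progression: execute and observe (action), predict the general pattern (process), and compare u + v with v + u as completed objects.

```python
# Hypothetical Python analogue of the Maple vector activities of Section 2.
# The project itself used Maple; every name here is illustrative only.

def scale(c, v):
    """Scalar multiple c*v, with vectors represented as lists of numbers."""
    return [c * x for x in v]

def add(u, v):
    """Componentwise vector sum u + v."""
    return [a + b for a, b in zip(u, v)]

u, v = [1, 2], [3, -1]

# Action conception: execute the operation and observe the result.
print(scale(2, u))            # each component is doubled

# Process conception: predict, without executing, that scale(c, u)
# multiplies every component by c, for any c.

# Object conception: treat u + v as a single resulting vector and act on
# it, here by comparing it with v + u to test commutativity.
print(add(u, v) == add(v, u))
```

The final comparison is exactly the encapsulation step discussed above: the sums u + v and v + u must be held as objects before they can be compared.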
We found that some participants saw significant connections and were able to apply APOS theory appropriately to their learning of linear algebra. For example, here is a sketch outline of how one participant described the elements of the APOS framework late in the semester. The student showed a reasonable understanding of the theoretical framework and then was able to provide an example from linear algebra to illustrate the model.

The student's description of the elements of APOS:

Action: "Students' approach is to apply 'external' rules to find solutions. The rules are said to be external because students do not have an internalized understanding of the concept or the procedure to find a solution."

Process: "At the process level, students are able to solve problems using an internalized understanding of the algorithm. They do not need to write out an equation or draw a graph of a function, for example. They can look at a problem and understand what is going on and what the solution might look like."

Object level as performing actions on a process: "At the object level, students have an integrated understanding of the processes used to solve problems relating to a particular concept. They understand how a process can be transformed by different actions. They understand how different processes, with regard to a particular mathematical concept, are related. If a problem does not conform to their particular action-level understanding, they can modify the procedures necessary to find a solution."

Schema as a 'set' of knowledge that may be modified: "Schema – At the schema level, students possess a set of knowledge related to a particular concept. They are able to modify this set of knowledge as they gain more experience working with the concept and solving different kinds of problems. They see how the concept is related to other concepts and how processes within the concept relate to each other."

She used the ideas of determinant and basis to illustrate her understanding of the framework.
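The determinant computations that students reflected on have a naturally recursive structure: an n×n determinant is evaluated from (n−1)×(n−1) minors. A minimal Python sketch of cofactor expansion (ours, not part of the course materials) makes that recursion explicit:

```python
def det(m):
    """Determinant by cofactor (Laplace) expansion along the first row.

    The recursion makes explicit that an n-by-n determinant is built
    from determinants of (n-1)-by-(n-1) minors.
    """
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))                   # 1*4 - 2*3 = -2
print(det([[2, 0, 0], [0, 3, 0], [0, 0, 4]]))  # product of the diagonal
```

Seeing the 4×4 determinant as recursively built from 3×3 minors, rather than as a more "difficult" computation, is the distinction this recursive structure highlights.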
(Another student also described how student recognition of the recursive relationship of computations of determinants of different orders corresponded to differing levels of understanding in the APOS framework.)

Action conception of determinant: "A student at the action level can use an algorithm to calculate the determinant of a matrix. At this level (at least for me), the formula was complicated enough that I would always check that the determinant was correct by finding the inverse and multiplying by the original matrix to check the solution."

Process conception of determinant: "The student knows different methods to use to calculate a determinant and can, in some cases, look at a matrix and determine its value without calculations."

Object conception: "At the object level, students see the determinant as a tool for understanding and describing matrices. They understand the implications of the value of the determinant of a matrix as a way to describe a matrix. They can use the determinant of a matrix (equal to or not equal to zero) to describe properties of the elements of a matrix."

Triad development of a schema (intra, inter, trans): "A singular concept – basis. There is a basis for a space. The student can describe a basis without calculation. The student can find different types of bases (column space, row space, null space, eigenspace) and use these values to describe matrices."

The descriptions of components of APOS along with examples illustrate that this student was able to make valid connections between the theoretical framework and the content of linear algebra. While the descriptions may not match those that would be given by scholars using APOS as a research framework, the student does demonstrate a recognition of, and ability to provide examples of, how understanding of linear algebra can be organized conceptually as more than a collection of facts. As would be expected, not all participants showed gains in either
domain. We viewed the results of this study as a proof of concept, since there were some participants who clearly gained from the experience. We also recognized that there were problems associated with the implementation of our plan. To summarize our findings in relation to the research questions:

• Do participants make connections between linear algebra content and learning theories? Yes, to widely varying degrees and levels of sophistication.
• Do participants reflect upon their own learning in terms of studied learning theories? Yes, to the extent possible from their conception of the learning theories and understanding of linear algebra.
• Do participants connect their study of linear algebra and learning theories to the mathematics content or pedagogy for their mathematics teaching? Participants describe how their experiences will shape their own teaching, but we did not visit their classes.

Of the 11 students at one site who took the parallel courses, we identified three in our case studies (a detailed report of that study is presently under review) who demonstrated a significant ability to connect learning theories with their own learning of linear algebra. At another site, three teachers pursuing math education graduate studies were able to varying degrees to make these connections – two demonstrated strong ability to relate content to APOS and described important ways that the experience had affected their own thoughts about teaching mathematics. Participants in the workshop produced richer concept maps of linear algebra topics by the end of the weekend.

Still, there were participants who showed little ability to connect material from linear algebra and APOS. A common misunderstanding of the APOS framework was that increasing levels corresponded to increasing difficulty or complexity. For example, a student might suggest that computing the determinant of a 2×2 matrix was at the action level, while computation of a determinant in the 4×4 case was at the object level because of the increased
complexity of the computations. (Contrast this with the previously mentioned student who observed that the object conception was necessary to recognize that higher-dimension determinants are computed recursively from lower-dimension determinants.)

We faced more significant problems than the extent to which students developed an understanding of the ideas that were presented. We found it very difficult to get students – especially undergraduates – to agree to take an additional course while studying linear algebra. Most of the participants in our pilot projects were either mathematics teachers or prospective mathematics teachers. Other students simply do not have the time in their schedules to pursue an elective seminar not directly related to their own area of interest. This problem led us to a new project in which we plan to integrate the material on learning theory – perhaps implicitly for the students – in the linear algebra course. Our focus will be on working with faculty teaching the course to ensure that they understand the theory and are able to help ensure that course activities reflect these ideas about learning.

4. Continuing research

Our current Linear Algebra in New Environments (LINE) project focuses on having faculty work collaboratively to develop a series of modules that use applications to help students develop conceptual understanding of key linear algebra concepts. The project has three organizing concepts:

• Promote enhanced learning of linear algebra through integrated study of mathematical content, applications, and the learning process.
• Increase faculty understanding and application of mathematical learning theories in teaching linear algebra.
• Promote and support improved instruction through co-teaching and collaboration among faculty with expertise in a variety of areas, such as education and STEM disciplines.

For example, computer and video graphics involve linear transformations. Students
will complete a series of activities that use manipulation of graphical images to illustrate and help them move from action and process conceptions of linear transformations to object conceptions and the development of a linear transformation schema. Some of these ideas were inspired by material in Judith Cederberg's geometry text [14] and some software developed by David Meel, both using matrix representations of geometric linear transformations. The modules will have these characteristics:

• Embed learning theory in the linear algebra course for both the instructor and the students.
• Use applied modules to illustrate the organization of linear algebra concepts.
• Applications draw on student intuitions to aid their mental constructions and organization of knowledge.
• Consciously include metacognition in the course.

To illustrate, we sketch the outline of a possible series of activities in a module on geometric linear transformations. The faculty team – including individuals with expertise in mathematics, education, and computer science – will develop a series of modules to engage students in activities that include reflection and metacognition about their learning of linear algebra. (The Appendix contains a more detailed description of a module that includes these activities.)

Task 1: Use Photoshop or GIMP to manipulate images (rotate, scale, flip, shear tools). Describe and reflect on the processes. This activity uses an ACTION conception of transformation.
Task 2: Devise rules to map one vector to another. Describe and reflect on the process. This activity involves both ACTION and PROCESS conceptions.
Task 3: Use a matrix representation to map vectors. This requires both PROCESS and OBJECT conceptions.
Task 4: Compare the transform of a sum with the sum of transforms for the matrices in Task 3, as compared to other, non-linear functions. This involves ACTION, PROCESS, and OBJECT conceptions.
Task 5: Compare the pre-image and transformed image of rectangles in the plane – identify the software tool that was used (from Task 1) and how it might be
represented in matrix form. This requires OBJECT and SCHEMA conceptions.

Education, mathematics, and computer science faculty participating in this project will work prior to the semester to gain familiarity with the APOS framework and to identify and sketch potential modules for the linear algebra course. During the semester, collaborative teams of faculty continue to develop and refine modules that reflect important concepts, interesting applications, and learning theory: modules will present activities that help students develop important concepts rather than simply presenting important concepts for students to absorb.

The researchers will study the impact of project activities on student learning: we expect that students will be able to describe their knowledge of linear algebra in a more conceptual (structured) way during and after the course. We also will study the impact of the project on faculty thinking about teaching and learning: as a result of this work, we expect that faculty will be able to describe both the important concepts of linear algebra and how those concepts are mentally developed and organized by students. Finally, we will study the impact on instructional practice: participating faculty should continue to use instructional practices that focus both on important content and on how students develop their understanding of that content.

5. Summary

Our preliminary study demonstrated that prospective and practicing mathematics teachers were able to make connections between their concurrent study of linear algebra and of learning theories relating to mathematics education, specifically the APOS theoretical framework. In cases where the participants developed understanding in both domains, it was apparent that this connected learning strengthened understanding in both areas. Unfortunately, we were unable to encourage undergraduate students to consider studying both linear algebra and learning theory in separate, parallel courses.
Consequently, we developed a new strategy that embeds the learning theory in the linear algebra course.
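The geometric-transformation module sketched in Section 4 can itself be condensed into a few lines of code. The following Python sketch is illustrative only (the module uses image tools and matrix software, not this code): it represents one of the Task 1 image tools, a shear, as a 2×2 matrix, and runs the Task 4 comparison of the transform of a sum against the sum of the transforms, contrasting it with a non-linear map.

```python
import math

def apply(t, v):
    """Apply a 2x2 matrix t (a list of rows) to a 2-component vector v."""
    return [t[0][0] * v[0] + t[0][1] * v[1],
            t[1][0] * v[0] + t[1][1] * v[1]]

# Matrix representations of two Task 1 image tools: rotate and shear.
theta = math.pi / 2
rotate = [[math.cos(theta), -math.sin(theta)],
          [math.sin(theta),  math.cos(theta)]]
shear = [[1.0, 0.5],
         [0.0, 1.0]]

u, v = [1.0, 1.0], [2.0, 0.0]
s = [u[0] + v[0], u[1] + v[1]]

# Task 4: for a matrix map, the transform of the sum equals the sum of
# the transforms ...
print(apply(shear, s) ==
      [a + b for a, b in zip(apply(shear, u), apply(shear, v))])

# ... but for a non-linear map such as f(x, y) = (x**2, y) it does not.
f = lambda w: [w[0] ** 2, w[1]]
print(f(s) == [a + b for a, b in zip(f(u), f(v))])
```

Recognizing that the same shear tool from Task 1 is "the same thing as" the matrix above is the OBJECT/SCHEMA step targeted by Task 5.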
A Calculus of Higher-Order Parameterization for Algebraic Specifications

María Victoria Cengarle, Martin Wirsing
Institut für Informatik, Ludwig-Maximilians-Universität München
Leopoldstr. 11b, 80802 Munich, Germany
{cengarle,wirsing}@informatik.uni-muenchen.de

Abstract

A specification language is presented which provides three specification-building operators: amalgamated union, renaming, and restriction. The language is enhanced with parameterization over higher-order variables based on the simply typed lambda calculus. Context dependencies that ensure the well-definedness of a parameterized specification are defined over a calculus of requirements and can be syntactically derived. A contextual proof system for parameterized specifications is also presented, which is correct and relatively complete.

Keywords: algebraic specification, specification-building operator, proof, lambda calculus.

1 Introduction

Algebraic specifications are a formalization of the abstract data type concept, and constitute a well-developed field in computer science. They concentrate on the functionality of a system and provide a systematic means for describing its properties. Basically, a specification is composed of a signature and restrictions associated with it. A signature consists of name declarations for data sets and data operations, and possibly also for data relations. The associated restrictions are formulas over the signature; depending on the formalism chosen, they can be e.g. equations or first-order formulas with equality. These restrictions must be fulfilled by any implementation of the specification.

These formalisms are convenient for programming in the small. For programming in the large, specifications may become quite unmanageable: there is a natural need to divide a task of significant size among different designers or programmers, who then have to synchronize their efforts when considering, for instance, names for procedures, side-effects, etc. For attacking problems of a larger nature, specification-building
operators are usually put at the designer's disposal. These allow the building of new, larger specifications out of smaller ones.

In the present work we introduce a small specification language equipped with three specification-building operators: amalgamated union, renaming and restriction (also called export), based on ASL (cf. [10]). The semantics of a simple specification, consisting of signature and axioms, is loosely defined as the collection of all structures satisfying the axioms. The semantics of a specification expression is a function of the semantics of the arguments of the specification-building operator, and is defined in terms of reducts of structures.

We enhance this language with parameterization based on the simply typed λ-calculus. Pure calculi are defined as just a set of typed variables closed under abstraction and application. In the calculus we define, we have as constants of basic type the signature and specification expressions. Further terms are defined by applying the signature- and specification-building operators to terms of the corresponding basic type.

Our aim is to devise a powerful parameterization mechanism, whose novelty is twofold: syntactic type inference and derivation, and semantic type definition and derivation. We approach this goal step by step. We begin by defining a function returning the syntactic requirements to be fulfilled by a simply typed term in order to be defined, and a system for the derivation of syntactic well-definedness of terms. On top of this system, we define a proof system that allows us to derive the theorems valid in a parameterized specification. In this way, we can provide means for semantic types imposed on the parameters of parameterized specifications (as an example, we may require that the argument satisfy a number of sentences). Moreover, the semantic types can then be inferred using those systems combined.
In the present paper we concentrate on syntactic type inference and derivation, and on proofs. Let us explain these two points in slightly more detail.

The present approach, similar to the one presented in [5], corresponds to "parameterized specification" and not to "specification of parameterized algebras" as in Extended ML, as they were defined in [8]. Moreover, parameterized specifications map specification expressions into specification expressions; this means they do not define specification morphisms. Another approach to parameterization is the so-called pushout parameterization (see [4]), which can be encompassed by the β-reduction of the λ-calculus in combination with further specification-building operators; see also [12].

Parameterization in our framework permits the definition of both parameterized signatures and parameterized specifications. As opposed to what can be found in e.g. [8,12], there are no restrictions on the syntax of parameters. In those works an actual parameter has to have a particular signature; in [9] this is relaxed by stating that an actual parameter has to have a supersignature of the parameter restriction. Even this latter definition is unsatisfactory, since there may be situations in which one needs precisely the opposite, namely an actual signature contained in a parameter restriction. In the present proposal, any specification of the proper (simple) type can be passed to a parameterized specification as argument. That means there is no need for the designer to know a priori the signature of all the actual parameters to which his/her parameterized specification will be applied.

Obviously, simple types can lead to inconsistent definitions, since not every specification of the corresponding simple type can be eligible for the application of a parameterized specification. We thus define a requirement function evaluating signature and specification terms to terms of a requirement language. A requirement expresses the conditions that must be met in order for a term to
be defined. Basically, a requirement is a finite set of expressions of the form K1 ⊑ K2, whose meaning is: the signature denoted by K1 is a subsignature of the one denoted by K2. Thus, requirements can demand that the argument have precisely a signature, or a supersignature, or even a subsignature of a given one. In order to keep track of bound variables, requirements are closed under typed abstraction and application.

Semantically, we prove that the requirement function returns necessary and sufficient conditions for the given term to be defined. Syntactically, we develop a system of axioms and inference rules by means of which the requirements may be derived. These derivations are relative to a context assuming values for variables. The system is correct: if a requirement is derivable, then it is valid in any environment consistent with the context assumed. Moreover, it is complete in relation to denotable environments (environments assigning denotable values to each variable). As a consequence, the requirements associated with a term are derivable iff the term is a well-defined signature or specification in any denotable environment compatible with the context of derivation.

We extend the notions of signature and of model of a specification over parameterized specifications. We combine the system for the derivation of requirements and the structured proof system for specification expressions (see [13]) to obtain a contextual proof system for parameterized specifications. Like requirement derivation, the proofs derived are relative to a context. The contextual proof system is correct and, in the presence of the interpolation property (see [2]), complete with respect to denotable environments.

2 General Definitions

Prior to the definition of parameterization, we introduce the non-parameterized fragment of the specification language. We define the three basic signature- and specification-building operators of renaming, restriction and amalgamation. These operators together with the simple signatures and
specifications define the signature resp. specification expressions.

2.1 Signatures and Signature Structures

In the present subsection we define signatures and signature expressions together with the subsignature relation, signature morphisms and signature structures. Specifications are defined in the next subsection.

Definition 2.1.1 (Signature) A signature Σ is a triple ⟨S, F, P⟩ of a set S of sorts, a set F of typed function letters, and a set P of typed predicate letters. We write sorts(Σ) to denote S, opns(Σ) to denote F, and preds(Σ) to denote P. We write s, s1, s′ for elements of S; f, f1, f′ for function letters in F; and q, q1, q′ for predicate letters in P. We write s1 × ··· × sn → s for function types, and s1 × ··· × sn for predicate types. By abuse of notation we write s, f, q ∈ Σ. A generic element of Σ is denoted by µ, and we write µ ∈ Σ.

For the purposes of the present article, there is no necessity to restrict ourselves to unique typing. That means there may be a function letter with types s1 × ··· × sn → s and s′1 × ··· × s′m → s′ where n ≠ m or si ≠ s′i (1 ≤ i ≤ n), and the same for predicate letters. In other words, letters in F and in P may be overloaded.

Definition 2.1.2 (Subsignature Relation) A signature Σ is said to be a subsignature of Σ′, written Σ ⊆ Σ′, if sorts(Σ) ⊆ sorts(Σ′), opns(Σ) ⊆ opns(Σ′) and preds(Σ) ⊆ preds(Σ′).

Σ contains declared function and predicate letters with their types attached. In the presence of overloading, if for instance Σ ⊆ Σ′, f : s1 × s2 → s3 ∈ opns(Σ) and f : s1 × s4 → s5 ∈ opns(Σ), then necessarily s1, s2, s3, s4, s5 ∈ sorts(Σ′), f : s1 × s2 → s3 ∈ opns(Σ′) and f : s1 × s4 → s5 ∈ opns(Σ′).

Definition 2.1.3 (Signature Morphism) Given signatures Σ and Σ′, a signature morphism σ : Σ → Σ′ is a triple ⟨σS, σF, σP⟩ of maps σS : sorts(Σ) → sorts(Σ′), σF : opns(Σ) → opns(Σ′), σP : preds(Σ) → preds(Σ′), such that the typing is respected. If σS, σF and σP are all bijective, σ is called a signature isomorphism. By abuse of notation we write σ(s), σ(f), σ(q) ∈ Σ′. As usual, σ, σ1, σ′ denote signature morphisms and r, r1, r′ denote signature isomorphisms (also called renamings).

For any two signatures Σ and Σ′, whenever Σ is
a subsignature of Σ′ (i.e. Σ ⊆ Σ′), there is the so-called embedding morphism, which is denoted by inΣΣ′ and defined by inΣΣ′(µ) = µ for any µ ∈ Σ.

Now we introduce the language of signature expressions and the subsignature relation extended over such expressions, inductively defined by 2.1.4 through 2.1.7. Signatures and signature expressions are written Σ, Σ1, Σ′. A signature expression Σ denotes the signature nf(Σ), and the extended subsignature relation is correct: given signature expressions Σ1 and Σ2, if Σ1 and Σ2 are in the relation, then nf(Σ1) ⊆ nf(Σ2). The signature morphisms between signature expressions are thus defined to be the signature morphisms between the signatures denoted by those expressions.

Definition 2.1.4 (Simple Signature) A signature Σ = ⟨S, F, P⟩ is a signature expression, also called a simple signature, and it denotes nf(Σ) = Σ.

Definition 2.1.5 (Restriction) Let Σ0 and Σ1 be signature expressions such that Σ0 is a subsignature of Σ1.
1. Σ1|Σ0 is a signature expression called the restriction of Σ1 to Σ0, and it denotes nf(Σ1|Σ0) = nf(Σ0).
2. (a) A signature expression Σ is a subsignature of Σ1|Σ0 if Σ is a subsignature of Σ0.
   (b) Σ1|Σ0 is a subsignature of a signature expression Σ if Σ0 is a subsignature of Σ.
The elements in Σ1 that do not belong to Σ0 are called the hidden symbols of Σ1|Σ0; the elements in Σ0 are called the export symbols of Σ1|Σ0. By abuse of notation we write µ ∈ Σ1|Σ0, sorts(Σ1|Σ0), etc. for µ ∈ nf(Σ1|Σ0), sorts(nf(Σ1|Σ0)), etc.

Definition 2.1.6 (Renaming) Let Σ0, Σ1 and Σ2 be signature expressions such that Σ0 is a subsignature of Σ1, and let r be a signature isomorphism from nf(Σ1) to nf(Σ2).
1. r·Σ0 is a signature expression, called the renaming via r of Σ0, and it denotes nf(r·Σ0) = r(nf(Σ0)).
2. (a) A signature expression Σ is a subsignature of r·Σ0 if Σ is a subsignature of r(nf(Σ0)).
   (b) r·Σ0 is a subsignature of a signature expression Σ if r(nf(Σ0)) is a subsignature of Σ.
By abuse of notation we write µ ∈ r·Σ0, sorts(r·Σ0), etc. for µ ∈ nf(r·Σ0), sorts(nf(r·Σ0)), etc.

The signature-building operator to be introduced now allows the combination of two signatures. Let
us motivate the intuition of the reader with an example. Suppose the task of defining a stack of natural numbers is divided into two: the definition of natural numbers and the definition of stacks. The corresponding signatures are:

SG-NAT = ⟨{Nat}, {zero : → Nat, succ : Nat → Nat}, {iszero : Nat}⟩
SG-STACK = ⟨{Nat, Stack}, {emptystack : → Stack, push : Nat × Stack → Stack, pop : Stack → Stack, top : Stack → Nat}, {isempty : Stack}⟩

We can combine both signatures using a union operator and write SG-NATSTACK = SG-NAT + SG-STACK. Given that the only symbol common to both SG-NAT and SG-STACK is Nat, with a plain union we would reach the desired result. Nevertheless, if the task is accomplished by two different programmers, normally they would agree just on the symbols the two signatures must have in common, and not on the rest of the symbols. In larger examples it can be very tedious to revise both signatures for the purpose of detecting possible (unwanted) name clashes in order to perform a renaming before defining the union. Therefore, we define the union of signatures to be an amalgamation, providing a third operand with the common symbols that are to be identified by the union. If there are further common symbols that are not present in this third argument of the union, then they are duplicated so as to avoid confusion among them. In our example, if we abbreviate to {Nat} the signature ⟨{Nat}, {}, {}⟩, then SG-NATSTACK = SG-NAT +{Nat} SG-STACK.

Definition 2.1.7 (Amalgamation) Let Σ0, Σ1, and Σ2 be signature expressions such that Σ0 is a subsignature both of Σ1 and of Σ2.
1. Σ1 +Σ0 Σ2 is a signature expression, called the union of Σ1 and Σ2 with shared subsignature Σ0 (in the literature also called the amalgamated union), and it denotes the following signature:

nf(Σ1 +Σ0 Σ2) = ⟨ (sorts(nf(Σ1)) \ sorts(nf(Σ0))) ⊎ (sorts(nf(Σ2)) \ sorts(nf(Σ0))) ⊎ sorts(nf(Σ0)),
  (opns(nf(Σ1)) \ opns(nf(Σ0))) ⊎ (opns(nf(Σ2)) \ opns(nf(Σ0))) ⊎ opns(nf(Σ0)),
  (preds(nf(Σ1)) \ preds(nf(Σ0))) ⊎ (preds(nf(Σ2)) \ preds(nf(Σ0))) ⊎ preds(nf(Σ0)) ⟩

where ⊎ denotes disjoint union of sets. We
write µl ∈ nf(Σ1 +Σ0 Σ2) if µ ∈ nf(Σ1) and µ ∉ nf(Σ0); µr ∈ nf(Σ1 +Σ0 Σ2) if µ ∈ nf(Σ2) and µ ∉ nf(Σ0); and µc ∈ nf(Σ1 +Σ0 Σ2) if µ ∈ nf(Σ0); where l stands for "left," r for "right," and c for "center."
2. We define morphisms inl(Σ1,Σ0) : nf(Σ1) → nf(Σ1 +Σ0 Σ2) and inr(Σ0,Σ2) : nf(Σ2) → nf(Σ1 +Σ0 Σ2) by

inl(Σ1,Σ0)(µ) = µc if µ ∈ nf(Σ0), and µl otherwise;
inr(Σ0,Σ2)(µ) = µc if µ ∈ nf(Σ0), and µr otherwise.

(a) A signature expression Σ is a subsignature of Σ1 +Σ0 Σ2 if
   i. Σ is a subsignature of inl(Σ1,Σ0)(nf(Σ1)), or
   ii. Σ is a subsignature of inr(Σ0,Σ2)(nf(Σ2)).
(b) Σ1 +Σ0 Σ2 is a subsignature of a signature expression Σ if inl(Σ1,Σ0)(nf(Σ1)) is a subsignature of Σ and inr(Σ0,Σ2)(nf(Σ2)) is a subsignature of Σ.

We omit (Σ1,Σ0) and (Σ0,Σ2) when they are obvious from the context. By abuse of notation we write µ ∈ Σ1 +Σ0 Σ2, sorts(Σ1 +Σ0 Σ2), etc. for µ ∈ nf(Σ1 +Σ0 Σ2), sorts(nf(Σ1 +Σ0 Σ2)), etc.

In this definition, note that both inl(nf(Σ1)) and inr(nf(Σ2)) are simple signatures (moreover, they are subsignatures of nf(Σ1 +Σ0 Σ2), given that the extended subsignature relation is correct; see below). That means the subsignature relation extension of Definition 2.1.7 is well-defined.

Definition 2.1.8 A signature expression is a simple signature or any expression obtained from simple signatures by use of the signature-building operators amalgamated union, restriction and renaming. The collection of all signature expressions is denoted by CSign.

The extended subsignature relation is written ⊆, too. It can easily be proved that this relation is correct: if two signature expressions Σ1 and Σ2 are such that Σ1 ⊆ Σ2 (i.e. they are in the extended subsignature relation), and denote nf(Σ1) resp. nf(Σ2), then sorts(nf(Σ1)) ⊆ sorts(nf(Σ2)), opns(nf(Σ1)) ⊆ opns(nf(Σ2)), and preds(nf(Σ1)) ⊆ preds(nf(Σ2)). Moreover, if nf(Σ1) ⊆ nf(Σ2) then Σ1 ⊆ Σ2.
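As an informal illustration (a sketch of our own, not part of the paper), the subsignature test of Definition 2.1.2 and the amalgamated union of Definition 2.1.7 can be mimicked in Python, with signatures as triples of symbol sets and the left/right/center tagging standing in for the disjoint union; types of function and predicate letters are elided here.

```python
# A minimal sketch (assumptions: untyped symbol names stand in for typed
# letters) of simple signatures, the subsignature relation, and amalgamation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Sig:
    sorts: frozenset
    opns: frozenset
    preds: frozenset

def subsig(s1: Sig, s2: Sig) -> bool:
    """Componentwise inclusion, as in Definition 2.1.2."""
    return (s1.sorts <= s2.sorts and s1.opns <= s2.opns
            and s1.preds <= s2.preds)

def amalgamate(s1: Sig, s0: Sig, s2: Sig) -> Sig:
    """Sigma1 +_Sigma0 Sigma2: symbols in the shared part Sigma0 are
    identified (tag 'c'); all other symbols are kept apart with 'l'/'r'
    tags, so accidental clashes outside Sigma0 are duplicated, not merged."""
    assert subsig(s0, s1) and subsig(s0, s2), "Sigma0 must be a shared subsignature"
    def component(a, b, shared):
        return (frozenset((x, 'l') for x in a - shared)
                | frozenset((x, 'r') for x in b - shared)
                | frozenset((x, 'c') for x in shared))
    return Sig(component(s1.sorts, s2.sorts, s0.sorts),
               component(s1.opns, s2.opns, s0.opns),
               component(s1.preds, s2.preds, s0.preds))

# The SG-NAT / SG-STACK example: only Nat is shared.
SG_NAT = Sig(frozenset({'Nat'}), frozenset({'zero', 'succ'}),
             frozenset({'iszero'}))
SG_STACK = Sig(frozenset({'Nat', 'Stack'}),
               frozenset({'emptystack', 'push', 'pop', 'top'}),
               frozenset({'isempty'}))
SHARED = Sig(frozenset({'Nat'}), frozenset(), frozenset())

SG_NATSTACK = amalgamate(SG_NAT, SHARED, SG_STACK)
assert ('Nat', 'c') in SG_NATSTACK.sorts    # identified via the shared part
assert ('Stack', 'r') in SG_NATSTACK.sorts  # contributed by SG-STACK only
```

Had Nat not been listed in the shared argument, the sketch would have produced two tagged copies of it, which is exactly the duplication behaviour the definition prescribes for unshared name clashes.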
The inductive definition is useful in Section 4, when we derive this relation.

The signature morphisms between signature expressions are the signature morphisms between the simple signatures denoted by those expressions. Embedding, too, extends over signature expressions in the obvious way.

Signatures are interpreted as structures, in which sorts are assigned sets, typed functions are assigned functions on the corresponding domains, and typed predicates are assigned subsets of the corresponding Cartesian products. Depending on the intended meaning of signatures in general (and also on the available tools for e.g. performing proofs, see Subsection 2.2), the functions interpreting typed function letters could also be partial.

Definition 2.1.9 (Structure) Given a signature Σ = ⟨S, F, P⟩, a Σ-structure A consists of a non-empty carrier set A_s for each s ∈ sorts(Σ), a [partial] function f^A : A_{s1} × ··· × A_{sn} → A_s for each f ∈ F with a type s1 × ··· × sn → s, and a relation q^A ⊆ A_{s1} × ··· × A_{sn} for each q ∈ P with a type s1 × ··· × sn. The collection of all Σ-structures is denoted by CStrs(Σ).

The structures associated with a signature expression Σ are the structures associated with nf(Σ); the collection of all Σ-structures is likewise denoted by CStrs(Σ).
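To make Definition 2.1.9 concrete, here is a toy Σ-structure for the SG-NAT signature of the earlier example, sketched in Python under our own assumptions: a finite carrier stands in for the natural numbers, and succ is truncated at the top element so that it stays total on the carrier (a modelling choice, not part of the paper).

```python
# A toy SG-NAT structure: the sort Nat gets a carrier set, each function
# letter gets a (here total) function on it, and the predicate letter
# iszero gets a subset of the carrier.
carrier_nat = set(range(10))

structure = {
    'Nat': carrier_nat,
    'zero': lambda: 0,
    'succ': lambda n: min(n + 1, 9),  # truncated so the function is total on the carrier
    'iszero': {0},                    # interpretation of the predicate as a subset
}

# This structure satisfies the facts one would expect of natural numbers:
assert structure['zero']() in structure['iszero']            # iszero(zero)
assert all(structure['succ'](n) not in structure['iszero']   # not iszero(succ(n))
           for n in carrier_nat)
```

A partial interpretation of a function letter, as the bracketed "[partial]" in the definition allows, would simply be a Python function that is undefined (raises) on part of the carrier.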
Signature morphisms provide a means for changing the domain of interpretation without changing the interpretation itself. The interpretation obtained in this way is called a reduct, and is formally defined as follows.

Definition 2.1.10 (Reduct) Given signature expressions Σ and Σ′, a signature morphism σ : Σ → Σ′ defines a mapping CStrs(σ) : CStrs(Σ′) → CStrs(Σ), called the σ-reduct, in the following way. Given a Σ′-structure A′ in CStrs(Σ′), CStrs(σ)(A′) = A is a Σ-structure defined by
1. A_s = A′_{σ(s)} for all s ∈ sorts(Σ),
2. f^A : A_{s1} × ··· × A_{sn} → A_s is σ(f)^{A′} : A′_{σ(s1)} × ··· × A′_{σ(sn)} → A′_{σ(s)} for all f ∈ opns(Σ) with a type s1 × ··· × sn → s,
3. q^A ⊆ A_{s1} × ··· × A_{sn} is σ(q)^{A′} ⊆ A′_{σ(s1)} × ··· × A′_{σ(sn)} for all q ∈ preds(Σ) with a type s1 × ··· × sn.

These are the definitions of signature expression and of signature structure that we need in our framework. There are further results concerning e.g. the categorical aspect of signature expressions and morphisms; for details the reader may consult [9,12].

2.2 Specifications

We first introduce simple specifications, and then structured ones as the closure of the former under application of the specification-building operators. For each specification, structured or not, we define two functions returning the signature (a signature expression) of the specification and its class of models, respectively.

As usual, a simple specification is a signature together with a set of axioms. A signature defines a language of sorted terms and of sorted atomic predicate formulas; the language associated with a signature expression is thus the language defined by the simple signature the expression denotes. The definitions of well-formed formula, and therefore of ground formula and sentence, are omitted here. The axioms of a specification can be equations, Horn clauses, first-order formulas, etc. A decision is normally taken according to the expressiveness one needs (if one aims at issues like e.g. errors or partial operations) and the tools one has for carrying out proofs.

Definition 2.2.1 (Simple Specification) A simple specification is a pair
⟨Σ, E⟩ where Σ is a signature expression and E is a set of Σ-sentences.

Between Σ-structures and Σ-formulas there is usually a satisfaction relation, written A |= ϕ, typically defined inductively on the structure of ϕ. We assume such a definition is given for the approach to formulas chosen.

Definition 2.2.2 (Specification Expression) The collection CSpec of all specification expressions, together with the operations sig : CSpec → CSign and Mod : CSpec → {C ⊆ CStrs(Σ) : Σ ∈ CSign}, is inductively defined as follows.
1. A simple specification ⟨Σ, E⟩ is a specification expression.
   sig(⟨Σ, E⟩) = Σ
   Mod(⟨Σ, E⟩) = {A ∈ CStrs(Σ) : A |= ϕ for all ϕ ∈ E}
   where "|=" is a satisfaction relation between structures and sentences.
2. If SP1 and SP2 are specification expressions, and Σ is a signature expression such that Σ ⊆ sig(SP1) and Σ ⊆ sig(SP2), then SP1 +Σ SP2 is a specification expression.
   sig(SP1 +Σ SP2) = sig(SP1) +Σ sig(SP2)
   Mod(SP1 +Σ SP2) = {A ∈ CStrs(sig(SP1) +Σ sig(SP2)) : CStrs(inl)(A) ∈ Mod(SP1) and CStrs(inr)(A) ∈ Mod(SP2)}
3. If SP is a specification expression and Σ is a signature expression such that Σ ⊆ sig(SP), then SP|Σ is a specification expression.
   sig(SP|Σ) = sig(SP)|Σ
   Mod(SP|Σ) = CStrs(inΣ sig(SP))(Mod(SP))
4. If SP is a specification expression and r is a renaming such that sig(SP) ⊆ dom(r), then r·SP is a specification expression.
   sig(r·SP) = r·sig(SP)
   Mod(r·SP) = CStrs(r⁻¹)(Mod(SP))

Usually structures in Mod(SP|Σ) and in Mod(r·SP) are also denoted by A|Σ and r(A), respectively, if A ∈ Mod(SP).

For this definition to make sense, it is necessary that the logic behind the relation |= be an institution (cf. [9]). The main characteristic of an institution is the so-called satisfaction condition: CStrs(σ)(A′) |= ϕ ⟺ A′ |= σ(ϕ) for any A′ ∈ CStrs(Σ′) and any Σ-wff ϕ, where σ : Σ → Σ′ is a signature morphism; see [6]. There may otherwise be some difficulties with the definition of the function Mod and the intuitive semantics of an expression. So, for example, in the amalgamated union, and choosing a certain institution-dependent logic, we can find a pathological example in which models of two simple specifications are
combined in a sum in such a way that the axioms of both summands are contradicted; see [3].

Using the signatures SG-NAT and SG-STACK, and choosing first-order logic with equality, we can specify stacks of natural numbers as follows:

SP-NAT = ⟨SG-NAT, AXS-NAT⟩
  where AXS-NAT = {iszero(zero), (∀n : Nat)(zero ≠ succ(n) ∧ ¬iszero(succ(n)))}
SP-STACK = ⟨SG-STACK, AXS-STACK⟩
  where AXS-STACK = {isempty(emptystack), (∀s : Stack)(∀n : Nat)(¬isempty(push(n,s)) ∧ top(push(n,s)) = n ∧ pop(push(n,s)) = s)}
SP-NATSTACK = SP-NAT +{Nat} SP-STACK
  where {Nat} abbreviates ⟨{Nat}, {}, {}⟩.

In this example, observe that sig(SP-NATSTACK) = SG-NAT +{Nat} SG-STACK = SG-NATSTACK.

The function sig is extended over formulas. An explicit formulation of this extension can be given on top of (and inductively on) a definition of well-formed formula. We let sig(ϕ) denote the least signature containing all symbols of ϕ (which is a simple signature, i.e. with no occurrence of any signature-building operator), and define sig(E) = ⋃_{ϕ∈E} sig(ϕ).

We write Mod(SP) |= ϕ if A |= ϕ for every A ∈ Mod(SP). Proofs can be performed in the framework of structured specifications. In [13] one can find a proof system Πs, based on a sound proof system Π for the underlying logic, such that if SP ⊢Πs ϕ then Mod(SP) |= ϕ; that is, Πs is correct. Πs is also structured, in the sense that there is an inference rule for each specification-building operator, and hence supports the reuse of proofs. Πs is moreover complete, i.e. if Mod(SP) |= ϕ then SP ⊢Πs ϕ, provided Π is complete and in the presence of the interpolation property. The role of the interpolation property can be informally explained as follows: when one wants to prove that SP1 + SP2 ⊢Πs ϕ (forgetting for an instant the labels introduced by amalgamation), then ϕ has to be split into two formulas in order for the rules associated with the sum to apply. That is, one has to be able to inspect SP1 and SP2 separately. This can be put in equivalence with interpolation (see also [2]).

Here, again, there are the definition of specification morphism and a category of specifications. For the
aim of the present work we just need Definitions 2.2.1 and 2.2.2, and an institution-independent semantics.

3 Parameterized Signatures and Specifications

In the present section we formally define the syntax and semantics of parameterized signatures and specifications. There are many practical examples that motivate parameterization: typically, storing structures like stacks, lists and queues. In any of these examples, the argument has to supply a sort whose elements are to be stored using those structures. That is, the argument needs a minimum signature. There are other examples in which an argument may have up to a maximum signature. Consider a database SP supporting views and access restrictions depending on the hierarchy of the user. Whenever a new user is given access to the database, a view VIEW(y) := SP|y is created, which can have at most as many symbols as the database itself, i.e. the parameter variable y of a parameterized view VIEW(y) is a signature such that y ⊆ sig(SP). Because of these considerations, we do not require of a parameter that it have or contain a particular signature, as in e.g. [8,9,12].

We define a language of simply typed λ-terms. There are terms denoting parameterized signatures and terms denoting parameterized specifications. Signature terms are the signatures defined in 2.1 (the so-called constants) and signature variables; this set is closed under abstraction and application. Specification terms are the specifications of 2.2 over signature terms (that is, not only over signature constants but over any signature term of the appropriate type) and specification variables; this set is likewise closed under abstraction and application.

The set of specification terms is defined using the set of signature terms. From specification terms to signature terms we define a term constructor sigλ that, when applied to specification constants, is interpreted as the function sig defined for specifications in 2.2. This means signature terms are obtained using the function sigλ on specification
terms, and thus the set of signature terms is also defined using the set of specification terms. In other words, both sets are defined by mutual recursion.

3.1 Syntax of Terms

We first define signature and specification types, and afterwards signature and specification terms. From specification types and terms to signature types and terms we have a function sigλ returning, in the term base case, the signature of a given specification term. In general, if a specification term M is parameterized, then the signature term sigλ(M) is likewise parameterized, and both M and sigλ(M) accept arguments of the same type. Finally we define β-reduction as usual.

Definition 3.1.1 (Signature and Specification Types) Signature types are either 0 or (τ1 → τ2) or (π → τ), where τ1, τ2, τ are signature types and π is a specification type. A specification type is either 1 or (π1 → π2) or (τ → π), where π1, π2, π are specification types and τ is a signature type. From specification types to signature types we define the map sigλ as follows:
• sigλ(1) = 0,
• sigλ((π1 → π2)) = (π1 → sigλ(π2)),
• sigλ((τ → π)) = (τ → sigλ(π)).
The types 0 and 1 are called basic types. We denote signature variables and types by y, y1, y2 and τ, τ1, τ2, and specification variables and types by z, z1, z2 and π, π1, π2. We also use p, p1, p2 and ρ, ρ1, ρ2 to denote variables and types which may be either signature or specification variables and types.

Definition 3.1.2 (Signature and Specification Terms) Let
Y = {y : τ | τ is a signature type}, and
Z = {z : π | π is a specification type}
be sets of typed signature and specification variables, respectively (denumerably many of each type). The set Λ of typed λ-terms over Y, Z and CSpec is defined inductively as follows. Let Σ be a simple signature; suppose defined terms K, K0, K1, K2 of type 0, terms M, M1, M2 of type 1, a term N1 of type ρ1, and a term N2 of type (ρ1 → ρ2).
1. If p : ρ ∈ Y ∪ Z is a variable, then p : ρ ∈ Λ.
2. (a) Σ1 + Σ2, denoted by term(Σ1 +Σ0 Σ2). Similarly for constant specification expressions; for example, a term denoting SP = SP1 +Σ0 SP2 with SPj = ⟨Σj, Ej⟩ simple (j = 1, 2) is ⟨term(Σ1), E1⟩ +term(Σ0) ⟨term(Σ2), E2⟩
where term(Σi) is the signature term obtained by the previous procedure (i = 0, 1, 2). Such a specification term is likewise written term(SP). Σ1 + Σ2 is a signature term whose semantics is the signature expression Σ1 +Σ0 Σ2 (see Section 3.2), denoting the signature nf(Σ1 +Σ0 Σ2).
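The flavour of this parameterization mechanism can be conveyed by a small sketch of our own (not the paper's calculus): a parameterized signature is mimicked by an ordinary function from signatures to signatures, the requirement K1 ⊑ K2 becomes a runtime precondition on the argument, and β-reduction corresponds to plain function application. The names `stack_of` and `Elem`, and the plain-union combination, are assumptions made for the illustration.

```python
# Signatures as triples of frozensets of symbol names (types elided).
def make_sig(sorts=(), opns=(), preds=()):
    return (frozenset(sorts), frozenset(opns), frozenset(preds))

# The minimal signature the argument must contain -- a runtime stand-in
# for a requirement of the form K1 ⊑ K2.
REQUIREMENT = make_sig(sorts={'Elem'})

def contains(small, big):
    """Componentwise subsignature test."""
    return all(a <= b for a, b in zip(small, big))

def stack_of(arg):
    """A 'parameterized signature' lambda y:0 . (y extended with stack symbols),
    guarded by its requirement; any signature fulfilling it may be passed."""
    if not contains(REQUIREMENT, arg):
        raise TypeError('argument does not contain the required sort Elem')
    sorts, opns, preds = arg
    return (sorts | {'Stack'},
            opns | {'emptystack', 'push', 'pop', 'top'},
            preds | {'isempty'})

# beta-reduction = application; the actual parameter need not have a fixed
# signature, only one fulfilling the requirement:
nat_sig = make_sig(sorts={'Elem', 'Nat'}, opns={'zero', 'succ'}, preds={'iszero'})
result = stack_of(nat_sig)
assert 'Stack' in result[0] and 'push' in result[1]
```

This mirrors the point made above: the designer of `stack_of` does not know the full signatures of future actual parameters in advance, only the requirement they must fulfil.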
Higher-Order Thinking Abroad and Approaches to Teaching It
Shanghai Education Research (Shanghai Jiaoyu Keyan), 2011.9
By Wang Shuai

I. Identifying Higher-Order Thinking: Characteristics and Categories

Given the complexity of the thinking process, different researchers have proposed different accounts of the nature of thinking from different perspectives. Among these, Dewey's explanation of the thinking process is regarded as classic: "no one has explained the thinking process better than Dewey." Dewey held that thinking is a sequential chain of events. This productive process moves from reflection to inquiry, then to critical thinking, and finally arrives at a "verifiable conclusion" more concrete than personal belief and imagination. Thinking does not occur spontaneously; it must be triggered by a "problem and question" or by "some perplexity, confusion, or doubt." The observer's "data at hand do not supply the solution; they can only suggest it." It is precisely the "demand for a solution" that sustains and guides the entire process of reflective thinking: "the nature of the problem determines the end of thinking, and the end controls the process of thinking." [1] It is not difficult to see that Dewey emphasized the significance of problems for thinking: thinking unfolds as reflection, then problem generation, then inquiry and critique, and finally problem solving. In fact, according to Bloom's later taxonomy of thinking, the thinking process Dewey describes here is in effect a higher-order thinking process.

Some researchers instead set out from the general characteristics or criteria of higher-order thinking in order to understand its meaning, thereby avoiding a precise definition of this complex category. Resnick pointed out that higher-order thinking is non-algorithmic and complex; it can yield multiple solutions, calls for the application of multiple criteria, is self-regulating, and involves uncertainty. [2] Ennis further refined the relevant criteria: (1) Using abstract thinking structures. (2) Organizing information into an integrated system. Slower learners see the material presented to them as a series of random, unconnected fragments of knowledge. Stronger students see the learning material as systematic and connected, open to classification and analogy; in other words, their mental world is organized, and with the help of higher-order thinking they can assemble scattered pieces of information into a coherent whole. (3) Applying sound logic and criteria of judgment. Logic is the study of reasoning, thinking about thinking; it can be regarded as the art of improved reasoning, and as the science that this art must obey. [3]
Logic as a Formal Method

Antony Galton
Department of Computer Science
University of Exeter
Exeter EX4 4PT
June 25, 1997

Introduction

The aim of this article is to present, in outline, a representative selection of the ways in which formal logic has been of service to computer science. Logic offers so many possibilities of application, and there are so many diverse groups of researchers developing logic-based applications, that it will be impossible in the space available to do justice to the whole field. Indeed it will be impossible even to mention everything that is going on, let alone say anything about it. Therefore I confine myself to a few areas which I believe, taken together, give a fair impression of the promise that formal logic holds as a tool for computer scientists. I assume in this article that the reader has a working knowledge of the classical Propositional and Predicate Calculi: a lightning sketch of these systems can be found in my article 'Classical Logic: a Crash Course for Beginners', which appears earlier in this issue; for more details, the reader is urged to consult a textbook such as my Logic for Information Technology (Wiley, 1990).

1 Applications of Classical Logic

First-order logic has a number of virtues which make it a valuable tool in Computer Science. The first of these is that it is an artificial language, totally under our control, with none of the maverick and unpredictable ambiguities that pervade ordinary language. Once one has fixed the domain of the interpretation, and the denotation of the constants, predicates and function symbols, the meaning of every formula is thereby fixed too, in a completely unambiguous way. It is therefore a good medium in which to codify precisely the facts and rules pertaining to the sorts of domains we are interested in when thinking computationally — which can mean either the domains we want our computations to be about, or the domain of computation itself, for example if we want to reason about the behaviour of programs. Second, and no less
important, is the fact that the language of the Predicate Calculus comes with a ready-made inferential apparatus, enabling us not just to express the facts pertaining to our chosen domain but also to reason with them in a way that is guaranteed to be logically correct. And finally, a virtue that is often claimed for the Predicate Calculus is that it is universal, in that it does not prejudge its possible domains of application. It is not entirely clear to me how far this claim is really correct, but there certainly seem to be potential application areas which pose considerable difficulty for the Predicate Calculus, for example reasoning about mass terms such as 'water' or 'gold', where it is only with the greatest artificiality that we can conceptualise the domain in terms of a set of discrete individuals. For computer science, however, this limitation is not serious, since in this discipline we do almost always think of things in just the discrete kind of way that is required for a direct application of the Predicate Calculus.

1.1 Program specification

My first example of such an application concerns the range of activities to which we may apply the term specification. Writing programs is only a meaningful activity so long as one starts off with some idea, however vague, of how one wants one's program to behave. In small-scale 'recreational' programming, and in some areas of Artificial Intelligence, the idea may only ever be formulated in the vaguest way, programmers relying on their ability to recognize intuitively when they have come up with something interesting that conforms to their original idea. In a commercial or industrial context, however, it has long been recognized that a systematic codification of the desired program behaviour is an absolute prerequisite for responsible programming. How systematic should one be? According to the Formal Methods school, nothing less than a rigorous specification in first-order logic or some comparable formalism will do, for without such a specification, it
is not only impossible to be sure that the program behaves as desired, it is even left indeterminate what the desired program behaviour is in the first place.

Suppose, to take a simple example, that one wanted to develop a library information system which users can consult in order to find out about what books, periodicals, etc., the library possesses, and also about their own use of the library — who has a given book on loan, when it is due back, and so on. The domain of the program consists of a number of different types of object: library items such as books and periodicals, catalogue numbers assigned to these, locations where items are physically stored in the library, and individual borrowers, who may have various different statuses (e.g., in a university library, undergraduate, postgraduate, staff), their addresses, and so on. There are innumerable constraints that must be satisfied, for example that a book cannot be on loan to more than one user at a time, that each copy of every book has a catalogue number that identifies it uniquely, that each borrower has a unique status which determines the normal period of loan for items borrowed, that certain items may have restricted loan periods, and so on. Somehow the finished information system must behave in such a way that these constraints are all satisfied. How is the programmer to ensure that this is achieved? Of course the answer is by being systematic, and from what we have already said it should be clear that a very good way of being systematic is to express all the relevant constraints in logical form. Thus, for example, the first constraint mentioned above might come out looking something like this:

∀b ∀u1 ∀u2 ((OnLoan(b, u1) ∧ OnLoan(b, u2)) → u1 = u2)

The complete set of such constraints will constitute a formal specification of the information system, laying down criteria for what is to count as a correct implementation of the system. There are several things that can be done with such a specification. Ideally, we would like to be able to do the following: Use the specification to prove that the
implementation is correct — i.e., that it does not violate any of the constraints;
Even better, by systematically transforming the specification, to produce a working program from it automatically, in such a way that the correct behaviour of the program is guaranteed;
By logical inference from the specification, to determine what further properties any correct implementation of the specification must possess.

It is generally found that logic alone is not enough; the logic, which is as we have remarked highly general, has to be embedded in procedures that are more specifically tailored to the computational context. What is important, though, is that we can effect a clean separation between the specification of what we are trying to achieve — that is, how the finished program should behave — and the implementational details, the procedures by which the desired behaviour is to be realised. The specification should thus be presented as a set of declarations of what is required, rather than in terms of algorithms, so the declarative nature of logic suits it to that purpose. Formal specification methods such as Z and VDM generally contain a substantial core of pure logic, although as already hinted they contain much else besides.

1.2 Program verification

Given a formal specification and an actual program, how can one tell whether the latter is correct in relation to the former, that it actually behaves as specified? Formal methods of program verification are designed to allow one to check just this. In general terms, the behaviour of a program, or of a self-contained module within a program, can be specified by
1. the class of inputs it is to accept;
2. the class of outputs it is to deliver;
3. the required relation between the input and the output.
For example, a program to compute the quotient q and remainder r when one natural number x is divided by another y can be specified as follows:
1. Input: two natural numbers x and y, where y > 0.
2. Output: two natural numbers q and r.
3. Input-Output Relation: x = q·y + r and 0 ≤ r < y.
An example procedure, in
Pascal, is

    procedure quot_rem(x, y: integer; var q, r: integer);
    begin
      q := 0;
      r := x;
      while r >= y do
      begin
        r := r - y;
        q := q + 1
      end
    end;

We assume here that the input condition, that x ≥ 0 and y > 0, is satisfied before the procedure is called. A logical system specifically designed for program verification was developed by C.A.R. Hoare over twenty years ago (Hoare, 1969). It makes use of first-order logic together with a special form of program logic in which to express propositions of the form 'if, before the execution of a piece of code S, the values of the program variables satisfy the formula P, then after execution of S they will satisfy the formula Q'. The notation used to express this is {P} S {Q}. Here P is known as a precondition and Q as a postcondition. In general one wants the precondition to be as weak as possible, to cover the greatest possible range of initial states (hence the weakest precondition rules of Dijkstra, 1976), and the postcondition to be as strong as possible, so that one gets as detailed a picture of the output state as possible. For example, the formula

    {y = 0} x := y + 1 {x = 1 ∧ y = 0}

states that if we execute the instruction x := y + 1 when y = 0, then in the resulting state we will have x = 1 and y = 0 (note that the initial value of x is irrelevant, so it is not mentioned in the precondition). In general, Hoare laid down as an axiom (the Axiom of Assignment) the rule:

    {Q[E/x]} x := E {Q}

Here Q[E/x] is the statement obtained from Q by replacing each occurrence of 'x' by 'E'; so, in the example above we have '(x = 1 ∧ y = 0)[y+1/x]', i.e., 'y + 1 = 1 ∧ y = 0', which is equivalent to what we had before, since 'y + 1 = 1' is equivalent to 'y = 0' and, for any formula φ, 'φ ∧ φ' is equivalent to 'φ'. In addition to the axiom of assignment, Hoare used a number of rules of inference, including two Rules of Consequence:

    (Cons1) If {P} S {Q}, and Q implies Q′, then {P} S {Q′};
    (Cons2) If P′ implies P, and {P} S {Q}, then {P′} S {Q}

which enable one to weaken the postcondition or strengthen the precondition, a Rule of Composition:

    (Comp) If {P} S1 {Q} and {Q} S2 {R}, then {P} S1; S2 {R}

which enables pre- and post-conditions to be stated for compound instructions built up by concatenating simpler ones, and a Rule of Iteration:

    (It) If {P ∧ B} S {P}, then {P} while B do S {P ∧ ¬B}

which enables us to handle while-loops. One can
now prove that the Pascal procedure above meets the specification for the quotient-remainder computation. Specifically, the Axiom of Assignment (applied to each assignment, chained by the Rule of Composition) gives

    {x = (q+1)·y + (r−y) ∧ r − y ≥ 0} r := r − y; q := q + 1 {x = q·y + r ∧ r ≥ 0}

but the precondition here is equivalent to x = q·y + r ∧ r ≥ y, so we have:

    {(x = q·y + r ∧ r ≥ 0) ∧ r ≥ y} r := r − y; q := q + 1 {x = q·y + r ∧ r ≥ 0}

making the formula x = q·y + r ∧ r ≥ 0 an invariant for the while-loop of the procedure. We now have, from the rule of iteration,

    {x = q·y + r ∧ r ≥ 0} while r >= y do begin r := r − y; q := q + 1 end {x = q·y + r ∧ r ≥ 0 ∧ ¬(r ≥ y)}

but since the invariant is implied by q = 0 ∧ r = x ∧ x ≥ 0, and the postcondition implies x = q·y + r ∧ 0 ≤ r < y, the rules of consequence give us:

    {x ≥ 0} q := 0; r := x; while r >= y do begin r := r − y; q := q + 1 end {x = q·y + r ∧ 0 ≤ r < y}

It is not hard to see how to get from here to the full correctness proof. Note that we several times made use of equivalences and implications between formulae; of course in a fully formal proof we should have to prove these equivalences, and that is where first-order logic comes in. The whole proof makes use of both ordinary standard logic and Hoare's special logic of programs. This illustrates the point made above that first-order logic achieves its greatest power when used in conjunction with other formalisms. For further details of methods such as those mentioned here, see for example Dijkstra (1976) or Backhouse (1986).

1.3 Program Synthesis

In the quotient-remainder example above we gave the specification and the program independently of one another, and then showed how to prove that the program is correct with respect to the specification.
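As a sanity check on the preceding proof, the Pascal procedure and its invariant can be rendered executably. The following Python sketch (an illustration added here, not part of the text) asserts the loop invariant x = q·y + r ∧ r ≥ 0 before and after each iteration, and the postcondition on exit:

```python
def quot_rem(x, y):
    """Python rendering of the Pascal quot_rem procedure, with the
    invariant from the Hoare-style proof checked at runtime."""
    assert x >= 0 and y > 0               # the input condition of the specification
    q, r = 0, x
    assert x == q * y + r and r >= 0      # the invariant holds on loop entry
    while r >= y:
        r, q = r - y, q + 1
        assert x == q * y + r and r >= 0  # the invariant is preserved by the body
    # on exit the invariant holds together with the negated loop test:
    assert x == q * y + r and 0 <= r < y  # the postcondition of the specification
    return q, r
```

Running it on a few inputs exercises every assertion; any violation of the invariant would raise immediately.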
In actual programming practice, though, it is obviously desirable that the construction of the program should be guided by the specification, so that it is not, as it were, just a matter of luck that the program happens to satisfy it; formal specification methods such as Z and VDM are designed to facilitate this (see Jones, 1986, for VDM, and Diller, 1990, for Z). Ideally, we would like a systematic method for deriving the program from the specification, and the goal of a number of researchers is to make the transformation of specification to program so systematic that it can actually be automated, so that the activity of programming is in effect reduced to that of composing specifications. This goal defines the area of Automated Program Synthesis (APS). The approach to APS that makes greatest use of logic is the so-called Deductive Approach, pioneered by Manna and Waldinger (1980). The central idea behind the deductive approach is that a formal specification of the form

    Given inputs x1, ..., xn satisfying the formula P(x1, ..., xn), find outputs z1, ..., zm such that the formula Q(x1, ..., xn, z1, ..., zm) holds

can be written as a Specification Theorem

    ∀x1 ... ∀xn (P(x1, ..., xn) → ∃z1 ... ∃zm Q(x1, ..., xn, z1, ..., zm))

which has to be true in order for there to exist a program meeting the specification (for if the required outputs do not even exist, then obviously no program can deliver them). To synthesize the required program we attempt to prove the specification theorem; our proof must be constructive, i.e., we prove that something exists by showing how to construct it. The program itself then emerges as a by-product of the proof. Here I shall discuss a variant of the deductive approach which uses the Constructive Matching methodology of Franová (1988). I shall illustrate this method by showing how it can be used to synthesize a quotient-remainder program. The specification theorem in this case is

    ∀x ∀y (y > 0 → ∃q ∃r (x = q·y + r ∧ r < y))

Note that we assume the domain is the set of natural numbers. The proof of the theorem uses mathematical induction, using x as the induction variable. The base case is therefore when x = 0, and the theorem reduces in this case
to

    ∃q ∃r (0 = q·y + r ∧ r < y)

Constructive matching requires us to match 0 with q·y + r; given the axioms for addition and multiplication, there are only two ways of doing this: either by putting q = 0 and r = 0, or by putting y = 0 and r = 0. The second conflicts with the condition y > 0 in the antecedent, so we are left with the first, for which we must now check that r < y holds; since r = 0 and y > 0, it does. This gives us the construction 'if x = 0, then put q = 0 and r = 0', which in turn gives us the first part of the synthesized program:

    if x = 0 then
    begin
      q := 0;
      r := 0
    end
    else ...

For the 'else' part, we use the fact that if x ≠ 0 then x = s(a) for some a (where s denotes the successor function). In this case the specification theorem becomes

    ∃q ∃r (s(a) = q·y + r ∧ r < y)

and we may make use in addition of the induction hypothesis, which is

    ∃q1 ∃r1 (a = q1·y + r1 ∧ r1 < y)

Constructive matching requires us to match s(a) with q·y + r. Without even using the induction hypothesis, we can immediately write down two 'trivial' solutions, namely q = 0, r = s(a) and y = 0, r = s(a). The second conflicts with the condition y > 0, but the first is acceptable so long as r < y, i.e., s(a) < y. This gives us the next part of the synthesized program:

    ... if x = s(a) and s(a) < y then
    begin
      q := 0;
      r := s(a)
    end
    else ...

which may be simplified to

    ... if x < y then
    begin
      q := 0;
      r := x
    end
    else ...

We must now consider the case where x ≥ y, the 'non-trivial' solutions. For these we must take into account the induction hypothesis. For given a, this guarantees us the existence of values q1 and r1 such that a = q1·y + r1 and r1 < y. To match s(a) with q·y + r, then, we must try to match s(q1·y + r1) with q·y + r. The axioms for addition and multiplication give us four possibilities. Since y > 0 and a = q1·y + r1, the first possibility merely implies q = 0, which duplicates the trivial solution already found; two of the remaining possibilities lead nowhere, since the required matches cannot be made. This leaves the last, which can be simplified to q = q1, r = s(r1). This solution is acceptable so long as the condition r < y is satisfied, i.e., s(r1) < y. The axioms for '<' (not given here) break this down into r1 < y ∧ s(r1) ≠ y. The induction hypothesis gives us r1 < y, so we are left with s(r1) ≠ y as a condition for this solution to be acceptable. To convert this into the next part of the algorithm, all we need is to note that q1 and r1 are the values
returned by the quotient-remainder procedure when called with inputs a and y, so we have:

    ... begin
      quot_rem(x - 1, y, q1, r1);
      if r1 + 1 <> y then
      begin
        q := q1;
        r := r1 + 1
      end
      else ...

(Here we represent a by x − 1, since x = s(a).) Finally we are left with the 'failure case', in which s(r1) = y. This will occur when x is a multiple of y, though we don't need to know this in order to execute the proof. For this case we have to prove a 'missing lemma', namely that the induction hypothesis implies

    ∃q ∃r (s(a) = q·s(r1) + r ∧ r < s(r1))

(this is got by substituting s(r1) for y in the earlier formula). We must match s(a) with q·s(r1) + r. But substituting s(r1) for y in the induction hypothesis gives us a = q1·s(r1) + r1, which implies (via the addition and multiplication axioms) s(a) = s(q1)·s(r1). So we are really trying to match s(q1)·s(r1) with q·s(r1) + r, which we can do by putting q = s(q1), r = 0. This enables us to complete the algorithm as

    begin
      q := q1 + 1;
      r := 0
    end

Putting it all together, then, we have succeeded in synthesizing the following recursive procedure for computing quotient and remainder:

    procedure quot_rem(x, y: integer; var q, r: integer);
    begin
      if x = 0 then
      begin
        q := 0;
        r := 0
      end
      else if x < y then
      begin
        q := 0;
        r := x
      end
      else
      begin
        quot_rem(x - 1, y, q1, r1);
        if r1 + 1 <> y then
        begin
          q := q1;
          r := r1 + 1
        end
        else
        begin
          q := q1 + 1;
          r := 0
        end
      end
    end;

This program may not be maximally efficient; it may be necessary to transform it in some way to improve it in this respect. Nonetheless it is impressive that by an almost purely mechanical procedure we have been able to synthesize a program from the specification at all! It is guaranteed to be correct so long as the condition y > 0 is satisfied. The goal of APS is to take techniques like this and refine them so that they can be applied practically to much more complicated cases, the sort of cases that one is likely to encounter in 'real life'. It has to be said that we are as yet nowhere near to achieving this goal; nonetheless I think that the enterprise is a good illustration of the power and insight that purely mechanical operations on logical formalisms can provide.

1.4 Logic Programming

The drive towards automated synthesis opens up the prospect of programming by writing specifications: that is, instead of writing a program, the programmer writes a specification which is then automatically converted into a program. In a sense this is already something we are doing all the time, for what else is a program written in a high-level programming language but a specification for the low-level code into which it is compiled? However, there is another sense in which it is quite misleading to regard a Pascal program, say, as a specification, and that is that it already consists of a sequence of instructions which the computer is to follow in order to derive the output from the input, rather than a bare statement of the relations that are to hold between the input and the output. Thus Pascal, and the majority of other widely-used programming languages which resemble it in this respect, is an imperative language, whereas ideally a specification language should be purely declarative, i.e., it should enable one to describe conditions on the input-output relation independently of any particular procedure for realizing them. If, therefore, we are ever to program by writing specifications, we require our programming languages to be declarative, just like our specification languages. A program written in such a language could then be regarded as an executable specification. There are two main classes of declarative programming languages in existence: the functional languages, of which Lisp is a rather impure example and Miranda a purer one, and the logic programming languages, of which so far only Prolog has come to be widely used, although a recently developed language called Gödel seems to be gaining in popularity. In a functional language the input-output relation is specified as a function whose value for a given set of inputs is the corresponding set of outputs. The language thus consists of a set of expressions denoting primitive functions and operations for constructing new functions from old, and the task of implementing it is that of specifying algorithms for
evaluating the complex functional expressions built up in this way. Several different logical systems may be used to provide the foundation for functional languages, for example the λ-calculus or Martin-Löf's intuitionistic type theory. On the latter, see Section 2.1. Turning now to logic programming, the central idea here is that a statement of the form 'if A then B' can be regarded either declaratively, as asserting something which may or may not be true, or procedurally, as telling you that if you want to know whether B is true, you should try finding out whether A is true. Suppose, for example, we have the following set of rules and facts:

1. If it is sunny then it is warm.
2. If it is daytime and there are no clouds then it is sunny.
3. It is daytime.
4. There are no clouds.

and suppose we are asked 'Is it warm?'. From 1 we know that it is warm if it is sunny, so that in effect we have replaced the original question by a new question, 'Is it sunny?'. From 2 we know that it is sunny if it is daytime and there are no clouds, so now we can replace our question by the two questions 'Is it daytime?' and 'Are there no clouds?'. From 3 and 4 we know that the answer to each of these questions is 'yes'; this gets transmitted back up the chain of questions to give us an answer 'yes' to the original question 'Is it warm?'. This example could be translated directly into Prolog, and the resulting code might look like:

    warm :- sunny.
    sunny :- daytime, no_clouds.
    daytime.
    no_clouds.

With this program loaded, we can then pose the query

    ?- warm.

to the Prolog interpreter, which will then set in train a sequence of moves which amount to a proof that the statements in the program logically imply the statement warm. This example makes use of no resources beyond the Propositional Calculus, and in fact only involves a restricted subset of formulae known as Horn clauses. A Horn clause in the Propositional Calculus is a formula of the form

    B1 ∧ B2 ∧ ... ∧ Bn → H

where the Bi and H are atomic formulae (i.e., single schematic letters). This corresponds to the Prolog
clause

    H :- B1, B2, ..., Bn.

The logic of Horn clauses has a sound and complete proof theory which uses a single inference rule, called resolution, which says that, given the clauses A1 ∧ ... ∧ Am → C and B1 ∧ ... ∧ Bn → D, if C = Bi for some i, then we may infer the clause

    B1 ∧ ... ∧ Bi−1 ∧ A1 ∧ ... ∧ Am ∧ Bi+1 ∧ ... ∧ Bn → D

Logic programming would be of little use if it did not go beyond the Propositional Calculus, but in fact the ideas presented above can be extended in a rather natural way to the Predicate Calculus. In this setting, a Horn clause is now defined as a formula having the form

    ∀x1 ... ∀xk (B1 ∧ B2 ∧ ... ∧ Bn → H)

where each Bi and H is an atom, that is, a formula obtained by filling in the argument places of a simple predicate by constants, variables, or complex terms derived from these using function symbols, and x1, ..., xk are all the variables that occur in any of the Bi or H. To extend the resolution procedure to Horn clauses of this kind, we make use of a pattern-matching algorithm called the unification algorithm, which enables us to determine whether two atoms have a common instance. For example, given the two clauses below, we can observe that the atoms involving q can be unified by substituting g(X) for Z and f(g(X)) for Y. We can therefore resolve the two clauses to give r(f(g(X))) :- p(X), s(g(X)). Prolog notation dispenses with quantifiers, since Horn clauses do not contain existential quantifiers and all the universal quantifiers are placed in a string at the head of the clause. Thus the two examples above would be written as

    r(Y) :- p(X), q(g(X), Y).
    q(Z, f(Z)) :- s(Z).

Note Prolog's odd convention of writing predicates, function symbols and constants beginning with lower-case letters, and variables with upper-case. To illustrate the use of Prolog as an executable specification language, we shall consider some simple list-processing tasks. Prolog notates a list in the form [X|Xs], where X is the first element of the list and Xs is the tail of the list, i.e., the list consisting of the remaining elements. An individual list with known members may be written out in full as, e.g., [monday, tuesday, wednesday, thursday, friday, saturday, sunday].
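Returning to the warm/sunny program above: the backward-chaining search that a Prolog interpreter performs on such propositional Horn clauses can be sketched in a few lines of Python (an illustration added here, not part of the text; the representation of clauses as a dict is an assumption of the sketch):

```python
# Each head maps to a list of alternative clause bodies; facts have empty bodies.
program = {
    "warm":      [["sunny"]],
    "sunny":     [["daytime", "no_clouds"]],
    "daytime":   [[]],
    "no_clouds": [[]],
}

def solve(goal):
    # Try each clause whose head matches the goal, and succeed if every
    # body atom can in turn be solved (depth-first, as in Prolog).
    for body in program.get(goal, []):
        if all(solve(atom) for atom in body):
            return True
    return False
```

The query `?- warm.` corresponds to `solve("warm")`, which succeeds by the same chain of subgoals traced in the prose: warm, then sunny, then daytime and no_clouds.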
The empty list is written as []. Suppose we want to specify the relation in which a list stands to the list which contains the same elements but in the reverse order. If we know that the reverse of a list L is R, then we know that the reverse of the list obtained by adding an element H to the beginning of L will be the list obtained by adding H to the end of R. We also know that the reverse of the empty list is itself. This gives us two Prolog clauses:

    reverse([], []).
    reverse([H|L], S) :- reverse(L, R), add_to_end(H, R, S).

We must also specify the add_to_end relation: we know that the result of adding an element to the end of the empty list is the one-element list; also, if the result of adding E to the end of a list L is the list M, then the result of adding E to the end of the list obtained by putting H at the front of L will be the list obtained by putting H at the front of M. The required Prolog clauses are thus:

    add_to_end(E, [], [E]).
    add_to_end(E, [H|L], [H|M]) :- add_to_end(E, L, M).

The four clauses we have written down, which express in a purely declarative way the basic facts about the reverse and add_to_end relations, can now be used to answer a query such as

    ?- reverse([a, b, c, d, e], X).

to which the Prolog interpreter will duly come up with the answer

    X = [e, d, c, b, a].

It would be idle to pretend that Prolog fulfils all the requirements of an executable specification language: manifestly it does not, as even its most enthusiastic devotees will admit. The way the Prolog interpreter works is sensitive to the ordering of atoms within a clause and to the ordering of clauses within a program; programs which are equivalent from a declarative point of view can turn out to have quite different behaviour in practice. It can often happen, for example, that by changing the order of the clauses one can turn a program which always terminates into a program that never does. On the other hand, merely swapping around clauses in a correct program will never generate a program that is incorrect in the sense of delivering wrong answers; the worst that can happen is that instead of delivering correct answers it fails to deliver anything. But Prolog
has in addition a number of non-logical, i.e., purely procedural, features, most notably the infamous 'cut' operator (written '!'), which gives the programmer control over which parts of the search space are examined, and which can lead to incorrect programs unless one is very careful to observe the procedural niceties in one's programming. Another source of possible error is the (again infamous) 'negation by failure' operator (not), which was introduced in an attempt to circumvent the limitation to Horn clauses by means of a procedural definition of negation. (For more on this, see Section 2.4 on non-monotonic reasoning.) Thus Prolog is not perfect, nor was it ever claimed to be. For all its faults, though, it has proved to be a very congenial medium in which to encode certain kinds of programming tasks, notably those which essentially involve recursion (such as our list-processing examples above) and which figure prominently in Artificial Intelligence. It also serves as a pointer to what might be achieved once the problems have been ironed out. For further details on Prolog, see Sterling and Shapiro (1986), and for Logic Programming generally, Hogger (1990).

2 Beyond Classical Logic

Although classical logic, i.e., the first-order Predicate Calculus, can be, when appropriately handled, a formidable tool for representing and reasoning about almost any domain, it is not the last word in formal logic. During the present century logicians, mathematicians, philosophers and computer scientists have studied a wide range of alternative formalisms designed for specific applications which do not appear to be easily handled by classical logic. In this section we shall briefly review a number of these formalisms, with particular emphasis on those that have attracted the attention of computer scientists. There are broadly speaking two ways of devising a non-classical logic: one can either take the language of classical logic unaltered, but reinterpret the logical constants (i.e., the connectives and the quantifiers) so
that the class of formulae that count as logically true, or inferences that count as valid, is altered; or one can alter the language itself by introducing new logical constants.

2.1 Intuitionistic Logic

Intuitionistic Logic belongs to the first of these two categories, in that it does not extend the syntax of classical logic, but reinterprets the connectives so that they are no longer truth-functional. Intuitionists are very much concerned with the grounds one might have for asserting a proposition: it is no good just saying that a formula is true; one's conviction of its truth must be grounded in some concrete intuition. As a consequence of this, the intuitionist is disinclined to accept as valid such classical theorems as the Law of the Excluded Middle or the Law of Double Negation. In the former case, the intuitionist would say that one is only warranted in asserting a formula of the form P ∨ ¬P if either one is warranted in asserting P or one is warranted in asserting ¬P, and clearly there are cases where one is warranted in asserting neither P nor ¬P. In mathematics, for example, one may not be able to prove either a proposition or its negation; this is currently the case with Goldbach's conjecture that every even number greater than 2 is the sum of two primes. If we represent this conjecture by G, then the classical logician will be quite happy to assert G ∨ ¬G, and will be prepared to use this assertion as a premiss in a proof; the intuitionist, however, will not be prepared to assert this until a proof of either G or ¬G is available. In the case of the Law of Double Negation, the intuitionist interprets ¬P to mean that one has well-grounded reasons for denying P; so ¬¬P says that the supposition that one has a proof that P is false is untenable, and the intuitionist will not accept that this amounts to a proof of P, since it is quite possible that neither P nor ¬P can be proved. Intuitionists demand a similar grounding for the existence of objects. One is only warranted in saying that something exists if one has a constructive
means to exhibit it. This places limitations on the conditions under which an intuitionist is prepared to accept a formula of the form ∃x P(x). For example, whereas classical mathematicians will accept as a proof of the existence of transcendental (i.e., non-algebraic) numbers the fact that the class of real numbers is of higher cardinality than the class of algebraic numbers.
Higher-Order Separation Logic and Abstraction

Lars Birkedal and Noah Torp-Smith
Department of Theoretical Computer Science, IT University of Copenhagen
{birkedal,noah}@itu.dk

Abstract. We present a simple extension of separation logic which makes the specification language higher-order, in the sense that quantification over predicates and higher types is possible. The force of this extension is illustrated via examples; specifically, we demonstrate that existential and universal quantification correspond to abstract data types and parametric data types, respectively. Moreover, we show how to express invariance of programs in our framework.

1 Introduction

Variants of the recent formalism of separation logic [16,7] have been used to prove correct many interesting algorithms involving pointers, both in sequential and concurrent settings [11,17,3]. It is a Hoare-style program logic, and its main advantage over traditional program logics is that it facilitates local reasoning. In a recent paper [2] with B. Biering we present a precise correspondence between separation logic and a simple notion of predicate BI, extending the earlier correspondence given between part of separation logic and propositional BI [14]. Moreover, we introduce the notion of a BI hyperdoctrine and show that it soundly models classical and intuitionistic first- and higher-order predicate BI, and use it to show that we may easily extend the assertion language of separation logic to higher-order. Being a program logic, the real force of separation logic comes not only from its language of assertions, which we focused our attention on in [2], but also from its language of specifications (Hoare triples). In the present paper we extend the higher-order model of assertions from loc. cit. to a model of higher-order specifications, and provide a sound collection of inference rules for deriving valid specifications. It turns out that it is technically straightforward to do so; and we believe this serves to emphasize that our model of higher-order
separation logic is very natural indeed. For concreteness we choose to consider specifications for a language that has been considered often in recent papers on separation logic, namely a language with first-order functions. Next we consider the effectiveness of higher-order separation logic and argue, with the use of several examples, that it is quite effective. In particular, we show that higher-order separation logic can be used in a natural way to model data abstraction, via existential quantification over predicates corresponding to abstract resource invariants. This shows that there is no need to extend separation logic with special syntax, such as the recently suggested abstract predicates of Parkinson and Bierman [13], for providing modular proofs of programs using abstract data types. Moreover, we show that, using universal quantification over predicates, we may prove correct polymorphic operations on polymorphic data types, e.g., reversing a list of elements described by some arbitrary predicate.
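The idea that existential quantification over predicates yields data abstraction has a familiar programming-language analogue, which can be sketched in Python (an illustration added here, not from the paper; the class and method names are invented): a client verified only against an abstract specification cannot observe which representation invariant a module maintains, so any implementation satisfying the specification may be substituted.

```python
# Two counters with different internal representations.  The representation
# predicate is "existentially quantified away": the client relies only on
# the abstract spec of inc/read, so either implementation will do.
class UpCounter:
    def __init__(self):
        self.n = 0            # representation: the count, held directly
    def inc(self):
        self.n += 1
    def read(self):
        return self.n

class DownCounter:
    def __init__(self):
        self.m = 100          # representation: 100 minus the count
    def inc(self):
        self.m -= 1
    def read(self):
        return 100 - self.m

def client(counter):
    # Verified against the abstract specification only; it cannot tell
    # which resource invariant the module maintains internally.
    for _ in range(3):
        counter.inc()
    return counter.read()
```

Both `client(UpCounter())` and `client(DownCounter())` behave identically, mirroring the way a proof against an existentially quantified resource invariant works for every witness of the quantifier.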
For this to be very useful, however, it is clear that a higher-order programming language would be preferable (such that one could program many more useful polymorphic operations, e.g., the map function for lists); we have chosen to stick with the simpler first-order language here to communicate more easily the ideas of higher-order separation logic. However, in Sec. 4.3, we sketch how to extend our ideas to a small object-oriented language. In future work, we will show how to also make use of higher-order separation logic for a higher-order language, specifically by extending joint work with Yang on separation-logic typing for idealized algol [4]. Here we also show how to use higher-order separation logic for defining assertions by means of fixed points, which exist in the logic because of higher-orderness. Before proceeding with the technical development we give an intuitive description thereof. In Section 2 we first present the model of the assertions of higher-order separation logic in concrete terms; it corresponds to the internal logic of the BI-hyperdoctrine in [2, Section 3], but this paper does not presuppose familiarity with [2]; we present the model in concrete terms without using hyperdoctrines. Next we present higher-order separation logic specifications in Section 3. The idea is simply to consider a standard kind of predicate logic with atomic predicates corresponding to Hoare triples, in the spirit of Reynolds' specification logic [15]. Since we have chosen to focus on a relatively simple programming language (with only first-order functions), the logic of specifications will also be fairly simple; e.g., we do not include implications between specifications, and our main focus will be on existentially and universally quantified specifications. The logic allows quantification over variables as in the assertion language, including quantification over variables ranging over functions and predicates. In Section 4 we argue that the higher-order logic is effective; in particular we show that it
can be used to prove correct programs operating on abstract data types. Finally, we conclude in Section 5.

2 Assertions of Higher-Order Separation Logic

In this section we briefly recall the syntax and semantics of the assertion language of higher-order separation logic as introduced in [2]. It is a higher-order logic over a typed term language; formulas correspond to terms of type Prop, and there are operations on Prop corresponding to the usual connectives of propositional separation logic. The set of types of the term language is generated by the following grammar:

    τ ::= Prop | Int | τ × τ | τ → τ

The semantics of these types is the standard semantics in the cartesian closed category Set of sets and functions; we just need to give the interpretation of the basic types. The interpretation of the type Int is given by [[Int]] = Z, and the interpretation of the type Prop is given by [[Prop]] = P(H), where H is the usual set of heaps, that is, finite maps from natural numbers to integers: H =def N ⇀fin Z. Recall that P(H), ordered by inclusion, is a complete Boolean BI-algebra [2].
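The concrete model P(H) can be made executable in miniature. In the following Python sketch (an illustration added here, not from the paper) a heap is a dict from naturals to integers, an assertion is a predicate on heaps (i.e., a set of heaps by its characteristic function), and emp, the points-to assertion, and separating conjunction are defined directly; sep checks every split of the heap into two disjoint parts.

```python
def emp(h):
    # the empty-heap assertion
    return h == {}

def pts(addr, val):
    # the singleton-heap assertion  addr |-> val
    return lambda h: h == {addr: val}

def sep(p, q):
    # separating conjunction: h splits into two disjoint heaps,
    # one satisfying p and the other satisfying q
    def holds(h):
        items = list(h.items())
        for mask in range(2 ** len(items)):
            h1 = dict(it for i, it in enumerate(items) if mask >> i & 1)
            h2 = dict(it for i, it in enumerate(items) if not (mask >> i & 1))
            if p(h1) and q(h2):
                return True
        return False
    return holds
```

On this model one can check, for instance, that ∗ is commutative on sample heaps and that emp is its unit, as the BI-algebra structure on P(H) demands.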
Terms in context are given in a standard way by a judgement ∆ ⊢ t : τ, where ∆ is a list of assignments of types to variables, and the free variables in t are included in ∆. The semantics of such a term is defined in the standard manner as a function from the semantics of ∆, denoted [[∆]], to [[τ]], the semantics of the type τ. The semantics [[∆]] of ∆ is the set of functions η with domain the domain of ∆ that assign a value in [[τ]] to each variable x : τ in ∆. In particular, the semantics of formulas in context ∆ ⊢ ϕ : Prop, corresponding to terms of type Prop, is given by a function [[∆ ⊢ ϕ : Prop]] : [[∆]] → P(H), defined by induction on ϕ. Formulas are generated by:

    ϕ ::= t = t | ⊤ | ϕ ∧ ϕ | ⊥ | ϕ ∨ ϕ | ϕ → ϕ | emp | ϕ ∗ ϕ | ϕ −−∗ ϕ | e ↦ e | ∀x : τ. ϕ | ∃x : τ. ϕ

where e ranges over terms of type Int, and τ can be any type, including a type of predicates such as Int × Int → Prop. The semantics of all the connectives besides quantification is given pointwise, using the BI-structure on P(H). The quantifiers are interpreted using corresponding set-theoretic quantification. For example,

    [[∆ ⊢ ∀x : τ. ϕ : Prop]] = η ↦ ⋂_{v ∈ [[τ]]} [[∆, x : τ ⊢ ϕ : Prop]](η[x ↦ v]).
The logic for assertions allows one to derive conclusions of the form ϕ ⊢_∆ ϕ′. Such a judgement is valid if, for all η ∈ [[∆]], we have that [[∆ ⊢ ϕ : Prop]]η ⊆ [[∆ ⊢ ϕ′ : Prop]]η. The rules from standard predicate logic, plus the standard rules from first-order separation logic for emp, e ↦ e′, ϕ ∗ ϕ, and ϕ −−∗ ϕ, are all sound.

3 Specifications of Higher-Order Separation Logic

In this section we define the syntax and semantics of specifications of higher-order separation logic. There can, of course, be several variants, depending on the programming language considered. For concreteness, and because it serves well to illustrate the effectiveness of higher-order separation logic in a simple way, we choose to use a simple imperative language with first-order functions that has been used in several recent papers and has been precisely defined in [13]. We will not repeat the full definition of the language here, but just recall the following. The operational semantics of the language includes a semantic function environment Π, which maps function names k to pairs (x̄, C), where x̄ is a vector of integer variables and C is a command from the programming language. The operational semantics is then specified by a judgment (Π, C, η, h) ⇓ (η′, h′), where Modifies(C) ⊆ dom(η). The details are given in [13], but note that they have two separate stacks for program variables and so-called abstract variables.
We just use one stack for variables of all types here. As in standard separation logic, there is a wrong state which denotes memory fault, and this induces a notion of a safe state. With assertions and the programming language in place, we turn to specifications, i.e., to the program logic. One way to think about our approach is that we basically seek to define a standard logic where predicates are specifications, in particular basic predicates are simple Hoare triples, and where variables range over terms as in the previous section, in the spirit of Reynolds' specification logic [15]. Since we have chosen to focus on a relatively simple programming language with only first-order functions, the logic of specifications will also be fairly simple; e.g., it does not include implications between specifications. In future work, we will extend the ideas presented here to a higher-order programming language with a separation-logic typing discipline [4]. First we define the syntax of our program logic, and subsequently we give the semantics of several judgments. The programming language includes calls to functions, and as in [13,12], our logic includes assumptions about these functions, formulated as specifications. Thus, we will have triples in assumptions as well as in conclusions, but the triples will be slightly different, inasmuch as the basic specifications for functions will have the form {P} k {Q} (where k is a function name), whereas those in conclusions will have the form {P} c {Q}, for c a command.
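The role of the wrong state in the meaning of a triple can be pictured with a toy executable model (an illustration added here, not the paper's semantics; the names `mutate` and `valid` are invented, and stores are omitted for simplicity): a command maps a heap to a new heap or to the token "wrong", and a triple holds over a finite universe of candidate heaps iff every heap satisfying the precondition runs safely and ends in a heap satisfying the postcondition.

```python
def mutate(addr, val):
    # the command [addr] := val, which faults if addr is not allocated
    def run(heap):
        if addr not in heap:
            return "wrong"          # memory fault: the unsafe outcome
        new_heap = dict(heap)
        new_heap[addr] = val
        return new_heap
    return run

def valid(P, c, Q, heaps):
    # {P} c {Q} holds over the finite universe `heaps` iff every heap
    # satisfying P runs without fault and ends in a heap satisfying Q.
    for h in heaps:
        if P(h):
            result = c(h)
            if result == "wrong" or not Q(result):
                return False
    return True
```

For example, {1 ↦ −} [1] := 9 {1 ↦ 9} holds, while weakening the precondition to ⊤ breaks validity, because the empty heap faults.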
We will allow conjunction and quantifications for both kinds of specifications. The specifications that involve function names are called function specifications, and they are generated by the grammar (given a context ∆)

    γ ::= {P} k {Q} | γ ∧ γ | ∃x : τ. γ | ∀x : τ. γ

where P, Q have type Prop in the context ∆. (Formally, we write ∆ ⊢ γ : FSpec to state that γ is well-formed in ∆, and this judgement is defined in the obvious way.) The specifications involving the actual programs are simply called specifications. Given a context ∆ and a semantic function environment Π in which the program c is well-formed, the grammar for specifications is

    δ ::= {P} c {Q} | δ ∧ δ | ∃x : τ. δ | ∀x : τ. δ

where again, P, Q must be well-formed in ∆, all functions called in c must be defined in Π, and all variables in Modifies(c) must be defined to have type Int in ∆. (Formally, we write ∆, Π ⊢ δ : Spec to state that δ is well-formed in ∆, and this judgement is also defined in the obvious way.) The semantics of the program logic is given in a couple of straightforward steps. First, the semantics of ∆, Π ⊢ δ is a map from [[∆]] to true or false, and it is defined by induction:

    [[∆, Π ⊢ {P} c {Q}]]η  iff  for all h ∈ [[P]]η,
        (Π, C, η, h) is safe, and
        if (Π, C, η, h) ⇓ (η′, h′), then h′ ∈ [[Q]]η′;
    [[∆, Π ⊢ δ1 ∧ δ2]]η  iff  [[∆, Π ⊢ δ1]]η and [[∆, Π ⊢ δ2]]η;
    [[∆, Π ⊢ ∃x : τ. δ]]η  iff  [[∆, Π ⊢ δ]]η[x ↦ v] for some v ∈ [[τ]];
    [[∆, Π ⊢ ∀x : τ. δ]]η  iff  [[∆, Π ⊢ δ]]η[x ↦ v] for all v ∈ [[τ]].    (1)

Next, we say that a specification ∆, Π ⊢ δ is valid, denoted ∆, Π |= δ, if [[∆, Π ⊢ δ]]η is true for all η ∈ [[∆]]. The semantics of function specifications is defined similarly, the only difference being the base case:

    [[∆, Π ⊢ {P} k {Q}]]η  iff  [[∆, Π ⊢ {P} c {Q}]]η,  where Π(k) = (x̄, c).

An environment Γ is a list of function triples. The meaning of such an environment is similar to the one presented in [13]: for η ∈ [[∆]], we simply say that [[∆; Π ⊢ Γ]]η if [[∆; Π ⊢ γ]]η for all γ ∈ Γ. Finally, under a given context, a specification is valid in an environment if truth of the environment entails truth of the specification, for any well-formed semantic function environment:

    ∆; Γ |= δ  iff  ∀Π. ∀η ∈ [[∆]]. [[∆; Π |= Γ]]η ⇒ [[∆; Π |= δ]]η.    (2)

This
definition gives rise to a program logic,using which we can derive valid triples of the form∆;Γ⊢δ.The rules are listed in Fig.1(we have omitted some obvious side-conditions of,e.g.,the form∆;Γ⊢P:Prop).Most of the rules are as for standard(first-order)separation logic.The new rules are the rules for ∃and∀,which are really the standard rules for existential and universal quan-tification.They are formulated as“double-rules,”that is,they may be applied both downwards and upwards.As is standard(see,e.g.,[8,Ch.4]),the rules for ∃and∀are equivalent to the pairs of rules∆⊢M:τ∆;Γ⊢γ[M/x]x∈F V(Γ′,γ′)∆;Γ,Γ′⊢γ′and∆,x:τ;Γ⊢γ,∆;Γ⊢γ[M/x]respectively.Theorem1(Soundness).If a triple∆;Γ⊢δcan be derived from the rules in Fig.1,then it is valid in the sense of(2).∆;Γ⊢{P[E/x]}x:=E{P}x∈F V(E)∆;Γ⊢{P[¯y/¯x]}y=k(¯y){Q[¯y,y/¯x,ret]}∆;Γ⊢{E→−}dispose(E){emp}∆;Γ⊢{∆;Γ⊢E→−}[E]:=E′{E→E′}∆;Γ⊢{P1}c1{Q1}...∆;Γ⊢{P n}c n{Q n}∆;Γ,{P1}k1{Q1},···,{P n}k n{Q n}⊢{P}c{Q}∆;Γ⊢{P}c1;c2{Q}∆,x:Int;Γ⊢{P∧x=nil}c{Q} {P}if B then c1else c2fi{Q}∆;Γ⊢{P∧B}c{P}∆,∆′;Γ⊢{P}c{Q}∆∩∆′=∅∆,∆′;Γ⊢{P}c{Q}∆;Γ⊢{P}c{Q}∆,x:τ;Γ,γ⊢δ′∆;Γ,∃x:τ.γ⊢δ′x∈F V(δ′,Γ)∆,x:τ;Γ⊢δ′∆;Γ⊢∀x:τ.δ′x∈F V(Γ)∆;Γ⊢{P}c{Q}how to use it for data abstraction.∆⊢ˆP:τ∆;Γ⊢{P1[ˆP/x]}c1{Q1[ˆP/x]}...∆;Γ⊢{P n[ˆP/x]}c n{Q n[ˆP/x]}∆;Γ,∃x:τ.({P1}k1{Q1}∧···∧{P n}k n{Q n})⊢{P}c{Q}is often used to keep the number of constructed connections low.In the simple implementation illustrated here,a pool of connections are kept in a linked list, and the pool makes sure that connections to the database are not used after they are returned to the pool,via ownership transfer.We assume two system routines, consConn and dispConn,to construct and dispose a database connection,with the following specifications:{emp}consConn(s){conn(ret,s)}{conn(x,s)}dispConn(x){emp}The specifications use a predicate conn(x,s)to express that x holds a handle to a connection to the database s.As mentioned,the pool will be represented in the heap by a linked ing the same implementations of operations as in[13] (call 
them c consPool,etc.),we can show the following specifications using our program logic(for the definition of clist,see Section4.4—for now,just assume that it abbreviates a linked list predicate as in[13]):∆;Γ⊢{emp}c consPool(s){∃i.x→i,s∗clist(i,s)}∆;Γ⊢{∃i.x→i,s∗clist(i,s)}c dispPool(x){emp}∆;Γ⊢{∃i.x→i,s∗clist(i,s)}c getCon(x){(∃i.x→i,s∗clist(i,s))∗conn(ret,s)}∆;Γ⊢{(∃i.x→i,s∗clist(i,s))∗conn(y,s)}c freeCon(x,y){(∃i.x→i,s∗clist(i,s))}(4)As in[13],we note that a client program c using the connection pool,should not rely on the implementation of the connection pool.Thus we verify clients c as follows:∆,cpool:Prop Int×Int,conn:Prop Int×Int;Γ,{emp}consPool(s){cpool(ret,s)},{cpool(x,s)}dispPool(x){emp},{cpool(x,s)}getCon(x){cpool(x,s)∗conn(ret,s)},{cpool(x,s)∗conn(y,s)}freeCon(x,y){cpool(x,s)}⊢{P}c{Q},(5)where the{P}c{Q}on the right of the⊢are the derivations from[13].1By rule(3)we obtainD cconsPool D cdispPoolD cgetConD cfreeConD c1Note that according to the abstract function definition rule in[13],cpool is not allowed to occur in the specification of the conclusion(i.e,.in P and Q),so the examples in loc.cit.should dispose the pool using a dispPool operation.where in the premise,thefirst four derivations are obtained by substitution(we just show one of them)from the derivations in(4):∆;Γ⊢{emp}c consPool(s){∃i.x→i,s∗clist(i,s)}∆;Γ,∃cpool:Prop Int×Int,conn:Prop Int×Int.({emp}consPool(s){cpool(ret,s)},{cpool(x,s)}dispPool(x){emp},{cpool(x,s)}getCon(x){cpool(x,s)∗conn(ret,s)},{cpool(x,s)∗conn(y,s)}freeCon(x,y){cpool(x,s)})⊢{P}c{Q}Note the straightforward use of existentially quantified predicates for expressing the abstract predicates.An important point to note here is that rule(3)allows us to interchange implementations,without effecting the reasoning about client programs.Indeed, suppose we for some reason wanted to represent the connection pool by a doubly-linked list,and that predicate dclist expresses a doubly-linked list in the same way as clist represents a 
singly-linked structure(see[16]for a doubly-linked list pred-icate).Then rule(3)says that all we need to show are the specifications from(4) with clist replaced by dclist.The argument for the client is completely indepen-dent of the choice of representation,as long as there is some cpool predicate with associated operations with the right properties.Obviously,this argument of representation independence uses no concept of“logical relation”or such as one does when proving representation independence results for abstract types, since we are here dealing with specifications rather than types.4.2Polymorphic Types via Universal QuantificationWe now show that universally quantified predicates may be used to prove correct polymorphic operations on polymorphic data types.The queue module example from[12]is parametric in a predicate P at the meta-level.We show that in higher-order separation logic,the parameterization may be expressed in the logic.To that end,consider the following version of theparametric list predicate from[12].list(P,β,i)= i=null∧emp ifβ=ǫ∃j.i→x,j∗P(x)∗list(P,β′,j)ifβ= x ·β′The predicate P is required to hold for each element of the sequenceβinvolved. Different instantiations of P yields different versions of the list,with different amounts of data stored in the list.If P≡emp,then plain values are stored(no ownership transfer to the queue module in[12]),and if P≡x→−,−,then addresses of cells are stored in the queue(ownership of the cells is tranferred in and out of the queue[12]).Returning to our higher-order separation logic,the definition of list may be formalized withi:Int,β:seqInt,P:Prop Int⊢list(P,β,i):Prop.Here we have used a type seqInt of sequences of integers,which is easily definable in higher-order separation logic,and the definition of list(P,β,i)can be given by induction onβin the logic.Suppose listRev is the list reversal program given in the introduction of[16]. 
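Operationally, such an in-place reversal can be sketched as follows; this is a minimal, hypothetical Python rendering, not the actual listRev program of [16]. The point to observe is that the code never inspects the payload stored in a cell, which is the operational counterpart of the parametricity in P discussed next.

```python
# A hypothetical Python rendering of in-place singly-linked list
# reversal. Cells carry a payload `val` and a `nxt` pointer; the
# reversal only rewires `nxt` pointers and never reads `val`.
class Node:
    def __init__(self, val, nxt=None):
        self.val, self.nxt = val, nxt

def list_rev(i):
    """Reverse the list starting at i, returning the new head."""
    j = None
    while i is not None:
        i.nxt, j, i = j, i, i.nxt   # rewire one cell, then advance
    return j

def to_py(i):
    """Read a linked list off into a Python list (for inspection)."""
    out = []
    while i is not None:
        out.append(i.val)
        i = i.nxt
    return out

xs = Node(1, Node(2, Node(3)))
print(to_py(list_rev(xs)))   # [3, 2, 1]
```

The same definition reverses lists of any payload whatsoever, which is exactly the informal content of the universally quantified specification derived below.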
Then one can easily show the specification

    {list(P, β, i)} listRev {list(P, β†, j)}.

By the introduction rule for universal quantification we obtain the specification

    β : seq Int ⊢ ∀P : Prop_Int. {list(P, β, i)} listRev {list(P, β†, j)},

which expresses that listRev is parametric in the sense that it, roughly speaking, reverses singly-linked lists of any type. Thus we have one parametric correctness proof of a specification for listRev, which may then be used to prove correct different applications of listRev (to lists of different types).

For such parametric operations on polymorphic data types to be really useful, one would of course prefer a higher-order programming language instead of the first-order language considered here. Then one could, e.g., program the usual map function on lists, and provide a single parametric correctness proof for it. In future work we will show how to make use of higher-order separation logic for a higher-order language, specifically by extending the separation-logic typing discipline for idealized algol recently introduced in joint work with Yang [4].

4.3 Invariance

In this subsection we briefly consider an example, suggested to us by John Reynolds, which demonstrates that one may use universal quantification to specify that a command does not modify its input state. We disregard stacks here since they are not important for the argument.

Suppose that our intention is to specify that some command c takes any heap h described by a predicate q, and produces a heap (we assume for simplicity that c terminates) which is an extension of h. We might attempt to use a specification of the form

    {q} c {q′ ∗ q}.    (6)

This does not work, however, unless q is strictly exact [16] (for example, if q is ∃β : seq Int. list(β, i), then c may delete some elements from the list in the input heap h). Instead, we may use the following specification

    ∀p : Prop. {q ∧ p} c {q′ ∗ p},    (7)

as we see by the following argument. Predicate q describes a set of heaps [[q]]. For each h ∈ [[q]], let p_h = {h}. Then h satisfies q ∧ p_h, so by instantiating p with p_h in (7), if c terminates in heap h′, then h′ satisfies q′ ∗ p_h, i.e., h′ = h1 ∗ h, for some h1. That is, the heap h is invariant under the execution of c, as intended.

Note that (7) is stronger than (6): by instantiating p with q in (7) we get (6). Thus if we wish to prove (6), then we may prove something stronger, namely (7), which may be easier to prove (cf. strengthening an induction hypothesis), and then derive the desired result. Thus we can use universal quantification to express invariance of commands.

4.4 Predicates via Fixed Points

Recall the clist predicate from the connection pool example above. It is required to satisfy the following recursive equation:

    clist = λ(x, s). x ≐ null ∨ (∃j, k. x ↦ j, k ∗ conn(j, s) ∗ clist(k, s)).

Solutions to such equations are definable in higher-order separation logic. Indeed, we may define both minimal and maximal fixed points for any monotone operator on predicates, using standard encodings of fixed points. To wit, consider for notational simplicity an arbitrary predicate

    q : Prop ⊢ ϕ(q) : Prop

satisfying that q occurs only positively in ϕ. Then

    µq. ϕ(q) = ∀q. (ϕ(q) → q) → q

is the least fixed point for ϕ, in the obvious sense that ϕ(µq.ϕ(q)) → µq.ϕ(q) and ∀p. (ϕ(p) → p) → (µq.ϕ(q) → p) hold in the logic. Likewise,

    νq. ϕ(q) = ∃q. (q → ϕ(q)) ∧ q

is the maximal fixed point for ϕ.

4.5 Higher Order Separation Logic for a Class-based Language

Parkinson and Bierman [13] not only give abstraction results for the first-order language considered above, but also treat class-based objects, inasmuch as they give a sound logic for a subset of Java. We now sketch that one can also use a model of higher-order separation logic for modeling abstraction in such class-based languages. The underlying storage model of the class-based language considered in [13] is different from the storage model of the first-order language considered above, and thus we should correspondingly change the model of higher-order separation logic to reflect the change of storage model. Luckily, that is straightforward, by the following result, which follows from the analysis in [2].
Proposition 1. If (H, ·) is a partial commutative monoid, then P(H) can be viewed as a complete BI-algebra in a canonical way. Hence there is a natural BI-hyperdoctrine Set^op → P(H).

Thus, from the storage model H in [13, Def'n 4.2], we easily obtain a BI-hyperdoctrine over Set, and the basic predicates for the class-based language from [13] can be interpreted in this hyperdoctrine in a natural way. Thus one can use this hyperdoctrine to give a higher-order separation logic model for the class-based language in [13] and deal with abstraction and the abstract predicate families of [13] via existential quantification, much as we have done with abstract predicates in this paper.

5 Conclusion and Future Work

We have extended the model of assertions of higher-order separation logic in [2] to a higher-order model of specifications, and argued that the resulting higher-order separation logic is very effective. In particular, we have shown how it, in a straightforward way, can be used to prove correct programs using abstract data types with internal hidden (abstract) resource invariants. We have also considered applications of universal quantification over predicates, e.g., for proving correct polymorphic operations. Finally, we have argued that one may also employ versions of higher-order separation logic for class-based languages.

For parametric operations on polymorphic data types to be really useful, one would of course prefer a higher-order programming language. In future work we will show how to make use of higher-order separation logic for a higher-order language, specifically by extending the separation-logic typing discipline for idealized algol recently introduced in joint work with Yang [4].

References

1. A. Banerjee and D. A. Naumann. Representation independence, confinement and access control [extended abstract]. In Proc. of POPL'02, 2002.
2. B. Biering, L. Birkedal, and N. Torp-Smith. BI-hyperdoctrines and higher order separation logic. In Proc. of ESOP'05, Edinburgh, Scotland, April 2005. Accepted for publication.
3. L. Birkedal, N. Torp-Smith, and J. C. Reynolds. Local reasoning about a copying garbage collector. In Proc. of POPL'04, pages 220-231, Venice, Italy, 2004.
4. L. Birkedal, N. Torp-Smith, and H. Yang. Semantics of separation-logic typing and higher-order frame rules. Submitted, 2005.
5. J. He, C. A. R. Hoare, and J. W. Sanders. Data refinement refined (resume). In Proc. of ESOP'86, pages 187-196, 1986.
6. C. A. R. Hoare. Proof of correctness of data representations. Acta Informatica, 1:271-281, 1972.
7. S. Ishtiaq and P. W. O'Hearn. BI as an assertion language for mutable data structures. In Proc. of POPL'01, 2001.
8. B. Jacobs. Categorical Logic and Type Theory. North-Holland Publishing Co., Amsterdam, 1999.
9. I. Mijajlovic, N. Torp-Smith, and P. W. O'Hearn. Refinement and separation contexts. In Proc. of FSTTCS'04, pages 421-433, 2004.
10. J. C. Mitchell and G. D. Plotkin. Abstract types have existential type. In Proc. of POPL'85, pages 37-51, New Orleans, LA, USA, 1985.
11. P. W. O'Hearn. Resources, concurrency and local reasoning. In Proc. of CONCUR'04, pages 49-67, London, England, 2004.
12. P. W. O'Hearn, H. Yang, and J. C. Reynolds. Separation and information hiding. In Proc. of POPL'04, pages 268-280, Venice, Italy, 2004.
13. M. Parkinson and G. Bierman. Separation logic and abstraction. In Proc. of POPL'05, Long Beach, CA, USA, January 2005. (To appear).
14. D. Pym, P. W. O'Hearn, and H. Yang. Possible worlds and resources: The semantics of BI. Theoretical Computer Science, 315(1):257-305, May 2004.
15. J. C. Reynolds. The Craft of Programming. Prentice-Hall, 1981.
16. J. C. Reynolds. Separation logic: A logic for shared mutable data structures. In Proc. of LICS'02, pages 55-74, Copenhagen, Denmark, 2002.
17. H. Yang. Local Reasoning for Stateful Programs. PhD thesis, University of Illinois, Urbana-Champaign, 2001.
18. H. Yang. Relational separation logic. Theoretical Computer Science, 2004. Accepted for publication.