Reasoning about pointers in refinement calculus
- Format: PDF
- Size: 214.08 KB
- Pages: 28
Array and struct initialization with {0}
I had always assumed that int a[256]={0}; initializes every element of a to 0 and that int a[256]={1}; initializes every element to 1. Inspecting the memory in the debugger showed that this is not what happens, and consulting The C++ Programming Language finally settled it.
Since the PDF would not let me copy text, I translated the relevant section, 5.2.1 Array initialization:
An array can be initialized with a list of values, for example

    int v1[] = {1, 2, 3, 4};
    char v2[] = {'a', 'b', 'c', 0};

When an array is declared without a size and is initialized with a list, its size is determined by the number of elements in the initializer list, so v1 and v2 are of type int[4] and char[4], respectively.
If a size is specified explicitly, supplying more initializers than there are elements is an error. For example:

    char v3[2] = {'a', 'b', 0};   // error: too many initializers
    char v3[3] = {'a', 'b', 0};   // correct

If the initializer supplies fewer elements than the array size, the remaining elements are initialized to 0, so int v5[8] = {1, 2, 3, 4}; is equivalent to int v5[8] = {1, 2, 3, 4, 0, 0, 0, 0};
Note that there is no array assignment of the form

    void f() {
        v4 = {'c', 'd', 0};       // error: not array assignment
    }

If you want that kind of copying, use vector (Chapter 16, Section 3) or valarray (Chapter 22, Section 4).
A character array can conveniently be initialized with a string literal (see Section 5.2.2). Translator's note: that is, char alpha[] = "abcdefghijklmn"; (The C++ Programming Language, Third Edition, by Bjarne Stroustrup.)
6.6 Aggregate initialization
As the name implies, an aggregate is several things grouped together. The definition includes aggregates of mixed types, such as structs and classes; an array is an aggregate of a single type.
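To double-check the behaviour described above, here is a small self-contained C++ sketch (ours, not from the book); note that a partial initializer such as {1} sets only the first element and zero-fills the rest rather than repeating the value:

    #include <iostream>

    int main() {
        int a[8] = {0};   // all eight elements are 0
        int b[8] = {1};   // only b[0] is 1; the remaining seven elements are zero-filled, not 1

        for (int x : a) std::cout << x << ' ';   // prints: 0 0 0 0 0 0 0 0
        std::cout << '\n';
        for (int x : b) std::cout << x << ' ';   // prints: 1 0 0 0 0 0 0 0
        std::cout << '\n';
        return 0;
    }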
interpolation syntax error in section
Contents: 1. Overview  2. Interpolation syntax errors  3. Solutions

1. Overview
When programming we often need to process data and text.
Sometimes that processing involves interpolation.
In certain situations, however, we may run into an interpolation syntax error.
This note discusses the causes of such errors and how to resolve them.

2. Interpolation syntax errors
An interpolation syntax error typically appears when we try to apply an interpolation method to data. Possible causes include:
- incorrectly formatted input data
- an unsuitable choice of interpolation function
- spelling mistakes or other syntax errors in the code
- incompatible versions of the programming language or library
When the error occurs, the program usually stops running or produces incorrect results, so we need to locate the root cause and take the corresponding corrective action.

3. Solutions
To resolve an interpolation syntax error, try the following:
- Check the input data: make sure it is correctly formatted and meets the requirements of the interpolation method; if necessary, preprocess the data, for example by cleaning it or converting its format.
- Choose an interpolation function that suits the data and the task: for example, linear interpolation for linear data and polynomial interpolation for nonlinear data.
- Check the code itself: look carefully for spelling mistakes and other syntax errors and correct them promptly.
In short, when an interpolation syntax error appears, analyse the cause patiently and apply the corresponding fix.
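As an illustration of the first two points, here is a generic C++ sketch (not tied to any particular interpolation library; the function name lerp_at is ours) of a linear interpolation routine that validates its input before interpolating:

    #include <cstddef>
    #include <stdexcept>
    #include <vector>

    // Linear interpolation through the sample points (xs[i], ys[i]).
    // The precondition checks catch the usual sources of interpolation errors:
    // mismatched lengths, unsorted abscissas, and queries outside the sampled range.
    double lerp_at(const std::vector<double>& xs, const std::vector<double>& ys, double x) {
        if (xs.size() != ys.size() || xs.size() < 2)
            throw std::invalid_argument("need at least two samples and equal-length xs, ys");
        for (std::size_t i = 1; i < xs.size(); ++i)
            if (xs[i] <= xs[i - 1])
                throw std::invalid_argument("xs must be strictly increasing");
        if (x < xs.front() || x > xs.back())
            throw std::out_of_range("x lies outside the sampled range");

        std::size_t i = 1;
        while (xs[i] < x) ++i;                        // find the bracketing interval [xs[i-1], xs[i]]
        double t = (x - xs[i - 1]) / (xs[i] - xs[i - 1]);
        return ys[i - 1] + t * (ys[i] - ys[i - 1]);   // interpolate within that interval
    }

For example, lerp_at({0, 1, 2}, {0, 10, 20}, 0.5) returns 5.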
Research StatementAmal AhmedMy research goal is to make it easier to build reliable,secure,and efficient software through advances in strongly-typed programming languages.To that end,I am interested both in developing languages with more expressive type and proof systems and in enhancing and formally certifying the trustworthiness of language implementations.Safety and security properties of high assurance software are usually verified at the source code level.Thus,an important concern that arises is whether one can trust the compiler.For instance,source level programs that successfully type check are considered safe,but a bug in the compiler can allow a type safe source program to be compiled to unsafe machine code,opening the door to erroneous behavior and security exploits.Proof-carrying code(PCC),introduced by Necula and Lee,is a framework for mechanically verifying the safety of machine language programs[14].Under this framework,the producer of a piece of code is required to provide a formal proof that the code satisfies some agreed-upon safety policy.A crucial component of a PCC system is a certifying compiler.Starting from programs in a type-safe language,the certifying compiler translates both the code and the types,automatically producing a proof of type safety.To be confident of safety,the consumer needs to trust relatively little:just the proof checker,the set of axioms that form the safety policy,and the runtime system.In particular,the compiler does not have to be trusted.Foundational proof-carrying code(FPCC),proposed by Appel and Felty[10],goes a step further by using the smallest set of axioms and the simplest possible proof checker.Low-level typing rules,built into the safety policy in traditional PCC,are proved fromfirst principles as lemmas that can be mechanically checked along with the safety proof.At Princeton,colleagues and I built an FPCC system that uses a novel approach to prove the soundness of the typing rules.The idea is to give semantics to types in terms of the registers,memory,and instruction set of the underlying machine(e.g.,Sparc or Pentium),and then prove each typing rule as a lemma. My thesis research[2,7]focused on scaling this approach to type systems rich enough to serve as a target for practical languages like ML and Java.Today,certified code systems can prove that machine code is type safe,but in the future,we would like to be able to prove other safety,security,and even correctness properties of nguage techniques allow us to reason about programs,and so are critical to this endeavor.Unfortunately,almost all of our elegant machinery for reasoning about programs(e.g.,type systems,various proof techniques)either becomes messy or simply falls apart as soon as we try to reason about mutable state.My research is aimed at overcoming these challenges. 
To date,I have developed advanced type systems with support for memory management,as well as a model of mutable references that is used to prove the safety of machine code in FPCC,and which I hope to extend in the future to reason about stronger security properties in the presence of mutable memory cells.Certified Code SystemsMy research has been motivated by practical problems related to the development of certified code systems.I have collaborated on the development of FPCC,investigated advanced type systems for state that provide foundational typing support for features from languages like Cyclone and CQual,and studied typed-preserving compilation and typed intermediate languages for region-based memory management.Foundational Proof-Carrying Code The Princeton FPCC system compiles core ML into Sparc machine code and simultaneously produces a safety proof in the form of a typed assembly language(LTAL)program. Since we were interested in proofs fromfirst principles,we built a machine-checkable proof of soundness for LTAL,encoded in higher-order logic.To do this in a modular fashion we designed Typed Machine Language which provides a rich set of constructors for types and instructions,and gave a semantics to LTAL using TML. Types in TML are predicates on machine states and values;the meaning of types is based on the operational semantics of the underlying machine.This model of TML is based on theoretical results in my thesis,which I’ll describe below.The approach is actually an instance of a proof method known as logical relations.Substructural Type Systems Advanced type systems for state rely upon limiting the ordering and number of uses of data and operations to ensure that state is handled in a safe manner.For example,(safely)deallocating a data structure requires that the data structure is never used in the future.To establish this property,a type system may ensure that the data structure is used at most once;after one use,the data structure may be safely deallocated,since there can be no further uses.A substructural type system provides the core mechanisms necessary to restrict the number and order of uses of data and operations.In collaboration with Matthew Fluet and Greg Morrisett,I have used substructural type systems in a number of novel ways to reason about mutable state[9,4,8].In particular,we have shown that substructural type systems can provide foundational support for strong(type-varying)updates,deallocation of references,storage of unique objects in shared references, temporarily treating shared references as unique(CQual’s restrict),and region-based memory management (including support for Cyclone’s dynamic regions and unique pointers).Logic-Based Typed Intermediate Languages Many security properties rely directly on our ability to reason about memory precisely.To develop proof-carrying code technology to the point where PCC systems can enforce complex security policies,we need intermediate languages with convenient abstractions for reasoning about memory.In collaboration with David Walker,I developed a substructural logic for reasoning about adjacency and separation of memory blocks,as well as aliasing of pointers[6].We deployed the logic in a novel type system for a stack-based assembly language,using formulae of the logic to describe memory states before and after the execution of each instruction.The connectives of the logic provide aflexible yet concise mechanism for controlling allocation,deallocation,and access to both heap-allocated and stack-allocated data.The ML 
Kit compiler for Standard ML supports region inference and region-based memory management.In languages with region-based memory management,objects are allocated in lexically-scoped regions(areas of memory)and objects in a region are deallocated all at once at the end of the region’s scope.Thus,region-based languages avoid both the performance penalties of garbage collection as well as the burden of explicit deallocation.In subsequent work[5],Limin Jia,David Walker,and I extended the substructural logic described above to facilitate reasoning about hierarchical storage.The extended logic may be used to describe the layout of bits in a memory word,the layout of memory words in a region,and the layout of regions in an address space.We used this logic to develop a type system for(a simplified version of)the Kit Abstract Machine,a region-based intermediate language used in the ML Kit compiler.SemanticsIn my thesis and as part of my postdoctoral research,I have made several contributions related to the method of logical relations.Logical relations are a powerful proof technique useful for establishing many important properties of programs.Logical relations for simple type systems are straightforward.However,to define and use logical relations for advanced type systems one needs to have a strong grasp of domain theory and category theory.The key problem is that logical relations,normally defined by induction on types,are no longer well founded in the presence of recursive types(for instance).The indexed model of recursive types,developed by Appel and McAllester[11],is a recent breakthrough that permits simple and direct proofs without the need for complicated mathematics.Here,logical relations are indexed not just by types but also by the number of steps available for future evaluation.This stratification has proved to be effective at handling circularities introduced by a variety of advanced typing constructs.Logical Relations for Mutable State My thesis research,motivated by the requirements of a practical FPCC implementation,focused on how to prove type safety using unary logical relations for languages rich enough to serve as a target for type-preserving compilation of ML or Java.Such languages must support not just updatable references,but also universal types(for encoding ML polymorphism and Java inheritance)and existential types(for encoding ML function closures and Java objects).To ensure safety in the presence of aliasing,these languages permit only type-invariant updates—that is,each location must forever contain only values of its designated type.Hence,we must model types as predicates on values as well as store typings,which tell us the designated types of allocated locations.This leads to inconsistency since the set of types depends on ing step-indexing to resolve the inconsistency,Andrew Appel,Roberto Virga,and I developed a model of type-invariant mutable references that can store values of any type,including functions,other references,recursive types,and even impredicative quantified types[2,7].We use this model,suitably adapted to a von Neumann machine,in the Princeton FPCC implementation.In later work with Matthew Fluet and Greg Morrisett,I have used extended versions of the above model to prove type safety in the presence of both type-invariant(shared)and type-varying(unique)references[9,3,4].These models have helped clarify the connection between our“capability-threading”(substructural)type systems for state and separation logic.Program Equivalence using Binary Logical 
Relations Proving program equivalence is important for verifying the correctness of compiler optimizations and other program transformations.It is also crucial for establishing that program behavior is independent of the representation of an abstract type—this guarantees that if one implementation of an abstraction is exchanged for another,client modules will not be able to tell the difference.Program equivalence is generally defined in terms of contextual equivalence(also known as observational equivalence).Two programs are contextually equivalent if they have the same observable behavior when placed in any valid program context.Unfortunately,direct proofs of contextual equivalence are typically infeasible since the definition involves quantification over all possible contexts.Binary logical relations offer a tractable method for proving contextual equivalence.As an added benefit,they can also be used to prove parametricity and extract free theorems[16]from types.However,in the presence of recursive and impredicative quantified types,even the definition of logical relations has been a challenge since relations defined by induction on types would not be well founded.Here again,Appel and McAllester’s step-indexing technique can be used to ensure well foundedness.In recent work[1],I present step-indexed syntactic logical relations that completely characterize contextual equivalence in a language with recursive types and polymorphism.A key issue is a problem with showing transitivity of the logical relation;this is resolved by restricting attention to well-typed terms.Future ResearchThe long term goal of my research is to develop certified code technology to the point where such systems can automatically prove—perhaps in combination with model checking and verification tools applied to source code—advanced security and correctness properties of both sequential and concurrent programs.For this program to succeed,we need not just advanced type and proof systems,but also semantic techniques that facilitate reasoning about observational equivalence(since proofs of many important security properties depend upon proving equivalence of programs).I have already presented step-indexed(binary)logical relations for showing equivalence of programs written in purely functional languages[1].In the future,I would like to extend these results to a language with ML-style mutable references,that is,references that can store functions as well as other references.This has been a long-standing open problem.Denotational logical relations(even unary ones)require category theoretic constructions that quickly become too complex.As for syntactic logical relations,the only(unary)result we have is the step-indexed model of mutable state developed in my thesis.I hope to extend this unary model to a binary one,borrowing ideas from Pitts and Stark’s work on logical relations for integer references[15].For the next generation of proof-carrying code systems,I believe that we should build semantic models of types using binary logical relations(rather than the unary logical relations used in the Princeton FPCC system).This would allow PCC systems to accommodate more optimizations,program transformations,and static analyses than today’s systems.Type-preserving compilation and compiler optimization are often at odds.Some optimized code,though observationally equivalent to its unoptimized version,is simply not well-typed.As an example, suppose we have a Java class,with a privatefield x and a public method getx that simply returns 
the value of x,and a client of this class that calls getx.If the compiler inlines this call,the resulting code is observationally equivalent to the unoptimized code,but it will not type check.Semantic models based on binary relations would allow us to produce a proof that such an optimization is justified.More generally,the use of binary relations for low-level semantics would allow us to express safety properties in terms of equivalence of observable behavior rather than absence of“going wrong”:rather than saying that the above program“goes wrong”if the client accesses x,we require only that the observable behavior not depend on such access.This presents a moreaccurate picture since“going wrong”is a rather artificial notion when it comes to machine code.Returning to the issue of whether we can trust the compiler,the answer today is yes and no.We have been building compilers that are type preserving,so we can trust these compilers as long as we only care about type safety.However,for critical,high assurance software,we care about more than type safety.We want our future compilers to be semantics preserving,which guarantees that source programs are observationally equivalent to target programs—this is the property formally certified by Leroy[13].Furthermore,we want future compilers to be fully abstract(essentially,equivalence preserving),which would guarantee that if there are no source contexts that can distinguish two programs then there are no target level contexts(i.e.,attackers) that can distinguish them either.This is important because programmers usually reason about the behavior of their code by thinking about source level contexts,so the absence of full abstraction may mean that there exist target contexts which may provoke unexpected and damaging behavior.Kennedy recently described several examples of C programs that when compiled to the Intermediate Language executed by CLR could be compromised because the translation from C to the IL is not fully abstract[12].In the future,I want to investigate fully abstract compilation,as well as how to formally prove such a property.Since full abstraction is stated in terms of observational equivalence,binary logical relations(for typed intermediate languages)may be essential for doing such proofs.References[1]Amal Ahmed.Step-indexed syntactic logical relations for recursive and quantified types.In European Symposiumon Programming(ESOP),Vienna,Austria,March2006.To appear.[2]Amal Ahmed,Andrew W.Appel,and Roberto Virga.A stratified semantics of general references embeddable inhigher-order logic.In IEEE Symposium on Logic in Computer Science(LICS),Copenhagen,Denmark,pages75–86, July2002.[3]Amal Ahmed,Matthew Fluet,and Greg Morrisett.L3:A linear language with locations.Submitted to FundamentaInformaticae.,November2005.[4]Amal Ahmed,Matthew Fluet,and Greg Morrisett.A step-indexed model of substructural state.In InternationalConference on Functional Programming(ICFP),Tallinn,Estonia,pages78–91,September2005.[5]Amal Ahmed,Limin Jia,and David Walker.Reasoning about hierarchical storage.In IEEE Symposium on Logicin Computer Science(LICS),Ottawa,Canada,pages33–44,June2003.[6]Amal Ahmed and David Walker.The logical approach to stack typing.In ACM SIGPLAN Workshop on Types inLanguage Design and Implementation(TLDI),pages74–85,January2003.[7]Amal Jamil Ahmed.Semantics of Types for Mutable State.PhD thesis,Princeton University,2004.[8]Matthew Fluet,Greg Morrisett,and Amal Ahmed.Linear regions are all you need.In European Symposium 
onProgramming(ESOP),Vienna,Austria,March2006.To appear.[9]Greg Morrisett,Amal Ahmed,and Matthew Fluet.L3:A linear language with locations.In Typed Lambda Calculiand Applications(TLCA),Nara,Japan,pages293–307,April2005.[10]Andrew W.Appel and Amy P.Felty.A semantic model of types and machine instructions for proof-carrying code.In ACM Symposium on Principles of Programming Languages(POPL),Boston,Massachusetts,pages243–253, January2000.[11]Andrew W.Appel and David McAllester.An indexed model of recursive types for foundational proof-carrying code.ACM Transactions on Programming Languages and Systems,23(5):657–683,September2001.[12]Andrew Kennedy.Securing programming model.In APPSEM II Workshop,Industrial ApplicationsSession,September2005.[13]Xavier Leroy.Formal certification of a compiler back-end.In ACM Symposium on Principles of ProgrammingLanguages(POPL),Charleston,South Carolina,January2006.[14]George Necula and Peter Lee.Safe kernel extensions without run-time checking.In Proceedings of Operating SystemDesign and Implementation,pages229–243,Seattle,Washington,October1996.[15]Andrew Pitts and Ian Stark.Operational reasoning for functions with local state.In Andrew Gordon and An-drew Pitts,editors,Higher Order Operational Techniques in Semantics,pages227–273.Publications of the Newton Institute,Cambridge University Press,1998.[16]Philip Wadler.Theorems for free!In ACM Symposium on Functional Programming Languages and ComputerArchitecture(FPCA),London,September1989.。
pointer indirection
Pointer indirection is an important concept in computer programming.
Through indirection we can read and modify the value stored at the memory address a pointer refers to.
This note covers pointer indirection in three parts: an introductory overview, the main content, and a summary.
Introductory overview: pointer indirection is a commonly used programming technique that lets us access and manipulate data in memory through pointers.
It exists in many programming languages and is especially important in low-level systems programming.
The five points below describe it in detail.
Main content:
1. Definition and declaration of pointers
1.1 Definition: a pointer is a variable that stores a memory address, and that address refers to a particular value in memory.
1.2 Declaration: to use a pointer we first declare a pointer variable and associate it with a specific data type.
2. Initialization and assignment of pointers
2.1 Initialization: when declared, a pointer variable can be initialized to the null pointer, or made to point to an existing memory address.
2.2 Assignment: assigning the address of an existing variable to a pointer variable makes the pointer refer to the memory in which that variable lives.
3. Dereferencing a pointer
3.1 Dereferencing: the dereference operator (*) gives access to the value stored at the address the pointer holds.
3.2 Using dereference: dereferencing lets us both read and modify the data at the pointed-to address; a combined example of points 1-3 follows below.
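A short C++ illustration of points 1-3 (variable names are ours):

    #include <iostream>

    int main() {
        int value = 42;
        int* p = nullptr;        // 1-2: declare a pointer to int and initialize it to the null pointer
        p = &value;              // 2.2: assign the address of an existing variable to the pointer

        std::cout << *p << '\n'; // 3.1: dereferencing reads the int stored at that address (prints 42)
        *p = 7;                  // 3.2: dereferencing on the left-hand side writes through the pointer
        std::cout << value << '\n';   // prints 7: value itself was modified via the pointer
        return 0;
    }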
4. Pointers to pointers
4.1 Definition: a pointer to a pointer is a pointer variable that stores the address of another pointer variable.
4.2 Use: through a pointer to a pointer we can indirectly reach and modify the value the inner pointer refers to, as the sketch below shows.
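A minimal C++ sketch of a pointer to a pointer (again, names are ours):

    #include <iostream>

    int main() {
        int x = 1;
        int* p = &x;       // p points to x
        int** pp = &p;     // pp points to the pointer p

        **pp = 5;          // two levels of indirection: modifies x through pp
        std::cout << x << '\n';       // prints 5

        int y = 9;
        *pp = &y;          // one level of indirection: redirects p so that it points to y
        std::cout << **pp << '\n';    // prints 9
        return 0;
    }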
5. Applications of pointers
5.1 Dynamic memory allocation: through pointer indirection we can allocate and release memory at run time.
5.2 Data structures: indirection is what makes data structures such as linked lists and trees convenient to implement.
5.3 Parameter passing: passing a pointer between functions lets the callee modify the argument the caller handed in. A short example combining these three uses follows.
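The sketch below (ours) touches all three applications in a few lines: it allocates memory dynamically, links two nodes into a tiny list, and passes a pointer so the callee can modify the caller's data:

    #include <iostream>

    // 5.3: passing a pointer lets the callee modify the caller's variable.
    void increment(int* n) {
        if (n != nullptr) ++*n;
    }

    int main() {
        // 5.1: dynamic allocation and deallocation through a pointer.
        int* buf = new int[4]{1, 2, 3, 4};
        increment(&buf[0]);                  // buf[0] becomes 2
        std::cout << buf[0] << '\n';
        delete[] buf;                        // release what we allocated

        // 5.2: a minimal singly linked node, the building block of lists and trees.
        struct Node { int info; Node* next; };
        Node second{2, nullptr};
        Node first{1, &second};
        for (Node* cur = &first; cur != nullptr; cur = cur->next)
            std::cout << cur->info << ' ';
        std::cout << '\n';
        return 0;
    }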
Summary: as the discussion above shows, pointer indirection plays a central role in computer programming.
Reasoning about Pointers in Refinement CalculusRalph-Johan BackXiaocong FanViorel PreoteasaTurku Centre for Computer ScienceTUCS Technical Report No543June2003ISBN952-12-1198-9ISSN1239-1891AbstractPointers are an important programming concept.They are used explicitely or implicitly in many programming languages.In particular,the semantics of object-oriented programming languages rely on pointers.We introduce a semantics for pointer structures.Pointers are seen as indexes and pointer fields are functions from these indexes to ing this semantics we turn all pointer operations into simple assignments and then we use refinement calculus techniques to construct a pointer-manipulating program that checks whether or not a single linked list has a loop.We also introduce an induction principle on pointer structures in order to reduce complexity of the proofs. Keywords:Refinement,Pointer StructuresTUCS LaboratorySoftware Construction Laboratory1IntroductionPointers provide an efficient and effective solution to implementing some programming tasks.Moreover,object–oriented languages rely explicitly (e.g.C++,Pascal),or implicitly(e.g.Java,Python,C#,Eiffel)on point-ers.However,pointer-manipulating programs are notoriously prone to er-rors due to pointer dangling,pointer aliasing,null-pointer accessing,and memory leaking.Some languages offer certain mechanisms to prevent the above-mentioned problems from occurring.For instance,the garbage col-lection mechanism frees the programmer from manually disposing memory and solves the problem of memory leaking.The problem of pointer dangling could be alleviated by always instantiating pointer members of objects with null,and combining it with garbage collection.However,a powerful while relatively simple pointer calculus is highly needed for refining specifications into executables leveraging theflexibility and efficiency of pointers,and for laying a basis for mechanically proving the correctness of existing pointer-manipulating programs using theorem proving systems such as HOL[13], PVS[20],etc.In this paper,we develop such a formal framework for pointer structures in higher-order logic[10].The goal of the calculus is to add support to refine-ment calculus[2]for reasoning about pointer programs.Wefirst introduce a general theory about pointers,where the pointerfields of an object are mod-eled as functions from objects to objects.The assignment to a pointerfield is seen as an update of the corresponding function.We also keep track of all allocated pointers in a subset P of the set of all objects.Allocating a new pointer means updating the set P to include the new element,and disposing an allocated pointer means removing it from P.With the semantics that we propose,all pointer operations become simple assignments,and this enable us to extend the refinement calculus to support pointer and object-oriented constructs.However,such treatment also brings in complexity in manipulating pro-grams because we are now dealing with functions rather than simple types, which complicates the formulas to be proved.To reduce such complexity,we introduce a principle of induction over the set of pointers accessible from a given starting pointer.Here,we say that a pointer is accessible,only if all the pointers in the corresponding path starting from the initial pointer are allocated.We then specialize the theory for single linked lists.As an illustrative example,we completely refine a specification for testing whether a single linked list is linear(i.e.,leads to null)or has a 
loop(i.e.,the last element points to some element of the list)into a program using an in-place algorithm1to reverse the list.In this example,we specify how the algorithm reverses a list by means of a recursive definition.As a result,we get the properties that the loop invariant should satisfy for free.We also prove that by reversing a single linked list twice,we can get the initial list.2Related workThere have been many formal treatments of pointer structures;here we con-centrate on a few that we found most relevant to our work.Reynolds[23]describes an axiomatic[14,12]programming logic for rea-soning about correctness of pointer programs.This logic is based on early ideas of Burstall[7],and combines ideas from[19,22,15].He uses a simple imperative programming language with commands for accessing and modi-fying pointer structures,and for allocation and deallocation of storage.The assertion language is used to express heap properties;in particular,a“sepa-rating conjunction”construct express properties that hold for disjoint parts of the heap.Neither the programming language nor the assertion language refers explicitely to the heap.However,as the author notes,the logic is in practice incomplete–new inference rules may be needed for new problems. Moreover,in order to use refinement calculus techniques,we need to refer explicitly to the heap in an assignment specification statement[2].Contrary to Reynolds’s approach,we allow explicit references to the“heap”,both in programs and in assertions.In order to deal with assignments involving pointer variables,Morris[18] generalizes the Hoare axiom of assignment correctness[14]by allowing for pointerfields to be treated as regular program variables.The substitution is done by replacing all aliases of the pointerfield with the corresponding expression.The treatment of pointer-structurefields as global functions from pointers to values can be traced back at least to Burstall[7].Similar ideas are used prevalently in[16,4,6,9,17,5].Most of these approaches have developed axiomatic semantics for pointer programs.Butler[8]uses a data refinement mechanism to translate recursive speci-fications on abstract trees to imperative algorithms on pointer structures.In comparison,we add the induction mechanism directly to the pointer struc-tures,which leads to simpler refinement proofs.Paige,Ostroff,and Brooke[21]introduce a semantics for reference and expanded types in Eiffel.The theory is expressed in the PVS specification language.References(pointers)are organized in equivalence classes of ref-erences to the same object.Creating a new object means creating a new2singleton equivalence class that contains a reference to the new object.As-signing to a reference variable means moving this reference from one equiva-lence class to another.However,in this work we could not see how difficult is to solve a real problem,due to lack of practical examples.3PreliminariesIn this section we introduce the programming language we are working with and the refinement rules.3.1Data typesWe assume that we have a collection of basic data types.Among them we have int,bool,nat,Ω.The setΩrepresents the set of all possible pointers (objects).We assume thatΩis an infinite type.We also assume the existence of the function type.The type A→B denotes the type of all functions from type A to type B.For a type A we denote with P f.A the type of allfinite subsets of A.We assume that we have all the usual operations(e.g.,+,−,≤,∧)defined on int,bool,and nat.We denote with true and false the two 
values of bool, but also the constant predicates on some type A(functions from A to bool). In addition we also assume that some other operations are available.If A is a type and if p is a predicate on A then we denote by p some arbitrary butfixed element a of A such that p.a is true.If p is false then p is some arbitrary element of A.If e is an expression of type A where the variable x of type B may occur free,then we denote by e[x:=e ]the substitution of x with e in e.We denote by(λx•e)the function that maps b∈B to e[x:=b]∈A.If f∈A→B and a∈A then f.a denotes the application of f to a.For a function f from A to B,a∈A and b∈B,we define the update of f in a to b,denoted f[a←b],by(λx:A•if x=a then b else f.xfi) Lemma1The update of f satisfies the following properties:1.f[x←y].x=y2.f[x←y][x←z]=f[x←z]3.f[x←f.x]=f4.x=z⇒f[x←y].z=f.z35.x=y⇒f[x←z][y←u]=f[y←u][x←z]Although we use a similar syntax for substitution and update,they are dif-ferent concepts.3.2The programming languageWe will use a simple programming language that contains the basic imper-ative programming constructs.Our language has program variables of data types that we have described so far.The program expressions are built from program variables and constants using the operators that are available for the data types.Although we call it a programming language,this language contains constructs that are not executable.These constructs can be used to specify what the result of the computation should be.This allows us to represent specifications as programs and then refine them to executable programs.The abstract syntax of the language is given by structural induction.If x,and x are distinct program variables of the same type,b is a boolean expression,e is a program expression of the same type as x and S,and S are programs,then the following constructs are programs too:i.{b}–assertionii.[x:=x |b]–specification assignmentiii.if b then S else S fi–if statementiv.while b do S od–while statementv.S;S –sequential compositionThe statements if,while,and sequential composition are the usual statements that can be found in all imperative programming languages.The assertion {b}does nothing if b is true in the current state,and behaves as abort(it does not terminate)otherwise.The specification assignment updates the variable x to a value x that makes b true.If for all x ,b is false then it behaves as magic(establishes any postcondition)[2].The variable x is bounded in [x:=x |b].In order to manipulate programs and prove properties about them we will also need a semantics for them.We use a predicate transformer semantics [11].The semantics of a program is a function from predicates on states to predicates on states.The intuition of a predicate transformer S applied to a predicate q is the set of the initial states from which the execution of S terminates in a state that satisfies q.4The refinement relation on programs,denoted ,is the pointwise exten-sion of the partial order on predicates over states,i.e.S S if(∀q•S.q⊆S .q).The Hoare total-correctness triple[14]p{|S|}q is true,if the execution of S from an initial state that satisfies p,is guaranteed to terminate in a state that satisfies q.The intuition behind the refinement relation can be explained in terms of total correctness:S is refined by S ,if S is correct with respect to a precondition p and postcondition q whenever S is.We define some other programming constructs based on the primitive ones.1.skip={true}2.(x:=e)=[x:=x |x =e]where x is a variable that does not occurfree in e3.if b then 
Sfi=if b then S else skipfi.If we write the sentences of a program on different lines then we do not use the sequential composition operator.We use indentation to emphasize the body of while or if statements.When indentation is used,we do not use od orfito end the while and if statements.3.3Refinement rulesWe list a set of refinement(equivalence)rules that we use in our example. The rules are proven in[2,3].assertion introduction If x is not free in e then(x:=e)=(x:=e;{x=e})assertion refinement ifα⇒βthen{α} {β}assignment merge(x:=e;x:=e )=(x:=e [x:=e])multiple assignment If x is not free in f then(x:=e;y:=f)=(x,y:=e,f)relational assignment If x is not free in e then[x:=x |x =e]=(x:=e)assignment introduction Ifα⇒β[x :=e]then{α};[x:=x |β] x:=e5moving assertion{x=x0∧α};x:=ex:=e;{x=e[x:=x0]∧α[x:=x0]}using assertion–assignment Ifα⇒e=e then({α};x:=e)=({α};x:=e )adding specification variables If a is not free in S,and S then S S ⇔(∀a:{a=e};S S )introducing if statement{α};S ifαthen Sfiunfolding whilewhileαdo S od=ifαthen S;whileαdo S odfimerge while[3]whileαdo S od=whileα∧βdo S od;whileαdo S odusing assertion–while Ifα⇒(β=γ)then{α};whileβdo S;{α}od={α};whileγdo S;{α}odwhile introduction Ifα⇒I and I∧¬γ⇒β[x :=x]then{α};[x:=x |β]{α}whileγdo{I∧γ}[x:=x |I[x:=x ]∧t[x:=x ]<t]{I}{I∧¬γ}To handle local program variables[1]we assume that we have two pro-gramming constructs add.x and del.x,which add and delete a local program variable x.The two constructs commute with all programs where x does not occur free.We assume that adding x,followed by setting it to some arbitrary value and then deleting it is equivalent to skip.Moreover we assume that the program add.x;S;del.x;add.x;S ;del.x is refined by add.x;S;S ;del.x.The following rule can be derived from the properties of add and del.local variable introduction If x is not free inαand S then{α};S add.x;{α};[x:=x |β];S;del.xIn our example we will omit the statements add.x and del.x.64Dynamic data structuresIn this section we show how to capture the basic notions of dynamic data structures(pointers)without any substantial extension of the logical basis as it is presented in[2].The new programming constructs that we add are defined in terms of the primitives we have already introduced.A pointer structure declaration is:pointer name i(f i,1:T i,1,...,f i,ni :T i,ni)(1)We can have as many pointer structure declarations as we need.Thefield types T i,1,...,T i,nican be any basic types,exceptΩ,or any pointer structure names that have been already declared or that are going to be declared.For allfield types T i,j we define typeof.T i,j bytypeof.T i,j=T i,j if T i,j is a basic typeΩif T i,j is pointer structure nameA declaration of the pointer structures name1to name k corresponds to the following declarations in terms of basic program constructs.var name1,...,name k:P f.Ωvar f1,1:Ω→typeof.T1,1...var f k,n:Ω→typeof.T k,nkname1:=∅...name k:=∅nil:=( x:Ω•false)new(var p:Ω,var A:P f.Ω): p:=( x:Ω•x=nil∧x∈iname i)A:=A∪{p}dispose(val p:Ω,var A:P f.Ω):A:=A−{p}(2)To allocate and dispose a pointer we have defined the procedures new and dispose.The procedure new has two reference parameters:p for the newly allocated pointer,and A for the set of all allocated pointers of some type.We create a new pointer of type name i and assign it to p by calling new(p,name i).The access of afield f of some pointer p,denoted p→f,is in our case f.p.The update of afield(p→f:=q)is f:=f[p←q].7With these definitions all pointer operations becomes simple assignments and we can use the assignment 
refinement rules for them.For the rest of this section we assume that we have a program that declares the pointer structures name 1,...,name k .For any pointer structure field f i,j we denote by a i,j the tuple (f i,j ,name i ,T i,j ).We use the notation a i,j for both,the tuple,and its first component.We refer to the second element of a i,j by dom .a i,j and to the third by range .a i,j .We denote by A the set of all a i,j for which T i,j is a pointer structure name,and by A ∗the free monoid generated by A .We denote with 1the empty word.For α,β∈A ∗we denote α≤βiffαis a prefix of βand α<βiffαis a proper prefix of β.If α≤β,then we denote with α−1βthe word obtained from βby removing the prefix α,i.e.α−1β=γwhere γis such that β=αγ.Let pstr be i name i .Definition 2For α∈A ∗and p ∈Ω.We define α.p by induction on α:1.p =p and aα.p =α.(a.p ).A straightforward consequence of the above definition is that if α,β∈A ∗and p ∈Ωthen αβ.p =β.(α.p ).We define p α−→A q to be true if we can reach the pointer q from p following the path αby accessing only proper (allocated)pointers.Formally:Definition 3If p ,q ∈Ω,a ∈A ,and α∈A ∗then1.p 1−→A q if p =q and p ∈pstr ∪{nil }2.p a−→A q if a.p =q ∧p ∈dom .a ∧q ∈range .a ∪{nil }3.p aα−→A q if (∃r ∈Ω•p a −→A r ∧r α−→A q )When the set A is fixed,we will omit it from the notation p α−→A q and in general from any notation that has A as a parameter.Definition 4Let [p ]A ={q |(∃α∈A ∗•p α−→A q )∧q =nil }and |p |A =|[p ]A |Lemma 5If α,β∈A ∗then81.pα−→q∧pα−→r⇒q=r=α.p2.qαβ−→p⇔(∃r•qα−→rβ−→p)3.qα−→p∧qβ−→r∧α≤β⇒pα−1β−→r4.p∈pstr⇔p∈[p]A5.p∈pstr⇔[p]A=∅Theorem6(Pointer Induction)If P is a predicate onΩthenP.p∧(∀q∈[p],a∈A,r∈Ω−{nil}•P.q∧q a−→r⇒P.r)⇒(∀q∈[p]•P.q)Proof.If q∈[p]then existsα∈[p]such that pα−→Aq and q=nil.We can prove P.q by induction on the length ofα.5Single linked listSingle linked lists are pointer structures that contain an infofield of some type(integer in our example)and a nextfield that gives for a pointer the next element in a list.pointer plist(info:int,next:plist)In the case of only one link(A={(next,plist,plist)}),the set A∗is isomorphic with the set of natural numbers.We will use the notationp n−→A q instead of pnext n−→Aq.The fact next n≤next m becomes n≤m and(next n)−1next m becomes m−n.We will mention instead of the set index A just the components of A that changes.For example if we have two nextfunctions next and next0we write p n−→next q,and p n−→next0q instead of p n−→Aqand p n−→A0q where A0={(next0,plist,plist)}Lemma7p∈[q]if and only if there is an unique i<|q|such that q i−→p.Corollary8[p]={next i.p|i<|p|}.If i,j<|p|and i=j then next i.p= next j.p.9Definition9If n=|p|Athen we define last A.p∈plist by last A.p=next n−1.p. 
Definition10If q∈[p]A then we definei.[p:q]A={s|∃i,j:p i−→A s j−→Aq∧i+j<|p|A}ii.[p:q)A={s|∃i,j:p i−→A s j−→Aq∧i+j<|p|A∧0<j}iii.(p:q]A={s|∃i,j:p i−→A s j−→Aq∧i+j<|p|A∧0<i}iv.(p:q)A={s|∃i,j:p i−→A s j−→Aq∧i+j<|p|A∧0<i,j}Lemma11If q∈[p]theni.[p]=[p:last.p]ii.[p]=[p:q)∪[q]iii.s∈[p:q]⇒[p:q]=[p:s)+{s}+(s:q]iv.s∈[p:q)⇒next.s∈(p:q]∧next.s∈[p:s]v.p=q⇒[p:q]={p}+(p:q)+{q}Where x=y+z denotes the fact that x=y∪z and y∩z=∅.ing Corollary8.Definition12If q∈[p]A then we define|p:q|A=|[p:q]A|Lemma13If q∈[p]and n=|p:q|then p n−1−→q.Theorem14(Length decreasing)If q∈[p]and s∈[p:q)then|s:q|= |next.s:q|+1.Theorem15(List induction)If P is a predicate onΩand q∈[p]then P.p∧(∀r∈[p:q)•P.r⇒P.(next.r))⇒(∀r∈[p:q]•P.r)10Proof.By induction on|p:r|.In the case of lists we can also introduce a principle of definition by induction.In order to define a function f on[p:q]it is enough to define f.p,and for all r∈[p:q)to define f.(next.r)assuming that f.r is defined. Definition16For all p∈plist we define linear A.p,circular A.p,loop A.p,and list A.p∈bool by:linear A.p=(next.(last A.p)=nil)circular A.p=(next.(last A.p)=p)loop A.p=(next.(last A.p)∈[p]A)list A.p=linear A.p∨loop A.pLemma17If linear.p then p |p|−→nil.Lemma18If linear.p and q∈[p]then q=last.p⇔next.q=nil.Lemma19If circular.p then circular.(next.p),[p]=[next.p],andlast.(next.p)=p.Lemma20If loop.p then circular.(next.(last.p)).5.1Partial reverse of a listWe define for the pointers p∈plist,q∈[p],and e∈Ωa partial reverse of the list from p to q as in Figure1.The links from p to next.q(the arrows labeled with1in Figure1)are replaced by links from q to p(the dashed arrows in Figure1).We also create a link from p to e.In different contexts the pointer e will play different roles.For example,to reverse a linear list we partially reverse it until the last element and use nil as e.Figure1:Partial reverse of a listDefinition21Suppose that q∈[p]and e∈Ω.We define the function preverse.next.p.q.e of typeΩ→Ωby induction on r∈[p:q]next.11•Case r=p:preverse.next.p.r.e=next[p←e]•Case r∈[p:q)next:preverse.next.p.(next.r).e=(preverse.next.p.r.e)[next.r←r]When next,p,q and e arefixed we denote with f.r=preverse.next.p.r.e for all r∈[p:q]next and next0=f.q.Lemma22(Partial reverse–properties)If q∈[p]next then1.∀r∈[p:q]next•(f.r).p=e∧p∈[r]f.r∧[p:r]next=[r:p]f.r2.∀r∈[p:q]next•next0.r=(f.r).r3.∀r∈[p:q)next•next0.(next.r)=r4.∀r∈[p:q]next•next0.r=next.rProof.By list induction.Lemma23(Partial reverse–commutativity)If q∈[p]next andq ∈[p ]next such that[p:q]next∩[p :q ]next=∅then1.∀s∈[p:q]next preverse.(next[s←e]).p.q.e =(preverse.next.p.q.e )[s←e]2.preverse.(preverse.next.p.q.e).p .q .e =preverse.(preverse.next.p .q .e ).p.q.eLemma24(Partial reverse–split)If q∈[p]next then∀r∈[p:q)next preverse.next.p.q.e=preverse.(preverse.next.p.r.e).(next.r).q.rProof.If[p:q)next is empty there is nothing to prove.Otherwise there exists q0∈[p:q)next such that[p:q)next=[p:q0]next.We prove the property above by induction on r∈[p:q ]next.Lemma25(Reverse twice)If q∈[p]next thenpreverse.(preverse.next.p.q.e).q.p.(next.q)=nextProof.We prove by induction on r∈[p:q]next thatpreverse.(preverse.next.p.r.e).r.p.(next.r)=next12•Case r=p:preverse.(preverse.next.p.p.e).p.p.(next.p)={preverse definition}next[p←e][p←next.p]={Lemma1}next•Case r∈[p:q)next,assume preverse.(preverse.next.p.r.e).r.p.(next.r)= next and denote next0=preverse.next.p.r.e andnext1=preverse.next.p.(next.r).e=next0[next.r←r]preverse.next1.(next.r).p.(next.(next.r))={Lemmas23and24}preverse.next1.(next1.(next.r)).p.(next.r)[next.r←next.(next.r)] 
={assumptions}preverse.next1.r.p.(next.r)[next.r←next.(next.r)]={assumptions}preverse.(next0[next.r←r]).r.p.(next.r)[next.r←next.(next.r)] ={Lemma23}preverse.next0.r.p.(next.r)[next.r←r][next.r←next.(next.r)] ={assumptions and Lemma1}nextLemma26If q∈[p]next and next0=preverse.next.p.q.nil then linear next.q0 .qand p=last next5.2Linear listsTo reverse a linear list we have to partially reverse the list from the head to the last element.We also have to end the reversed list with nil.The head of the resulting list is the last element of the initial list.Formally we have. Definition27If linear next.p then we define reverse.(next,p)given by reverse.(next,p)=(preverse.next.p.(last next.p).nil,last next.p) Theorem28If linear next.p and(next0,p0)=reverse.(next,p)then13i.linear next.p0ii.[p0]next=[p]nextst next.p0=piv.if|p|next>1then p=p0v.reverse2.(next,p)=(next,p)The Theorem28states that if we reverse a linear list twice we get the original list.It also says that if the list has at least two elements then the head of the resulting list is different from the head of the original one.This fact will be used in thefinal algorithm that decides whether the list has a loop or not.The algorithm uses the fact that when reversing a loop list,the new list has the same header as the original one.5.3Loop listsReversing a loop list is equivalent to reversing the circular part of it.In Figure2we replace the arrows labeled by1with dashed arrows.This,in turn,is equivalent to partially reversing the list from h to q using q as e.Figure2:Reverse of a loop listDefinition29If loop next.p,q=last next.p and h=next.q then we define reverse.(next,p)given byreverse.(next,p)=(preverse.next.h.q.q,p)Before giving the main theorem about the properties satisfied by the reverse of a loop list we give some results about reversing a circular list. 
Lemma30If circular next.p,q=last next.p thenpreverse.next.p.q.q=preverse.next.(next.p).p.p14Lemma31If circular next.p,q=last next.p and next0=preverse.next.p.q.qthen circular next0.p and p=last next.qTheorem32If loop next.p,h=next.(last next.p)and(next0,p0)=reverse.(next,p),theni.p=p0ii.loop next.p0iii.[p0]next=[p]nextst next.p0=next.hv.reverse2.(next,p)=(next,p)ing Lemmas31,30,and25Although the definition of reverse for a loop list is sufficient for reasoning about its properties,it is not good for implementation purposes.The pointer h is not known when the program starts.We do not even know whether we have a loop list.In the next theorem we show that reversing a loop list is equivalent to reversing the elements from p to q(reversing the arrows labeled with1in Figure3)and then reversing back the elements from h to p (reversing the arrows labeled with2).Thefinal reversed list is given by the dashed arrows in Figure3.Figure3:Compute reverse of a loop listTheorem33(Compute reverse of a loop list)If loop next.p,q=last next.p,h=next.q and next0=preverse.next.p.q.nil theni.linear next.hii.p=last next.hiii.preverse.next.h.q.q=preverse.next0.h.p.q15Proof.We prove the case p=h.It follows that exists h0∈[p:h)next such that next.h0=h.It follows next0.h0=h.preverse.next0.h.p.q={Lemmas23and24}preverse.next0.(next0.h).p.h[h←q]={assumptions}preverse.(preverse.next.p.q.nil).h0.p.h[h←q]={Lemma24}preverse.(preverse.(preverse.next.p.h0.nil).(next.h0).q.h0).h0.p.h[h←q] ={Lemma23}preverse.(preverse.(preverse.next.p.h0.nil).h0.p.h).(next.h0).q.h0[h←q] ={assumptions}preverse.(preverse.(preverse.next.p.h0.nil).h0.p.(next.h0)).h.q.h0[h←q] ={Lemma25}preverse.next.h.q.h0[h←q]={Lemmas23and24}preverse.next.h.q.q5.4Refining a program for checking if a list is linearor notWefirst refine the partial reverse of a list to a while ing this refinement then we refine the reversing of a linear and a loop list to the same program.Finally we write a program that tests whether or not a list has a loop and prove that it is correct.Lemma34(Refinement of preverse)Ifαis a formula that does not con-tain the variables next,s,r free then for all program expressions q that does not contain next,s,r free we have16{next=next0∧q∈[p]next∧α} next,s:=preverse.next0.p.q.e,qnext,s,r:=next[p←e],p,next[p]{q∈[p]next0∧s∈[p:q]next∧r=next0.s∧α}while s=q donext,s,r:=next[r←s],r,next.r{q∈[p]next0∧s∈[p:q]next∧r=next0.s∧α}{s=q∧q∈[p]next∧r=next0.q∧α}Moreover if linear next0.p and q=last next.p then the while condition can bereplaced by r=nil.Proof:{next=next0∧q∈[p]next∧α}next,s:=preverse.next0.p.q.e,q{local variable introduction}{next=next0∧q∈[p]next∧α}[next,s,r:=next ,s ,r |next =preverse.next0.p.q.e∧s =s] {assignment merge}{next=next0∧q∈[p]next∧α}next,s,r:=next0[p←e],p,next0.p[next,s,r:=next ,s ,r |next =preverse.next0.p.q.e∧s =s] {moving assertion}next,s,r:=next[p←e],p,next.p{next=next0[p←e]∧q∈[p]next∧s=p∧r=next0.s∧α} [next,s,r:=next ,s ,r |next =preverse.next0.p.q.e∧s =s] {while introduction}•Let I=q∈[p]next0∧s∈[p:q]next∧r=next0.s∧next=preverse.next0.p.s.e∧α•Let t=|s:q|next•next=next0[p←e]∧q∈[p]next∧s=p∧r=next0.s∧α⇒I17•I∧s=q⇒next=preverse.next0.p.q.e∧q=s next,s,r:=next[p←e],p,next.p{I}while s=q do{I∧s=q}[next,s,r:=next ,s ,r |I(next ,s ,r )∧t(next ,s ,r )<t]{I}{I∧s=q}{assignment introduction}•I∧s=q⇒I(next[r←s],r,next.r)∧t(next[r←s],r,next.r)<tnext,s,r:=next[p←e],p,next.p{I}while s=q donext,s,r:=next[r←s],r,next.r{I}{I∧s=q}{assertion refinement}next,s,r:=next[p←e],q,next[p]{q∈[p]next0∧s∈[p:q]next∧r=next0.s∧α}while s=q 
donext,s,r:=next[r←s],r,next.r{q∈[p]next0∧s∈[p:q]next∧r=next0.s∧α}{s=q∧q∈[p]next∧r=next0.q∧α}Lemma35(Refinement of reverse for linear lists)We have: {linear next.p}next,s:=reverse.next.pnext,s,r:=next[p←nil],p,next.pwhile r=nil donext,s,r:=next[r←s],r,next.rLemma36(Refinement of reverse for loop lists)We have:18{loop next.p}next,s:=reverse.next.pnext,s,r:=next[p←nil],p,next.pwhile r=nil donext,s,r:=next[r←s],r,next.r Proof.{next=next0∧loop next0.p∧q=last next.p∧h=next0.q}next,s:=reverse.next0.p={Definition32and multiple assignment}{next=next0∧loop next0.p∧q=last next.p∧h=next0.q}next,s:=preverse.next0.h.q.q,p ={Lemma33}{next=next0∧loop next0.p∧q=last next.p∧h=next0.q}next,s:=preverse.(preverse.next0.p.q.nil).h.p.q,p ={assignment merge}{next=next0∧loop next0.p∧q=last next.p∧h=next0.q}next,s:=preverse.next0.p.q.nil,qnext,s:=preverse.next.h.p.q,p{assertion introduction and assertion refinement by Lemma33}{next=next0∧loop next0.p∧q=last next.p∧h=next0.q}next,s:=preverse.next0.p.q.nil,q {linear next.h∧p=last next.h}next,s:=preverse.next.h.p.q,p {Lemma34}{next=next0∧loop next0.p∧q=last next.p∧h=next0.q}next,s:=preverse.next0.p.q.nil,qnext,s,r:=next[h←q],h,next.hwhile r=nil donext,s,r:=next[r←s],r,next.r{Lemma34withα=loop next.p∧h=next0.q}19。
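To connect the formal development above back to running code, here is a hedged C++ sketch of the list-reversal loop test that the report refines (Node, reverse, and hasLoop are our names; this illustrates the idea behind Theorems 28 and 32, it is not the report's refined program itself). In-place reversal terminates even on a list with a loop, and for a list of at least two nodes the reversed list keeps its original head exactly when the list has a loop; reversing a second time restores the original links.

    #include <iostream>

    struct Node { int info; Node* next; };

    // In-place reversal; returns the head of the reversed list.
    // The walk terminates even if the list has a loop: every next field is
    // rewritten at most twice, and the walk stops on reaching a null link.
    Node* reverse(Node* p) {
        Node* prev = nullptr;
        Node* cur  = p;
        while (cur != nullptr) {
            Node* nxt = cur->next;
            cur->next = prev;
            prev = cur;
            cur  = nxt;
        }
        return prev;
    }

    // Loop test by double reversal: for a list with at least two nodes, the
    // reversed list has the same head as the original exactly when the list
    // has a loop; the second reversal restores the original list.
    bool hasLoop(Node* p) {
        if (p == nullptr) return false;
        if (p->next == nullptr) return false;   // single node, linear
        if (p->next == p) return true;          // single node pointing to itself
        Node* q = reverse(p);
        bool loop = (q == p);
        reverse(q);                             // reverse back: the list is unchanged afterwards
        return loop;
    }

    int main() {
        Node c{3, nullptr}, b{2, &c}, a{1, &b};
        c.next = &a;                                          // close the cycle 1 -> 2 -> 3 -> 1
        std::cout << std::boolalpha << hasLoop(&a) << '\n';   // true
        c.next = nullptr;                                     // break the cycle: 1 -> 2 -> 3 -> nil
        std::cout << hasLoop(&a) << '\n';                     // false
        return 0;
    }

The report's semantic device, treating a pointer field as a function from pointers to values so that the field update p->next := q is just the function update next[p <- q] and allocation tracks a finite set of live indexes, can also be mimicked directly. A toy rendering under our own naming, using integers as pointer indexes with 0 playing the role of nil:

    #include <unordered_map>
    #include <unordered_set>

    using Ptr = int;
    const Ptr nil = 0;

    // Heap for a single pointer structure 'plist' with one link field 'next':
    // 'next' behaves as a total, default-nil function from indexes to indexes,
    // and 'plist' is the finite set of currently allocated indexes.
    struct Heap {
        std::unordered_map<Ptr, Ptr> next;
        std::unordered_set<Ptr> plist;
        Ptr fresh = 1;

        Ptr newPlist() {                       // new(p, plist): pick an unused non-nil index and add it
            Ptr p = fresh++;
            plist.insert(p);
            next[p] = nil;
            return p;
        }
        void setNext(Ptr p, Ptr q) { next[p] = q; }   // p->next := q is the function update next[p <- q]
        Ptr  getNext(Ptr p) const {                   // p->next is function application next.p
            auto it = next.find(p);
            return it == next.end() ? nil : it->second;
        }
        void dispose(Ptr p) { plist.erase(p); }       // dispose(p, plist): remove p from the allocated set
    };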