Formal methods in conformance testing a probabilistic re
- 格式:pdf
- 大小:189.22 KB
- 文档页数:17
Software Testing and Quality AssuranceChapter 1 Introducing Software Quality Assurance1. Please describe which tasks SQA Activities can be broken down into. (p1.4)A:SQA activities can be broken down into the following tasks:(1)Application of technical methods:This helps the development team to achieve high quality design specifications and develop high quality software design.(2)Conducting Formal T echnical Reviews (FTRs):These are structured review meetings in which a review team assesses the software product technically.(3)Enforcement of standards:This task is a combination of two subtasks:Process monitoring and Product Evaluation.(4)Control of change:It combines manual methods with automated tools to provide a mechanism for the control of change.This process ensures software quality by formalizing requests for change, evaluating the nature of the change, and controlling the impact of the change.(5)Measurement:The quality of a software product can be measured by using software metrics. (6)SQA Audits:These are conducted to inspect a process or a product in detail by comparing the process or product with established procedures and standards.Audits review the management, technical, and quality assurance processes being followed during software development.(7)Record keeping and reporting:This provides procedures for collecting and circulating SQA information.2. Which phases the SDLC consists of? (p1.5)A:(1)Software conception and initiation(2)Analysis(3)Design(4)Construction(5)T esting3. Please describe SQA activities in the Software Analysis Phase. (p1.6)A:(1)These involve reviewing of the Requirements Document created as part of the software requirement phase.(2)These ensure that the software requirements are complete, testable, and correctly expressed as functional, performance, and interface requirements.(3)The SQA activities in this phase can be recorded in the Software Requirement Review Checklist.4. Please describe SQA activities in the Software Design Phase. (p1.7)A:SQA activities in the software design phase involve assuring the following factors:(1)The design adheres to the approved design standards defined in the management plan created in the project initiation phase.(2)All software requirements are mapped to the software components.(3)All action items are resolved according to the review finding of the high-level design review documentation.(4)The approved design is placed under configuration management.(5)The development team follows approved design standards.(6)The allocated modules are included in the detailed design.(7)The results of design inspections are included in the design.(8)All action items are resolved according to the review findings of the detailed design review documentation.5. Please describe SQA activities in the Software Construction Phase. (p1.9)A:SQA activities in the software construction phase involve assuring the following factors: (1)Audit of the results of coding and design activities including the schedule in the software development plan(2)Audit of configuration management activities and the software development library (3)Audit of deliverable items(4)Audit of nonconformance reporting and corrective action system(5)FTR of code6. Please describe SQA activities in the Software Testing Phase. (p1.9)A:(1)These involve monitoring the testing process for conformance to standards.(2)These ensure that the software testing process is in accordance with plans and procedures.(3)T est documentation is reviewed for completeness and adherence to standards. 
(4)SQA activities in this phase also involve reviewing the test plan.(5)The observations from a test plan review are recorded in the Test Plan Review Checklist.7. Please describe the differents between two Quality activities, QA and QC. (p1.10) A:(1)QA is a planned and systematic set of activities that involve monitoring and improving the software development process.(2)QA is oriented to the prevention of defects rather than their detection and is used to implement the defined quality policy of an organization through the process of development and continuous improvement.(3)Quality Control (QC) is the process by which the quality of a product is compared with specific standards, and action is taken if the quality does not match the applicable standards.(4)QC is oriented to detection of defects rather than prevention.8. Please list some QA activities, and some QC activities. (p1.11)A:Quality Assurance (QA) activities include:(1)Quality Audit(2)Process definition(3)T ool selection(4)Training(5)Peer review(6)Requirements tracking(7)Quality metrics collectionQC activities include:(1)Inspection(2)T esting(3)Checkpoint review9. Please describe the Role of Metrics in SQA, and the four main steps of creating a metric. (p1.12)A:QA is a planned and systematic set of activities that involves monitoring and improving the software development process. Metrics are important in QA because they help measure and evaluate various aspects of the software development process These measurements help organizations improve their processesMetrics are crucial for the development process and project management because they enable you to measure the quality of each factor in a project. Measuring the quality of various factors helps determine if the project will meet time and quality requirements.In addition, over a period of time, metrics help track your progress. Y ou can use metrics to compare various projects of different sizes After calculating metrics, you need to communicate them to the management and to every person involved in the process. Then, you need to organize several meetings to analyze metrics. Based on. the analysis, .areas of improvement are identified and suggestions are invited to improve the processes Based on the suggestions, corrective action is decided and implemented. After implementing the changes, you need to again implement the processes to verify whether or not they solved the problem.The QA and develOpment team decides upon the metrics to be created and tracked in the beginning of a software project There are four main steps of creating a metricDefining the goal of the metric: It is important to define a goal because it helps design the metric The goal should be clear, measurable, and explicit For example,the goal can be to measure the number of defects reported by the clientIdentifying the requirements of the metric The requirements include human resource, data collection techniques, and methodologies used to process the data For example, the requirements of a metric that measures the number of defects reported by clients include the availability of quality assurance professionals and past data to specify severity criteria Identifying the organizational baseline value for the metric A baseline value is an average value that an organization identifies based on prior experience. A metric is designed to achieve the baseline value.1. Which of the following is a quality control activity?A. Quality auditB. T ools selectionC. TrainingD. Inspection2. 
Which of the following is a quality assurance activity?A. T estingB. T ools selectionC. InspectionD. Walkthrough3. Which of the following SQA activities involves assessing and review the prototype and product design for quality?A. Application of technical methodsB. Conducting FTRsC. Enforcement of standardsD. Control of change4. Which of the following SQA activities ensures that the development team follows the documented steps to complete a process?A. Application of technical methodsB. Conducting FTRsC. Enforcement of standardsD. Control of changeChapter 2 Introducing Software Testing1. Please describe the benefits of early testing. (p2.4)A:The benefits of early testing include:(1)Reduces the possibility of introducing errors when making changes.(2)Reduces the possibility of forgetting design decisions and conditions.(3)Saves the time required to reanalyze designs and code.(4)Reduces the possibility of similar errors by providing early feedback.(5)Reduces the number of defects that leak through various phases of SDLC, which helps reduce the defect tracking overhead.2. Please describe the steps of Testing Life Cycle. (p2.6)A:(1)Risk analysis(2)Planning progress(3)T est design(4)Performing tests(5)Defect tracking and management(6)Quantitative measurement(7)T est reporting3. Please describe the Roles in a testing team, and their responsibility of each role. (p2.8)A:(1)The key roles in a testing team are:T est managerT est leadT est environment specialistT ester(2)A test manager plans and coordinates the test process for a project and is responsible for:a. Representing the testing team for interdepartmental interactionsb. Interacting with customers and vendors, if requiredc. Recruiting, supervising, and training staffd. Creating a test plane. Creating the budget and schedule for the test process, including test-effort estimationsf. Acquiring hardware and software for the test environmentg. Ensuring proper configuration management of the test environment and the test producth. Defining the test processi. Tracking progress of the test processk. Coordinating pre- and post-test meetings(3)A test lead directs the testing team and is responsible for:a. Providing technical leadership for the test programb. Providing support for customer interface, recruiting, test-tool introduction, test plan execution, staff supervision, and cost and progress status reportingc. Verifying the quality of the requirements, including testability, requirement definition,test design, test-script and test-data development, test automation, test-environment configuration, test-script configuration management, and test executiond. Interacting with test-tool vendors to identify the best ways to leverage test tools on the projecte. Receiving information about the latest test approaches and tools, and transferring this knowledge to the test teamf. Conducting test-design and test-procedure walkthroughs and inspectionsg. Implementing test-process improvements based on surveys conductedh. Tracing the test procedures to the test requirements by using the Requirements Traceability Matrixi. Implementing the test processj. Ensuring that the test-product documentation is complete(4)A test environment specialist specializes in setting up the test environment and is responsible for:a. Installing test tools and establishing the test-tool environmentb. Creating and controlling the test environment by using environment setup scriptsc. Creating and maintaining the test databased. 
Maintaining a requirements hierarchy within the test-tool environment(5)A tester helps deliver a quality product and is responsible for the following activities during the testing process:a. Developing test cases and proceduresb. Creating test datac. Reviewing analysis and design artifactsd. Executing testse. Using automated tools for executing testsf. Preparing test documentationg. Tracking defectsh. Reporting test results4. Please describe the key performance areas of a tester. (p2.11)A:(1) Defect-detection efficiency(2)Schedule slippage in test case design and test execution(3)Productivity (total number of test cases designed or executed, depending on the nature of project)(4)Number of weighted defects in user acceptance testing(5) Initiatives taken in:Self developmentDeveloping toolsCertificationsT ools learned5. Please describe the main technical skills and behavior skills of a tester. (p2.11) A:(1)T echnical: The technical skills include the following:a. Knowledge of software development, operation, and maintenance processesb. Knowledge of the applicationc. Knowledge of tools that aid in software developmentd. Knowledge of project managemente. Knowledge of the testing processf. Knowledge of test process documentation(2)Behavioral: The behavioral skills include the following:a. Sensitivity to small detailsb. T olerance for chaosc. Organized approach1. Errors that are undetected at a particular stage in the development life cycle and are carried forward to next stage are called .A. Leakage errorsB. Logical errorsC. Debugging errorsD. Integration errors2. Which of the following cannot be achieved by testing?A. Detecting errors in a software productB. Verifying that a software product conforms to its requirementsC. Showing that a software product has no defectsD. Establishing confidence that a program does what it should3. Which of the following is the correct sequence of phases in the testing life cycle?A. Risk analysis, planning, test design, performing tests, defect tracking and management, quantitative measurement, test reportingB. Planning, risk analysis, test design, performing tests, defect tracking and management, quantitative measurement, test reportingC. Planning, risk analysis, test design, performing tests, test reporting, defect tracking and management, quantitative measurementD. Risk analysis, planning, test design, performing tests, quantitative measurement, test reporting, defect tracking and management4. In which phase of the testing life cycle are defects communicated to the development team?A. Defect tracking and managementB. Performing testsC. T est reportingD. Quantitative measurementChapter 3 Planning Software Tests1. Please describe which phases the test planning process includes. (p3.3)A:The test planning process includes the following phases:(1)Pre-planning(2)T est planning(3)Post-planning2. In the pre-planning phase, the test specifications are identified. Which components are included in the test specifications? (p3.3)A:This phase identifies the test specifications. The test specifications include the following components:a. T est objectivesb. T est assumptionsc. T est success/acceptance criteriad. T est entrance/exit criteria3. Which activities are included in the test planning phase? (p3.4)A:The test planning phase includes the following activities:(1) Performing requirements traceability(2) Estimating test effort(3) Scheduling the test iterations(4) Planning resources(5) Identifying testing approaches(6) Defining test quality control4. 
Which components the test plan should focused on? (p3.5)A:The test plan focuses on the following components:(1) Scope of test(2) T est objectives(3) List of assumptions(4) Results of risk analysis(5) Resource allocation(6) T est schedule(7) T est design(8) T est environment(9) T esting tools and techniques(10)T est completion criteria5. Which steps should be followed when create a test plan? (p3.5)A:T o create a test plan, the steps to be followed are:(1)Forming a test team(2)Understanding project risks(3)Building the test plan6. Which activities should be involved when developing a test plan? (p3.6)A:The development of a test plan involves the following activities:1. Documenting test objectives2. Creating a test matrix3. Writing the test plan7. The post-planning phase of the test planning process includes identifying a configuration management plan for the software project. Which activities are included in the configuration management? (p3.13)A:Configuration management includes the following activities:(1)Baseline control(2)Software configuration identification(3)Configuration control(4)Configuration status accounting(5)Software configuration authentication(6)Software development libraries8. Please describe the V model and W model. (p3.14)A:The cost of correcting a defect that is detected early in the development life cycle is much less than the cost of correcting a defect detected at a later stage. Therefore, to reducethe cost of correcting defects, you must try locating defects early in the development life cycle..The V model proposes an approach to software development in which both the software development process and the software test process begin simultaneously When the project starts, the development team starts the software development process and the testmg team starts planmng for the test process This planning is based on the documents created during the development processThe V model places the development phases such as requirements, analysis, design, and coding on one side of the V The various types of testing such as umt, integration, system, and acceptance, are placed on the other side of the V.Unit testing involves testing each individual unit of software to detect errors in its code. A developer or a peer programmer typically does unit testingIntegration testing involves testing two or more previously tested and accepted units to illustrate that they work together when combined into a single entity Integration testing exposes faults in interfaces and in the interaction between integrated components System testing is the process of testing a completely integrated system to verify that itmeets specified requirements This testing is performed to identify defects that will surface only when a complete system is assembled. System testing includes testing for performance, security, and recovery from failure.Acceptance testing is the process in which actual users test a complete information system to determine whether it satisfies the acceptance criteria This testing enables the customer to determine whether to accept or reject the system.1. According to the V model, documents created during the analysis phase can be used to define the .A. System test criteriaB. Acceptance criteriaC. Integration test criteriaD. Unit test criteria2. Which of the following configuration management activities involve performing configuration reviews and audits?A. Baseline controlB. Configuration controlC. Configuration status accountingD. Software configuration authentication3. 
Which of the following activities is performed as part of the pre-planning phase of testing?A. Documenting risks related to testingB. Creating test matrixC. Defining the success/acceptance test criteriaD. Forming a testing team4. Which of the following is a dynamic testing technique?A. ReviewB. WalkthroughC. AuditD. T estingChapter 4 Identifying Test Approaches1. Please describe static testing and dynamic testing. (p4.3)A:Static testing: Static testing verifies the conformance of a software system to its specification without executing the code. This testing involves analysis of the source text by individuals.Dynamic testing: Dynamic testing involves executing the source code to check if it works as expected.2. Please describe the types of errors can be located by using functional approaches. (p4.3)A:Functional approaches are useful for locating the following types of errors:Incorrect functionalityMissing functionalityInterface errorsIncorrect specificationsInitialization errorsT ermination errors3. Please describe the benefits and limitations of using functional test approaches. (p4.3)A:The benefits of using functional test approaches are:●They are effective for large units of code.●T esters do not need any knowledge of implementation, including specificprogramming languages.●T esters and developers can be independent of each other.●T ests are conducted from a user's point of view.●T ests help easily identify ambiguities or inconsistencies in the specifications.●T est cases can be designed as soon as the specifications are complete.●The limitations of functional test approaches are:●Can leave many program paths untested.●Cannot be used for complex segments of code. Therefore, such segments cancontain errors.●Cannot determine a reason for failure.●Difficult to design tests without clear and concise specifications.4. Please describe the benefits and limitations of using structural test approaches. (p4.5)A:The benefits of structural testing approaches are:●Useful in locating non-specified functions that cannot be detected using functionalapproaches●More effective than functional approaches for small modulesThe limitations of structural test approaches are:● A program contains a large number of logical paths. It is not practically possible tocheck all logical paths because it involves time and effort. Y ou can test only some important logical paths.●It is necessary for the tester to know programming languages.●These approaches do not ensure meeting user requirements.5. Please describe which types of testing structural test approaches and functional test approaches should be applied to. Four basic types of testing are: Unit Testing, Integration Testing, System Testing, and Acceptance Testing. (p4.5)6. Please describe the Structural Testing Techniques. (p4.6)A:The structural testing techniques are:●Stress testing: Involves testing the system in a manner that demands resources inabnormal quantity, frequency, or volume.●Recovery testing: Verifies the ability of the system to recover from varying degrees offailure.●Operations testing: Ensures that when an application is developed, it is tested andthen integrated into the operating environment.●Compliance testing: Verifies whether the application is developed in accordance withinformation technology standards, procedures, and guidelines.●Security testing: Identifies security defects in the software.7. Please describe the Functional Testing Techniques. 
(p4.8)A:The functional testing techniques are:●Requirements testing: This type of testing is conducted to verify that a systemperforms correctly over a continuous period of time.●Regression testing: When a change is made to one segment of a system.●Error-handling testing: This type of testing is done by a group of individuals who thinknegatively to anticipate what can go wrong with the system.●Manual-support testing: This involves testing the interface between users and theapplication system.●Intersystem testing: Applications are often interconnected with other applications.●Control testing: This type of testing is conducted to ensure that processing isperformed in accordance with the intent of the management.●Parallel testing: When a new system is developed.1. Which of the following is a static testing technique?A. Black-box testingB. White-box testingC. ReviewsD. Regression testing2. Which of the following is a structural testing technique? A. Unit testing B. System testing C. Acceptance testing D. Requirements testing3 Which of the following is a functional testing technique? A. Stress testing B. Executing testing C. Recovery testing D. Regression testing4. Which of the following is a dynamic testing technique? A. ReviewB. WalkthroughC. AuditD. White-box testingChapter 5 Designing the Test Environment1. Please describe the test process and its minor process activities by using ETVX diagram. (p5.4)A:A test process provides a systematic approach to accomplish the objective of testing. A test process can also be defined as a set of minor process activities within major process activities represented by the Entry-T ask-Verification-Exit (ETVX) diagram.2. Please describe the Life Cycle of a Test Process. (p5.5) A:There are various phases in the life cycle of a test process. These phases are as follows: ●System study: The purpose of the system study phase is to understand the testprocess and define its requirements.●Design test cases: The purpose of this phase is to design and build a set of intelligenttest cases for the test process.●Execution: The purpose of the execution phase is to execute the test cases preparedin the design test cases phase.●Wind-up: The purpose of the wind-up phase is to provide an organized and formalwrap up of the test execution phase.3. Please describe the criteria affecting the selection of an appropriate testing tool. (p5.6)A:The criteria affecting the selection of an appropriate testing tool are as follows:●The objectives of testing should be accomplished successfully.●The tool should be easy to use.●The time spent in installing and learning about the tool should be the least.●The tool should be compatible with the platform and software used for testing.●The purchase cost of the tool should be within the project budget.4. What steps the testing team should follow while designing the test environment? (p5.7)A:While designing the test environment, the testing team should follow the given steps:●Gather information about proposed test environment.●Document the test environment specifications for the project.●Simulate the server environment.●Simulate the client environment.●Design domains for testing.●Keep the test logs and reports safe for the future.5. What is a test bed? What are the benefits of test beds? And what are the factors that affect test bed decisions? 
(p5.8)A:A test bed is a test environment that contains the hardware, simulators, software tools, and other support elements necessary for conducting the test.Benefits of test beds are:●Observing the impact of running applications in an environment changed by softwarepatches,new software installed, ornew hardware purchased before these are used on an everyday basis.●Developing off-line maintenance procedures that help minimize non-functionalperiods of the application software.The following factors affect test bed decisions:●Budget and resource constraints: Setting up a test bed requires specific hardware,software, and other resources.●T echnical support constraints: Maintaining a test bed requires technical support fromspecialized personnel.6. Please describe the testing tool types. Which ones are the Manual tools? (p5.14) A:Some of the important testing tools are as follows:●Unit testing tools●Regression testing tools●Load testing tools●Traceability tools●Code coverage tools●Manual toolsThe most frequently-used manual tools are as follows:●Checklists●T est scripts●Decision T ables1. Which testing is used when there is high risk of a recent change affecting unchanged areas of the application software?A. Parallel testingB. Control testingC. Requirement testingD. Regression testing2. Which of the following steps is not a part of the process of setting up a regression-testing tool?A. Designing the framework of testingB. Identifying the utility functions related to the application softwareC. Configuring an isolated network with servers of specified configurationD. Designing test scenarios3. Which of the following statement holds true for test bed?A. A test bed is the key to quality and stability in a software testing processB. A test bed captures the input of test processC. A test bed helps define a test script in exact terms by defining the hardware and software requirements.D. A test bed executes a test case on time.4. Which testing ensures that the operations of an application software continue after a disaster?A. Recovery testingB. Operations testingC. Compliance testing。
在CNC中生成step文件的方案摘要STEP-NC (ISO14649)是STEP标准在CNC 领域的新扩展,重新规定了CAX 与CNC 之间的接口,将产品的设计信息与制造信息完整联系起来,消除CAM 的后处理系统,实现了CAX与CNC之间双向无缝信息流通,极大地方便了系统间的信息交换和共享。
根据STEP TOOL公司的预测,STEP-NC将在未来十年内逐渐成为数控技术的最终标准。
论文在详细研究STEP-NC体系原理的基础上,针对一个具体的CAD文件,开发了“STEP-NC程序生成器V1.0”。
目的在于从程序的层面,用软件方式来实现CAD文件的特征自动提取,最终生成STEP-NC程序源代码。
程序生成器V1.0采用面向对象的软件开发思想,基于Visual C++6.0 中MFC 的Dialog 形式建立,利用MFC 类库的软件资源,实现系统预定的功能。
系统主要部分有2个功能模块,第一模块是特征提取,利用DXF图形交换文件进行二次开发,对CAD文件进行基于STEP-NC制造特征的提取,并生成与之相应的EXPRESS实体语句。
第二模块是ISO14649源代码自动生成,在Visual C++6.0下,实现由零件特征EXPRESS语句生成相应程序源代码,以及零件完整的ISO14649源代码生成。
论文重点分析了STEP-NC体系结构特点、建模语言EXPRESS、数据模型及加工程序文件,构造利于加工信息表述的系统核心数据库,建立了STEP-NC 的形式化描述语言EXPRESS到C++语言的映射。
为“STEP-NC程序生成器V1.0”的进一步研究和开发工作,做了必要的基础性的研究工作。
图[31] 表[12] 参[65]关键词:产品数据交换标准;ISO14649;制造特征;STEP-NC;VC++分类号:(TH164)AbstractSTEP (ISO14649) expends STEP standards into the area of CNC and redefinition the data interface between CNC and CAX and integrates the data information of product design and manufacturing well. It eliminates post processor system of CAM and realizes the seamless bir-directional information stream between CAM system and CNC. It makes information exchanging and sharing among systems conveniently. According to STEP TOOL's forecast, STEP will become the ultimate standard of NC technology gradually in the next decade.On the base of studying priceples of STEP system, aiming at specific CAD file, “STEP-NC program generator V1.0”is invented, which intends to use software method in the program level to realize automatic extraction of features, and generate STEP-NC source code eventually.Based on MFC Dialog form of Visual C++ 6.0 and utilitiying object-oriented thinking of software and MFC software resource,the Program generator V1.0 is established to realize anticipated function of program system. The system has two functional modules mainly. The the first module is a feature extraction which uses DXF drawing exchanging files to develop secondary to extract features of CAD file based on STEP-NC feature and generates corresponding ENTITY EXPRESS statement. The second functional module is about automatic generation of source code of ISO 14649. under Visual C++6.0 developing envirionment,generaing corresponding resoure of part feature from a part feature EXPRESS statement and generating complete ISO14649 source code.Papers focuses on STEP architecture features, modeling language EXPRESS, data-model and constructs core database which can express information well. And mapping from formal description of the C++ language to EXPRESS language is established. doing the necessary basic research for "STEP-NC Programe Generator V1.0" further researching and developing work.Figure [31] table [12] reference [65]Key Words:STEP;ISO14649;Manufacturing feature;STEP-NC;VC++ Chinese books catalog:TH164目录摘要 (I)Abstract (III)注释说明清单 (VIII)引言 (IX)1 绪论 (1)1.1 STEP标准简介 (1)1.1.1 传统G、M代码不足 (1)1.1.2 STEP的产生与现状 (2)1.1.3 STEP标准的体系结构及特点 (2)1.2 产品数据交换标准STEP-NC (7)1.2.1 STEP的延伸——STEP-NC (7)1.2.2 STEP-NC研究现状及展望 (9)1.3 课题来源及意义 (11)1.4 论文主要研究内容 (12)2 STEP-NC体系及程序编制 (13)2.1 STEP-NC系统架构、原理 (13)2.1.1 系统架构 (13)2.1.2 STEP-NC的基本原理 (14)2.1.3 STEP-NC的几个概念 (14)2.1.4 STEP-NC特点 (16)2.2 形式化建模语言EXPRESS (17)2.2.1 EXPRESS简介 (17)2.2.2 EXPRESS语言的数据类型 (18)2.3 STEP-NC产品数据模型和编程 (19)2.3.1 数据模型基本架构 (19)2.3.2 编程原则 (21)2.3.3 产品数据描述 (21)2.3.4 ISO 14649文件头和数据段 (22)2.3.5 ISO 14649实现方法 (23)2.4 数据存取接口——SDAI (25)2.5 相关的辅助软件介绍 (26)3 零件特征提取和STEP-NC文件映射实现 (27)3.1 集成开发环境VC++6.0 (27)3.2 零件特征数据库建立及其链接 (27)3.2.1 零件特征数据库选择 (27)3.2.2 链接方式选取 (28)3.3 采用ODBC创建和注册数据源 (29)3.4 基于加工特征的信息提取 (30)3.5 EXPRESS与C++之间的数据映射及实现 (39)3.5.1 数据类型在C++中的映射 (39)3.5.2 EXPRESS与C++映射的实现 (41)4 STEP-NC程序生成器V1.0及应用 (46)4.1 零件分析和程序架构 (46)4.1.1 零件分析和特征提取 (46)4.1.2 刀具轨迹分析 (50)4.1.3 程序主要内容 (51)4.2程序运行及解释 (57)5 论文总结 (64)5.1 论文工作总结 (64)5.2 不足和以后工作设想 (65)参考文献 (66)致谢.................................................................................... 错误!未定义书签。
说明:在以下答案中,红色问题为范围之外的问题,各位同学可以不用阅读。
蓝色和黑色字体的题目需要给位阅读并背诵。
1. What is quality?The essential character of something, an inherent or distinguishing character, degree or grade of excellence. Quality means, “Meeting requirements.” Whether or not the product or service does what the customer needs. In another word Quality is, “fit for use.”2. Explain “Prevention”&“Detection”Prevention means to prevent quality defects or deficiencies in the first place, and to make the products and processes assessable by quality management program. Prevention decreases production costs because the sooner a defect is located and corrected, the less costly it will be in the long run.The greatest payback is with prevention.3. Explain “Verification”&“Validation”The overall goal of verification is to ensure that each software product developed throughout the software life cycle meets the customer’s needs and objectives as specified in the software requirements document.Validation checks that the system meets the customer’s requirements at the end of the life cycle.Usually verifications take place at the end of each phase. Validations take place just before the product is delivered.It is good practice to combine verification with validation in the testing process.4. Cost of Quality includes _______The cost of quality includes all costs incurred in the pursuit of quality or in performing quality-related activities.The total cost of quality is the sum of four component costs:Prevention cost, appraisal cost, internal failure cost, external failure cost.5. Explain “Prevention Cost” with examplePrevention costs consist of actions taken to prevent defects from occurring the first place.Quality planning, formal technical reviews, test equipment, training …6. Explain “Appraisal Cost” with exampleAppraisal costs consist of measuring, evaluating, and auditing products or services for conformance to standards and specifications.Inspection and testing of products, Equipment calibration and maintenance, Processing and reporting inspection data…7.What are the components of software quality assurance?Software testing, quality control, software configuration management8. What are the elements of software configuration management?Component identification, version control, configuration building, change control.9. Why change control is the key role in the Software ConfigurationManagement?SCM answers Who, What, When, and Why.Who made the changes?What changes were made to the software?When were the changes made?Why were the changes made?10. What are the main responsibilities of SQA group?The SQA group works with the software project during its early stages to establish plans, standards, and procedures that will add value to the software project and satisfy the project constraints and the organization’s policies.The SQA group reviews project tasks and audits software work products throughout the SDLC lifecycle and provides management with visibility as to whether the software project is adhering to its established objectives and standards.11. Why do Dev people think that SQA group arespies tomanagement?12. What is CMMI?Capability Maturity Model Integration——即软件能力成熟度模型集成其目的是帮助软件企业对软件工程过程进行管理和改进,增强开发与改进能力,从而能按时地、不超预算地开发出高质量的软件。
I. INTRODUCTIONThis guidance provides recommendations to applicants on submitting analytical procedures, validation data, and samples to support the documentation of the identity, strength, quality, purity, and potency of drug substances and drug products.1. 绪论本指南旨在为申请者提供建议,以帮助其提交分析方法,方法验证资料和样品用于支持原料药和制剂的认定,剂量,质量,纯度和效力方面的文件。
This guidance is intended to assist applicants in assembling information, submitting samples, and presenting data to support analytical methodologies. The recommendations apply to drug substances and drug products covered in new drug applications (NDAs), abbreviated new drug applications (ANDAs), biologics license applications (BLAs), product license applications (PLAs), and supplements to these applications.本指南旨在帮助申请者收集资料,递交样品并资料以支持分析方法。
这些建议适用于NDA,ANDA,BLA,PLA及其它们的补充中所涉及的原料药和制剂。
The principles also apply to drug substances and drug products covered in Type II drug master files (DMFs). If a different approach is chosen, the applicant is encouraged to discuss the matter in advance with the center with product jurisdiction to prevent the expenditure of resources on preparing a submission that may later be determined to be unacceptable.这些原则同样适用于二类DMF所涉及的原料药和制剂。
201507FDA行业指南:分析方法验证(中英文)(中)A. Principle/Scope 原理/范围A description of the basic principles of the analytical test/technology (i.e., separation, detection); target analyte(s) and sample(s) type (e.g., drug substance, drug product, impurities or compounds in biological fluids).分析测试/技术(即分离、检测)基本原因的描述;目标分析物和样品类型(例如,原料药、制剂、杂质或生物流体中的化合物)。
B. Apparatus/Equipment 仪器/设备All required qualified equipment and components (e.g., instrument type, detector, column type, dimensions, and alternative column, filter type).所有需要的确认过的仪器和组件(例如,仪器类型、检测器、柱子类型、尺寸和可替代的柱子、过滤器类型)。
C. Operating Parameters 运行参数Qualified optimal settings and ranges (include allowed adjustments supported by compendial sources or development and/or validation studies) critical to the analysis (e.g., flow rate, components temperatures, run time, detector settings, gradient, head space sampler). A drawing with experimental configuration and integration parameters may be used, as applicable.确认过的优化的设置和范围(包括来自药典或研发和/或验证研究的允许调整),对于分析过程非常关键(例如,流速、部件温度、运行时间、检测器设置、梯度、顶空进样器)。
An Overview of OSI Conformance TestingJan TretmansFormal Methods&Tools groupUniversity of TwenteJanuary25,20011IntroductionThe development of distributed systems,in which the computer functionality,such as processing functions,information storage,and human interaction,is distributed over different computer systems,raises the need for exchanging information between these systems.To have computer systems communicate successfully,the communication must occur according to well-defined rules.A protocol describes the rules with which computer systems have to comply in their communication with other computer systems.A protocol entity is that part of a computer system that takes care of the local responsibilities in communicating according to the protocol.To have successful communication among computer systems,also from different manufac-turers,many protocols are not developed in isolation,but within groups of manufacturers and users,with the aim of standardizing such protocols.This has led for instance to the development of the OSI Reference Model for Open Systems[ISO84],which serves as a framework for a set of standards that enable computer systems to communicate.How-ever,to assure successful communication it is not sufficient to specify and standardize communication protocols.Implementations of these protocol standards are required for which it must be ascertained that these implementations really behave according to these standards protocol specifications,i.e.,conform to these standards.One way to do this is by testing these protocol implementations.This activity is known as protocol confor-mance testing.This note gives an introduction into some of the important concepts of protocol con-formance testing.It is largely based on the standard ISO9646:“Conformance Testing Methodology and Framework”[ISO91].This standard was originally developed to give a framework and define common terminology for testing of OSI systems.Although OSI protocols themselves are not often used anymore the concepts for testing their imple-mentations have a broader applicability and can,and actually are,also used in testing of other kinds of protocol systems.This note starts with a discussion of what conformance testing is,after which the main phases and aspects of conformance testing according to[ISO91]are presented.12Conformance TestingTesting is the process of trying tofind errors in a system implementation by means of experimentation.The experimentation is usually carried out in a special environment, where normal and exceptional use is simulated.The aim of testing is to gain confidence that during normal use the system will work satisfactory:since testing of realistic systems can never be exhaustive,because systems can only be tested during a restricted period of time,testing cannot ensure complete correctness of an implementation.It can only show the presence of errors,not their absence.Protocol conformance testing is a kind of testing where an implementation of a protocol entity is tested with respect to its specification.The aim is to gain confidence in the correct functioning of the implementation with respect to a given specification,and hence to improve the probability that the protocol implementation will communicate successfully with peer protocol entities.To conduct testing,experiments,or tests must be systematically devised.These tests are applied to an implementation,and the test outcomes are compared with the expected or calculated outcomes.Based on the results of the comparison a verdict can be formulated 
about the correctness of the implementation,which,if negative,can be used for improving the implementation.In testing,in particular in software testing(see e.g.,[Mye79,Whi87]),a distinction is made between functional testing and structural testing.Structural testing,also referred to as white-box testing,is based on the internal structure of a computer program.The aim is to exercise thoroughly the program code,e.g.,by executing each statement at least once,or by trying to execute all paths through the program code taking into account decisions,branches,loops,etc.Tests are derived from the program code.Structural testing is most used in the early stages of program development.With functional testing the emphasis is on testing the externally observed functionality of a program based on its specification.Functional testing is also called black-box testing:a system is treated as a black box,whose functionality is checked by observing it,i.e.,no reference is made to the internal structure of the program.The main goal is to determine whether the right(with respect to the specification)product has been built.Functional tests are derived from the specification.Consequently,the most important prerequisite is a precise,complete and clear specification.Functional testing is usually concentrated in the later stages of program development.Protocol conformance testing is a kind of functional testing:an implementation of a protocol entity is solely tested for conformance with respect to the requirements given in its specification.The idea is that only systems with correctly implemented protocols can communicate successfully with peer entities.Often the specification is(internationally) standardized and then the goal is to certify the implementation with respect to the standard.Only the observable behaviour of the protocol implementation is tested,i.e., the interactions of an implementation with its environment;no reference is made to the internal structure of the protocol implementation. 
e.g.,memory consumption.In practical conformance testing the internal structure of the entity is usually not even accessible to the tester:the computer system in which the entity under test is located need not be accessible,e.g.,when testing is performed by an independent,accredited test laboratory,that has no access to the implementation details of an implementation.2Conformance testing in the development trajectory Conformance testing is only concerned with checking a protocol implementation,i.e.,a software product(executing code),with respect to its specification.This implies that a specification must be available and,moreover,that the specification is correct and valid.Checking the correctness of a specification is referred to as protocol validation.It involves checking that the specifica-tion indeed prescribes the intended behaviour.For testing,validity of the specification is assumed;it is not the topic of conformance testing:if the specification contains a design error then,if the conformance testing process is correctly performed,each conforming implementation will have the same error.Other kinds of protocol testing Since in practice it turns out that functional testing of an implementation in isolation,i.e.,conformance testing,does not guarantee successful communication between systems,products are also tested in a realistic environment,for example in a model of a communication network.In this kind of testing the interac-tion with other computer systems can be examined in more detail.It is referred to as interoperability testing.Apart from testing the functional behaviour of a protocol implementation,other kinds of testing test other aspects of a protocol,e.g.,performance testing to measure the performance characteristics of an implementation,robustness testing to examine the im-plementation’s behaviour in an erroneously behaving environment,and reliability testing to check whether the implementation continues to work correctly during a certain period of time.Parties involved Conformance testing can be performed by different parties.First, the implementer or supplier of a product tests its product before selling ers of products,or their representative organizations,test products for their correct function-ing.Telecommunications administrations check products before connecting them to their networks to prevent malfunctioning of a network caused by incorrectly implemented prod-ucts.Finally,independent third party test laboratories can perform conformance tests for any of the previously mentioned parties.A system of accreditation allows testing lab-oratories to certify implementations that they have tested and judged to be conforming. 
Certification by accredited testing laboratories makes repeated testing by supplier,buyer, and network owner superfluous.Also repetition of tests by different network operators in different countries is not necessary if we can rely on testing having been performed according to well-defined procedures and standards.Standardization of conformance testing If implementations of the same(interna-tionally)standardized protocol are tested it should not occur that different test labo-ratories decide differently about conformance of the same implementation.Ideally,it should not be necessary that the same product is tested more than once by different testing laboratories.This is possible if testing is based on generally accepted principles, using generally accepted tests,and leading to generally accepted test results.To achieve this the International Organization for Standardization(ISO),together with the CCITT (now ITU-T),has developed a standard for conformance testing of Open Systems.This is the standard ISO IS-9646:“OSI Conformance Testing Methodology and Framework”[ISO91].The purpose of this standard is‘to define the methodology,to provide a framework for specifying conformance test suites,and to define the procedures to be followed during3testing’,leading to‘comparability and wide acceptance of test results produced by differ-ent test laboratories,and thereby minimizing the need for repeated conformance testing of the same system’[ISO91,part1,Introduction].The standard does not specify tests for specific protocols,but it defines a framework in which such tests should be devel-oped,and it gives directions for the execution of such tests.The standard recommends that sets of tests,called test suites,be developed and standardized for all standardized protocols.3Overview of IS-9646The current practice of protocol conformance testing is based on the standard ISO IS-9646,“OSI Conformance Testing Methodology and Framework”[ISO91,Ray87].This standard defines a methodology and framework for protocol conformance testing assum-ing that protocols are specified using a natural language.It was originally developed for OSI protocols,but it is also used for testing other kinds protocols,e.g.,ISDN and ATM protocols.The standard consists offive parts,each defining an aspect of conformance testing:◦part1is an introduction and deals with the general concepts;◦part2describes the process of abstract test suite specification;◦part3defines the test notation TTCN;◦part4deals with the execution of tests;◦part5describes the requirements on test laboratories and their clients during the conformance assessment process.An overview of IS-9646is given with most attention devoted to parts1and2,i.e.,to the generation and specification of test suites.3.1The Conformance Testing ProcessIn the process of conformance testing three phases are distinguished[ISO91,Part1, Section1.3].They are depicted in Figure1,together with the activity of protocol imple-mentation.Thefirst phase is the specification of an abstract test suite for a particular (OSI)protocol.We refer to it by test generation or test derivation.This test suite is abstract in the sense that tests are developed independently of any implementation.It is intended that abstract test suites of standardized protocols are standardized themselves. The second phase consists of the realization of the means of executing specific test suites. 
It is referred to as test implementation.The abstract test cases of the abstract test suite are transformed into executable tests that can be executed or interpreted on a real testing device or test system.The peculiarities of the testing environment and the implemen-tation,which during testing is called IUT(Implementation Under Test),are taken into account.The last phase is the test execution The implemented test cases are executed with a particular IUT and the resulting behaviour of the IUT is observed.This leads to the assignment of a verdict about conformance of the IUT with respect to the standard protocol specification.The results of the test execution are documented in the protocol conformance test report(PCTR).In the next subsections these phases are described in more detail.This leads to a more4protocolIUT implementationverdicttest generationstandard conformance test suiteimplementationtest implementationprocess test executionconformance testingprocess of specification protocolstandardFigure 1:Global overview of the conformance testing process.5detailed view of the conformance testing process given in Figure2.3.2A Conforming ImplementationBefore an implementation can be tested for conformance it must be defined what con-formance is:What does it mean that an implementation conforms to its specification? The definition of what constitutes a conforming implementation determines what should be tested.IS-9646states that a system‘exhibits conformance if it complies with the conformance requirements of the applicable...standard’[ISO91,Part1,Section5.1]. This means that a correct implementation is one which satisfies all conformance require-ments,and that these conformance requirements must be mentioned explicitly in the protocol standard.Conformance requirements express what a conforming implementa-tion shall do(positively specified requirements),and what it shall not do(negatively specified requirements).A complication arises by the fact that a protocol standard does not uniquely specify one protocol,but a class of protocols.Most standards leave open a lot of options,which may or may not be implemented in a particular protocol implementation,but which,if implemented,must be implemented correctly.An implementer selects a set of options for implementation.All implemented options of a specific protocol implementation are listed by the implementer in the PICS,the Protocol Implementation Conformance Statement, so that the tester knows which options have to be tested.To assist in producing the PICS a PICS proforma is attached to the protocol standard.This is a questionnaire in which all possibilities for the selection of options are enumerated.Restrictions on the selection of options are given in the static conformance requirements of a standard.They define requirements on the minimum capabilities that an implemen-tation shall provide,and on the combination and consistency of different options. Example3.1In the ISO/OSI Transport Protocol[ISO86]five classes(0..4)are distinguished.In a particular implementation not all classes need be implemented.However,the choice is not completely free,e.g.,if class4is implemented also class2must be implemented. Such a restriction is recorded in the protocol standard as part of the static conformance requirements.In the PICS the implemented classes of a particular implementation are documented.2 The main part of a protocol standard consists of dynamic conformance requirements. 
They define requirements on the observable behaviour of implementations in the com-munication with their environment.They concern the allowed orderings of observable events,such as sending and receiving of PDUs(protocol data units)and ASPs(abstract service primitives),the coding of information in the PDUs,and the relation between information content of different PDUs.Example3.2A dynamic conformance requirement of the ISO/OSI Transport Protocol is the require-ment that after receiving a T-PDU-connect-request from the peer entity either the user of the Transport entity is notified by means of a T-SP-connect-indication service-primitive, or a T-PDU-disconnect-request is sent to the peer entity.2 Summarizing,the definition of a conforming implementation is[ISO91,Part1,Sections6certificationstatic conformance protocolimplementationIUTexecutable test suite process conformance requirementsconformance requirementsproforma dynamic purposes test suite implementation standard protocol specificationstatic PICStest generic test methods notationstandardized test standardized abstract test suitebasic interconnection capability behaviour test test selectionbasic interconnection capability behaviourimplementationPICS PIXITPIXIT proformareviewtest execution analysis of resultstest report verdictFigure 2:Detailed overview of the conformance testing process.73.4.10and5.6]:‘A conforming implementation is one which satisfies both static and dy-namic conformance requirements,consistent with the capabilities stated inthe PICS.’Conformance testing consists of checking whether an IUT satisfies all static and dynamic conformance requirements.For the static conformance requirements this means a re-viewing process of the PICS delivered with the IUT.This is referred to as the static conformance review.For the dynamic conformance requirements this means running a number of tests against the IUT.The specification of one test is referred to as a test case.A test suite is a complete set of test cases,i.e.,a set that tests all dynamic conformance requirements.3.3Test GenerationThefirst phase of the conformance testing process is test generation.It consists of sys-tematically deriving test cases from a protocol specification.The goal is to develop an abstract test suite,i.e.,a specification of a test suite that is implementation indepen-dent,specified in a well-defined test notation language,suitable for standardization,and testing all aspects of the protocol in sufficient detail.Since the relevance of a protocol specification with respect to conformance testing is its set of conformance requirements, and since the static conformance requirements are checked by reviewing the PICS,this means that the set of the dynamic conformance requirements in a protocol standard is the starting point for test generation.Test cases are derived systematically from the dynamic conformance requirements in a multi-step procedure.In thefirst step,one or more test purposes are derived for each conformance requirement.A test purpose is a precise description of what is going to be tested in order to decide about the satisfaction of a particular conformance requirement. 
As the next step it is recommended to derive a generic test case for each test purpose.A generic test case is an operationalization of a test purpose,in which the actions necessary to achieve the test purpose are described on a high level,without considering a test method or the environment in which the actual testing will be done.The last step is the derivation of an abstract test case for each generic test case.In this step a choice is made for a particular test method,and the restrictions implied by the environment in which testing will be carried out are taken into account.Test methodsA protocol standard specifies the behaviour of a protocol entity at the upper and lower access points of the protocol((N)-SAP en(N-1)-SAP).Hence the ideal points to test the entity are these SAPs.However,these SAPs are not always directly accessible to the tester.The points where the tester controls and observes the IUT are called the Points of Control and Observation(PCO).PCOs may,but need not coincide with the boundaries of the IUT.Normally in protocol conformance testing there are two PCOs, one corresponding with the upper access point of the IUT,and one with the lower access point.A similar conceptual separation is made for the tester.The part of the tester that controls and observes the PCO connected to the upper access point is called the Upper8Tester(UT).The part that controls and observes the PCO connected to the lower access point is called the Lower Tester(LT).A test method defines a model for the accessibility of the IUT to the tester in terms of PCOs and their place within the OSI reference model[ISO84].Aspects that can be distinguished are:◦existence of PCOs:if one of the access points is not accessible at all there is no PCO for that access point;◦whether there are other protocol layers between the PCO and the access point,and the kind of events that are communicated(ASPs or PDUs);◦the positioning of the PCOs in the same computer system as the IUT,called the System Under Test(SUT);◦the internal functioning of the tester in terms of the distribution of testing functions over LT and UT,and the rules that define their coordination:the test coordination procedures.By varying these aspects different test methods are obtained.Some have been identified and standardized in IS-9646[ISO91,Part2,Section12]for use in standardized abstract test suites.The basic configuration is Local Single-layer test method(LS-method),see Figure3.In all standardized test methods the lower access point of the IUT is always accessible,usually via an underlying service;the upper access point may be hidden. 
Standardized test methods, apart from the LS-method, are the Distributed Single-layer test method (DS-method), the Coordinated Single-layer test method (CS-method), and the Remote Single-layer test method (RS-method). As another example, Figure 4 shows the DS-method. There are two PCOs. An example of a test method with one PCO is the RS-method: in the RS-method there is no upper tester.

[Figure 3: The local test method.]
[Figure 4: The distributed test method.]

A standardized abstract test suite refers to a particular test method, choosing the most appropriate one. The four test methods that were mentioned can be used in variations where the IUT consists of more than one subsequent protocol layer. These layers can be tested as a whole (multi-layer testing), or one layer can be tested embedded in the other layers (embedded testing). The test methods are LM, CM, DM and RM (Local Multi-layer, etc.), and LSE, CSE, DSE and RSE (Local Single-layer Embedded, etc.).

Test notation

Since abstract test suites are standardized, they must be specified in a test notation that is well-defined, independent of any implementation, and generally accepted. IS-9646 recommends the semi-formal language TTCN, the Tree and Tabular Combined Notation, which is defined in [ISO91, Part 3] and more recently in [ISO97]. (A major revision leading to TTCN Version 3 is expected to appear soon [GH99].)

In TTCN the behaviour of test cases is specified by sequences of input and output events that occur at the PCOs. A sequence can have different alternatives, where different subsequent behaviours can be chosen, e.g., depending on output produced by the IUT, the expiration of timers, or values of internal parameters of the tester. Successive events are indicated by increasing the level of indentation; alternative events have the same indentation. A sequence ends with the specification of the verdict that is assigned when the execution of the sequence terminates. The verdicts in the different possible alternative behaviours differ. Some alternatives will describe correct behaviour, ending with the positive verdict pass, while other alternatives describe erroneous behaviour, ending with the negative verdict fail. The verdict inconclusive indicates correct but not intended behaviour, see Section 3.5. TTCN is defined in such a way that automatic execution is feasible. A simplified example of a TTCN behaviour is presented in Figure 5. More about TTCN, apart from the defining standard, can be found in [PM92]. (An illustrative programmatic sketch of the verdict mechanism of Figure 5 is given further below.)

Test Case Dynamic Behaviour
Test Case Name: Conn Estab
Group: transport/connection
Purpose: Check connection establishment with remote initiative

behaviour                                constraints   verdict
+preamble
  LT!T-PDU-connect-request
    UT?T-SP-connect-indication
      UT!T-SP-connect-response
        LT?T-PDU-connect-confirm                       pass
        OTHERWISE                                      fail
    LT?T-PDU-disconnect-request                        inconclusive
    OTHERWISE                                          fail

Figure 5: A simplified TTCN example.

Classification of tests

Tests can be classified according to the extent to which they give an indication of conformance. The following distinction is made:

◦ basic interconnection tests
◦ capability tests
◦ behaviour tests
◦ conformance resolution tests

The classification is applicable to generic, abstract and executable tests, which will be discussed in Section 3.4. Basic interconnection tests are used to guarantee a basic level of interconnection between the tester and the IUT. Their main purpose is economical: before an expensive test environment is developed, first some basic functions of the
IUT are checked, e.g., the establishment of a connection between the tester and the IUT. Capability tests serve to verify the compliance between the implemented options and the options stated in the PICS. Behaviour tests constitute the main part of a test suite. They test the dynamic conformance requirements of a protocol standard in full detail within the limits of technical and economical feasibility. They are the basis for the final verdict about conformance. Conformance resolution tests do not belong to the actual conformance tests. They form supplementary tests that can be used to do extra testing if problems are encountered, or to trace errors. These tests have a heuristic nature, they are not standardized, and they cannot be used as a basis for the final verdict.

Hierarchical structuring of tests

A test suite is a complete set of tests for conformance testing of a particular protocol. Elements of a test suite are tests, or test cases. A test case specifies one experiment, related to one test purpose and to one conformance requirement. Related test cases can be grouped into test groups with a corresponding test group objective. Grouping can occur at different levels. Within a test case, test steps and test events can be distinguished. A test event is one interaction at a PCO, e.g., sending or receiving one PDU. A test step groups successive test events. An example of a test step is a preamble: a sequence of events that brings the IUT into a state from which the body of the test case, which tests the test purpose, can be executed. Analogously the postamble test step brings the IUT back to a specified state, e.g., the initial state, after the main part of a test case has been executed. The hierarchical structuring is applicable to all levels of test cases. Also conformance requirements and test purposes can be grouped.

3.4 Test Implementation

Starting point for test implementation is the (standardized) abstract test suite. The abstract test suite is specified independently of any real testing device. In the test implementation phase it is transformed into an executable test suite, i.e., a test suite which can be run on a specific testing device with a specific IUT.

Before starting to implement, a selection from the abstract test suite must be made. The abstract test suite contains all possible tests for a particular protocol, for all possible options. It does not make sense to test for options that are not implemented according to the PICS. Therefore the tests relevant to the IUT are selected based on the PICS. In IS-9646 this is called test selection¹.

The PICS contains protocol-dependent information. To derive executable tests this is insufficient; also information about the IUT and its environment must be supplied. Such information is called PIXIT (Protocol Implementation eXtra Information for Testing). The PIXIT may contain address information of the IUT, or parameter and timer values which are necessary to implement the test suite. The PIXIT, like the PICS, is supplied by the supplier of the IUT to the testing laboratory. To guide production of the PIXIT the testing laboratory provides a PIXIT proforma. The selected and implemented test cases with parameter values according to the PIXIT form the executable test suite, which can be executed on a real tester or test system.
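Returning to the TTCN example of Figure 5: the verdict mechanism that an executable test case has to realize can be mimicked by a very small interpreter over a tree of alternatives. The sketch below is illustrative only and deliberately simplified — constraints, timers, defaults and the preamble step are omitted, and all function and event names are invented; it is not a TTCN implementation.

# Illustrative sketch of a TTCN-like behaviour tree (cf. Figure 5); not a full
# TTCN interpreter: constraints, timers, defaults and the preamble are omitted.
OTHERWISE = "OTHERWISE"

class Alt:
    """One alternative: an expected event, an optional verdict, and the
    alternatives that may follow it (deeper indentation in TTCN)."""
    def __init__(self, event, verdict=None, then=()):
        self.event, self.verdict, self.then = event, verdict, list(then)

def run(alternatives, trace):
    """Walk the tree along the observed event trace and return a verdict."""
    if not alternatives:
        return "pass" if not trace else "fail"
    head, rest = (trace[0], trace[1:]) if trace else (None, [])
    for alt in alternatives:
        if alt.event == head or alt.event == OTHERWISE:
            return alt.verdict if alt.verdict else run(alt.then, rest)
    return "fail"

# The body of the Conn Estab test case of Figure 5.
conn_estab = [Alt("LT!T-PDU-connect-request", then=[
    Alt("UT?T-SP-connect-indication", then=[
        Alt("UT!T-SP-connect-response", then=[
            Alt("LT?T-PDU-connect-confirm", verdict="pass"),
            Alt(OTHERWISE, verdict="fail")])]),
    Alt("LT?T-PDU-disconnect-request", verdict="inconclusive"),
    Alt(OTHERWISE, verdict="fail")])]

print(run(conn_estab, ["LT!T-PDU-connect-request", "UT?T-SP-connect-indication",
                       "UT!T-SP-connect-response", "LT?T-PDU-connect-confirm"]))  # pass
print(run(conn_estab, ["LT!T-PDU-connect-request",
                       "LT?T-PDU-disconnect-request"]))                            # inconclusive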
During implementation care must be taken that the tests are implemented correctly, according to the semantics of the test notation used for the specification of the abstract test suite.

¹ Note that the notion of 'test selection' is sometimes used in a different way, viz. as selecting from an (infinite) set of possible (automatically generated) test cases.
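PICS-based test selection and PIXIT-based parameterization, as described in Section 3.4, can be pictured as two small filtering and binding steps. The sketch below is illustrative only; the option names, PICS and PIXIT contents and the dictionary representation are all invented.

# Illustrative sketch of PICS-based test selection and PIXIT-based
# parameterization; option names and parameter values are invented examples.
abstract_test_suite = [
    {"name": "Conn Estab", "requires": {"T-CONNECT"},
     "params": ["peer_address", "connect_timer"]},
    {"name": "Expedited Data", "requires": {"T-EXPEDITED"},
     "params": ["peer_address"]},
]

pics = {"T-CONNECT"}                        # options implemented by the IUT
pixit = {"peer_address": "10.0.0.7",        # IUT- and environment-specific values
         "connect_timer": 5.0}              # supplied by the supplier of the IUT

def select(test_suite, pics):
    """Keep only test cases whose required options are implemented."""
    return [tc for tc in test_suite if tc["requires"] <= pics]

def bind(test_case, pixit):
    """Attach concrete PIXIT values to a selected test case."""
    return {**test_case, "bound": {p: pixit[p] for p in test_case["params"]}}

executable_test_suite = [bind(tc, pixit) for tc in select(abstract_test_suite, pics)]
for tc in executable_test_suite:
    print(tc["name"], tc["bound"])          # only 'Conn Estab' remains, parameterized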
Formal methods in conformance testing: a probabilistic refinement

Lex Heerink and Jan Tretmans
Tele-Informatics and Open Systems group, Dept. of Computer Science
University of Twente, 7500 AE Enschede, The Netherlands
{heerink,tretmans}@cs.utwente.nl

Abstract

This paper refines the framework of 'Formal Methods in Conformance Testing' by introducing probabilities for concepts which have a stochastic nature. Test execution is refined into test runs, where each test run is considered as a stochastic process that returns a possible observation with a certain probability. This implies that not every possible observation that could be made will actually be made. The development process of an implementation from a specification is also viewed as a stochastic process that may result in a specific implementation with a certain probability. Together with a weight assignment on implementations this introduces a valuation measure on implementations. The test run probabilities and the valuation measures are integrated in generalized definitions of soundness and exhaustiveness, which can be used to compare test suites with respect to their ability to accept correct, and to reject erroneous implementations.

Keywords
Conformance testing, test framework, test selection, formal methods, probabilities.

1 INTRODUCTION

Conformance testing is a way to assess the correctness of an implementation with respect to its specification by means of performing experiments on the implementation and observing its responses. In case the specification is given as a formal description we need formal definitions of testing concepts, such as correctness of an implementation with respect to a formal specification, a test purpose, a sound test case, test execution, test generation, etc. Currently, the standardization group on 'Formal Methods in Conformance Testing' (ISO/IEC JTC1/SC21/WG7 project 54, ITU-T SG10/Q8) develops a framework for conformance testing based on formal methods defining these concepts [ISO96]. The framework defines terminology, abstract concepts, and minimal requirements on, and relations between, these concepts. Since it is defined at a high level of abstraction, e.g., it abstracts from specific test generation algorithms, even from a specific formal description technique, use of the framework requires instantiating these concepts with specific choices for test generation algorithms, for the formal description technique, etc.

This paper builds on the framework of 'Formal Methods in Conformance Testing'. Its goal is to refine this framework by adding probabilities for those concepts which have a stochastic nature. The refinement concerns the testing process; probabilistic extensions of specification languages or models are not considered. The first addition is to consider test execution in a probabilistic setting. In the framework, test execution of a test case against an implementation under test is assumed to yield a unique observation. This assumption is replaced by refining test execution into a number of test runs. Each test run yields an observation of the implementation, but a number of test runs does not necessarily yield all possible observations [LS89]. A probability distribution is added to express which observations are likely to be obtained.

A second refinement of the framework concerns the extension of the soundness and the exhaustiveness of a test suite. Soundness refers to the property of a test suite to accept all conforming implementations, and exhaustiveness indicates that all nonconforming implementations are rejected [ISO96]. These predicates are generalized
to measures in the vein of [BTV91, Bri93], which take into account the probability of the occurrence of implementations, the gravity of errors in implementations, and the above-mentioned probability on observations made during test runs. It is indicated how these soundness and exhaustiveness measures can then be used to compare test suites, in order to select a good, or the best, one.

The outline of this paper is as follows. In section 2 an overview of the framework 'Formal Methods in Conformance Testing' is given, as far as it is relevant for this paper, and some of the assumptions underlying this framework are discussed. Section 3 refines test execution into test runs, and it adds probabilities to the observations. In section 4 a valuation on implementations is defined, which assigns a value based on their probability of occurrence and on their weight. Section 5 uses the probability on test-run observations and the valuation on implementations to define soundness and exhaustiveness as two measures on test suites. Comparison of test suites based on these measures is briefly discussed. In section 6 the concepts are illustrated for labelled transition systems with inputs and outputs; this section may be read in parallel with sections 2 till 5. Section 7 presents concluding remarks and items for further work.

2 FORMAL METHODS IN CONFORMANCE TESTING

The emerging international standard 'Formal Methods in Conformance Testing' defines a framework for the use of formal methods in conformance testing [ISO96]. It is intended to guide the testing process of an implementation with respect to a formal specification. In this section the main concepts of [ISO96], such as conformance, testing, and conformance testing, are presented, as far as they are needed for the subsequent sections. They are followed by a discussion of some of the explicit and implicit assumptions on which the framework is based.

Conformance  The definition of conformance concerns implementations under test (IUT) and specifications, so a universe of implementations IMPS, and a universe of formal specifications SPECS are assumed. Implementations are concrete, informal objects, such as pieces of hardware, or pieces of software. In order to reason formally about them, it is assumed that each implementation IUT ∈ IMPS can be modelled by a formal object i_IUT in a formalism MODS, which is referred to as the universe of models. This hypothesis is referred to as the test assumption. Note that a model i_IUT is only assumed to exist; it is not known a priori. Conformance is expressed by means of an implementation relation imp ⊆ MODS × SPECS.
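In a small, discrete setting these universes and the relation imp can be written down directly. The sketch below is purely illustrative: the universes are finite toy sets, and trace inclusion is used as one possible choice of implementation relation (the framework itself leaves imp abstract).

# Illustrative sketch: finite toy universes of models and specifications, with
# an implementation relation given by trace inclusion (one possible choice).
SPECS = {"s": {"coffee", "tea"}}                 # spec 's' allows these traces
MODS = {"i1": {"coffee"},                        # model of one candidate IUT
        "i2": {"coffee", "beer"}}                # model of another candidate

def imp(i, s):
    """i imp s  iff  every trace of the model is allowed by the specification."""
    return MODS[i] <= SPECS[s]

print(imp("i1", "s"))   # True:  {"coffee"} is a subset of the allowed traces
print(imp("i2", "s"))   # False: "beer" is not allowed by the specification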
Implementation IUT ∈ IMPS is considered imp-correct with respect to s ∈ SPECS if the model i_IUT ∈ MODS of IUT is imp-related to s: i_IUT imp s. We use I_s =def { i ∈ MODS | i imp s } for the set of imp-correct implementations, and Ī_s =def MODS \ I_s for the set of implementations that are not imp-correct.

Testing  The behaviour of concrete implementations is investigated by performing experiments on the implementations and observing the reactions that the implementations produce to these experiments. Such experiments are called tests, and they are formally specified as elements of a universe of test cases TESTS. A set of test cases is called a test suite. The process of running a test against a concrete implementation is called test execution. Test execution leads to an observation in a domain of observations OBS. To each observation a verdict is assigned by a verdict assignment function verd_t : OBS → {pass, fail}. It is said that a concrete implementation IUT ∈ IMPS passes a test suite T ⊆ TESTS if the test execution of all its test cases leads to an observation with verdict pass:

IUT passes T =def ∀ t ∈ T : IUT passes t    (1)
IUT passes t =def test execution of t against IUT gives σ ∈ OBS, such that verd_t(σ) = pass

An implementation fails test suite T if it does not pass: IUT fails T =def ¬(IUT passes T).

The interpretation of test execution, i.e., of IUT passes T, is given by modelling the process of test execution on models of implementations. By comparing the concrete observations made of IUT with the calculated observations of the model of test execution, conclusions can be drawn about the model i_IUT; in particular, a set of candidate models for i_IUT can be calculated. Let test execution be modelled as a function exec : TESTS × MODS → OBS, such that for each test case t ∈ TESTS and each model i ∈ MODS, exec(t,i) calculates the observation in OBS that results from executing t with the model i. If exec indeed faithfully models concrete test execution, then it can be concluded from successful test execution, IUT passes t, that the model of IUT is in the subset P_t of models for which a pass-verdict is calculated:

let P_t =def { i ∈ MODS | verd_t(exec(t,i)) = pass }    (2)
then IUT passes t ⇐⇒ i_IUT ∈ P_t

and moreover for a test suite T:

let P_T =def ⋂ { P_t | t ∈ T }    (3)
then IUT passes T ⇐⇒ i_IUT ∈ P_T

In this way, for each test suite T, the universe of models of implementations MODS is partitioned into models in P_T, which model passing implementations, and models not in P_T. The set P_T is called the formal test purpose of T.

Conformance testing  In order to judge whether a concrete implementation IUT conforms to its specification s ∈ SPECS by means of testing, the notion of conformance, i.e., the set I_s, and test execution, i.e., the set P_T (3), have to be linked, so that from test execution an indication can be obtained whether i_IUT ∈ I_s, i.e., whether IUT conforms. A test suite is complete if it can distinguish exactly between all conforming and non-conforming implementations: I_s = P_T. Unfortunately, this is a very strong requirement for practical testing: complete test suites are usually infinite, and consequently not practically executable. Hence, [ISO96] poses a weaker requirement on test suites: they shall be sound, which means that at least all correct implementations (and possibly some incorrect implementations) will pass them:

∀ i ∈ MODS : i imp s =⇒ i ∈ P_T    (4)

In case all incorrect implementations (and possibly some correct ones) do not pass the execution of test suite T, the test suite is called exhaustive:

∀ i ∈ MODS : i imp s ⇐= i ∈ P_T    (5)

To quantify the error-detecting capability of a sound test suite a coverage measure can be defined, which expresses the extent to which a sound test suite is
exhaustive (P(TESTS) is the powerset of TESTS, i.e., the set of sets of test cases, so the set of test suites):

cov : P(TESTS) −→ [0,1]  satisfying  P_{T₁} ⊇ P_{T₂} =⇒ cov(T₁) ≤ cov(T₂)    (6)

Assumptions  In the definition of the formal testing framework of [ISO96] some explicit and implicit assumptions are made. The first assumption, which is explicitly stated, is the test assumption: any concrete implementation IUT can be modelled by an i_IUT ∈ MODS. In order to do any formal reasoning about implementations the existence of such a model should be assumed. However, the test assumption is unclear about unicity of i_IUT, i.e., about fully-abstractness with respect to observable behaviour, and about the consequences of non-unicity.

A second assumption has to do with practical test execution. Execution of one single test case usually consists of several test runs, where each test run consists of applying the test case once to the implementation under test. Due to nondeterminism in the implementation under test each test run may lead to a different outcome, and the outcomes of several, independent test runs make up one observation. Let the class of possible test-run outcomes be 𝒪, then an observation is a set of outcomes: OBS = P(𝒪). Now exec : TESTS × MODS → P(𝒪) calculates all possible test outcomes of given t and i. However, in the concrete test execution of t against an IUT it is difficult, or impossible, to be sure that all possible outcomes have really been obtained. Concrete test execution consists of performing a finite number of test runs, during which certain nondeterministic behaviours of the implementation may never be encountered. After any finite number of test runs it cannot be known whether all possible outcomes have been obtained or not. Consequently, if concrete test execution gives us an observation O ⊆ 𝒪, we cannot conclude that O = exec(t, i_IUT) as above, and in [ISO96], but only that O ⊆ exec(t, i_IUT).

But even the conclusion O ⊆ exec(t, i_IUT) is not always valid; it depends on another assumption, viz. that the concrete observations obtained from test runs are always those which can be calculated from the model of test execution. This only holds if we assume that test cases are correctly implemented, i.e., that each test case is a valid model of its own implementation, and that exec correctly models the concrete observations that can be made during test runs. If the first assumption cannot be assumed to hold then we should verify or test the implementations of our test cases, for which we would have to derive and implement test cases again, which should also be tested, etc. Usually this assumption can be made since test cases are assumed to be an order of magnitude simpler than the implementations that are the aim of our testing. The second assumption requires an accurate modelling of the concrete test execution process, for any test case and any IUT, by means of the function exec, which is not always easy. Consider as an example the observation of a time-out. Time-outs are used to detect that an implementation does not react to a given stimulus. However, if no real-time requirements are specified for the reaction, then the time-out value should theoretically be infinite, which is practically infeasible. So it might be that a time-out is observed where only the implementation under test is slow.
The observation of this time-out will usually not be modelled in exec, where the theoretically infinite timer is considered.

In the probabilistic additions to the formal framework in the next sections we will not challenge the test assumption, but we will reconsider the assumption that all test-run outcomes can be obtained during test execution: we decompose test execution into test runs, taking into account that during concrete test execution some outcomes might be missed; this will be discussed in section 3. Implemented test cases and test execution are, without change, assumed to be correctly modelled by test cases and a function exec, respectively.

3 TEST EXECUTION AS PROBABILISTIC TEST RUNS

It was argued in section 2 that the application of a single test case to a concrete implementation usually consists of multiple test runs, where the outcome of each test run may be different, due to nondeterminism in the implementation itself (e.g., a gambling machine modelling the tossing of a coin), or due to nondeterminism introduced by interaction of the implementation with its environment (e.g., a file server writing a file to disk that is nondeterministically interrupted by the operating system when the disk is full). Each time a test-run experiment is repeated, it may result in another outcome, and it is never known when all possible outcomes of the IUT have been obtained. Consequently, test execution results in a set of test-run outcomes, where some outcomes might be more likely to occur than others. This leads to considering the occurrence of an outcome as a stochastic experiment, in which a single outcome from the set of possible outcomes is drawn. The probability of an outcome to occur can be thought of as depending on the frequency with which the implementation resolves the nondeterministic choices leading to the different outcomes.

Let 𝒪 be the class of outcomes, and let test execution be correctly modelled by exec : TESTS × MODS → P(𝒪) (cf. section 2); then in each test run of test case t against implementation IUT, modelled by i, an outcome σ from the sample set exec(t,i) is drawn, where each outcome in exec(t,i) has a nonzero probability to occur. This can be described by viewing a test run as a stochastic experiment that produces an outcome σ̂(t,i), a random variable over the sample set exec(t,i), with an associated probability measure P^{t,i}_o:

P^{t,i}_o(A) =def Pr{ σ̂(t,i) ∈ A }, for A ⊆ 𝒪    (7)

Note that for another test case or another implementation the random variable σ̂(t,i) may have another distribution.

Instead of combining outcomes into one observation O ⊆ 𝒪, and then assigning the verdict verd_t(O), we will assign verdicts to the outcomes, and then combine these outcome-verdicts into one verdict for the observation. The verdict for the observation will be pass if all outcome-verdicts are pass. Note that by combining outcome-verdicts into an observation-verdict instead of combining outcomes into an observation, we lose some observation power: e.g., consider a system which shall produce either always x or always y. Two test runs yield the outcomes x and y, respectively. Since they are both valid outcomes the test-run verdicts assigned will be both pass, whereas the verdict assigned to the observation {x,y} would be fail.

Let v_t : 𝒪 → {pass, fail} be a verdict assignment to outcomes; then the probability measure P^{t,i}_o can be used to induce a probability measure on sets of verdicts in {pass, fail}. As each outcome σ ∈ exec(t,i) uniquely leads to a verdict v_t(σ), this holds in particular for the outcome σ̂(t,i). Since σ̂(t,i) is stochastically determined, it follows that the verdict assignment v_t applied to σ̂(t,i) is a stochastic function, and hence a probability distribution P^{t,i}_v : P({pass, fail}) → [0,1] is obtained. The probability that a test run of t with i results in
verdict pass is the cumulative probability that an outcome in exec(t,i) occurs that leads to a pass-verdict:

P^{t,i}_v({pass}) =def Pr{ v_t(σ̂(t,i)) = pass } = Σ { P^{t,i}_o({σ}) | σ ∈ exec(t,i), v_t(σ) = pass }    (8)

In general a test case t may have to be executed more than once; a test case that is executed n times is considered as a multi-set, written as t^n, so a test suite specifies the test cases it contains together with the number of test runs for each test case. Under the assumption that the individual test runs are independent, the probability that model i passes the repeated test case t^n is

P^{t^n,i}_v({pass}) =def ( P^{t,i}_v({pass}) )^n    (9)

For a test suite T = { t_1, ..., t_n } we have the random variables σ̂_1(t_1,i), ..., σ̂_n(t_n,i), where each σ̂_k(t_k,i) is drawn independently; the probability that i passes T is the probability that every test run leads to a pass-verdict:

P^{T,i}_v({pass}) =def Pr{ v_{t_1}(σ̂_1(t_1,i)) = pass, ..., v_{t_n}(σ̂_n(t_n,i)) = pass } = Π_{k=1}^{n} P^{t_k,i}_v({pass})    (10)

The probability to fail T immediately follows: P^{T,i}_v({fail}) =def 1 − P^{T,i}_v({pass}). It is evident that P^{T,i}_v denotes a probability measure on P({pass, fail}), which has the obvious property, expressed in the next proposition, that for non-zero probability of fail in a test run, test case execution will finally result in fail if enough test runs are performed.

Proposition 1  If P^{t,i}_v({pass}) < 1 then lim_{n→∞} P^{t^n,i}_v({pass}) = 0.

4 A VALUATION ON IMPLEMENTATIONS

By considering test runs as stochastic experiments, section 3 formalized the notion of passing a test suite in the form of the probability that that test suite is passed, when a particular model of an implementation i ∈ MODS is given. However, in testing we do not have one particular, given model, but we need to reason about classes of possible models. Some of these possible models are more likely to occur than others, and some are more important than others. In this section we define a valuation measure on models of implementations. This valuation assigns a value to a set of models of implementations, i.e., to subsets of MODS. This value gives an indication about the importance of the subset as a possible class of correct implementations with respect to a particular specification s and implementation relation imp. The valuation is defined analogous to [Bri93] as a measure-theoretic integral, which takes into account both the likeliness of occurrence of the implementations and the importance of the individual implementations, expressed by a probability and a weight on models of implementations, respectively.

Weight of implementations  An implementation relation distinguishes between correct and incorrect models of implementations. However, to express the importance of each implementation, more discriminating power has to be added. This is done by assigning a weight to each implementation.

Let s ∈ SPECS be a specification, and imp ⊆ MODS × SPECS an implementation relation; then a function w : MODS → ℝ \ {0} is a weight assignment function on MODS with respect to s and imp, if for all i ∈ MODS:

w(i) > 0 ⇐⇒ i imp s    (11)

A weight assignment assigns a positive real number to each conforming implementation, and a negative number to each erroneous implementation. For conforming implementations the weight can express that one implementation is better than another; negative weights express the gravity of errors in erroneous implementations: if w(i_1) < w(i_2) < 0 then both i_1 and i_2 are not correct, but the errors of i_1 are more severe than those of i_2. Note that the weight is defined with respect to a specification s and an implementation relation imp: it assigns a weight to each implementation as a candidate for an implementation of the particular s and imp.
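Returning to the test-run probabilities and Proposition 1 above: for a finite outcome set they can be computed directly. The sketch below is illustrative only — the outcome distribution and verdict assignment are invented, and the product forms assume independent test runs, as in (9) and (10).

# Illustrative sketch: pass probability of a single test run, of n repeated
# runs (t^n), and of a test suite; outcome probabilities are invented.
P_o = {"connect-confirm": 0.9, "disconnect-request": 0.1}   # outcome distribution
v_t = {"connect-confirm": "pass", "disconnect-request": "fail"}  # verdict assignment

def p_pass(P_o, v_t):
    """P_v^{t,i}({pass}): cumulative probability of pass-outcomes, cf. (8)."""
    return sum(p for outcome, p in P_o.items() if v_t[outcome] == "pass")

def p_pass_repeated(P_o, v_t, n):
    """P_v^{t^n,i}({pass}) under independent test runs, cf. (9)."""
    return p_pass(P_o, v_t) ** n

def p_pass_suite(test_cases):
    """P_v^{T,i}({pass}) as the product over the suite's test runs, cf. (10)."""
    prod = 1.0
    for P_o_k, v_t_k in test_cases:
        prod *= p_pass(P_o_k, v_t_k)
    return prod

print(p_pass(P_o, v_t))                 # 0.9
print(p_pass_repeated(P_o, v_t, 50))    # small; tends to 0 as n grows (Proposition 1)
print(p_pass_suite([(P_o, v_t)] * 3))   # 0.9 ** 3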
Probability on implementations  Given a specification s ∈ SPECS and an implementation relation imp, implementers will start developing a concrete implementation IUT ∈ IMPS. Many different implementations, modelled by different models, may occur as the result of the implementation process, some of them conforming, and others nonconforming. Not every possible implementation will have the same chance to occur as the result of implementing a given specification, e.g., when designing a coffee machine which is supposed to serve coffee and tea it is less likely to end up with a completely different kind of machine (e.g., a gambling machine) than to end up with a slightly different, but possibly incorrect, coffee machine. Moreover, assuming that an implementer can make several independent mistakes with non-zero probability, it is less likely for the implementer to make all possible mistakes than to make only a small number of mistakes.

Similar to [Bri93] we view the design and implementation process of a complex system as a stochastic experiment that draws a concrete implementation IUT, modelled by the stochastic function î, from the universe of models. The associated probability measure P_s on (subsets of) MODS is

P_s(I) =def Pr{ î ∈ I }, for I ⊆ MODS    (14)

Then, under sufficient assumptions of neatness of the underlying sets (they should be Borel), the valuation μ can be expressed as the measure-theoretic integral

μ(I) =def ∫_I w(i) dP_s    (15)

Proposition 2
1. ∫_∅ w(i) dP_s = 0
2. If I_s ≠ ∅ then ∫_{I_s} w(i) dP_s > 0
3. If Ī_s ≠ ∅ then ∫_{Ī_s} w(i) dP_s < 0

5 COMPARING TEST SUITES

The purpose of conformance testing is to increase the confidence in the correct functioning of an implementation by detecting whether it conforms to its specification or not. Test-suite execution is expected to reject, i.e., yield the verdict fail with, nonconforming implementations, and to accept, i.e., yield the verdict pass with, conforming ones. Since a perfect test suite exactly doing this is not likely to be encountered in practice (cf. section 2), a method for comparing the quality of test suites is needed in order to select the best test suite for a given conformance testing problem.

A comparison of test suites can be made if a quantitative measure can be assigned to each test suite. Such a measure should quantify the ability of a test suite to reject nonconforming implementations and to accept conforming ones, and it should take into account the probability of occurrence of implementations (a test suite that detects errors which are very likely to occur has higher quality), the weight of implementations (a test suite that detects a nonconforming implementation with severe errors, i.e., with very negative weight, has higher quality), and the probability that an erroneous implementation indeed yields the verdict fail (a test suite that rejects an erroneous implementation with higher probability has higher quality).
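Returning to the valuation of section 4: when MODS is finite, the integral (15) reduces to a weighted sum, and the properties of Proposition 2 can be checked by computation. The sketch below is illustrative only; the models, weights and occurrence probabilities are invented.

# Illustrative sketch: valuation of sets of models when MODS is finite, so the
# integral (15) becomes a sum. Weights and probabilities are invented examples.
MODS = ["good", "slightly_wrong", "very_wrong"]
P_s = {"good": 0.7, "slightly_wrong": 0.25, "very_wrong": 0.05}   # occurrence
w   = {"good": 1.0, "slightly_wrong": -1.0, "very_wrong": -10.0}  # weight, cf. (11)

I_s     = {i for i in MODS if w[i] > 0}    # conforming models
I_s_bar = set(MODS) - I_s                  # nonconforming models

def mu(I):
    """mu(I): integral of w(i) over I with respect to P_s, here a finite sum."""
    return sum(w[i] * P_s[i] for i in I)

print(mu(set()))    # 0        (Proposition 2.1)
print(mu(I_s))      # > 0      (Proposition 2.2)
print(mu(I_s_bar))  # < 0      (Proposition 2.3)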
interchanged,since the inner integral depends on i∈I.Taking also the weight of implementations(section4)into account we obtain analogously the valuation measureλ:λT(I,V)=def I V w(i)dP T,i v dP s(17)Given a test suite T,the valuationλassigns a value to each pair of I⊆MODS and V⊆{pass,fail}.Of course,the interesting values ofλare the combinations of conforming and nonconforming implementations with the verdicts pass and fail:I=I sλT(I s,{pass})λT(I s,{fail})I=I s,{pass})λT(For a good test suite,i.e.,one with a large ability to reject nonconforming implementations and to accept conforming ones,the valuesλT(I s,{pass})andλT(I s,{pass})are minimized.We define two measures on test suites,soundness snd,quantifying the ability to accept conforming implementations,and exhaustiveness exh,quantifying the ability to reject noncon-forming implementations,by normalization of the valuationλwith respect to all conforming implementations,and all nonconforming implementations,respectively.The soundness of test suite T with respect to I s⊆MODS and weight assignment w isλT(I s,{pass})snd(T)=defI s,{fail})(19)I s,{pass,fail})Of course,these definitions are only valid ifλT(I s,{pass,fail})=0andλT(Comparison In the approach of section2[ISO96],test suites are compared using the coverage function,which is defined to quantify the extent to which sound test suites are exhaustive. Moreover,test suites are trivially compared with their soundness:test suites which are not sound are simply not considered,i.e.,they are worse than any sound test suite.So,in fact,test suites are ordered lexicographically with the pair soundness,coverage ,where soundness can only take two possible values,viz.sound or not sound.The two measures snd(18)and exh(19)generalize the original definitions in(4)and(5): soundness and exhaustiveness are not logical properties anymore,but they are continuous mea-sures on test suites with values between0and1.Similar as above we can now consider an ordering on pairs of the formsnd(T),exh(T) (21) for comparing test suites.The rˆo le of exh(T)is analogous to the coverage function above:it defines a measure of the extent to which a test suite is exhaustive.And indeed,it can be shown that exh is a coverage satisfying(6),if exec faithfully models concrete test execution(cf.section2),i.e.,if P T,iv ({pass})=1⇔i∈P T and P T,iv({pass})=0⇔i∈P T.If this assumptiondoes not hold then exh is not a coverage,and it is difficult to define one which satisfies(6).There are various ways to compare test suites by ordering pairs of the form snd(T),exh(T) : the lexicographical ordering,projections on one of the constituents,addition,vector addition or multiplication of both constituents,comparing the maxima or minima,etc.The actual way of ordering the tuples(21)will depend on the application.In testing the software of a nuclear power plant the exhaustiveness will be the most important:notfinding an error if there is one has much more disastrous consequences thanfinding an error where there is none.If,on the other hand,testing is expensive,then detecting errors where there are none is costly,and should be avoided:soundness will prevail.6PROBABILISTIC TESTING WITH INPUTS AND OUTPUTSIn this section we instantiate the framework discussed in the previous sections and illustrate its applicability by a running example.The structure of this section follows the structure of the framework.First,we present some elementary notation for the description of system behaviour as labelled transition systems.Next,we instantiate the universes 
SPECS, MODS and TESTS as (restricted sets of) labelled transition systems. Then the concepts of conformance and testing (section 2) are instantiated in such a way that they are applicable for the systems under consideration, followed by the concepts defined in section 4 (valuation on implementations). Finally, we discuss how section 5 (comparison) can be instantiated.

Preliminary definitions  The behaviour of systems is modelled by means of labelled transition systems. We use the standard definitions of labelled transition systems as can be found in, e.g., [Tre95]: a transition system p is a quadruple p = ⟨S, L, →, s0⟩, where S is a (countable) set of states, L is a (countable) set of observable actions, → ⊆ S × (L ∪ {τ}) × S is a set of transitions, and s0 ∈ S is the initial state. The special action τ ∉ L denotes the unobservable action. The universe of labelled transition systems over L is denoted by LTS(L). A trace σ is a sequence of observable actions (σ ∈ L*), and =σ⇒ denotes the transition relation between states s, s′ when performing trace σ, i.e., s =σ⇒ s′ indicates that s′ can be reached from s by performing the observable sequence of actions σ ∈ L*. Furthermore, we define the set of traces by traces(s) =def { σ ∈ L* | ∃ s′ : s =σ⇒ s′ }, the set of reachable states from s by der(s) =def { s′ | ∃ σ ∈ L* : s =σ⇒ s′ }, the set