AutoTest Introduction
Introduction to Autotest, a Linux Kernel Testing Tool

【Introduction】Autotest is a framework for fully automated testing. It is designed primarily to test the Linux kernel, though it is useful for many other purposes such as qualifying new hardware, virtualization testing, and other general user-space program testing on Linux platforms. It is an open-source project under the GPL and is used and developed by a number of organizations, including Google, IBM, Red Hat, and many others.

Testing is not about running tests... testing is about finding and fixing bugs. We have to:
∙ Run the tests
∙ Find a bug
∙ Classify the bug
∙ Hand the bug off to a developer
∙ Developer investigates the bug (cyclical)
∙ Developer tests some proposed fix (cyclical)
∙ Fix checked in
∙ New release issued to the test team

Many test systems are oriented around only the first two (or even one!) of these steps. This is massively inefficient.

【Autotest vs. other harnesses】
∙ ONE harness for performance, stress, multi-machine testing, etc.
∙ Consistent results and logging structure
∙ Web and CLI front end for driving test cases
∙ Web and CLI analysis backend
∙ Shared machine pool and scheduler
∙ EASY to write new tests: low entry barrier
∙ Open source: tests can be shared with vendors
∙ Control files are powerful!
∙ Proven scaling: 6000+ machines

【Setup】Clone directly from stonekim/autotest on GitHub. This fork adds one new feature: test cases no longer have to live under the tests directory and can be placed anywhere.
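Control files, mentioned above, are plain Python scripts executed by the Autotest client. As a minimal illustration, assuming a checked-out Autotest client and its bundled sleeptest test (the seconds and tag parameters reflect common Autotest usage, but this is a sketch rather than authoritative documentation):

```python
# Autotest client control file: plain Python run by the client.
# The 'job' object is injected by the framework, so no imports are needed.
job.run_test('sleeptest', seconds=1)

# Full Python is available, e.g. repeating a test with distinct result tags:
for i in range(3):
    job.run_test('sleeptest', seconds=1, tag='iter.%d' % i)
```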
Automated Testing and Integration Testing Frameworks in Python

With the rapid development of the software industry, testing has become more and more important. In the software development process, testing is one of the key steps in ensuring software quality. There are many types of software testing; the two main ones are manual testing and automated testing. Automated testing is the practice of using dedicated software tools and scripts to execute test tasks automatically. Python is a popular programming language that provides many tools and frameworks for testing. This article focuses on the automated testing and integration testing frameworks available in Python.
1. Automated Testing

1.1 What automated testing is
Automated testing is the process of executing test cases automatically. It can significantly improve testing efficiency and accuracy, because it can execute large numbers of tests without human intervention. Automated testing requires dedicated software tools and scripts, and it can be divided into API testing, functional testing, performance testing, security testing, and so on.
1.2 Automated testing tools and frameworks in Python
Python provides many tools and frameworks for automated testing. Some commonly used ones are listed below; a minimal runnable sketch follows the list.

1) Pytest: a mature Python automated testing framework. Test cases are written with plain assert statements (with detailed failure output), so they are easy to write, and pytest's plugin mechanism and rich command-line options make it easy to extend.

2) Unittest: Python's built-in automated testing framework. It provides the basic structure for test cases and a set of assertion methods, and it supports data-driven testing and test suites.

3) Selenium: an automation tool used mainly for testing web applications. It can simulate user actions in a browser and test a web application's functionality and performance. Selenium test scripts can be written in Python.

4) Robot Framework: a popular automated testing framework. It provides a rich set of keywords and supports many kinds of testing, such as API testing, web application testing, and database testing. Robot Framework is implemented in Python, and custom test libraries for it can be written in Python.
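For illustration, a minimal pytest example; the file name and the function under test are arbitrary, and pytest discovers any function whose name starts with test_:

```python
# test_math.py -- run with: pytest test_math.py
def add(a, b):
    return a + b

def test_add_positive():
    assert add(1, 2) == 3      # plain assert; pytest reports detailed diffs on failure

def test_add_negative():
    assert add(-1, -2) == -3
```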
Automated Test Script Writing Standards

I. Introduction
These standards are a set of rules and conventions intended to keep automated test scripts readable, maintainable, and extensible, and thereby improve testing efficiency and quality. This document describes each aspect of the standards in detail.
II. Naming Conventions
1. Script file names: a script's file name should be descriptive, clearly expressing the script's function and purpose. A combination of lowercase letters, digits, and underscores is recommended for easy recognition and maintenance.
2. Function and variable names: these should be descriptive, clearly expressing their purpose and meaning. Camel case is recommended, i.e. the first letter lowercase and the first letter of each following word capitalized.
III. Script Structure (a sketch that follows this structure appears after the list)
1. Import modules: first import the required modules, such as selenium and unittest.
2. Define the test class: using the unittest framework, define a test class that inherits from unittest.TestCase.
3. Define test methods: define test methods inside the test class; each test method should cover only one feature or scenario.
4. Initialization: write a setUp() method that runs before each test method to initialize the test environment, e.g. starting the browser and opening the page.
5. Test methods: write the concrete test steps and assertions, and make sure every test method can run and verify its result independently.
6. Cleanup: write a tearDown() method that runs after each test method to clean up the test environment, e.g. closing the browser and clearing caches.
7. Test suite: at the end of the script, build a test suite that groups all the test methods together for batch execution.
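A minimal sketch that follows the structure above; it assumes a locally installed chromedriver, and the URL and expected page title are hypothetical:

```python
# Structure: imports -> test class -> setUp -> tests -> tearDown -> suite.
import unittest
from selenium import webdriver

class LoginPageTest(unittest.TestCase):
    def setUp(self):
        # Initialize the test environment: start a browser session.
        self.driver = webdriver.Chrome()
        self.driver.get('https://example.com/login')  # hypothetical URL

    def test_title(self):
        # One test method, one check.
        self.assertIn('Login', self.driver.title)     # hypothetical title

    def tearDown(self):
        # Clean up: close the browser.
        self.driver.quit()

def suite():
    # Group test methods for batch execution.
    s = unittest.TestSuite()
    s.addTest(LoginPageTest('test_title'))
    return s

if __name__ == '__main__':
    unittest.TextTestRunner(verbosity=2).run(suite())
```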
IV. Comment Conventions
1. File comments: at the top of every script file, add a file comment containing the script name, author, version, modification date, and similar information.
2. Function comments: at the start of every function, add a comment describing the function's purpose, parameters, and return value.
3. Line comments: at the end of a line of code, add a comment explaining what the line does and why, where needed.
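In Python these three levels map naturally onto a module docstring, a function docstring, and # comments. A short sketch with placeholder metadata:

```python
"""login_test.py

Author:  <placeholder>
Version: 0.1
Updated: <placeholder>
Purpose: smoke tests for the login flow.
"""

def build_url(host, path):
    """Join a host and a path into a full URL.

    Args:
        host: server name without scheme.
        path: absolute path beginning with '/'.
    Returns:
        The joined https URL as a string.
    """
    return 'https://' + host + path  # simple concatenation keeps this dependency-free
```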
V. Code Conventions
1. Indentation: indent with four spaces; do not use tab characters.
2. Blank lines: add blank lines between functions and classes, between logical sections inside a function, and between code blocks to improve readability.
3. Line length: keep each line of code to at most 80 characters; wrap longer lines appropriately.
4. Spaces: add spaces around operators and after commas and colons to improve readability.
Automated Testing and Continuous Integration (CI) in Python

Automated testing and continuous integration are crucial parts of software development. Python, a powerful and easy-to-learn programming language, offers many tools and frameworks that support both. This article introduces the basic concepts, tools, and best practices of automated testing and continuous integration in Python.
1. Automated Testing

1.1 Basic concepts
In the software development process, testing is one of the key steps in ensuring software quality. Automated testing replaces manual testing by executing test cases through scripts or programs. It improves testing efficiency, reduces human error, and lets test cases run continuously.
1.2 Test frameworks in Python
Python provides several test frameworks, of which the most commonly used are unittest and pytest. unittest is a test framework in the Python standard library that provides a set of tools for writing and running tests. pytest is a third-party framework that is more flexible and easier to use than unittest.
1.3 Test-driven development (TDD)
Test-driven development is a development methodology that requires test cases to be written before the functional code. Python's automated testing support makes TDD practical: by writing a test case first and then making it pass, problems are found and fixed quickly during development, keeping the code correct and stable. A tiny sketch of the cycle follows.
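A minimal TDD illustration, assuming pytest as the runner (the function and its behavior are arbitrary examples):

```python
# TDD cycle sketch (run with pytest):
# red   -- write the test before the implementation and watch it fail;
# green -- add the simplest implementation that makes it pass.

def slugify(text):
    # Simplest implementation that satisfies the test below.
    return text.strip().lower().replace(' ', '-')

def test_slugify():
    assert slugify('  Hello World ') == 'hello-world'
```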
2. Continuous Integration (CI)

2.1 Basic concepts
Continuous integration is a development practice in which developers integrate code into the mainline frequently, and automated builds and tests verify each change. It helps teams find and solve problems quickly and keeps the code base consistent and deployable.
2.2 Continuous integration tools for Python
Several continuous integration tools work well with Python projects, such as Jenkins, Travis CI, and CircleCI. They integrate with version control systems to trigger builds automatically, run the tests, and even deploy to production.
2.3 Best practices
For continuous integration to run smoothly, a few best practices should be followed. First, make sure every commit passes the tests. Second, fix failing builds as early as possible, and clean up obsolete builds regularly. In addition, repair broken test cases promptly to keep the test suite accurate and stable.
AutoTest Feature Demonstration

The steps are as follows:

1. Install the AutoTest software. AutoTest consists of two programs: the host-side software AutoTest and the target-side software AutoTestRunner. Install matching, stable versions of both. After installation, both AutoTest and AutoTestRunner prompt for a key; request one from the appropriate staff. The host-side AutoTest also requires a feature-control file, autotest_xxxx.efl. Obtain this file and enable it: in AutoTest, open the menu "Tools" -> "License Management" to bring up the license dialog, click the "Install File License" button, and browse to autotest_xxxx.efl; the feature modules listed below it then change from "none" to "Available".

2. Start the system under test. In this demonstration the system under test is NetServer_2006_print_num.exe, a TCP server bound to local port 2006 that receives data from a client and sends the received data back to the client. Double-click it to run (make sure port 2006 is available).
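For illustration only, the demo server's receive-and-echo behavior can be reproduced with a few lines of Python; this sketch is not the actual NetServer_2006_print_num.exe:

```python
# Minimal TCP echo server on port 2006, mimicking the demo target.
import socket

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(('127.0.0.1', 2006))
    srv.listen(1)
    while True:
        conn, _addr = srv.accept()
        with conn:
            while True:
                data = conn.recv(4096)   # read what the client sent
                if not data:
                    break                # client closed the connection
                conn.sendall(data)       # echo it straight back
```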
3. Running the demonstration scenario
1) Start the target-side software AutoTestRunner. By default it binds port 4096 on its machine; make sure this port is available.
2) The demo project is named "network_demo.atproject". Start the host-side AutoTest software and open the project via the menu "File" -> "Open" -> "Project".
3) The "Project Manager" window in the top-left corner of AutoTest shows the project name and its contents. "Document" is AutoTest's test-requirement and test-plan management feature. It is essentially document management, centered on editing and managing text, images, and tables. "ICDManagers" is AutoTest's communication protocol manager, in which the protocols to be used can be constructed. Its emphasis is flexible protocol construction: it supports multiple data types (fixed-length or variable-length), nesting between different protocols, bit-field types, and more.
Using Python for Automated Testing

Python is a powerful and easy-to-learn programming language, which makes it very well suited to automated testing. Python has many libraries and frameworks for automated testing, for example unittest, pytest, nose, and selenium.
The basic steps for automated testing with Python are:

1. Install Python: if you have not installed Python yet, download and install the latest version from the official Python website.

2. Choose an automated testing framework: pick one that suits your project's needs. Commonly used frameworks include unittest, pytest, and nose.

3. Install the test framework: install the chosen framework with pip. unittest is part of the standard library and needs no installation; a third-party framework such as pytest is installed with: pip install pytest

4. Write test cases: write test cases using the chosen framework. They should cover all of the application's features and check the application's behavior in a variety of situations.

5. Run the test cases: run them from the command line or from an IDE. The results show whether each test case passed or failed, and the reason for any failure.

6. Analyze the results: examine the test results and decide whether the application or the test cases need to be fixed.

7. Iterate: revise the test cases based on the analysis, and rerun the tests to verify that the fixes work.
In short, automated testing with Python can greatly improve testing efficiency and quality, helping you release more reliable products faster.
Usage Tips for Code Automation Testing Tools

Chapter 1: An overview of automated testing tools
Automated testing is an indispensable part of the modern software development process. To improve testing efficiency and quality, developers rely on automated testing tools to carry out various testing tasks. Such tools can run test cases quickly, detect code errors and defects, and produce detailed test reports. This chapter introduces several commonly used kinds of automated testing tools.
1.1 Unit testing tools
Unit testing tests the smallest testable units of code. Common unit testing tools include JUnit (Java) and NUnit (.NET); they run test cases automatically and generate test reports. These tools help developers catch code errors, defects, and performance problems so they can be fixed promptly.
1.2 Integration testing tools
Integration testing tests the interfaces and interactions between multiple modules or components. Common integration testing tools include Selenium and Cucumber (which supports several programming languages); they can simulate user actions and automate tests against web applications. These tools help developers verify that the code is integrated correctly and that the system as a whole functions properly, as sketched below.
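A minimal Selenium script in Python for illustration; the URL and element locator are hypothetical, and a chromedriver installation is assumed:

```python
# Simulate a user visiting a page and submitting a search form.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get('https://example.com')            # hypothetical URL
    box = driver.find_element(By.NAME, 'q')      # hypothetical element name
    box.send_keys('automated testing')
    box.submit()
    assert 'results' in driver.title.lower()     # crude check on the result page
finally:
    driver.quit()
```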
1.3 API testing tools
API testing tests the interfaces between a system and external components or services. Common tools include Postman and SoapUI, which help developers send requests automatically and validate the responses. These tools can check that an interface conforms to its specification and test the system's performance and reliability; the sketch below shows the same request-and-validate cycle in code.
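Postman and SoapUI are standalone tools; purely to illustrate the same idea in this document's language, a sketch using Python's third-party requests library against a hypothetical endpoint:

```python
# Send a request and validate the response, Postman-style but in code.
import requests

resp = requests.get('https://api.example.com/users/1', timeout=5)  # hypothetical endpoint

assert resp.status_code == 200                        # the call succeeded
body = resp.json()
assert body.get('id') == 1                            # the payload matches expectations
assert 'application/json' in resp.headers['Content-Type']
```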
1.4 Performance testing tools
Performance testing measures a system's performance, stability, and reliability under load. Common tools include JMeter and LoadRunner, which simulate many users accessing the system at the same time and provide detailed performance metrics and reports. These tools help developers find a system's performance bottlenecks and guide optimization and tuning; a crude illustration follows.
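JMeter and LoadRunner are full-featured tools; purely to illustrate the concept, a crude concurrent-load sketch using only the Python standard library (URL and numbers are arbitrary, and there is no error handling):

```python
# Fire N concurrent requests at a URL and report simple latency stats.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL, WORKERS, REQUESTS = 'https://example.com/', 10, 50  # arbitrary values

def one_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as resp:
        resp.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=WORKERS) as pool:
    latencies = sorted(pool.map(one_request, range(REQUESTS)))

print('min %.3fs  median %.3fs  max %.3fs'
      % (latencies[0], latencies[len(latencies) // 2], latencies[-1]))
```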
Chapter 2: Tips for using code automation testing tools
2.1 Choose a suitable tool
When choosing an automated testing tool, consider the testing requirements, the technology stack, and the team's capabilities. To test a web application developed in Java, Selenium is a good choice for automation; to test APIs, consider Postman or SoapUI. Choosing a suitable tool improves both testing efficiency and accuracy.
Assertion Methods in Automated Testing, with Examples

Automated testing plays an ever more important role in software development. Given the complexity of modern software products, manual testing often cannot meet our needs, so we rely on automated testing to ensure software quality. Within automated testing, assertions play a crucial role. This article introduces some commonly used assertion methods, with examples.
What is an assertion?
In automated testing, an assertion is an enforced condition used to verify that a test result matches the expectation. When the result does not match, the assertion causes the test to fail. Assertions are one of the basic building blocks of automated testing; they are used to check whether the code produces the correct results. For example, to test a function we can use an assertion to compare its actual output with its expected output.
Commonly used assertion methods

1. assertEqual()
assertEqual() is one of the most commonly used assertion methods in Python's unittest module. It compares two values for equality: if the two values are equal the test passes; if they are not, the test fails.
Example code for assertEqual():

```python
import unittest

class TestStringMethods(unittest.TestCase):
    def test_upper(self):
        self.assertEqual('hello'.upper(), 'HELLO')

if __name__ == '__main__':
    unittest.main()
```

In this example we test whether the upper() method of the string 'hello' returns the uppercase string 'HELLO'. If the output matches the expectation, the test passes.
2. assertTrue() and assertFalse()
assertTrue() and assertFalse() check whether an expression is true or false. assertTrue() passes when the expression evaluates to True and fails otherwise; assertFalse() is the reverse.

Example code for assertTrue() and assertFalse():

```python
import unittest

class TestStringMethods(unittest.TestCase):
    def test_isupper(self):
        self.assertTrue('HELLO'.isupper())
        self.assertFalse('hello'.isupper())

if __name__ == '__main__':
    unittest.main()
```

In this example we test that the string 'HELLO' is uppercase and that the string 'hello' is not.
Python Automated Testing by Example

Introduction
Automated testing is an important part of the software development process: it improves testing efficiency, lowers testing costs, and reduces human error. Python, a concise and easy-to-learn programming language, is widely used in automated testing. This article presents worked examples of Python automated testing, covering the choice of a test framework, the writing of test cases, and the generation of test reports.
Choosing an automated testing framework
Before starting Python automated testing, we first need to choose a suitable framework. Several commonly used options:

1. unittest
unittest is Python's built-in unit testing framework. It provides a set of tools and methods for writing and running test cases, including a rich collection of assertion methods and test fixtures, which makes test cases easy to write and manage.

2. pytest
pytest is a powerful Python testing framework. It discovers and runs test cases automatically and offers a rich ecosystem of plugins and extensions. Its concise syntax and friendly error messages make writing and debugging test cases easier.

3. Robot Framework
Robot Framework is a generic automation framework that supports keyword-driven and data-driven testing. With its rich built-in libraries and plugins, it can easily automate web, API, mobile, and other kinds of testing.

Choose a framework that matches the project's actual needs and the team's technology stack; this choice matters a great deal.
Writing test cases
Writing test cases is the core of Python automated testing. Test cases should cover the system's features and boundary conditions in order to safeguard the software's quality and stability. The following simple example shows test cases written with the unittest framework:

```python
import unittest

class MyTestCase(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)

    def test_subtract(self):
        self.assertEqual(3 - 1, 2)

    def test_multiply(self):
        self.assertEqual(2 * 3, 6)

    def test_divide(self):
        self.assertEqual(6 / 2, 3)

if __name__ == '__main__':
    unittest.main()
```

In this example we define a test class MyTestCase that inherits from unittest.TestCase and contains four test methods.
Recommended Android Automated Testing Frameworks and a Usage Guide

Automated testing is a very important part of the mobile application development process. It improves testing efficiency, reduces the cost of manual testing, and helps guarantee product stability and quality. The Android platform has many excellent automated testing frameworks. This article introduces several of them, with usage guidance to help you choose and apply a suitable framework.
1. Appium
Appium is a cross-platform open-source automated testing framework that supports multiple mobile operating systems, including Android and iOS. It uses the standard WebDriver protocol and can run on any platform that supports WebDriver. Appium supports several programming languages, such as Java, Python, and Ruby, so developers can choose according to their preferences and experience.

To use Appium for Android automated testing, first install the Appium runtime environment, including the Appium server and its dependencies. Then write test scripts that exercise the application through the API Appium provides. The scripts can be written in any supported language, depending on the developer's needs and technology stack. Connect an Android device or emulator to run the scripts and collect the results. A small Python sketch follows.
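A minimal sketch of an Appium test in Python, assuming the Appium-Python-Client package (2.x-style API shown; newer versions prefer option objects), a local Appium server on port 4723, and placeholder device and app details:

```python
# Appium-Python-Client sketch: tap one button in an Android app.
from appium import webdriver
from appium.webdriver.common.appiumby import AppiumBy

caps = {
    'platformName': 'Android',
    'automationName': 'UiAutomator2',
    'deviceName': 'emulator-5554',        # placeholder device id
    'app': '/path/to/app-debug.apk',      # placeholder APK path
}

driver = webdriver.Remote('http://127.0.0.1:4723/wd/hub', desired_capabilities=caps)
try:
    # Tap a button identified by its accessibility id (placeholder).
    driver.find_element(AppiumBy.ACCESSIBILITY_ID, 'login_button').click()
finally:
    driver.quit()
```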
2. Espresso
Espresso is Google's official Android automated testing framework, focused on interaction testing inside an application. It provides a rich API that can simulate all kinds of user actions in the app, such as clicking, typing, and scrolling. Espresso also provides a powerful assertion library for verifying the application's state and UI elements.

To use Espresso for Android automated testing, simply add the Espresso libraries and dependencies to the project and write the corresponding test code. Espresso's API is concise and easy to use. Developers can organize test steps as chained calls, which keeps test code clear and readable. Espresso also ships some practical utilities and plugins that help developers build and run tests quickly.
3. Robotium
Robotium is a powerful Android automated testing framework with a high degree of flexibility and ease of use.
Autotest—Testing the Untestable

John Admanski, Google Inc., jadmanski@google.com
Steve Howard, Google Inc., showard@google.com

Abstract

Increased automated testing has been one of the most popular and beneficial trends in software engineering. Yet low-level systems such as the kernel and hardware have proven extremely difficult to test effectively, and as a result much kernel testing has taken place in a manual and relatively ad-hoc manner. Most existing test frameworks are designed to test higher-level software isolated from the underlying platform, which is assumed to be stable and reliable. Testing the underlying platform itself requires a completely new set of assumptions and these must be reflected in the framework's design from the ground up. The design must incorporate the machine under test as an important component of the system and must anticipate failures at any level within the kernel and hardware. Furthermore, the system must be capable of scaling to hundreds or even thousands of machines under test, enabling the simultaneous testing of many different development kernels each on a variety of hardware platforms. The system must therefore facilitate efficient sharing of machine resources among developers and handle automatic upkeep of the fleet. Finally, the system must achieve end-to-end automation to make it simple for developers to perform basic testing and incorporate their own tests with minimal effort and no knowledge of the framework's internals. At the same time, it must accommodate complex cluster-level tests and diverse, specialized testing environments within the same scheduling, execution and reporting framework.

Autotest is an open-source project that overcomes these challenges to enable large-scale, fully automated testing of low-level systems and detection of rare bugs and subtle performance regressions. Using Autotest at Google, kernel developers get per-checkin testing on a pool of hundreds of machines, and hardware test engineers can qualify thousands of new machines in a short time frame. This paper will cover the above challenges and present some of the solutions successfully employed in Autotest. It will focus on the layered system architecture and how that enables the distribution of not only the test execution environment but the entire test control system, as well as the leveraging of Python to provide simple but infinitely extensible job control and test harnesses, and the automatic system health monitoring and machine repairs used to isolate users from the management of the test bed.

1 Introduction

Autotest is a framework for fully automated testing of low-level systems, including kernels and hardware. It is designed to provide end-to-end automation for functional and performance tests against running kernels or hardware with as little manual setup as possible. This automation allows testing to be performed with less wasted effort, greater frequency, and higher consistency. It also allows tests to be easily pushed upstream to various developers, moving testing earlier into the development cycle.

Using Autotest, kernel and hardware engineers can achieve much greater test coverage than such components usually receive. This typical lack of effective low-level systems testing comes with good reason: automated testing of such systems is a difficult task and presents many challenges distinct from userspace software testing. This paper introduces the requirements Autotest aims to meet and some of the unique challenges that arise from these requirements, including robust testing in the face of system instability, scaling to thousands of test machines, and minimizing complexity of test execution and test development. The paper will discuss solutions for each of these challenges that have been employed in Autotest to achieve effective, fully automated low-level systems testing.

2 Background

High-quality automated testing is a necessity for any large, long-lived software project to maintain stability while permitting rapid development. This is as true for the Linux kernel and other system software as it is for user-space software. However, so far the benefits of automated testing have been most successfully realized within user-space applications.

Most existing test automation frameworks are targeted at software running on top of the platform provided by the hardware and operating system, the realm in which nearly all software operates. By taking advantage of the assumption that an application is running in a reliable standardized environment provided by the platform, a framework can abstract away and simplify most of the underlying system. When attempting to provide the same services for kernel (and hardware) testing, this assumption is no longer reasonable since the underlying system is an integral component of what is being tested. This was part of the original motivation for the development of the first versions of Autotest and its predecessor, IBM Autobench [5][4].

Autotest begins with the goal of testing the underlying platform itself, and this goal engenders a unique set of requirements. Firstly, because the platform on which Autotest runs is itself under test, Autotest must be built from the ground up to assume system instability. This requires graceful handling of kernel panics, hardware lockups, network failures, and other unexpected failures. In addition, tasks such as kernel installation and hardware configuration must be simple, commonplace activities in Autotest.

Secondly, because the platform under test cannot be easily virtualized, every running test requires a physical machine. Hardware virtualization may be used for basic kernel testing, but as it fails to produce accurate performance results and can mask platform-specific functional issues it is useful only for the most basic kernel functional verification. Autotest is therefore built to run every test on a physical machine, both for kernel and hardware testing. This makes coordination among multiple machines a core necessity in Autotest and furthermore implies that scaling requires distribution of testing among hundreds or even thousands of machines. This additionally creates a need for a system of efficient sharing of test machines between users to maximize utilization over such a large test fleet.

Finally, Autotest must fulfill the generic requirements of any testing framework. In particular, Autotest must minimize the overhead imposed on test developers. It must be trivial to incorporate existing tests, easy to write simple new tests, and possible to write complex multi-process or multimachine tests, all within the same basic framework. Furthermore, developing tests should be a simple, familiar process, requiring interaction with only a small subset of the available infrastructure. Tests must therefore be easily executable by hand and simultaneously pluggable into a large-scale scheduling system.

These levels of abstraction are broken down into distinct modules discussed in more detail throughout this paper.

[Figure 1: High level operation of a complete Autotest system]

As illustrated in Figure 1, the lowest layer of the system is the Autotest client, a simple test framework that runs on individual machines. The next layer, Autoserv, is designed to run on centralized test servers to automatically install and execute clients and to coordinate multi-machine tests. The outermost layer consists of a single frontend and job scheduler to allow multiple users to share a single test fleet and results repository. Note that the dependencies go in only one direction, making the design more modular and allowing users to interact with the system on multiple levels. On a large scale users can push a button on a web interface to launch a complete test suite on a large cluster of machines, while on a small scale users can run a single test on a local workstation by executing a shell command.

2.1 Related work

The Linux Test Project "has a goal to deliver test suites to the open source community that validate the reliability, robustness, and stability of Linux" [1]. It is a collection of functional and stress tests for the Linux kernel and related features as well as a client infrastructure for test execution. The client infrastructure eases the execution of many tests (there are over 3,000 tests included), supports running tests in parallel, can generate background stress during test execution, and generates a report of test results at the end of a run. LTP is not, however, intended to be a general-purpose, fully-automated kernel testing framework. There are a number of Autotest goals that are specifically non-goals of LTP [8]. It is essentially a collection of tests and is therefore suitable for inclusion into Autotest as a test, and indeed such inclusion has been easily done.

An automation framework called Xentest was developed for testing the Xen virtualization project. David Barrera et al. note that "testing Linux under Xen and testing Linux itself are very much alike" and perform part of their testing by "running standard test suites under Linux running on top of Xen", including LTP [3]. Since testing Xen is much like testing the underlying hardware itself, the goals of Autotest share much in common with those of Xentest, both from a kernel testing and a hardware testing point of view. Xentest is a collection of scripts with support for building and booting Xen, running tests under it, and gathering results logs together. It does not support any automated analysis of test results to determine pass/fail conditions. Test runs are configurable by a control file using the Python ConfigParser module. This provides simple configuration but lacks any programmatic power within control files. Finally, Xentest is built closely around Xen and does not aim to be a generic framework for kernel or hardware testing. On the other hand, Autotest could be used to perform Xen testing much like Xentest does, and some work has been done on this in the past.

Crackerjack is another test automation system, one designed specifically for regression testing [10]. It focuses on finding incompatible API changes between kernel versions. This is valuable testing but is a narrower focus than that of Autotest.

Two frameworks that address the problem of distributed kernel testing are PyReT [6] and ANTS [2]. The former depends on a shared file system for all communications while the latter uses a serial console. Both of these requirements on test machines were deemed too restrictive for Autotest, which relies solely on an SSH connection for communications. ANTS is quite robust to test machine failures, as it configures all test machines from scratch using network booting and is capable of using remote power control to reset and recover machines that have become unresponsive. The system additionally includes a machine reservation tool so that machines can be shared between developers and the automated system without conflict. These are all important features that have found their way into Autotest. However, the system is built strictly for nightly testing and does not support a general queue of user-customizable jobs. It includes very limited results analysis in the form of an email report upon completion of the night's tests. It runs a number of open-source tests (including LTP) but does not support more complex, multimachine tests. Finally, the system is proprietary and therefore of little direct utility to the community.

For distributed performance testing of the kernel there exist systems presented by Alexander Ufimtsev [9] and Tim Chen [7]. In both systems, test machines operate autonomously, running a client harness which monitors the kernel repository, building and testing new releases as they appear. In this sense, the systems are built around the specific purpose of per-release testing, although the latter system includes support for testing arbitrary patches on any kernel. Both systems' clients transmit results to a central repository, a remote server in the former case and a shared database in the latter. The former system includes some automated analysis for regression detection based on differences from previous averages, a task not yet implemented in Autotest. The latter system includes a web frontend displaying graphs of each benchmark over kernel versions, with support for displaying profiler information, rerunning tests or bisecting to find the patch responsible for a regression. Autotest includes partial support for these features but could benefit from improvements in this area.

3 Autotest Client

The most basic requirement that Autotest is intended to fulfill is to provide an environment for running tests on a machine in a way that meets the following criteria:

1. The lowest, most bare-metal access must be available.
2. Test results are available in a standard machine-parseable way.
3. Standard tests developed outside of the framework can be easily run within it.

The first of the criteria, low-level system access, seems fairly self-evident when writing tests which are aimed at the kernel and the hardware itself. To test a particular component of a system, the test must be written using tools that have access to the standard API for that component. Since C is the lingua franca of the systems world, a C API can generally be counted on as being available, but even that isn't always the case. When creating a file system during a test, mkfs is going to be the easiest and most readily available mechanism; so as well as being able to easily incorporate custom C, the framework must also make it easy to work with external tools.

This initial requirement could have been satisfied by writing the framework itself in C, but that would ultimately have conflicted with the other requirements that Autotest was expected to meet. First, this would've made calling out to external applications ultimately more difficult; while functions like fork, exec, popen and system provide all the basic mechanisms needed to launch an external process and collect results from it, working with them in C requires a relatively large amount of boilerplate compared to a higher-level scripting language such as Perl or Python. This only becomes more true if the output of the executed process needs to be manipulated and/or parsed in any way. The second requirement, that test results be logged in a standard way, almost guarantees that the test will need to do string manipulation, another task simplified by using a scripting language.

To meet these somewhat conflicting requirements, the Autotest framework itself was written in Python, with utilities provided to simplify the compilation and execution of C code. Tests themselves are implemented by creating a Python module defining a test subclass, satisfying a standardized, pre-defined interface. Individual tests are packaged up in a directory and can be bundled along with whatever additional resources are needed, such as data files, C code to be compiled and executed, or even pre-compiled binaries if necessary.

This also satisfies the third of the three requirements, the ability to run standard tests written independently of Autotest. All that is required is to bundle the components necessary for the test with a simple Python wrapper. The wrapper is responsible for setting up any necessary environment, executing the underlying test, and translating the results from the form produced by the test into Autotest standard logging calls. The wrappers are generally quite simple; the median size of a test wrapper in the current Autotest distribution is only 38 lines.

Using Python for implementing tests also provides an easy mechanism for bundling up suites of tests or customizing the execution of specific tests. Tests themselves are executed by writing a "control file" which is simply a Python script executing in a predefined environment. It can be a single line saying "execute this test", a more complex script that executes a whole sequence of tests, or even a script that conditionally executes tests depending on what hardware and kernel are running on the machine. The environment provided by Autotest contains additional utilities that allow control files to put the machine into any state necessary for executing tests, even if it requires installing a kernel and rebooting the machine. Having the full power of Python available allows test runners to perform limitless customization without having to learn a custom job control language.

This power does come with one major drawback, though. Due to the dynamic nature of Python and the power available to control files, it is impossible to statically determine much information about a job. For example, it is impossible to know in advance what tests a job will run, and indeed the set of tests run may potentially be nondeterministic. This limitation has not been severe enough to outweigh the benefits of this approach.

3.1 Installation Problems

As this system was put into use at Google, the installation of Autotest onto test machines quickly became a serious performance issue. Allowing test developers to bundle data, source code and even binaries with their tests made it easy to write tests but allowed the installation size to grow dramatically. The situation could be somewhat alleviated by minimizing how often an install was necessary, but in practice this only helps if the test framework can be pre-installed on the systems.

The solution to this problem is a fairly standard one: rather than treating Autotest and its test suite as a single, monolithic package, break it up into a set of packages:

• a core package containing the framework itself
• packages for the various utilities and dependencies such as profilers, compilers and any non-standard system utilities that would need to be installed
• packages for the individual tests

Each package is able to declare other packages as dependencies. The core package can be installed everywhere and is fairly lightweight, consisting only of a set of Python source files without any of the more heavyweight data and binaries required by some tests. When executing a job, the framework is then able to dynamically download and install any packages needed to execute a specific test.

4 Autotest Server

4.1 Distributing test runs across machines

The Autotest client provides sufficient infrastructure for running low-level tests but it only executes tests and collects results on a single machine. To test a kernel on multiple hardware configurations, a tester would need to install the test client on multiple machines, manually run jobs on each of these machines, and examine the results scattered across these systems.

This deficiency led to the development of Autoserv, an Autotest server, a separate layer designed around the client. It allows a user to run a test by executing a server process on a machine other than the test machine. The server process will connect to the remote test machine via SSH, install an Autotest client, run a job on the client, and then pull the results back from the test machine. Localizing these server runs to a single machine allows users to run test jobs on arbitrary sets of machines while collecting all the results into a central location for analysis.

4.2 Recovering failed test systems

Once users start running tests on larger sets of machines, dealing with crashed systems becomes a much more common occurrence. As the number of test machines increases, bad kernels (and random chance) are going to result in more failed systems. When testing on a single machine, manual intervention is the simplest method of dealing with failure, but this does not scale to hundreds or thousands of machines. Automation becomes necessary, with two major requirements:

• Automatically detect and report on test machine failures
• Provide a mechanism for repairing broken systems

Handling these requirements entirely within the client running on the test machine is impractical; detecting and reporting a kernel panic or hardware failure will not even be possible when the crash kills the test processes on the machine. Similarly, repair may require re-imaging a machine, which will wipe out the client itself.

With job execution controlled from a remote machine, handling these requirements becomes feasible. Autoserv implements support for monitoring serial console output, network console output and general syslog output in /var/log. It can also interact with external services that collect crash dumps and even power cycle the machine if that capability is available. In the very worst case the server process can at least clearly log the failure of the job (and any tests it was running) along with the last known state of the failed test machine.

Automated repair can also be performed. This is implemented in Autoserv in an escalating fashion, first by making several attempts to put the machine back into a known good state, then by optionally calling out to any local infrastructure in place to carry out a complete reinstallation of the machine, and finally, if necessary, by escalating the repair process to a human. Testing on large numbers of machines now becomes much more practical when systems broken by bad kernels (or bad tests) can be put back into a working state with a minimum of human intervention.

4.3 Multi-machine tests

Remote control of test execution also introduces the opportunity to run single tests that span multiple machines. While this could be done with the Autotest client alone by running the client on a master test system and having it drive other slave test systems, this would require duplicating most of the "remote control" infrastructure from the server directly into the client. This could also be problematic from a security point of view since, rather than routing control through a single server, the test machines would require much more liberal access to one another.

Since Autotest already established the need for a separate server mechanism, it was natural to extend it to support "server-side" testing. Instead of only providing a fixed set of server operations (install client and run job, repair, etc.), Autoserv allows testers to supply a Python control file for execution on the server, just like on the client. This can be used to implement, for example, a network test with the following flow:

• Install Autotest client on two machines
• Launch "network server" job on one machine
• Launch "network client" job on one machine
• Wait for both jobs to complete and collect results

No single-machine networking test can duplicate the same results, particularly when attempting to quantify networking performance and not just test the stability of the network stack.

This also allows for execution of larger-scale cluster testing. Although this begins to creep beyond the scope of systems testing, it still has significant value, not as a way to test the cluster applications but rather as a way of testing the impact of kernel and hardware changes on larger-scale applications. A smaller-scale cluster test can follow a workflow similar to that for network testing. Alternatively, a server job can make use of pre-existing cluster setup and management tools, simply driving the external services and collecting results afterwards.

4.4 Mitigating Network Unreliability

While one of the primary goals of Autoserv is to increase reliability, it also introduces new unreliabilities as an unfortunate side effect. The primary issue is that it introduces a new point of failure: the connection between the server and the client machines. Working directly with the client, a user can launch a job on a machine and return after expected completion, and any transient network issues will not affect the test result. This is no longer the case when the job is being controlled by a remote server that continuously monitors the test machine. The problem can be alleviated somewhat by periodically polling the remote machine rather than continually monitoring it, but ultimately this only reduces susceptibility to the problem.

Implementing more reliable communications over OpenSSH ultimately proved too difficult, primarily due to the lack of control over and visibility into network failure modes. One alternative considered was to use a completely separate communication mechanism, but this was rejected, as using SSH provides Autotest with a robust and secure mechanism for communication and remote execution without requiring the large investment of time and labor required to invent a custom protocol that would then need to be installed on every test machine.

Instead the solution was to add an alternative SSH implementation that uses a Python package (paramiko) instead of launching an external OpenSSH process. Using an in-process library allowed tighter integration and communication between Autoserv and the SSH implementation, allowing the use of long-lived SSH connections with automatic recovery from network failure. At the same time modifications were made to the Autotest client to allow it to be run as a detachable daemon so that the automatic connection recovery could re-attach to clients with no impact on the local testing.

Adding paramiko support had the additional benefit of reducing the overhead of executing SSH operations from Autoserv by performing them in-process, as well as simplifying the use of multi-channel SSH sessions to avoid the cost of continually creating and terminating new sessions. Within Autoserv this is implemented in such a way that the paramiko-based implementation can be used as a drop-in replacement for the OpenSSH-based one, allowing testers to make use of whichever is better suited to their needs. OpenSSH works better "out of the box" with most Linux configurations, while paramiko, which requires more setup and configuration, ultimately allows for more reliable, lightweight connections.

5 Scheduler and Frontend

5.1 Shared machine pool

Autoserv provides a convenient and reliable way for individual users to test small numbers of platforms. As a standalone application, however, it cannot possibly fulfill the requirement of scaling to thousands of machines and achieving efficient utilization of a shared machine pool. To address these needs the Autotest service architecture provides a layer on top of Autoserv that allows Autotest to operate as a shared service rather than a standalone application. Rather than execute the Autotest client or server directly, users interact with a central service instance through a web- or command-line-based interface. The service maintains a shared machine pool and a global queue of test jobs requested by users.

There are three major components that make this usage model possible. The Autotest Frontend is an interface for users to schedule and monitor test jobs and manage the machine pool. The Autotest Scheduler is responsible for executing and monitoring Autoserv to run tests on machines in the pool in response to user requests. Finally, the results analysis interface, not discussed in this paper, provides a common interface to view, aggregate and analyze test results.

The Autotest Frontend is a web application for scheduling tests, monitoring ongoing testing, and managing test machines. It operates on a database which takes the available tests, the machines in the shared test bed, and the global queue of test jobs that have been scheduled by users. The scheduler interacts with the frontend through this database, executing test jobs that have been scheduled and updating the statuses of jobs and machines based on execution progress.

The frontend supports a number of features to help users organize the machine pool. First, the system supports access control lists to restrict the set of users that can run tests on certain machines. Some machines may be open for general testing, but some users, particularly hardware testers, will have dedicated machines that cannot be used by others. Second, the system supports tagging of machines with arbitrary labels. The most common usage of this feature is to mark the platform of a machine, which is often important for both job scheduling and results analysis. Labels can additionally be used to declare machine capabilities, such as remote power control, or to group together large numbers of machines for easier scheduling.

The scheduler is a daemon running on the server whose primary purpose is to execute and monitor Autoserv processes. The scheduler continuously matches up scheduled test jobs with available machines, launches Autoserv processes to execute these jobs, and monitors these processes to completion. It updates the database with the status of each job throughout execution, allowing the user to track job progress. Upon completion, the scheduler executes a parser to read Autoserv's structured results logs into a database of test results. The user can then perform powerful analysis of these results through a special results analysis interface.

An important feature of the scheduler is its statelessness. While it maintains plenty of in-memory state, all important state can be reconstructed from the database. This is exactly what happens upon scheduler startup, ensuring that when the scheduler needs to restart, all tests will continue running uninterrupted and machine time won't be wasted. This is critical for minimizing user impact during deployments of new Autotest versions or after a scheduler crash.