Best Practices for Software Performance Engineering (2003)
- Format: PDF
- Size: 165.36 KB
- Pages: 10
IBM Launches Six "Software Capabilities," and Other News
Author:
Source: China Computer World, 2011, No. 45

Corporate News

IBM Launches Six "Software Capabilities"
IBM recently announced six "software capabilities": insight, agility, collaboration, innovation, optimization, and security. These capabilities help enterprises turn information into insight, drive product and service innovation, drive business integration and optimization, connect and collaborate, manage risk, security, and compliance, and optimize the impact of business architecture and services. These are both the business needs most enterprises care about most and the capabilities that IBM software provides.

HP and AMD Push Into the Data Center
HP recently announced new HP ProLiant G7 servers based on the AMD Opteron 6200 series processors. The series improves data center efficiency and scalability and delivers up to 30% more performance, supporting large-scale virtualized database workloads. HP released five new ProLiant x86 industry-standard servers, including the HP ProLiant BL465c G7 and the HP ProLiant BL685c G7.

IT Software Vendors Show the Greatest Potential in the Information Industry
The 2011 "Beijing Information Network Industry New Business Model Innovative Enterprise List" was published recently, naming the 30 information network enterprises with the most innovation and growth potential. The participating enterprises came mainly from software, Internet information services, cloud computing, the Internet of Things, and the mobile Internet. The 30 finalists span software, Internet information services, electronic products, mobile communications, and cloud computing, with software providers in the majority.
ACI Practice Exam Questions (Set Five)
1. Which of the following is not a key feature of ACI? A. Application Centric Infrastructure B. leaf-and-spine architecture C. integrated network and storage D. the Open Shortest Path First (OSPF) protocol
2. What is ACI's main goal? A. greater scalability and manageability for the data center B. more bandwidth and speed C. lower network deployment and maintenance costs D. expanding the data center's physical space
3. Which tools can you use to manage an ACI fabric? A. Cisco APIC B. Cisco ACI Toolkit C. Cisco DCNM D. all of the above
4. Which option describes the ACI policy model? A. policy-based automation B. traditional CLI management C. separate network and storage D. manual configuration
5. Which of the following is not a common element of an ACI tenant? A. Application Profile B. Endpoint Groups (EPGs) C. VLANs D. Contracts
6. Which option describes the SPI (Switch Peer Group) in an ACI fabric? A. used for routing solutions B. used for redundant connectivity C. used for the IS-IS protocol D. used for BGP
7. In ACI, which protocol is used for fabric discovery? A. BGP B. OSPF C. IS-IS D. LLDP
8. Which of the following can trigger policy resolution in ACI? A. a new Application Profile B. creation of a new tenant C. a network failure D. all of the above
9. In ACI, which option describes a VMM (Virtual Machine Manager) domain? A. used for integrating virtual machines and virtual switches B. used for managing physical network devices C. used for load balancing D. used for data center backup
10. Which option best describes ACI's key advantage? A. traditional CLI management B. policy-based automation C. manual configuration D. separate network and storage
11. Select all that apply to ensure correct communication in ACI: A. configure Contracts between EPGs B. use VLANs to partition traffic C. create Application Profiles and EPGs D. associate VRFs with EPGs
12. Which ACI concept is used to virtualize network and storage resources? A. Application Network Profile B. Endpoint Groups (EPGs) C. VMM integration D. the operating system
13. Which of the following is not a fabric access-control mechanism in ACI? A. micro-segmentation B. MACsec C. service chaining D. EIGRP
14. In ACI, which setting implements cross-rack links in the fabric? A. link aggregation B. ACI spine node settings C. port channeling D. 10G links
15. Which descriptor does not apply to external connectivity in ACI? A. connecting to external devices B. used for Internet connectivity C. port channels D. VRF separation
Answers: 1. D, 2. A, 3. D, 4. A, 5. C, 6. B, 7. D, 8. D, 9. A, 10. B, 11. A, C, D, 12. C, 13. D, 14. C, 15. D
Recommended Tools and Plug-ins for More Efficient Computer Use

Chapter 1: Desktop tools. To work with software more efficiently, we can use various desktop utilities to simplify common operations. Some widely used desktop tools:
1. Launchy: a quick-launch tool; a simple key combination opens applications and files quickly, avoiding tedious mouse work.
2. Fences: a desktop-organization tool that groups desktop icons, making the desktop tidier and more usable so you find the application or file you need faster.
3. Dexpot: a virtual-desktop tool that splits the desktop into several virtual desktops, letting you handle multiple tasks at once and work more efficiently.

Chapter 2: Browser plug-ins. The browser is one of the programs we use most every day, so improving browser efficiency matters especially. Some widely used plug-ins:
1. OneTab: merges all open browser tabs into a single page, saving memory and improving browser performance.
2. LastPass: a password manager that saves and auto-fills login details for websites, sparing you from typing passwords repeatedly and speeding up browsing.
3. Evernote Web Clipper: quickly saves web content into Evernote notes for later reading and organizing.

Chapter 3: Document tools. For anyone who frequently edits and processes documents, choosing suitable tools is key to working efficiently. Some widely used document tools:
1. Office keyboard shortcuts: knowing and using the Office shortcuts greatly speeds up document work and cuts down on lengthy mouse operations.
2. WPS Office: an office suite compatible with Microsoft Office, offering a full set of efficient editing and formatting tools that covers everyday document needs.

Chapter 4: Code editors. For programmers, a suitable code editor greatly improves development efficiency. A widely used code editor:
1. Visual Studio Code: a lightweight code editor with a rich ecosystem of extensions and shortcuts; it supports many programming languages and suits all kinds of development tasks.
Embedded Systems Fundamentals, Exam B (with answers)
2021–2021 academic year, second semester, final examination
I. Multiple choice (2 points each, 30 points total)
1. Which of the following operating systems is not a commercial operating system? () a. Windows XP b. Linux c. VxWorks d. WinCE
2. Which of the following is not a characteristic of an embedded operating system? () a. compact kernel b. strong specialization c. rich functionality d. high real-time performance
3. Which of the following is not an embedded-system debugging method? () a. simulation debugging b. software debugging c. BDM/JTAG debugging d. standalone debugging
4. In an embedded ARM processor, which interrupt type has the highest priority? () a. Reset b. Data Abort c. FIQ d. IRQ
5. The correct difference between NAND flash and NOR flash is (). a. NOR reads slightly more slowly than NAND b. NAND writes much more slowly than NOR c. NAND erases much more slowly than NOR d. most write operations require an erase first
6. A so-called 32-bit microprocessor is one in which (). a. the address bus is 32 bits wide b. processed data can only be 32 bits long c. the CPU word length is 32 bits d. there are 32 general-purpose registers
7. ADD R0, R1, [R2] uses (). a. immediate addressing b. register-indirect addressing c. register addressing d. base-plus-index addressing
8. The longer the data word, (). a. the faster the clock frequency b. the faster the computation c. the weaker the memory-addressing capability d. the higher the precision
9. The classic computer system architecture is (). a. the von Neumann architecture b. the Harvard architecture c. the single-bus architecture d. the dual-bus architecture
10. Which of the following is not a characteristic of RISC instruction sets? () a. heavy use of registers b. fixed-length instruction formats c. multi-cycle instructions d. many addressing modes
11. Which of the following devices is not an embedded-system product? () a. PDA b. automatic teller machine c. personal computer d. set-top box
12. Which of the following is not an ARM processor exception mode? ()
Advanced Computer Programmer Practice Exam (with reference answers)
I. Multiple choice (90 questions, 1 point each, 90 points total)
1. Visual C++ provides the (), a dedicated environment for creating or modifying resources; it uses sharing techniques and its interface to create and modify application resources quickly and simply. A. AppWizard B. the resource editor C. ClassWizard D. the resource manager. Answer: B
2. Which statement about HTML Help Workshop is correct? () A. it cannot browse, edit, or convert graphics B. it cannot capture screen images C. it cannot compress HTML files D. it cannot edit sound and images. Answer: B
3. C# components fall into two classes: () and (). A. class libraries without a graphical interface, and controls with a user interface B. controls without a graphical interface, and class libraries with a user interface C. class libraries without a graphical interface, and class libraries with a user interface D. controls without a graphical interface, and controls with a user interface. Answer: A
4. Permissions can be granted to user accounts indirectly through the (). A. group description B. group membership C. group account D. group password. Answer: C
5. TrackRecord is a test-management tool from (). A. Rational B. Compuware C. Mercury Interactive D. IBM. Answer: B
6. Use cases are divided into system use cases and (). A. sequence use cases B. business use cases C. object use cases D. test use cases. Answer: B
7. The () determines how SQL Server compares data when querying a database. A. service login ID B. character set C. network library D. sort order. Answer: D
8. The () method creates and returns a SqlCommand object associated with a SqlConnection. A. ExecuteReader() B. Open() C. ExecuteNonQuery() D. CreateCommand(). Answer: D
9. A parallel interface suits scenarios with (), and its interface circuitry is relatively simple. A. long transmission distance and high speed requirements B. short transmission distance and high speed requirements C. short transmission distance and low speed requirements D. long transmission distance and low speed requirements. Answer: B
10. When exactly one instance of a class is needed, and clients may access it only from a single global access point, the design pattern to choose is the ().
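Question 10 points at the Singleton pattern: a class with exactly one instance, reached through a single global access point. A minimal Python sketch; the class name and structure are illustrative, not taken from the exam:

```python
class Connection:
    """A Singleton sketch: one shared instance, one global access point."""
    _instance = None

    @classmethod
    def instance(cls):
        # Create the sole instance lazily on first use, then reuse it.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

a = Connection.instance()
b = Connection.instance()
assert a is b  # every client receives the same, single instance
```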
Nirvana Plan Open-Source General Capability Exam Questions (with answers)
I. Multiple choice (35 points)
1. To exchange the values of variables A and B, the statement group to use is (). A. A=B; B=C; C=A B. C=A; A=B; B=C C. A=B; B=A D. C=A; B=A; B=C. Answer: B
2. Software that only the person, team, or organization that created it may modify, and whose maintenance they control, is called (). A. documentation B. closed-source software C. software D. a development manual. Answer: B
3. When converting from an E-R model to the relational model, the key of the relation produced by an M:N relationship is (). A. the key of the M-side entity B. the key of the N-side entity C. the combination of the M-side and N-side entity keys D. some other attribute chosen instead. Answer: C
4. According to Jesse James Garrett, the father of AJAX, interaction design belongs to the () plane of the elements of user experience. A. surface B. skeleton C. structure D. scope. Answer: C
5. A network topology in which the failure of any single workstation may bring the whole network to a stop is the () topology. A. star B. ring C. bus D. tree. Answer: B
6. Among common data-processing operations, () is the most basic. A. deletion B. searching C. reading D. insertion. Answer: B
7. A personal computer, or PC, belongs to the class of (). A. minicomputers B. giant (mainframe-class) computers C. microcomputers D. supercomputers. Answer: C
8. Modern computers can process data automatically and continuously mainly because they (). A. use semiconductor devices B. use binary C. use switching circuits D. store their programs. Answer: D
9. Open source code refers to (). A. the original source code B. executable files C. binary files D. software user documentation. Answer: A
10. Computers were first applied to (). A. data processing B. industrial control C. computer-aided work D. scientific computing. Answer: D
11. The main reason computers use binary is (). A. it is easy to represent with electronic components B. it stores large amounts of information C. it matches human habits D. it makes data input and output convenient. Answer: A
12. The main characteristics of a computer are ().
Topic: Exploring the Impact of the HackerRank Prudential Question Bank on Skills Assessment

1. Overview
1.1 About HackerRank and Prudential. HackerRank is a company focused on programming-skills assessment and recruiting services; it provides an online judging platform and coding challenges that help companies discover technical talent. Prudential is a multinational financial-services company dedicated to providing clients with financial protection and investment solutions.
1.2 Why the question bank matters. In skills assessment, the question bank plays a critical role: how well it is designed affects both the accuracy of assessment results and how comprehensively skills are measured.

2. Design of the HackerRank Prudential question bank
2.1 Variety of question types. Multiple question types, including coding, algorithm, and logic questions, assess a candidate's skill level comprehensively.
2.2 Difficulty tiers. The bank contains questions at difficulty levels from beginner to professional, serving people at different technical levels.

3. Using the HackerRank Prudential question bank
3.1 In recruiting. When hiring technical staff, Prudential can use HackerRank's question bank for skills assessment, better screening for candidates with the required skills.
3.2 In training and development. Prudential can also use the question bank internally for employee upskilling, building training plans from assessment results to raise employees' technical ability.

4. The question bank's impact on skills assessment
4.1 Accuracy. A well-designed bank evaluates a candidate's skills fairly comprehensively, improving assessment accuracy.
4.2 Fairness. Sensible difficulty tiers and varied question types make assessment fairer and reduce interference from subjective factors.
4.3 Credibility. A bank that has been carefully designed and repeatedly validated yields more credible results, giving recruiting and training a more reliable basis.

5. Conclusion. Through sound design and application, the HackerRank Prudential question bank materially advances skills assessment: good design improves accuracy, fairness, and credibility, giving enterprise recruiting and training more reliable support. As technology evolves and the bank is continually updated and refined, its effect and influence on skills assessment will keep growing.
(New Edition) Embedded Systems Designer (Intermediate) Question Bank (with answers)
Multiple choice (129 questions in total)
1. Among the following four kinds of routes, a ______ route has the subnet mask 255.255.255.255. A. remote network B. static C. default D. host. Answer: D. Explanation: a host route's subnet mask is 255.255.255.255. A network route must identify a subnet, so its mask cannot be all ones; a default route points to the default gateway, which is on the same subnet as the local host, so its mask should match the network route's; the same reasoning applies to static routes.
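The mask arithmetic behind Question 1's answer can be checked with Python's standard ipaddress module. A small sketch; the addresses are illustrative documentation-range addresses, not from the question bank:

```python
import ipaddress

# A 255.255.255.255 (/32) mask leaves no host bits: the route matches
# exactly one destination, which is what makes it a host route.
host_route = ipaddress.ip_network("192.0.2.7/255.255.255.255")

# A network route's mask identifies a whole subnet of addresses.
subnet_route = ipaddress.ip_network("192.0.2.0/255.255.255.0")

assert host_route.prefixlen == 32
assert host_route.num_addresses == 1      # a single host
assert subnet_route.num_addresses == 256  # an entire /24 subnet
```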
2. The result of executing the following C fragment is (). main() { int x = 1, a = 1, b = 1; switch (x) { case 0: b++; case 1: a++; case 2: a++; b++; } printf("a=%d,b=%d", a, b); } A. a=2,b=2 B. a=3,b=2 C. a=2,b=1 D. a=3,b=3. Answer: B. Explanation: once a switch statement matches a case label, execution falls through every following case body without re-testing; here x=1 matches case 1, so the statements of both case 1 and case 2 execute.
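The fall-through behavior in the explanation above can also be modeled outside C: after the first matching label, every later case body runs with no further test. The Python helper below is a sketch of that semantics, not part of the exam:

```python
def c_switch_fallthrough(x):
    """Model the C fragment: int a = 1, b = 1; switch (x) { ... } with no breaks."""
    state = {"a": 1, "b": 1}

    def case0(s): s["b"] += 1
    def case1(s): s["a"] += 1
    def case2(s): s["a"] += 1; s["b"] += 1

    matched = False
    for label, body in [(0, case0), (1, case1), (2, case2)]:
        # Once one label has matched, execute it AND every later body:
        # C does not re-test labels after the first match.
        if matched or label == x:
            matched = True
            body(state)
    return state["a"], state["b"]

assert c_switch_fallthrough(1) == (3, 2)  # answer B: a=3, b=2
assert c_switch_fallthrough(0) == (3, 3)  # from case 0, all three bodies run
```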
3. In the following C fragment, the loop body ______ exit the loop. unsigned char n; int total; n = 50; while (n-- >= 0) { total += n; } A. will, after executing 49 times B. will, after executing 50 times C. will, after executing 51 times D. is a dead loop, and will never. Answer: D. Explanation: this question tests basic C programming knowledge. Note the use of unsigned char: because n is unsigned, it can never be negative, so the loop condition is always true and the loop never exits. In real-world programming, always check that a loop's exit condition is reachable.
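The dead loop in Question 3 follows from C's unsigned wraparound. This Python sketch models unsigned char arithmetic modulo 256 (the helper name is illustrative) to show that the condition never becomes false:

```python
def step(n):
    """One evaluation of C's `n-- >= 0` on an unsigned char: test, then wrap."""
    cond = n >= 0        # always true: an unsigned value cannot be negative
    n = (n - 1) % 256    # 0 - 1 wraps to 255 instead of going to -1
    return cond, n

n = 50
for _ in range(1000):    # far more iterations than the "expected" 50
    cond, n = step(n)
    assert cond          # the loop condition never fails

assert step(0) == (True, 255)  # the wraparound that causes the dead loop
```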
4. Which of the following statements about direct memory access (DMA) is wrong? () A. DMA is a technique for transferring large blocks of data quickly B. DMA copies the transferred data from one address space to another C. during a DMA transfer, the CPU and the DMA controller jointly control the transfer D. under the DMA controller's control, main memory and a peripheral exchange data directly. Answer: C. Explanation: DMA establishes a direct data transfer path between main memory and a peripheral, with no need for the CPU to control the transfer process; it is a technique for moving large data blocks quickly.
openjudge answers (Part 1: collected ACM problems and solutions). Table of contents only: HDU problems 1000–2577 (A+B Problem, Sum Problem, Number Sequence, Elevator, FatMouse' Trade, Fibonacci Again, the A+B input/output practice series I–VIII, and many Chinese-titled exercises), Peking University problems (1035 Spell Checker, 1061 青蛙的约会, 1142 Smith Numbers, 1200 Crazy Search, 1811 Prime Test, 2262 Goldbach's Conjecture, 2407 Relatives, 2447 RSA, 2503 Babelfish, 2513 Colored Sticks), ACM algorithm templates (Kruskal and Prim minimum spanning trees; heap-based and plain Dijkstra, Floyd, and Bellman-Ford shortest paths; topological sort; DFS strongly connected components; maximum bipartite matching plus two further matching templates; KM maximum-weight matching; Euler paths in directed and undirected graphs; Edmonds-Karp and Dinic maximum flow; Edmonds-Karp dual algorithm for minimum-cost maximum flow), and two problem statements: the volleyball lineup problem, and decomposing a natural number N into a sum of natural numbers.
Recommended PC Game Optimization Tools for Better Performance and Stability
As game enthusiasts, we all know that how smoothly a PC game runs is closely tied to the machine's hardware configuration. Sometimes we pay good money for the latest game only to find it stuttering on our PC, which is genuinely frustrating. Fortunately, the market offers many excellent game-optimization tools that can improve a game's performance and stability and give us a smoother experience. Today I recommend several top tools that I believe can help you.

First, Game Booster. One of the most popular game-optimization tools, it is very capable. Game Booster tunes the system settings and shuts down useless background processes, freeing more CPU and memory for the game and so raising running speed and frame rates. It can also detect and install the latest graphics drivers automatically, ensuring the graphics card performs at its best. Beyond that, it monitors the game's frame rate, latency, and network connection in real time, providing accurate performance analysis. In short, with Game Booster our PC games run more smoothly.

Next, Razer Cortex: Game Booster. This is game-optimization software designed for Razer gaming hardware, though it is also compatible with other brands. Razer Cortex offers a more convenient one-click optimization: a single click automatically closes other programs and processes hogging system resources and tunes the system settings to lift game performance. It also has a distinctive extra feature: it automatically cleans junk files and invalid registry entries, freeing more disk space and improving overall system performance. If you want both better game performance and reclaimed storage, Razer Cortex: Game Booster is an excellent choice.

Finally, Wise Game Booster. Compared with the two tools above, its strength lies in its clean, easy-to-use interface and comprehensive features.
White Paper
Achieving IEC Standard Compliance for Fiber Optic Connector Quality through Automation of the Systematic Proactive End Face Inspection Process

Executive Summary
It is widely known in the fiber optic industry that scratches, defects, and dirt on fiber optic connector end faces negatively impact network performance. As bandwidth requirements continue to grow and fiber penetrates further into the network, dirty and damaged optical connectors increasingly impact the network. If dirty and damaged end faces are not dealt with systematically, these defects can degrade network performance and eventually take down an entire link.
In the effort to guarantee a common level of performance from the connector, the International Electrotechnical Commission (IEC) created Standard 61300-3-35, which specifies pass/fail requirements for end face quality inspection before connection. Designed to be a common reference of product quality, use of the IEC Standard supports product quality throughout the entire fiber optic life cycle, but only when compliance to the standard occurs at each stage. In response, current best practices recommend systematic proactive inspection of every fiber optic connector end face before connection. While current research shows that this practice is eliminating the installation of contaminated fibers and improving network performance, the uncontrollable variables of technician eyesight and expertise, ambient lighting, and display conditions keep manual inspection and analysis from being a 100-percent reliable and repeatable method of assuring IEC compliance. In addition, because manual inspection does not create a record of the inspection process, certification of quality at the point of installation is not practical.
Because compliance to the IEC Standard is the only way to achieve the promise of today's fiber-rich, high-connectivity networks, this white paper proposes the automation of the inspection process through the addition of analysis software programmed to the Standard's pass/fail criteria to the practice of systematic proactive inspection. Automation of the systematic proactive inspection process using software programmed to the IEC Standard eliminates the variables associated with manual inspection, provides a documentable record of the quality of the connector end face at the point of installation, and provides a 100-percent repeatable and reliable process. Combined, these benefits make automated end face inspection the most effective method available to assure and certify compliance to the IEC Standard throughout the fiber optic product life cycle, and achieve the promise of next-generation networks.

IEC Standard 61300-3-35
IEC Standard 61300-3-35 is a global common set of requirements for fiber optic connector end face quality designed to guarantee insertion loss and return loss performance. The Standard contains pass/fail requirements for inspection and analysis of the end face of an optical connector, specifying separate criteria for different types of connections (for example, SM-PC, SM-UPC, SM-APC, MM, and multi-fiber connectors).
For more detail on the Standard, copies of the copyrighted document are available for purchase by searching for "61300-3-35". These criteria are designed to guarantee a common level of performance in an increasingly difficult environment where fiber is penetrating deeper into the network and being handled by more technicians, many of whom may be unfamiliar with the criticality of fiber optic connector end face quality or may not possess the experience and technical knowledge required to properly assess it.

Figure 1. Fiber Optic Product Life Cycle

The standard is designed to be used as a common quality reference between supplier and customer, and between work groups, in several ways:
• As a requirement from the customer to the supplier (for example, integrator to component supplier, or operator to contractor)
• As a guarantee of product quality and performance from the supplier to the customer (for example, manufacturer to customer, contractor to network owner, or between work groups within an organization)
• As a guarantee of network quality and performance within an organization
As more stages in the fiber optic product life cycle, shown in Figure 1, are outsourced to disparate vendors, the standard takes on renewed importance in ensuring the optimized performance of today's fiber-dense networks.

The Development of the IEC Standard
The quality values used in the IEC standard are the result of years of extensive testing of scratched, damaged, or dirty optical connectors conducted by a coalition of industry experts including component suppliers, contract manufacturers, network equipment vendors, test equipment vendors, and service providers.
This work has been published previously in a number of papers, as noted in the References section of this paper.
Understanding the variables and limitations of manual visual inspection, fiber optic test and measurement manufacturer VIAVI contributed its automated objective inspection and analysis software, FiberChek2™, as illustrated in Figure 2, to the IEC for use in the development of the 61300-3-35 visual inspection standard. Automating the pass/fail process using research-based parameters extracted from testing conducted by the aforementioned industry coalition provided the IEC with a repeatable standard of quality that would guarantee a common level of performance, creating a positive impact on both product and network performance.
More than 8 years of testing on a constantly expanding database of fibers and fiber devices (for example, SM, MM, ribbon, E2000, SFP/XFP, bend-insensitive fibers, lenses, and other interfaces), combined with widespread use in the industry by component manufacturers, integrators/CMs, OEMs, third-party installers, and service providers, makes the VIAVI software program the only proven automated objective inspection software program that assures compliance to the IEC standard at every step of the fiber optic life cycle. Testament to this is the fact that this software program is currently used by three of the top five U.S. cable assembly manufacturers, along with six of the largest optical component manufacturers, five of the largest network equipment vendors, and five of the top Network Service Providers (NSPs) in the world, making VIAVI FiberChek2 software the current worldwide industry standard for automated objective fiber optic connector end face inspection.

Figure 2. Example of the Proven Inspection and Analysis Software Program FiberChek2 from VIAVI

The criteria in the IEC Standard require the user to know the exact location and size of surface defects (for example, scratches, pits, and debris) on the fiber optic connector end face.
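Because the criteria hinge on each defect's location (zone) and size, an automated checker is essentially a zone lookup. Below is a hedged Python sketch of zone-based pass/fail analysis in the spirit of IEC 61300-3-35; the zone radii and size thresholds are invented for illustration and are not the Standard's actual values:

```python
def judge_end_face(defects, zones):
    """defects: list of (radius_um from fiber center, defect_size_um).
    zones: list of (zone_outer_radius_um, max_allowed_size_um), innermost first."""
    for radius, size in defects:
        for zone_outer, max_allowed in zones:
            if radius <= zone_outer:          # find the defect's zone
                if size > max_allowed:        # oversize for that zone -> fail
                    return "FAIL"
                break                          # defect acceptable; check the next one
    return "PASS"

# Illustrative zones only: the innermost region tolerates no defects,
# the outer region tolerates small ones.
zones = [(25, 0.0), (120, 3.0)]

assert judge_end_face([(10, 0.5)], zones) == "FAIL"  # any defect near the core fails
assert judge_end_face([(60, 2.0)], zones) == "PASS"  # small defect in the outer zone
```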
As a result, it is only through the use of automated inspection and analysis software that compliance to the IEC Standard (or a customer specification) can be tested and certified.
The combination of common requirements (the IEC Standard) and automated inspection and analysis (FiberChek2) has measurably impacted product quality through the supply chain. This is providing improved repeatability and stability of inspection analysis throughout the fiber optic product life cycle, ensuring consistent product performance regardless of the number and expertise of vendors and technicians involved in the manufacture, installation, and network administration processes.

Proactive Inspection Model: Step One Toward Achieving IEC Compliance
Despite its role in the development of the IEC Standard and usage by industry leaders, automated inspection and analysis software is not yet in widespread use across the fiber optic industry. In an effort to enable compliance to the Standard even when using manual visual inspection equipment alone, IEC and industry leaders are supporting the promotion of fiber handling best practices. An example of one such educational effort is the proactive inspection model developed and promoted by fiber optic test equipment manufacturer VIAVI, "Inspect Before You Connect" (IBYC), as illustrated in Figure 3.
The simple four-step IBYC model, which supports and is mandated by the IEC Standard, effectively guides technicians of varying levels of expertise in the proper implementation of systematic proactive inspection.
• Step 1 Inspect: Use the microscope to inspect the fiber. If the fiber is dirty, go to Step 2. If the fiber is clean, go to Step 4.
• Step 2 Clean: If the fiber is dirty, use a cleaning tool to clean the fiber end face.
• Step 3 Inspect: Use the microscope to re-inspect and confirm the fiber is clean. If the fiber is still dirty, go back to Step 2. If the fiber is clean, go to Step 4.
• Step 4 Connect: If both the male and female connectors are clean, they are ready to connect.
Consistent use of the IBYC model ensures that proactive inspection is performed correctly every time and that fiber optic end faces are clean prior to mating connectors, eliminating the installation of dirty or damaged fibers into the network and optimizing network performance. As a result, IBYC has been incorporated into manufacturing procedures for the majority of the world's leading organizations using fiber, increasing knowledge of this process and helping it become routine practice around the world.

Automated Inspection and Analysis: Achieving and Certifying IEC Compliance
Even with the aid of the IBYC model, manual inspection using only a video microscope can be difficult depending on the technician's expertise and can result in variable connector quality and network performance. Reliant on technician eyesight and expertise along with variable display settings and ambient lighting, manual inspection and analysis is not 100 percent reliable, repeatable, or certifiable. Because it produces no visual record of the end face condition in the manual inspection process, certifying compliance at the point of installation through images or reporting is both unreliable and impractical, as Figure 4a shows.
To ensure IEC compliance is achieved, automated inspection of fiber optic connector end faces using inspection and analysis software built on the IEC Standard's pass/fail criteria is the most effective method available. With it, technicians of all skill levels can effectively accomplish both compliance and certification through images and reports, as Figure 4b shows.

Figure 4b. Automated Inspection gives technicians a pass or fail result.
Figure 4a.
Manual Inspection requires technicians to judge whether the connector complies with the IEC Standard.

Using the software, automated inspection and analysis can produce a visual record of the end face condition, as shown in Figure 5, which can be used in reports and archived for future reference. As a result, automated inspection and analysis presents several clear advantages over subjective inspection:
• Eliminates variation in results
• Certifies and records product quality at time of inspection
• Enables technicians of all skill levels to certify quality reliably and systematically
• Makes advanced pass/fail criteria simple to use
• Improves product and network performance and yields

Figure 5. Automated inspection enables the technician to certify compliance to the standard by producing a date-stamped test report.

Using a fiber optic inspection and analysis software program that is preloaded with the IEC Standard specifications, such as VIAVI FiberChek2 software, any technician can effectively:
• Inspect and certify compliance with IEC 61300-3-35 or other customer-specified standards at every stage of the fiber optic product life cycle at the push of a button
• Implement simple pass/fail acceptance testing; no skill in quality judgment is necessary
• Generate detailed analysis reports that can be archived

Conclusion: Business Impact of Automated End Face Analysis
The combination of common requirements (the IEC Standard) and automated fiber optic inspection and analysis software (FiberChek2) has positively impacted product quality across the supply chain.
The business impacts of reliable, repeatable automated fiber optic connector inspection and certification include:
• Insured and repeatable product quality through the quantification of connector end face condition at installation
• Assurance of customer satisfaction and supplier protection through the reliable documentation of connector end face quality
• Competitive advantage for component and system vendors, and for installation contractors who can cost-effectively document end face quality
• A common, repeatable system that provides correlation through the supply chain
• Easy deployment of custom requirements analysis
Combined, these benefits make automated end face inspection the most effective method available to assure and certify compliance to the IEC Standard throughout the fiber optic product life cycle, and achieve the promise of next-generation networks.

© 2021 VIAVI Solutions Inc. Product specifications and descriptions in this document are subject to change without notice.

References
1. "Qualification of Scattering from Fiber Surface Irregularities," Journal of Lightwave Technology, Vol. 20, No. 3, April 2002, pp. 634–637.
2. "Optical Connector Contamination/Scratches and its Influence on Optical Signal Performance," Journal of SMTA, Vol. 16, Issue 3, 2003, pp. 40–49.
3. "At the Core: How Scratches, Dust, and Fingerprints Affect Optical Signal Performance," Connector Specifier, January 2004, pp. 10–11.
4. "Degradation of Optical Performance of Fiber Optics Connectors in a Manufacturing Environment," Proceedings of APEX 2004, Anaheim, California, February 19–26, 2004, pp. PS-08-1–PS-08-4.
5. "Cleaning Standard for Fiber Optics Connectors Promises to Save Time and Money," Photonics Spectra, June 2004, pp. 66–68.
6. "Analysis on the effects of fiber end face scratches on return loss performance of optical fiber connectors," Journal of Lightwave Technology, Vol. 22, No. 12, December 2004, pp. 2749–2754.
7. "Development of Cleanliness Specification for Single-Mode Connectors," Proceedings of APEX 2005, Anaheim, California, February 21–26, 2005, pp. S04-3-1, 16.
8. "Keeping it clean: A cleanliness specification for single-mode connectors," Connector Specifier, August 2005, pp. 8–10.
9. "Contamination Influence on Receptacle Type Optical Data Links," Photonics North 2005, Toronto, Canada, September 2005.
10. "Development of Cleanliness Specifications for 2.5 mm and 1.25 mm Ferrule Single-Mode Connectors," Proceedings of OFC/NFOEC, Anaheim, California, March 5–10, 2006.
11. "Standardizing cleanliness for fiber optic connectors cuts costs, improves quality," Global SMT & Packaging, June/July 2006, pp. 10–12.
12. "Accumulation of Particles Near the Core during Repetitive Fiber Connector Matings and De-matings," Proceedings of OFC/NFOEC 2007, Anaheim, California, March 25–29, 2007, NThA6, pp. 1–11.
13. "Development of Cleanliness Specifications for Single-Mode, Angled Physical Contact MT Connectors," Proceedings of OFC/NFOEC 2008, San Diego, February 24–28, 2008, NThC1, pp. 1–10.
14. "Correlation Study between Contamination and Signal Degradation in Single-Mode APC Connectors," Proc. SPIE, Vol. 7386, 73861W (2009); doi:10.1117/12.837545.
No. 1 (single choice): In which operating-system mode can all instructions be executed? Options: A. problem mode B. interrupt mode C. supervisor mode D. standard processing mode. Answer: B
No. 2 (single choice): An enterprise outsources its technical support (help desk) function. Which of the following metrics is most appropriate to include in the outsourcing service level agreement (SLA)? Options: A. the number of users to support B. the percentage of incidents resolved on the first support request C. the total number of support requests D. the number of phone calls answered. Answer: B
No. 3 (single choice): While reviewing an organization's data file control procedures, an IS auditor finds that transactions use the latest file while the restart procedure uses an earlier version. The IS auditor should recommend: Options: A. reviewing the retention of source program documentation B. reviewing the security of the data files C. implementing version usage control D. performing one-for-one checking. Answer: C
No. 4 (single choice): Output can be verified by matching output results and control totals against input data and control totals. Which of the following performs that function? Options: A. batch header forms B. batch balancing C. data conversion error correction D. access control over print spools. Answer: B
No. 5 (single choice): When auditing client/server database security, the IS auditor should be most concerned about the availability of: Options: A. system utilities B. application program generators C. system security documentation D. access to stored procedures. Answer: A
No. 6 (single choice): When testing the program change management process, the most effective method for an IS auditor is: Options: A. tracing from system-generated information to the change management documentation B. examining the accuracy and correctness of evidence referenced in the change management documentation C. tracing from the change management documentation to the system that generates the audit trail D. examining the completeness of evidence referenced in the change management documentation. Answer: A
No. 7 (single choice): In a distributed environment, the impact of a server failure is smallest with: Options: A. redundant routing B. clustering C. backup telephone lines D. backup power. Answer: B
No. 8 (single choice): The error most likely to occur when implementing a firewall is: Options: A. inaccurate access-list configuration B. passwords compromised through social engineering C. a modem connected to a computer inside the network D. inadequate protection of the network and servers against viruses. Answer: A
No. 9 (single choice): To determine how data are accessed across different platforms in a heterogeneous environment, the IS auditor should first review: Options: A. the business software B. the system platform tools C. the application services D. the system development tools. Answer: C
No. 10 (single choice): The main benefit of database normalization is: Options: A. minimizing the redundancy (that is, duplication) of information in tables while satisfying user requirements B. the ability to satisfy more queries C. maximum database integrity, achieved through multiple tables D. reduced response time through faster information processing. Answer: A
No. 11 (single choice): Which image-processing technology can read handwriting in a predefined format and convert it to electronic form? Options: A. magnetic ink character recognition (MICR) B. intelligent voice recognition (IVR) C. bar code recognition (BCR) D. optical character recognition (OCR). Answer: D
No. 12 (single choice): The purpose of code signing is to ensure that: Options: A. the software has not been subsequently modified B. the application can safely interoperate with other signed applications C. the signer of the application is trusted D. the signer's private key has not been compromised. Answer: A
No. 13 (single choice): When reviewing a network used for Internet communications, the IS auditor should first review and determine: Options: A. whether passwords are changed frequently B. the architecture of the client/server applications C. the network architecture and design D. the firewall protection and proxy servers. Answer: C
No. 14 (single choice): An enterprise is negotiating a service level agreement (SLA) with a vendor. The first task is to: Options: A. conduct a feasibility study B. verify compliance with corporate policy C. draft the penalty clauses D. draft the service-level requirements. Answer: D
No. 15 (single choice): In an e-commerce environment, the best way to reduce communication failures is to: Options: A. use compression software to shorten transmission times B. use functional or message acknowledgments C. use a packet-filtering firewall to reroute messages D. lease asynchronous transfer mode (ATM) lines. Answer: D
No. 16 (single choice): Which of the following measures most effectively supports 24/7 availability? Options: A. daily backups B. offsite storage C. mirroring D. periodic testing. Answer: C
No. 17 (single choice): A manufacturing company wants to build an automated invoice payment system that spends very little time on review and authorization controls while still identifying errors that need further follow-up. Which of the following best meets these needs? Options: A. build an internal client/server network linked to suppliers to improve efficiency B. outsource to a company specializing in automated payment and accounts processing C. establish an EDI system with key suppliers for standard-format, computer-to-computer electronic business documents and transaction processing D. re-engineer existing processes and redesign existing systems. Answer: C
No. 18 (single choice): Which of the following is a weakness of image processing? Options: A. signature verification B. improved service C. relatively high cost D. reduced processing-induced distortion. Answer: C
No. 19 (single choice): An IS auditor needs to connect a microcomputer to a mainframe system. The mainframe uses synchronous block data transmission, while the microcomputer supports only asynchronous ASCII character data communication.
1.0I NTRODUCTIONPerformance—responsiveness and scalability—is a make-or-break quality for software. Software perfor-mance engineering (SPE) [Smith and Williams 2002],[Smith 1990] provides a systematic, quantitative approach to constructing software systems that meet performance objectives. With SPE, you detect prob-lems early in development, and use quantitative meth-ods to support cost-benefit analysis of hardware solutions versus software requirements or design solu-tions, or a combination of software and hardware solu-tions.SPE is a software-oriented approach; it focuses on architecture, design, and implementation choices. It uses model predictions to evaluate trade-offs in soft-ware functions, hardware size, quality of results, and resource requirements. The models assist developers in controlling resource requirements by enabling them to select architecture and design alternatives with acceptable performance characteristics. The models aid in tracking performance throughout the develop-ment process and prevent problems from surfacing late in the life cycle (typically during final testing).SPE also prescribes principles and performance pat-terns for creating responsive software, performance antipatterns for recognizing and correcting common problems, the data required for evaluation, procedures for obtaining performance specifications, and guide-lines for the types of evaluation to be conducted ateach development stage. It incorporates models for representing and predicting performance as well as a set of analysis methods.This paper presents 24 “best practices” for SPE in four categories: project management, performance model-ing, performance measurement, and techniques. 
A best practice is:

"a process, technique, or innovative use of technology, equipment or resources that has a proven record of success in providing significant improvement in cost, schedule, quality, performance, safety, environment, or other measurable factors which impact an organization." [Javelin 2002]

The best practices presented here are based on:

• observations of companies that are successfully applying SPE,
• interviews and discussions with practitioners in those companies, and
• our own experience in applying SPE techniques on a variety of consulting assignments.

Many of them can be found in the Performance Solutions book [Smith and Williams 2002]. Ten of them were presented in [Smith and Williams 2003a]. This paper builds on the earlier paper, and puts them in the four categories.

These best practices represent documented strategies and tactics employed by highly admired companies to manage software performance. They have implemented these practices and refined their use to place themselves and their practitioners among the best in the business for their ability to deliver software that meets performance objectives and is on-time and within budget.

Best Practices for Software Performance Engineering

Performance—responsiveness and scalability—is a make-or-break quality for software. Software Performance Engineering (SPE) provides a systematic, quantitative approach to constructing software systems that meet performance objectives. It prescribes ways to build performance into new systems rather than try to fix them later. Many companies successfully apply SPE and they attest to the financial, quality, customer satisfaction and other benefits of doing it right the first time. This paper describes 24 best practices for applying SPE to proactively manage the performance of new applications. They are vital for successful, proactive SPE efforts, and they are among the practices of world-class SPE organizations. They will help you to establish new SPE programs and fine-tune existing efforts in line with practices used by the best software development projects.

Connie U. Smith, Ph.D., Performance Engineering Services, PO Box 2640, Santa Fe, New Mexico 87504-2640, (505) 988-3811
Lloyd G. Williams, Ph.D., Software Engineering Research, 264 Ridgeview Lane, Boulder, Colorado 80302, (303) 938-9847, boulderlgw@

Copyright © 2003, Performance Engineering Services and Software Engineering Research. All rights reserved.

2.0 PROJECT MANAGEMENT BEST PRACTICES

These are practices adopted by managers of software development projects and/or managers of SPE specialists who work with development managers.

2.1 Perform An Early Estimate Of Performance Risk

It is important to understand your level of performance risk. A risk is anything that has the possibility of endangering the success of the project. Risks include: the use of new technologies, the ability of the architecture to accommodate changes or evolution, market factors, schedule, and others.

If failing to meet your performance goals would endanger the success of your project, you have a performance risk. If your project supports a critical business function and/or will be deployed with high visibility (such as a key, widely publicized web application), then failing to meet performance objectives may result in a business failure and you have an extreme performance risk. Inexperienced developers, lack of familiarity with the technology, a cutting-edge application, and an aggressive schedule all increase your risk of performance failure.

To assess the level of performance risk, begin by identifying potential risks. You will find an overview of software risk assessment and control in [Boehm 1991]. Once you have identified potential risks, try to determine their impact. The impact of a risk has two components: its probability of happening, and the severity of the damage that would occur if it did.
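One way to make this impact calculation concrete is a simple probability-times-severity ranking. The 1-to-3 scales and the example risks below are invented for illustration; they are not part of the SPE method itself.

```python
# Hypothetical sketch: ranking performance risks by impact.
# Scales: probability 1 (low) .. 3 (high); severity 1 (minor) .. 3 (extreme).

def impact(probability, severity):
    """Combine the two components of risk impact (higher = worse)."""
    return probability * severity

risks = [
    # (description, probability, severity) -- illustrative values only
    ("Customers cannot reach the web site in time", 1, 3),
    ("New middleware has unknown overhead",         3, 2),
    ("Aggressive schedule limits performance work", 2, 2),
]

# Rank risks so the highest-impact ones are addressed first.
ranked = sorted(risks, key=lambda r: impact(r[1], r[2]), reverse=True)
for description, p, s in ranked:
    print(f"impact {impact(p, s)}: {description}")
```

Note that an extreme-severity risk with a low probability (the first entry) ends up ranked below more probable, moderate-severity risks, which mirrors the "moderate impact" reasoning in the example that follows.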
For example, if a customer were unable to access a Web site within the required time, the damage to the business might be extreme. However, it may also be that the team has implemented several similar systems, so the probability of this happening might be very small. Thus, the impact of this risk might be classified as moderate. If there are multiple performance risks, ranking them according to their anticipated impact will help you address them systematically.

2.2 Match The Level of SPE Effort To The Performance Risk

SPE is a risk-driven process. The level of risk determines the amount of effort that you put into SPE activities. If the level of risk is small, the SPE effort can be correspondingly small. If the risk is high, then a more significant SPE effort is needed. For a low-risk project, the amount of SPE effort required might be about 1% of the total project budget. For high-risk projects, the SPE effort might be as high as 10% of the project budget.

2.3 Track SPE Costs And Benefits

Successful application of SPE is often invisible. If you are successfully managing performance, you do not have performance problems. Because of this, it is necessary to continually justify your SPE efforts. In fact, we have heard managers ask "Why do we have performance engineers if we don't have performance problems?"

It is important to track the costs and benefits of applying SPE so that you can document its financial value and justify continued efforts. The costs of SPE include salaries for performance specialists, tools, and support equipment such as workstations for performance analysts or a dedicated performance testing facility. The benefits are usually costs due to poor performance that you reduce or avoid as a result of applying SPE. These include: costs of refactoring or tuning, contractual penalties, user support costs, and lost revenue, as well as intangible costs such as damaged customer relations.
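Tallying these tracked figures can be as simple as two ledgers, one for SPE costs and one for avoided costs, with ROI computed as the standard (benefit − cost) / cost. Every dollar figure below is invented purely for illustration.

```python
# Hypothetical sketch of tracking SPE costs and benefits over a year.
# All amounts are illustrative, not measured data.

spe_costs = {
    "performance specialist salaries": 180_000,
    "modeling and measurement tools":   40_000,
    "dedicated test hardware":          30_000,
}

avoided_costs = {  # benefits: costs reduced or avoided by applying SPE
    "refactoring and tuning avoided":  300_000,
    "contractual penalties avoided":   150_000,
    "reduced user-support load":        60_000,
}

total_cost = sum(spe_costs.values())
total_benefit = sum(avoided_costs.values())
roi = (total_benefit - total_cost) / total_cost  # standard ROI formula

print(f"cost ${total_cost:,}, benefit ${total_benefit:,}, ROI {roi:.0%}")
```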
Once you have this information, it is easy to calculate the return on investment (ROI) [Reifer 2002] for your SPE efforts. The return on investment for SPE is typically more than high enough to justify its continued use (see, for example, [Williams, et al. 2002] and [Williams and Smith 2003b]).

2.4 Integrate SPE Into Your Software Development Process And Project Schedule

To be effective, SPE should not be an "add-on;" it should be an integral part of the way in which you approach software development. Integrating SPE into the software process avoids two problems that we have seen repeatedly in our consulting practice. One is over-reliance on individuals. When you rely on individuals to perform certain tasks instead of making them part of the process, those tasks are frequently forgotten when those individuals move to a different project or leave the company.

The second reason for making SPE an integral part of your software process is that many projects fall behind schedule during development. Because performance problems are not always apparent, managers or developers may be tempted to omit SPE studies in favor of meeting milestones. If SPE milestones are defined and enforced, it is more difficult to omit them.

2.5 Establish Precise, Quantitative Performance Objectives And Hold Developers And Managers Accountable For Meeting Them

Precise, quantitative performance objectives help you to control performance by explicitly stating the required performance in a format that is rigorous enough so that you can quantitatively determine whether the software meets that objective. Well-defined performance objectives also help you evaluate architectural and design alternatives and trade-offs and select the best way of meeting performance (and other quality) requirements. It is important to define one or more performance objectives for each performance scenario.
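Because precise objectives are stated quantitatively, they can be captured as data and checked mechanically against model predictions or measurements. A minimal sketch, using the paper's ATM scenario; the threshold values are the ones the paper uses as examples, and the function name is our own:

```python
# Hypothetical sketch: performance objectives as data, one or more per
# performance scenario, checked against modeled or measured values.

objectives = {
    # scenario: maximum acceptable time in seconds
    "ATM withdrawal (end-to-end)": 60.0,
    "ATM screen response":          1.0,
}

def meets_objective(scenario, observed_seconds):
    """Compare an observed (modeled or measured) value to its objective."""
    return observed_seconds <= objectives[scenario]

print(meets_objective("ATM screen response", 0.7))          # meets 1 s
print(meets_objective("ATM withdrawal (end-to-end)", 75.0)) # misses 60 s
```

The same check applies unchanged whether the observed value comes from an early model solution or from a later performance test, which is what lets you take corrective action early.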
Throughout the modeling process, you can compare model results to the objective to determine whether there is significant risk of failing to meet the objectives, and take appropriate action early. And, as soon as you can get measurements from a performance test, you can determine whether or not the software meets the objective.

A well-defined performance objective would be something like: "The end-to-end time for completion of a 'typical' correct ATM withdrawal performance scenario must be less than 1 minute, and a screen result must be presented to the user within 1 second of the user's input." Vague statements such as "The system must be efficient" or "The system shall be fast" are not useful as performance objectives.

For some types of systems you may define different performance objectives, depending on the intensity of requests. For example, the response time objective for a customer service application may be 1 second for up to 500 users, 2 seconds for 500 to 750 users, and 3 seconds for up to 1,000 users.

Unless performance objectives are clearly defined, it is unlikely that they will be met. In fact, establishing specific, quantitative, measurable performance objectives is so central to the SPE process that we have made it one of the performance principles [Smith and Williams 2002]. When a team is accountable for, and rewarded for, achieving their system's performance, they are more likely to manage it effectively. If the team is only accountable for completion time and budget, there is no incentive to spend time or money on performance.

2.6 Identify Critical Use Cases And Focus On The Scenarios That Are Important To Performance

Use cases describe categories of behavior of a system or one of its subsystems. They capture the user's view of what the system is supposed to do. Critical use cases are those that are important to responsiveness as seen by users, or those for which there is a performance risk.
That is, critical use cases are those for which the system will fail, or be less than successful, if performance goals are not met.

Not every use case will be critical to performance. The 80-20 rule applies here: a small subset of the use cases (≤20%) accounts for most of the uses (≥80%) of the system. The performance of the system is dominated by these heavily used functions. Thus, these should be your first concern when assessing performance.

Don't overlook important functions that are used infrequently but must perform adequately when they are needed. An example of an infrequently used function whose performance is important is recovery after some failure or outage. While this may not occur often, it may be critical that it be done quickly.

Each use case is described by a set of scenarios that describe the sequence of actions required to execute the use case. Not all of these scenarios will be important from a performance perspective. For example, variants are unlikely to be executed frequently and, thus, will not contribute significantly to overall performance.

For each critical use case, focus on the scenarios that are executed frequently, and on those that are critical to the user's perception of performance. For some systems, it may also be important to include scenarios that are not executed frequently, but whose performance is critical when they are executed, such as recovery from an outage.

Select the scenarios, get consensus that they are the most important, then focus on their design and implementation to expedite processing and thus optimize their responsiveness. People are more likely to have confidence in the model results if they agree that the scenarios and workloads used to obtain the results are representative of those that are actually likely to occur. Otherwise, it is easy to rationalize that any poor performance predicted by the models is unlikely, because the performance scenarios chosen will not be the dominant workload functions.
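The selection process described above can be sketched mechanically: rank use cases by relative frequency, keep the smallest set covering roughly 80% of usage, then add the infrequent-but-critical cases. The use cases and frequencies below are invented for illustration.

```python
# Hypothetical sketch of 80-20 critical-scenario selection.

usage = {                       # use case -> fraction of all system uses
    "browse catalog":       0.45,
    "search":               0.30,
    "place order":          0.15,
    "update profile":       0.06,
    "recover from outage":  0.0001,
}
always_critical = {"recover from outage"}   # infrequent but must perform

selected, covered = set(), 0.0
for name, freq in sorted(usage.items(), key=lambda kv: kv[1], reverse=True):
    if covered >= 0.80:         # stop once ~80% of usage is covered
        break
    selected.add(name)
    covered += freq

selected |= always_critical     # never drop the flagged critical cases
print(sorted(selected))
```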
The scenarios also drive the measurement studies by specifying the conditions that should be performance tested.

2.7 Perform an Architecture Assessment to Ensure That the Software Architecture Will Support Performance Objectives

Recent interest in software architectures has underscored the importance of architecture in determining software quality. While decisions made at every phase of the development process are important, architectural decisions have the greatest impact on quality attributes such as modifiability, reusability, reliability, and performance. As Clements and Northrop note [Clements and Northrop 1996]:

"Whether or not a system will be able to exhibit its desired (or required) quality attributes is largely determined by the time the architecture is chosen."

While a good architecture cannot guarantee attainment of performance objectives, a poor architecture can prevent their achievement.

Architectural decisions are among the earliest made in a software development project. They are also the most costly to fix if, when the software is completed, the architecture is found to be inappropriate for meeting quality objectives. Thus, it is important to be able to assess the impact of architectural decisions on quality objectives such as performance and reliability at the time that they are made.

Performance cannot be retrofitted into an architecture without significant rework; it must be designed into software from the beginning. Thus, if performance is important, it is vital to spend the up-front time necessary to ensure that the architecture will not hinder attainment of performance requirements. The "make it run, make it run right, make it run fast" approach is dangerous. Our experience is that performance problems are most often due to inappropriate architectural choices rather than inefficient coding.
By the time the architecture is fixed, it may be too late to achieve adequate performance by tuning.

The method that we use for assessing the performance of software architectures is known as PASA(SM) [Williams and Smith 2002]. It was developed from our experience in conducting performance assessments of software architectures in a variety of application domains including web-based systems, financial applications, and real-time systems. PASA uses the principles and techniques of software performance engineering (SPE) to determine whether an architecture is capable of supporting its performance objectives. The method may be applied to new development to uncover potential problems when they are easier and less expensive to fix. It may also be used when upgrading legacy systems to decide whether to continue to commit resources to the current architecture or migrate to a new one.

2.8 Secure The Commitment To SPE At All Levels Of The Organization

The successful adoption of SPE requires commitment at all levels of the organization. This is typically not a problem with developers. Developers are usually anxious to do whatever is needed to improve the quality of their software.

If there is a problem with commitment, it usually comes from middle managers who are constantly faced with satisfying many conflicting goals. They must continually weigh schedule and cost against quality of service benefits. Without a strong commitment from middle managers, these other concerns are likely to force SPE aside. Commitment from upper management is necessary to help middle managers resolve these conflicting goals.

2.9 Establish an SPE Center of Excellence to Work with Performance Engineers on Project Teams

It is important that you designate one or more individuals to be responsible for performance engineering.
You are unlikely to be successful without a performance engineer (or a performance manager) who is responsible for:

• Tracking and communication of performance issues
• Establishing a process for identifying and responding to situations that jeopardize the attainment of the performance objectives
• Assisting team members with SPE tasks
• Formulating a risk management plan based on shortfall and activity costs
• Ensuring that SPE tasks are properly performed

The responsible person should be high enough in the organization to cause changes when they are necessary. The performance engineering manager should report either to the project manager or to that person's manager.

The person responsible for performance engineering should be in the development organization rather than the operations organization. You will have problems if responsibility for SPE is in the operations organization because developers will likely put priority on meeting schedules over making changes to reduce operational costs.

Making SPE a function of the capacity planning group is also a mistake in most organizations, even though that group usually already employs individuals with performance modeling expertise. While some capacity planners have the performance engineering skills, most are mathematical experts who are too far removed from the software issues to be effective.

With the "SPE Center of Excellence" approach, members of the development team are trained in the basic SPE techniques. In the early phases of a project, the developers can apply these techniques to construct simple models that support architectural and design decisions. This allows developers to get feedback on the performance characteristics of their architecture and design in a timely fashion.
Later, as the models become more complex, someone from the SPE Center can take them over to conduct more detailed studies that require more technical expertise.

The SPE Center develops tools, builds performance expertise, and assists developers with modeling problems. A member of this group may also review the team's models to confirm that nothing important has been overlooked. The central group can also develop reusable models or reference models, as well as provide data on the overhead for the organization's hardware/software platforms. Finally, the performance group can provide assistance in conducting measurements.

2.10 Ensure that Developers and Performance Specialists Have SPE Education, Training, and Tools

SPE consists of a comprehensive set of methods. Education and experience in these methods improves the architectures and designs created by developers. It helps performance specialists interface with developers, and shortens the time necessary for SPE studies. Performance tuning experience is helpful for SPE, but it is not the same as proactive performance engineering. To be proficient you need additional education and training.

Tools are essential for SPE. Modeling tools expedite SPE studies and limit the mathematical background required for performance analysts to construct and solve the models. Measurement tools are vital for obtaining resource consumption data, evaluating performance against objectives, and verifying and validating results. However, simply acquiring a set of tools will not guarantee success. You must also have the expertise to know when and how to use them. It is also important to know when the result reported by a tool is unreasonable, so that problems with models or measurements can be detected and corrected.

The project team must have confidence in both the predictive capabilities of the models and the analyst's skill in using them.
Without this confidence, it is easier to attribute performance problems predicted by the models to modeling errors, rather than to actual problems with the software. If the developers understand the models and how they were created, they are more likely to have confidence in them.

2.11 Require Contractors To Use SPE On Your Products

You should require your contractors (e.g., external developers, suppliers, etc.) to use SPE in developing your products to avoid unpleasant surprises when the products are delivered.

It is also important to specify deliverables that will allow you to assess whether SPE is being properly applied. These deliverables fall into four broad categories:

• Plans: These artifacts are targeted primarily at project management. They include technical plans for each development phase, as well as configuration management plans, policies, and procedures governing the production and maintenance of other SPE artifacts.
• Performance objectives: These artifacts include specifications for key performance scenarios, along with quantitative, measurable criteria for evaluating the performance of the system under development. They also include specifications for the execution environment(s) to be evaluated.
• Performance models and results: This category includes the performance models for key scenarios and operating environments, along with the model solutions for comparison to performance objectives.
• Performance validation, verification, and measurement reports (V&V): This category includes documentation and measurement results that demonstrate that the models are truly representative of the software's performance, and that the software will meet performance requirements.

3.0 PERFORMANCE MODELING BEST PRACTICES

These are best practices used by performance engineers who model the software architecture and design.

3.1 Use Performance Models To Evaluate Architecture And Design Alternatives Before Committing to Code

Today's software systems have stringent requirements for performance, availability, security, and other quality attributes. In most cases, there are trade-offs that must be made among these properties. For example, performance and security often conflict with one another.

It's unlikely that these trade-offs will sort themselves out, and ignoring them early in the development process is a recipe for disaster. The "make it run, make it run right, make it run fast" approach is dangerous.

While it is possible to refactor code after it has been written to improve performance, refactoring is not free. It takes time and consumes resources. The more complex the refactoring, the more time and resources it requires. When performance problems arise, they are most often at the architecture or design level. Thus, refactoring to solve performance problems is likely to involve multiple components and their interfaces. The result is that later refactoring efforts are likely to be large and very complex.

One company we worked with used a modeling study to estimate that refactoring their architecture would save approximately $2 million in hardware capacity.
However, because the changes to the architecture were so extensive, they decided that it would be more economical to purchase the additional hardware. Another company used historical data to determine that its cost for refactoring to improve performance was approximately $850,000 annually [Williams, et al. 2002].

Simple performance models can provide the information needed to identify performance problems and evaluate architecture and design alternatives for correcting them. These models are inexpensive to construct and evaluate. They eliminate the need to implement the software and measure it before understanding its performance characteristics. And, they provide a quantitative basis for making trade-offs among quality attributes such as reliability, security, and performance.

3.2 Start With The Simplest Model That Identifies Problems With The System Architecture, Design, Or Implementation Plans, Then Add Details As Your Knowledge Of The Software Increases

The early SPE models are easily constructed and solved to provide feedback on whether the proposed software is likely to meet performance goals. These simple models are sufficient to identify problems in the architecture or early design phases of the project. You can easily use them to evaluate many alternatives because they are easy to construct and evaluate. Later, as more details of the software are known, you can construct and solve more realistic (and complex) models.

Later in the development process, as the design and implementation proceed and more details are known, you expand the SPE models to include additional information in areas that are critical to performance.

3.3 Use Best- And Worst-Case Estimates Of Resource Requirements To Establish Bounds On Expected Performance And Manage Uncertainty In Estimates

SPE models rely upon estimates of resource requirements for the software execution. The precision of the model results depends on the quality of these estimates.
Early in the software process, however, your knowledge of the details of the software is sketchy, and it is difficult to precisely estimate resource requirements. Because of this, SPE uses adaptive strategies, such as the best- and worst-case strategy.

For example, when there is high uncertainty about resource requirements, you use estimates of the upper and lower bounds of these quantities. Using these estimates, you produce predictions of the best-case and worst-case performance. If the predicted best-case performance is unsatisfactory, you look for feasible alternatives. If the worst-case prediction is satisfactory, you proceed to the next step of the development process with confidence. If the results are somewhere in between, the model analyses identify critical components whose resource estimates have the greatest effect, and you can focus on obtaining more precise data for them.

Best- and worst-case analysis identifies when performance is sensitive to the resource requirements of a few components, identifies those components, and permits assessment of the severity of problems as well as the likelihood that they will occur. When performance goals can never be met, best- and worst-case results also focus attention on potential design problems and solutions rather than on model assumptions. If you make all the best-case assumptions and the predicted performance is still not acceptable, it is hard to fault the assumptions.

3.4 Establish A Configuration Management Plan For Creating Baseline Performance Models and Keeping Them Synchronized With Changes To The Software

Many of the SPE artifacts evolve with the software. For example, performance scenarios and the models that represent them will be augmented as the design evolves. Managing changes to these SPE artifacts is similar to the configuration management used to manage changes to designs or code.
Configuration management also makes it possible to ensure that a particular version of a performance model is accurately matched to the version of the design that it represents. While it isn't essential for many systems to have a formal configuration management plan, safety-critical systems and others require both the plan and the control of SPE artifacts.

Baselines for scenarios and models should be established following their initial validation and verification. Once an artifact has been baselined, it may only be changed using the established change control procedure.

The configuration management plan should specify how to identify an artifact (e.g., CustomerOrder software model v1.2), the criteria for establishing a baseline for an artifact, and the procedure to be used when making a change.
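The best- and worst-case strategy of Section 3.3 can be sketched with a minimal model: sum per-step resource-demand estimates to bound a scenario's total demand, compare both bounds to the objective, and, in the in-between case, point at the step whose uncertainty matters most. The processing steps, demand ranges, and objective below are all hypothetical.

```python
# Hypothetical sketch of best-/worst-case analysis: each processing step
# has a (lower, upper) demand estimate in ms; scenario bounds are the sums.

steps = {                     # step -> (best-case ms, worst-case ms)
    "parse request":   (2,   5),
    "business logic":  (10, 60),
    "database access": (20, 90),
    "render response": (3,  10),
}
objective_ms = 100            # response-time objective for the scenario

best = sum(low for low, _ in steps.values())
worst = sum(high for _, high in steps.values())

if objective_ms < best:
    verdict = "infeasible: even the best case misses the objective"
elif objective_ms >= worst:
    verdict = "safe: even the worst case meets the objective"
else:
    # In between: find the step whose estimate uncertainty is largest,
    # and refine that estimate first.
    dominant = max(steps, key=lambda s: steps[s][1] - steps[s][0])
    verdict = f"uncertain: refine the estimate for '{dominant}' first"

print(f"bounds [{best}, {worst}] ms -> {verdict}")
```

With these numbers the bounds straddle the objective, so the analysis directs attention to the step with the widest estimate range rather than to the model's assumptions.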