An Accelerated Test Method for Automotive Wiper Systems
Patent title: APPARATUS AND METHOD FOR TESTING A DRIVETRAIN OF A VEHICLE
Inventor: ZIMMER, Philipp
Application number: EP2017/070496; filed 2017-08-11
Publication number: WO2018/029358A1; published 2018-02-15
Applicant: BERTRANDT INGENIEURBÜRO GMBH, 80939, DE
Agent: PATENTANWALTSKANZLEI WILHELM & BECK
Abstract: The invention relates to an apparatus for testing at least a part of a drivetrain of a vehicle, having at least one rotatable drive drum (1) which can be driven by means of a drive (5). The drive drum (1) has a frictional engagement plate (7), which bears against a rotatably mounted tyred wheel (2). The tyred wheel (2) is connected to an output shaft (3), and the output shaft (3) is connected to a rotationally fixedly mounted drive unit (4, 11). The drive drum (1) is formed with the frictional engagement plate (7) so as to exchange torque with the tyred wheel (2). The invention also relates to a method for controlling the apparatus.
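The abstract above describes torque exchange by frictional engagement between a driven drum and a tyred wheel. As a hypothetical illustration only (this calculation is not part of the patent text), the basic kinematics of such a friction drive can be sketched: assuming no slip at the contact, the surface speeds of drum and wheel are equal, so the wheel speed follows from the radius ratio.

```python
# Hypothetical illustration (not from the patent): no-slip kinematics of a
# friction drive, where a rotating drum drives a tyred wheel at the contact.

def wheel_speed_rpm(drum_rpm: float, drum_radius_m: float, wheel_radius_m: float) -> float:
    """No-slip friction drive: surface speed omega*r is equal on both sides,
    so wheel speed = drum speed * (drum radius / wheel radius)."""
    return drum_rpm * drum_radius_m / wheel_radius_m

# A 0.5 m radius drum at 600 rpm driving a 0.25 m radius wheel:
print(wheel_speed_rpm(600.0, 0.5, 0.25))  # -> 1200.0
```

The function names and numeric values are illustrative assumptions, not data from the patent.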
修理的产品 repaired item
不修理的产品 non-repaired item
服务 service
规定功能 required function
时刻 instant of time
时间区间 time interval
持续时间 time duration
累积时间 accumulated time
量度 measure
工作 operation
修改(对产品而言) modification (of an item)
效能 effectiveness
固有能力 capability
耐久性 durability
可靠性 reliability
维修性 maintainability
维修保障性 maintenance support performance
可用性 availability
可信性 dependability
失效 failure
致命失效 critical failure
非致命失效 non-critical failure
误用失效 misuse failure
误操作失效 mishandling failure
设计失效 design failure
制造失效 manufacture failure
老化失效;耗损失效 ageing failure; wear-out failure
突然失效 sudden failure
渐变失效;漂移失效 gradual failure; drift failure
灾变失效 cataleptic failure
关联失效 relevant failure
非关联失效 non-relevant failure
独立失效 primary failure
附属失效 secondary failure
失效原因 failure cause
失效机理 failure mechanism
系统性失效;反复性失效 systematic failure; reproducible failure
完全失效 complete failure
退化失效 degradation failure
部分失效 partial failure
故障 fault
致命故障 critical fault
非致命故障 non-critical fault
重要故障 major fault
次要故障 minor fault
误用故障 misuse fault
误操作故障 mishandling fault
设计故障 design fault
制造故障 manufacturing fault
老化故障;耗损故障 ageing fault; wear-out fault
程序敏感故障 programme-sensitive fault
数据敏感故障 data-sensitive fault
完全故障;功能阻碍故障 complete fault; function-preventing fault
部分故障 partial fault
持久故障 persistent fault
间歇故障 intermittent fault
确定性故障 determinate fault
非确定性故障 indeterminate fault
潜在故障 latent fault
系统性故障 systematic fault
故障模式 fault mode
故障产品 faulty item
差错 error
失误 mistake
工作状态 operating state
不工作状态 non-operating state
待命状态 standby state
闲置状态;空闲状态 idle state; free state
不能工作状态 disabled state; outage
外因不能工作状态 external disabled state
不可用状态;内因不能工作状态 down state; internal disabled state
可用状态 up state
忙碌状态 busy state
致命状态 critical state
维修 maintenance
维修准则 maintenance philosophy
维修方针 maintenance policy
维修作业线 maintenance echelon; line of maintenance
维修约定级 indenture level (for maintenance)
维修等级 level of maintenance
预防性维修 preventive maintenance
修复性维修 corrective maintenance
受控维修 controlled maintenance
计划性维修 scheduled maintenance
非计划性维修 unscheduled maintenance
现场维修 on-site maintenance; in situ maintenance; field maintenance
非现场维修 off-site maintenance
遥控维修 remote maintenance
自动维修 automatic maintenance
逾期维修 deferred maintenance
基本的维修作业 elementary maintenance activity
维修工作 maintenance action; maintenance task
修理 repair
故障辨认 fault recognition
故障定位 fault localization
故障诊断 fault diagnosis
故障修复 fault correction
功能核查 function check-out
恢复 restoration; recovery
监测 supervision; monitoring
维修的实体 maintenance entity
影响功能的维修 function-affecting maintenance
阻碍功能的维修 function-preventing maintenance
削弱功能的维修 function-degrading maintenance
不影响功能的维修 function-permitting maintenance
维修时间 maintenance time
维修人时 maintenance man-hour (MMH)
实际维修时间 active maintenance time
预防性维修时间 preventive maintenance time
修复性维修时间 corrective maintenance time
实际的预防性维修时间 active preventive maintenance time
实际的修复性维修时间 active corrective maintenance time
未检出故障时间 undetected fault time
管理延迟(对于修复性维修) administrative delay (for corrective maintenance)
后勤延迟 logistic delay
故障修复时间 fault correction time
技术延迟 technical delay
核查时间 check-out time
故障诊断时间 fault diagnosis time
故障定位时间 fault localization time
修理时间 repair time
工作时间 operating time
不工作时间 non-operating time
需求时间 required time
无需求时间 non-required time
待命时间 standby time
闲置时间 idle time; free time
不能工作时间 disabled time
不可用时间 down time
累积不可用时间 accumulated down time
外因不能工作时间 external disabled time; external loss time
可用时间 up time
初次失效前时间 time to first failure
失效前时间 time to failure
失效间隔时间 time between failures
失效间工作时间 operating time between failures
恢复前时间 time to restoration; time to recovery
使用寿命 useful life
初期失效期 early failure period
恒定失效密度期 constant failure intensity period
恒定失效率期 constant failure rate period
耗损失效期 wear-out failure period
瞬时不可用度 instantaneous unavailability
平均可用度 mean availability
平均不可用度 mean unavailability
渐近可用度 asymptotic availability
稳态可用度 steady-state availability
渐近不可用度 asymptotic unavailability
稳态不可用度 steady-state unavailability
渐近平均可用度 asymptotic mean availability
渐近平均不可用度 asymptotic mean unavailability
平均可用时间 mean up time
平均累积不可用时间 mean accumulated down time
可靠度 reliability
瞬时失效率 instantaneous failure rate
平均失效率 mean failure rate
瞬时失效密度 instantaneous failure intensity
平均失效密度 mean failure intensity
平均初次失效前时间 mean time to first failure (MTTFF)
平均失效前时间 mean time to failure (MTTF)
平均失效间隔时间 mean time between failures (MTBF)
平均失效间工作时间 mean operating time between failures (MOTBF)
失效率加速系数 failure rate acceleration factor
失效密度加速系数 failure intensity acceleration factor
维修度 maintainability
平均修复率 mean repair rate
平均维修人时 mean maintenance man-hour
平均不可用时间 mean down time (MDT)
平均修理时间 mean repair time (MRT)
P-分位修理时间 P-fractile repair time
平均实际修复性维修时间 mean active corrective maintenance time
平均恢复前时间 mean time to restoration (MTTR)
故障辨认比 fault coverage
修复比 repair coverage
平均管理延迟 mean administrative delay (MAD)
P-分位管理延迟 P-fractile administrative delay
平均后勤延迟 mean logistic delay (MLD)
P-分位后勤延迟 P-fractile logistic delay
验证试验 compliance test
测定试验 determination test
实验室试验 laboratory test
现场试验 field test
耐久性试验 endurance test
加速试验 accelerated test
步进应力试验 step stress test
筛选试验 screening test
时间加速系数 time acceleration factor
维修性检查 maintainability verification
维修性验证 maintainability demonstration
观测数据 observed data
试验数据 test data
现场数据 field data
基准数据 reference data
冗余 redundancy
工作冗余 active redundancy
备用冗余 standby redundancy
失效安全 fail safe
故障裕度 fault tolerance
故障掩盖 fault masking
预计 prediction
可靠性模型 reliability model
可靠性预计 reliability prediction
可靠性分配 reliability allocation; reliability apportionment
故障模式与影响分析 fault modes and effects analysis (FMEA)
故障模式、影响与危害度分析 fault modes, effects and criticality analysis (FMECA)
故障树分析 fault tree analysis (FTA)
应力分析 stress analysis
可靠性框图 reliability block diagram
故障树 fault tree
状态转移图 state-transition diagram
应力模型 stress model
故障分析 fault analysis
失效分析 failure analysis
维修性模型 maintainability model
维修性预计 maintainability prediction
维修树 maintenance tree
维修性分配 maintainability allocation; maintainability apportionment
老炼 burn-in
可靠性增长 reliability growth
可靠性改善 reliability improvement
可靠性和维修性管理 reliability and maintainability management
可靠性和维修性保证 reliability and maintainability assurance
可靠性和维修性控制 reliability and maintainability control
可靠性和维修性大纲 reliability and maintainability programme
可靠性和维修性计划 reliability and maintainability plan
可靠性和维修性审计 reliability and maintainability audit
可靠性和维修性监察 reliability and maintainability surveillance
设计评审 design review
真实的 true
预计的 predicted
外推的 extrapolated
估计的 estimated
固有的 intrinsic; inherent
使用的 operational
平均的 mean
P-分位 P-fractile
瞬时的 instantaneous
稳态的 steady state
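Several of the glossary metrics are related by standard dependability formulas. As a worked example (the formula is standard, not taken from the glossary itself), steady-state availability follows from mean time to failure (MTTF) and mean time to restoration (MTTR):

```python
# Standard dependability relation: steady-state availability is the long-run
# fraction of time an item is in an up state.

def steady_state_availability(mttf_h: float, mttr_h: float) -> float:
    """A = MTTF / (MTTF + MTTR), with both times in the same unit (hours here)."""
    return mttf_h / (mttf_h + mttr_h)

# An item that runs 990 h between failures and takes 10 h to restore:
print(steady_state_availability(990.0, 10.0))  # -> 0.99
```

Steady-state unavailability is simply the complement, 1 - A.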
AS 2345—2006
Australian Standard™
Dezincification resistance of copper alloys

Accessed by UNIVERSITY OF SOUTH AUSTRALIA on 03 Feb 2007

This Australian Standard was prepared by Committee MT-014, Corrosion of Metals. It was approved on behalf of the Council of Standards Australia on 25 May 2006. This Standard was published on 13 June 2006.

The following are represented on Committee MT-014:
Australian Corrosion Association
Australasian Institute of Metal Finishing
Australian Chamber of Commerce and Industry
Australian Electrolysis Committee
Australian Paint Manufacturer's Federation
Australian Paint Approval Scheme
Austroads
Bureau of Steel Manufacturers of Australia
Department of Defence
Division of Building, Construction and Engineering, CSIRO
Galvanizers Association of Australia
Telstra
United Water International
Water Corporation of Western Australia
Corrosion consultants
Water Services Association of Australia (WSAA)
Water Authority of Western Australia

Keeping Standards up-to-date
Standards are living documents which reflect progress in science, technology and systems. To maintain their currency, all Standards are periodically reviewed, and new editions are published. Between editions, amendments may be issued. Standards may also be withdrawn. It is important that readers assure themselves they are using a current Standard, which should include any amendments which may have been published since the Standard was purchased.

Detailed information about Standards can be found by visiting the Standards Web Shop at .au and looking up the relevant Standard in the on-line catalogue. Alternatively, the printed Catalogue provides information current at 1 January each year, and the monthly magazine, The Global Standard, has a full listing of revisions and amendments published each month. Australian Standards™ and other products and services developed by Standards Australia are published and distributed under contract by SAI Global, which operates the Standards Web Shop.

We also welcome suggestions for improvement in our Standards, and especially encourage readers to notify us immediately of any apparent inaccuracies or ambiguities. Contact us via email at mail@.au, or write to the Chief Executive, Standards Australia, GPO Box 476, Sydney, NSW 2001.

This Standard was issued in draft form for comment as DR 06197.

Originated as AS 2345—1980. Previous edition 1992. Third edition 2006.

COPYRIGHT
© Standards Australia
All rights are reserved.
No part of this work may be reproduced or copied in any form or by any means, electronic or mechanical, including photocopying, without the written permission of the publisher.

PREFACE
This Standard has been prepared by the Australian members of the Joint Standards Australia/Standards New Zealand Committee MT-014, Corrosion of Metals, to supersede AS 2345—1992, Dezincification resistance of copper alloys. After consultation with stakeholders in both countries, Standards Australia and Standards New Zealand decided to develop this Standard as an Australian Standard rather than an Australian/New Zealand Standard.

The objective of this revision is to reconfirm the procedures, and update the reference documents and formatting.

The test method given in Appendix C of this Standard is based on the International Organization for Standardization publication ISO 6509:1981, Corrosion of metals and alloys—Determination of dezincification resistance of brass, but contains more detailed information on the testing procedure, especially in respect to the measurement of dezincification.

The terms 'normative' and 'informative' have been used in this Standard to define the application of the appendix to which they apply. A 'normative' appendix is an integral part of a Standard, whereas an 'informative' appendix is only for information and guidance.

CONTENTS
FOREWORD
1 SCOPE
2 REFERENCED DOCUMENTS
3 DEFINITIONS
4 GENERAL REQUIREMENTS
APPENDICES
A PURCHASING GUIDELINES
B LISTING OF EXAMPLES OF CATEGORY I ALLOYS
C TEST METHOD FOR THE ASSESSMENT OF THE SUSCEPTIBILITY TO DEZINCIFICATION OF COPPER ALLOYS CONTAINING ZINC
D GENERAL INFORMATION ON FACTORS WHICH AFFECT THE RESISTANCE OF COPPER ALLOYS TO DEZINCIFICATION

FOREWORD
It is generally accepted that copper alloys that contain not more than 15 percent zinc, and alpha brasses that are inhibited by the presence of adequate levels of arsenic or antimony, are resistant to the corrosion phenomenon called 'dezincification' when in service in water or soil environments. In addition, some alpha-beta brasses can be dezincification resistant provided they have certain structural characteristics and the alpha phase is inhibited.

An accelerated test method is given in this Standard for the assessment of the susceptibility of brasses, and other copper alloys containing zinc, to dezincification. It enables the measurement of depth but does not give information on the mode of dezincification.

STANDARDS AUSTRALIA
Australian Standard
Dezincification resistance of copper alloys

1 SCOPE
This Standard specifies procedures for determining the resistance of copper-base alloys containing zinc to the form of corrosion termed 'dezincification', in two steps as follows:
(a) Alloys are categorized into two groups based on their chemical composition, which is used to assess their susceptibility to dezincification.
(b) Alloys assessed to be resistant to dezincification on the basis of chemical composition do not require testing. Alloys considered to be susceptible can be further evaluated by testing them under accelerated laboratory conditions after the completion of all manufacturing stages. Following this test, and where the specified acceptance criteria are met, such alloys can also be regarded as resistant to dezincification.

This Standard also specifies the acceptance criteria for the dezincification resistance of copper/zinc alloy components designed for use in contact with potable water or soils.
NOTE: Advice and recommendations on information to be supplied by the purchaser at the time of enquiry or order are contained in Appendix A.

2 REFERENCED DOCUMENTS
The following documents are referred to in this Standard:
AS 1565 Copper and copper alloys—Ingots and castings
AS 1566 Copper and copper alloys—Rolled flat products
AS 2738 Copper and copper alloys—Compositions and designations of refinery products, wrought products, ingots and castings
AS/NZS 1567 Copper and copper alloys—Wrought rods, bars and sections
MP 52 Manual of authorization procedures for plumbing and drainage products

3 DEFINITIONS
For the purpose of this Standard, the definitions below apply.
3.1 Dezincification
Corrosion of copper/zinc alloys involving loss of zinc, leaving a residue of spongy or porous copper.
3.2 Dezincification-resistant copper/zinc alloys
Alloys having the appropriate chemical composition and physical characteristics to enable them to meet the dezincification test requirements of this Standard.
3.3 Test piece
A piece prepared for testing and made from a test specimen by some mechanical operation.
3.4 Test sample
A portion of metal or a group of items selected from a batch or consignment by a sampling procedure.
3.5 Test specimen
A portion of metal or a single item taken from the test sample for the purpose of applying a particular test.

4 GENERAL REQUIREMENTS
4.1 General
The dezincification resistance of a component is assessed on the basis of chemical composition, or by testing after the completion of all manufacturing and heat-treatment procedures.
NOTE: For quality control purposes, the dezincification test may also be carried out on components during their manufacturing stages.
4.2 Acceptance requirements
The chemical composition of an alloy or its ability to pass a dezincification test is the basis of acceptance or rejection in accordance with the following:
(a) Category I alloys: Category I alloys contain not more than 15 percent zinc and are acceptable on the basis of chemical composition. They are not required to be subjected to the dezincification resistance test.
NOTE: Examples of Category I alloys are given in Table B1 of Appendix B.
(b) Category II alloys: Category II alloys are all alloys that do not meet the Category I requirement and are deemed to be susceptible to dezincification. Since certain of these alloys may still prove to be resistant to dezincification (by virtue of heat treatment or specific compositional control), they shall be tested in accordance with the test method specified in Appendix C. To be acceptable, the average depth of penetration resulting from dezincification, in each sample, shall be not greater than the values specified in Table 1.

TABLE 1
DEPTH OF PENETRATION CRITERIA FOR ACCEPTANCE OF COPPER-BASE ALLOYS WHEN TESTED IN ACCORDANCE WITH APPENDIX C

Product type | Average depth, µm
(a) Forgings and castings, including continuously-cast bar | 100
(b) Extruded bar, longitudinal direction | 300
    Extruded bar, transverse direction | 100

The dezincification resistance is normally assessed on the product after all manufacturing procedures are completed but prior to installation in the field. Any brazing operation performed at temperatures in excess of 600°C may change the dezincification resistance properties of some alloys. Where such a brazing operation has been part of the manufacturing process, the relevant average depth criterion shall be increased by 100 µm.
NOTES:
1 The low temperatures used with soft solders complying with Specification Number 009 of MP 52 are considered not to cause any significant change to the dezincification resistance of the product.

4.3 Identification marking
Each individual component which has satisfied the acceptance requirements of this Standard shall be clearly and durably marked with the letters 'DR'.

APPENDIX A
PURCHASING GUIDELINES
(Informative)
A1 GENERAL
Australian Standards are intended to include the technical requirements for relevant products, but do not purport to comprise all the necessary provisions of a contract.
This Appendix contains advice and recommendations on the information to be supplied by the purchaser at the time of enquiry or order.
A2 INFORMATION TO BE SUPPLIED BY THE PURCHASER
The purchaser should supply the following information at the time of enquiry and order:
(a) The product specification, including the alloy designation, where appropriate.
(b) Identification of the product or component and the method of manufacture.
(c) The type of test required, i.e. chemical analysis or the test method in accordance with Appendix C.
(d) Details of sampling procedures to be followed, if applicable.
(e) Reference to this Australian Standard, i.e. AS 2345.

APPENDIX B
LISTING OF EXAMPLES OF CATEGORY I ALLOYS
(Informative)
Table B1 of this Appendix gives a listing of some examples of Category I alloys and their chemical compositions.
NOTE: The impurity limits for some of the alloys in Table B1 are incomplete, and reference should be made to their full specifications for chemical composition, which are given in AS 1565, AS 1566, AS/NZS 1567 and AS 2738.

TABLE B1
EXAMPLES OF CATEGORY I ALLOYS
(Chemical composition, percent)

Wrought alloys:
C22000 (90/10 gilding metal): Cu 89.0–91.0; Pb 0.05 max; Fe 0.05 max; Zn remainder.

Cast alloys:
C83600* (85/5/5/5 leaded gunmetal): Cu 84.0–86.0; Pb 4.0–6.0; Fe 0.30 max; Sn 4.0–6.0; Zn 4.0–6.0; Al 0.005 max; P 0.05 max; Ni 1.0 max; Sb 0.25 max.
C83810 (83/3/9/5 leaded gunmetal): Cu remainder; Pb 4.0–6.0; Fe 0.3 max; Sn 2.0–3.5; Zn 7.5–9.5; Al 0.005 max; Ni 2.0 max; Bi 0.10 max; Si 0.01 max.
C92410† (87/7/3/3 leaded gunmetal): Cu remainder; Pb 2.5–3.5; Fe 0.20 max; Sn 6.0–8.0; Zn 1.5–3.0; Al 0.005 max; Ni 2.0 max; Bi 0.05 max; Si 0.005 max; Sb 0.25 max.
C92610† (88/10/2 gunmetal): Cu remainder; Pb 1.5 max; Fe 0.15 max; Sn 9.5–10.5; Zn 1.7–2.8; Al 0.005 max; Ni 1.0 max; Bi 0.03 max; Si 0.005 max.
CW602N‡: Cu 61.0–63.0; Pb 1.7–2.8; Fe 0.1 max; Zn remainder; Al 0.05 max; Ni 0.3 max.

* S 0.08% max. † Mn 0.03% max. ‡ As 0.02% min–0.15% max.
NOTE: 'Remainder' (REM) indicates that the remainder of the alloy is comprised of the particular element.

APPENDIX C
TEST METHOD FOR THE ASSESSMENT OF THE SUSCEPTIBILITY TO DEZINCIFICATION OF COPPER ALLOYS CONTAINING ZINC
(Normative)
C1 SCOPE
This Appendix specifies the method for assessing the susceptibility to dezincification of Category II alloys.
C2 PRINCIPLE
Prepared test pieces are exposed to a hot aqueous solution of copper(II) chloride for 24 h. The depth of any resultant dezincification is measured by microscopic examination.
C3 REAGENTS
The following reagents are required:
(a) Water, either distilled or demineralized.
(b) Copper(II) chloride test solution: dissolve 12.8 g of analytical grade CuCl2·2H2O in water and make up to 1000 mL ±10 mL.
(c) Ethanol or other suitable solvent (analytical grade).
C4 APPARATUS
The following apparatus is required:
(a) A glass vessel sealed with a suitable cover to prevent concentration of the test solution by evaporation (see Figure C1).
(b) A water or oil bath, or other method of heating, controlled to maintain a temperature within ±3°C of the test temperature.
(c) A microscope fitted with a stage micrometer and a calibrated measuring scale, with a magnifying capability of 100×, minimum. The accuracy of the measuring scale shall be within ±5 µm.
(d) A suitable thermometer, accurate to ±0.5°C.
(e) Metallographic polishing equipment.

FIGURE C1 ARRANGEMENT OF A TYPICAL TEST APPARATUS WITH A THERMOSTATICALLY-CONTROLLED BATH

C5 PREPARATION OF TEST PIECES
C5.1 General
Unless otherwise specified in the relevant product Standard, test pieces shall be selected in accordance with Paragraph C5.2 and prepared from each test specimen in such a way that the heat or mechanical deformation resulting from the sawing, grinding or filing operations does not adversely affect the metal grain structure.
The area of each test piece to be exposed to the test solution shall be not less than 50 mm², unless the cross-section of components is too small to permit such a size, in which case the largest practical test area shall be prepared.
C5.2 Selection of test pieces
Test pieces shall be selected as follows:
(a) For extruded rod, drawn tube and components machined from these products: Cut one test piece to expose a surface parallel to the direction of the grain flow and a second to expose a surface perpendicular to the grain flow. Test pieces taken from rod samples shall be cut in such a way as to include the midpoint between the axis and the periphery.
(b) For castings and forgings: Cut one test piece from the thickest section of the casting or forging. If the casting or forging is machined, then the thickest section prior to machining shall be selected.
(c) For continuously-cast rod or for components machined from continuously-cast rod: Cut one test piece in such a way as to include the midpoint between the axis and the periphery.
(d) For components that are brazed together: Cut one test piece adjacent to the brazed joint. In the case of a joint made with extruded rod, cut two test pieces in accordance with Item (a). If both components of a brazed joint are manufactured from Category II alloys, then both shall be tested.
If, during the selection process or at any stage during the examination process, gross porosity, cracks or other similar defects are detected, then the test piece shall be rejected and another test piece selected.
C5.3 Mounting of test pieces
Mount each test piece in a suitable electrically insulating material such as phenolic or acrylic resin (see Figure C2). Following mounting, grind the test surface using progressively finer grades of abrasive paper lubricated with water, and finish wet on P1200 or equivalent grade paper to produce a uniform finish. Clean the prepared test piece, including the mounting, in ethanol or other suitable solvent prior to testing.

C6 PROCEDURE
C6.1 Exposure of the test pieces
The test pieces shall be exposed as follows:
(a) Fully immerse the test pieces in a glass vessel of appropriate size containing unused test solution, ensuring that the test surfaces are in the vertical plane and are at least 15 mm above the bottom of the glass vessel (see Figure C2). Use at least 250 mL of test solution for each 100 mm² of exposed test surface.
NOTES:
1 A flat may be ground in the side of the specimen mount to ensure the correct positioning and stability of the specimen during the immersion period.
2 Do not test different alloy types in the one vessel.
(b) Seal the glass vessel with a suitable cover to prevent concentration of the test solution by evaporation.
(c) Place the glass vessel containing the test solution and test pieces in a water or oil bath, or other equipment for heating, controlled at 75 ±3°C.
(d) Expose the test pieces to the test solution for 24 ±0.5 h. The time of exposure is taken from the time the test pieces are immersed in the test solution.
(e) At the end of the 24 h test period, remove the test pieces from the test solution, wash them in distilled or demineralized water and then in ethanol or other suitable solvent, and allow to dry.

FIGURE C2 TYPICAL POSITIONING OF TEST PIECE IN MOUNTING MATERIAL

C6.2 Preparation of exposed test pieces for assessment of dezincification
Section each test piece normal to and through the mid-point of the exposed surface. The total length of section through each exposed test piece surface shall be not less than 5 mm, except for small test pieces, which shall be of the maximum possible length (see Paragraph C5.1). For rod over 25 mm in diameter, the length of section shall be not less than 10 mm.
The sectioned test piece and mounting material may be remounted to facilitate handling during subsequent metallographic polishing. Grind the sectioned test piece using progressively finer grades of abrasive paper and a water lubricant, and finish wet on P1200 or equivalent grade paper to produce a uniform finish.
NOTE: It is recommended that, when preparing the sample, the metallographer begin with P80 abrasive paper and progressively decrease in abrasiveness until P1200 is reached.
Carry out final polishing using 0–1 µm polishing compound.
Before examination, clean and dry the prepared surfaces using ethanol or other suitable solvent to prevent staining.
NOTE: The use of a suitable etchant, such as acidified ferric chloride, may assist in the examination of the surface.
C6.3 Measurement of dezincification
Measurement of the extent of dezincification should be carried out as soon as possible after removal from the test solution, using the following procedure:
(a) With the microscope, measure and record the depth of dezincification at a minimum of 25 equally spaced positions along the total length of the exposed surface. The depth of dezincification at each position shall be the distance of greatest penetration of dezincification into the test piece, measured from the surface of the microsection along a line normal to the surface (see Figure C3).
Edge effects along the line of the interface between the mounting material and the test piece may, after exposure to the test solution, show a greater depth of dezincification. In such cases, measurements shall be taken at positions where edge effects are negligible (see Figure C4).
Exposure of a test piece may result in the removal of a layer of metal from the surface exposed to the test solution. In this case, measurement of the dezincified layer shall be taken from the post-exposure test surface (see Figure C4).
(b) Calculate the average depth of dezincification from the 25 or more individual measurements.

[Figure C3 shows equispaced measuring positions along the surface of a microsection, with a depth measuring scale (0 to 100 µm), unaffected and dezincified beta phase stringers in an alpha phase matrix, and the measured depth of dezincification at each position.]
NOTES:
1 This illustration represents a longitudinal section of an extruded rod, the beta phase being present as stringers.
2 For cast products, the directionality of beta phase stringers may not be evident, but the same measurement principles apply.
3 It is important to measure the actual dezincification at each measuring position. In this illustration, two readings show zero (0) dezincification regardless of the close proximity of dezincified beta phase stringers to these measuring positions.
FIGURE C3 SCHEMATIC ILLUSTRATION FOR THE MEASUREMENT OF DEPTH OF DEZINCIFICATION OCCURRING IN AN ALPHA/BETA COPPER/ZINC ALLOY

FIGURE C4 SECTIONED TEST SPECIMEN AND MOUNT AFTER IMMERSION TESTING, ILLUSTRATING EDGE EFFECTS CAUSED BY SOLUTION SEEPING BETWEEN SPECIMEN AND MOUNT AND THE METHOD OF MEASUREMENT OF THE DEZINCIFIED LAYER FROM THE POST-EXPOSURE TEST SURFACE

C7 TEST REPORT
The test report shall include the following information:
(a) The name of the testing authority.
(b) The product identification.
(c) The test piece identification and related product specification.
(d) The number of test pieces taken and the total area of exposed surface, in square millimetres.
(e)The number of depth measurements taken and the average depth of dezincification.(f) Magnification used.(g) Compliance or otherwise with the requirements of Table 1 of this Standard, ifapplicable. (h) Reference to this test method, i.e. Appendix C of AS 2345. (i) The report number and the date of test.A c c e s s e d b y U N I V E R S I T Y O F S O U T H A U S T R A L I A o n 03 F e b 200717 AS 2345—2006.au Standards AustraliaAPPENDIX DGENERAL INFORMATION ON FACTORS WHICH AFFECT THE RESISTANCEOF COPPER ALLOYS TO DEZINCIFICATION(Informative)D1 GENERALThis Appendix contains information on the nature of dezincification, the various types of copper alloys that are susceptible and advice on the heat treatment and brazing of susceptible alloys.D2 NATURE OF DEZINCIFICATIONDezincification occurs when susceptible alloys, usually ‘uninhibited ’ brasses containing more than 15 percent zinc, come into contact with natural or treated waters. Dezincification, as its name suggests, is a corrosion process where a brass alloy undergoes selective leaching of the zinc component into the water supply leaving only a porous copper residue. The dezincification process can either be localized, as often occurs with the seat of a tap, or can be general and affect the whole surface of a fitting. In some cases corrosion can be severe enough to cause leakage or breakage, as the porous copper has a very low mechanical strength.In some soft waters of relatively high pH, the dissolved zinc from the dezincification process will precipitate as a basic zinc salt. 
This phenomenon, commonly called ‘meringue ’ dezincification can result in the waterways of fittings becoming partially or completely blocked by this bulky corrosion product.D3 MEANS OF ACHIEVING DEZINCIFICATION RESISTANCE D3.1 GeneralMost copper alloys containing up to approximately 37 percent zinc can be rendered dezincification resistant either by suitable control of chemical composition, or by a combination of suitable chemical composition control, an appropriate manufacturing process and a specific heat treatment operation after the completion of any thermal processes such as casting, forging or extrusion.Copper alloys can be divided into two main groups as follows: (a)Alpha brasses These brasses have a microstructure of alpha phase only and are subject to dezincification when the zinc content is greater than 15 percent if the alpha phase is not ‘inhibited ’. Inhibiting is usually achieved by the addition of a very small amount of arsenic; the level being so low as to pose no health hazard. These brasses typically contain less than 35 percent zinc, are cold formed, and do not require heat treatment to develop their dezincification resistance. Such inhibited alloys have a very high degree of dezincification resistance.(b)Alpha-beta (duplex) brasses These brasses contain both the alpha and beta phases and may also contain small amounts of gamma phase. Inhibiting of the alpha phase is required. The beta phase cannot be inhibited and thus to achieve dezincification resistance, both the amount of the beta phase and the size of the beta phase grains need to be kept as small as possible. 
This is achieved by carefully controlling the chemical composition, the manufacturing processes and, if necessary, heat treatment.
NOTE: Arsenic can lose its inhibiting effect by combining with iron, manganese or magnesium impurities.

D3.2 Heat treatment of duplex copper alloys
Heat treatment, if required, should be carried out after the completion of hot working processes such as extrusion and forging. The following guidelines apply:
(a) For rod supplied for machining into component parts Rod is usually supplied in the heat-treated condition. Upon request, a test certificate should be made available by the supplier to verify that the quality of the rod is in accordance with the requirements of this Standard.
(b) For rod supplied for forging As the temperature of the forging process will negate the effects of any heat treatment, rod supplied for forging is usually in the unheat-treated condition. It is then the responsibility of the forger to heat treat the forgings in accordance with the recommendations of the supplier of the rod. The forger may reasonably request the rod supplier to provide a test certificate showing the material to be a dezincification resistant copper alloy which, when forged and heat treated in accordance with the supplier’s recommendations, should be capable of passing the dezincification resistance test specified in this Standard. It should be noted that some forgings can be heat treated by quenching the hot forging in water. However, to successfully pass the DR test, these forgings are required to have the appropriate chemical composition for this type of heat treatment and be quenched at the correct temperature. Careful control is essential to achieve reliable results.
Although particular dezincification resistant copper alloys are produced to suit particular manufacturing processes, most duplex copper alloys can be satisfactorily heat treated by holding at a temperature of 500°C for four hours, followed by air cooling.
(c) For ingots supplied for castings Ingots can vary greatly in composition depending on whether sand or die castings are to be produced. Often no heat treatment is required for castings; however, it is necessary that the recommendations of the ingot supplier be followed.
The duplex copper alloys can, when correctly manufactured and heat treated, develop a degree of dezincification resistance not significantly less than the alpha inhibited alloys.

D4 THE EFFECT OF BRAZING ON DEZINCIFICATION RESISTANCE
Any brazing performed at a temperature above 600°C will almost certainly affect the dezincification resistance of most alloys. Accordingly, it is important to avoid localized ‘hot spots’ and to perform the brazing operations at as low a temperature as is practicable.
Patent title: METHOD FOR TESTING THE BRAKE SYSTEM OF A VEHICLE
Inventors: KOKAL, Helmut; MONSCHEIN, Martin; ARCE ALONSO, Gorka
Application number: EP17200957.3
Filing date: 2017-11-10
Publication number: EP3321657B1
Publication date: 2020-01-08
Abstract: The invention relates to a method for testing a brake system (B) of a vehicle (F), wherein simulation results are used for the adaptation of a test on a test stand (P). The object of the invention is to provide a method for improving the test of the brake system (B). In accordance with the invention, this is achieved by creating a first flow simulation model (S1) of the vehicle (F) from geometric data (G) and by creating a second flow simulation model (S2) of the test stand (P); a first simulation result is calculated with at least one first input variable by using the first flow simulation model (S1), and a change (Δ) of at least one second input variable in the second simulation model (S2) is carried out until a second simulation result of the second simulation model (S2) is achieved which corresponds essentially to the first simulation result.
Applicant: AVL List GmbH
Agent: Babeluk, Michael
Common maintenance English vocabulary (常见维修英语词汇)

修理的产品 repaired item
不修理的产品 non-repaired item
服务 service
规定功能 required function
时刻 instant of time
时间区间 time interval
持续时间 time duration
累积时间 accumulated time
量度 measure
工作 operation
修改(对产品而言) modification (of an item)
效能 effectiveness
固有能力 capability
耐久性 durability
可靠性 reliability
维修性 maintainability
维修保障性 maintenance support performance
可用性 availability
可信性 dependability
失效 failure
致命失效 critical failure
非致命失效 non-critical failure
误用失效 misuse failure
误操作失效 mishandling failure
弱质失效 weakness failure
设计失效 design failure
制造失效 manufacture failure
老化失效;耗损失效 ageing failure; wear-out failure
突然失效 sudden failure
渐变失效;漂移失效 gradual failure; drift failure
灾变失效 cataleptic failure
关联失效 relevant failure
非关联失效 non-relevant failure
独立失效 primary failure
从属失效 secondary failure
失效原因 failure cause
失效机理 failure mechanism
系统性失效;重复性失效 systematic failure; reproducible failure
完全失效 complete failure
退化失效 degradation failure
部分失效 partial failure
故障 fault
致命故障 critical fault
非致命故障 non-critical fault
重要故障 major fault
次要故障 minor fault
误用故障 misuse fault
误操作故障 mishandling fault
弱质故障 weakness fault
设计故障 design fault
制造故障 manufacturing fault
老化故障;耗损故障 ageing fault; wear-out fault
程序敏感故障 programme-sensitive fault
数据敏感故障 data-sensitive fault
完全故障;功能阻碍故障 complete fault; function-preventing fault
部分故障 partial fault
持久故障 persistent fault
间歇故障 intermittent fault
确定性故障 determinate fault
非确定性故障 indeterminate fault
潜在故障 latent fault
系统性故障 systematic fault
故障模式 fault mode
故障产品 faulty item
差错 error
失误 mistake
工作状态 operating state
不工作状态 non-operating state
待命状态 standby state
闲置状态;空闲状态 idle state; free state
不能工作状态 disable state; outage
外因不能工作状态 external disabled state
不可用状态;内因不能工作状态 down state; internal disabled state
可用状态 up state
忙碌状态 busy state
致命状态 critical state
维修 maintenance
维修准则 maintenance philosophy
维修方针 maintenance policy
维修作业线 maintenance echelon; line of maintenance
维修约定级 indenture level (for maintenance)
维修等级 level of maintenance
预防性维修 preventive maintenance
修复性维修 corrective maintenance
受控维修 controlled maintenance
计划性维修 scheduled maintenance
非计划性维修 unscheduled maintenance
现场维修 on-site maintenance; in situ maintenance; field maintenance
非现场维修 off-site maintenance
遥控维修 remote maintenance
自动维修 automatic maintenance
逾期维修 deferred maintenance
基本的维修作业 elementary maintenance activity
维修工作 maintenance action; maintenance task
修理 repair
故障识别 fault recognition
故障定位 fault localization
故障诊断 fault diagnosis
故障修复 fault correction
功能核查 function check-out
恢复 restoration; recovery
监测 supervision; monitoring
维修的实体 maintenance entity
影响功能的维修 function-affecting maintenance
妨碍功能的维修 function-preventing maintenance
减弱功能的维修 function-degrading maintenance
不影响功能的维修 function-permitting maintenance
维修时间 maintenance time
维修人时 MMH; maintenance man-hour
实际维修时间 active maintenance time
预防性维修时间 preventive maintenance time
修复性维修时间 corrective maintenance time
实际的预防性维修时间 active preventive maintenance time
实际的修复性维修时间 active corrective maintenance time
未检出故障时间 undetected fault time
管理延迟(对于修复性维修) administrative delay
后勤延迟 logistic delay
故障修复时间 fault correction time
技术延迟 technical delay
核查时间 check-out time
故障诊断时间 fault diagnosis time
故障定位时间 fault localization time
修理时间 repair time
工作时间 operating time
不工作时间 non-operating time
需求时间 required time
无需求时间 non-required time
待命时间 standby time
闲置时间 idle time; free time
不能工作时间 disabled time
不可用时间 down time
累积不可用时间 accumulated down time
外因不能工作时间 external disabled time; external loss time
可用时间 up time
首次失效前时间 time to first failure
失效前时间 time to failure
失效间隔时间 time between failures
失效间工作时间 operating time between failures
恢复前时间 time to restoration; time to recovery
使用寿命 useful life
早期失效期 early failure period
恒定失效密度期 constant failure intensity period
恒定失效率期 constant failure rate period
耗损失效期 wear-out failure period
瞬时可用度 instantaneous availability
瞬时不可用度 instantaneous unavailability
平均可用度 mean availability
平均不可用度 mean unavailability
渐近可用度 asymptotic availability
稳态可用度 steady-state availability
渐近不可用度 asymptotic unavailability
稳态不可用度 steady-state unavailability
渐近平均可用度 asymptotic mean availability
渐近平均不可用度 asymptotic mean unavailability
平均可用时间 mean up time
平均累积不可用时间 mean accumulated down time
可靠度 reliability
瞬时失效率 instantaneous failure rate
平均失效率 mean failure rate
瞬时失效密度 instantaneous failure intensity
平均失效密度 mean failure intensity
平均首次失效前时间 MTTFF; mean time to first failure
平均失效前时间 MTTF; mean time to failure
平均失效间隔时间 MTBF; mean time between failures
平均失效间工作时间 MOTBF; mean operating time between failures
失效率加速系数 failure rate acceleration factor
失效密度加速系数 failure intensity acceleration factor
维修度 maintainability
瞬时修复率 instantaneous repair rate
平均修复率 mean repair rate
平均维修人时 mean maintenance man-hour
平均不可用时间 MDT; mean down time
平均修理时间 MRT; mean repair time
P-分位修理时间 P-fractile repair time
平均实际修复性维修时间 mean active corrective maintenance time
平均恢复前时间 MTTR; mean time to restoration
故障识别比 fault coverage
修复比 repair coverage
平均管理延迟 MAD; mean administrative delay
P-分位管理延迟 P-fractile administrative delay
平均后勤延迟 MLD; mean logistic delay
P-分位后勤延迟 P-fractile logistic delay
验证试验 compliance test
测定试验 determination test
实验室试验 laboratory test
现场试验 field test
耐久性试验 endurance test
加速试验 accelerated test
步进应力试验 step stress test
筛选试验 screening test
时间加速系数 time acceleration factor
维修性检验 maintainability verification
维修性验证 maintainability demonstration
观测数据 observed data
试验数据 test data
现场数据 field data
基准数据 reference data
冗余 redundancy
工作冗余 active redundancy
备用冗余 standby redundancy
失效安全 fail safe
故障裕度 fault tolerance
故障掩盖 fault masking
预计 prediction
可靠性模型 reliability model
可靠性预计 reliability prediction
可靠性分配 reliability allocation; reliability apportionment
故障模式与影响分析 FMEA; fault modes and effects analysis
故障模式影响与危害度分析 FMECA; fault modes, effects and criticality analysis
故障树分析 FTA; fault tree analysis
应力分析 stress analysis
可靠性框图 reliability block diagram
故障树 fault tree
状态转移图 state-transition diagram
应力模型 stress model
故障分析 fault analysis
失效分析 failure analysis
维修性模型 maintainability model
维修性预计 maintainability prediction
维修树 maintenance tree
维修性分配 maintainability allocation; maintainability apportionment
老练 burn-in
可靠性增长 reliability growth
可靠性改进 reliability improvement
可靠性和维修性管理 reliability and maintainability management
可靠性和维修性保证 reliability and maintainability assurance
可靠性和维修性控制 reliability and maintainability control
可靠性和维修性大纲 reliability and maintainability programme
可靠性和维修性计划 reliability and maintainability plan
可靠性和维修性审计 reliability and maintainability audit
可靠性和维修性监察 reliability and maintainability surveillance
设计评审 design review
真实的 true
预计的 predicted
外推的 extrapolated
估计的 estimated
固有的 intrinsic; inherent
使用的 operational
平均的 mean
P-分位 P-fractile
瞬时的 instantaneous
稳态的 steady state
4230014
Accelerated corrosion test
Atmospheric corrosion
Orientation
This standard is identical to the previously issued standard STD 1027,14, issue 1.

Contents
1 Scope
2 Apparatus
2.1 Temperature and humidity control
2.2 Application of salt solution
2.3 System for drying of wet test objects
2.4 Requirements on salt solution
3 Test objects
4 Procedure
4.1 Arrangement of test objects
4.2 Exposure conditions of test cycle
4.3 Duration of test
5 Evaluation of results
6 Test report

1 Scope
This standard defines an accelerated corrosion test method to be used in assessing the corrosion resistance of metals in environments where there is a significant influence of chloride ions, mainly as sodium chloride from a marine source or from winter road de-icing salt.
The standard specifies a test procedure to be used in conducting the accelerated corrosion test to simulate atmospheric corrosion conditions in a controlled way.
A suitable test equipment to fulfil the conditions of this standard is proposed.
In this standard, the term "metal" includes metallic materials with or without corrosion protection.
The accelerated laboratory corrosion test applies to:
- metals and their alloys
- metallic coatings
- chemical conversion coatings
- organic coatings on metals

The method is suitable for comparative testing in the optimization of surface treatment systems for test panels, specially designed specimens and components.

2 Apparatus
2.1 Temperature and humidity control
The climate chamber shall be designed so that the following test conditions can be obtained, controlled and monitored during the test.
During a period of constant climate conditions, an accuracy of ±3 % RH for the mean value of relative humidity shall apply, which corresponds to a minimum temperature accuracy requirement of, in this case, ±0,6 °C.
For the instantaneous maximum deviation from the set relative humidity, a value of ±5 % RH in the range from 50 % RH to 95 % RH at 40 °C shall apply.
The climate chamber must be designed so that the relative humidity may be changed linearly with respect to time, from 95 % RH to 50 % RH within 2 h.
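The humidity tolerances above (±3 % RH on the period mean, ±5 % RH on any individual reading) lend themselves to an automated check of logged sensor data. The following is a minimal sketch, not part of the standard; the function name and sample format are illustrative.

```python
# Sketch: validate RH samples logged during a constant-climate period
# against the tolerances above: the mean must lie within ±3 % RH of the
# set point, and every individual sample within ±5 % RH.
def check_constant_period(samples, rh_setpoint):
    """samples: RH readings (%) taken during one constant-climate period."""
    mean_rh = sum(samples) / len(samples)
    mean_ok = abs(mean_rh - rh_setpoint) <= 3.0
    instantaneous_ok = all(abs(rh - rh_setpoint) <= 5.0 for rh in samples)
    return mean_ok and instantaneous_ok

# Example: a well-behaved period at the 95 % RH set point.
print(check_constant_period([94.0, 95.5, 96.0, 93.8], 95.0))  # True
```

A real monitor would also apply the ±0,6 °C temperature requirement and report which sample breached the band, rather than returning a single boolean.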
In figure 1 a suitable design of a climate chamber is shown.
To meet the temperature and humidity accuracy requirements, the climate chamber shall be equipped with an apparatus to provide evenly distributed, efficient circulation of air, to secure small temperature and humidity variations in the chamber.
Sufficient insulation of the chamber walls and lids is required in order to avoid excessive condensation on these surfaces.
The humidity and temperature levels of the climate chamber during a test cycle shall be continuously monitored.
The humidity and temperature sensors should reflect the climate conditions in the very test area.
For measurements of the relative humidity, use a hygrometer designed for measurements at high humidity levels, e.g. a gold mirror dew point meter.
For temperature measurements, the use of a resistance thermometer is required.

Key to figure 1:
1 Test chamber
2 Machinery unit
3 Test object area
4 Well insulated walls/lids
5 Air distribution plate
6 Swaying tube/member with spraying nozzles
7 Air purge outlet
8 Outlet
9 Climatization unit (cooling/heating/humidification)
10 Wet and dry Pt100 sensors (psychrometric sensor)
11 Cooling machine
12 Vessel with salt solution + pressurizing pump
13 Motor and link arms for swaying motion of precipitation tube/member
14 Control unit
15 Electronics and regulatory devices

2.2 Application of salt solution
It is advisable to install a spraying device for salt application inside the climate chamber.
2.2.1 Spraying device
The spraying device shall be capable of producing a finely distributed, uniform spray falling on the test objects with a flow corresponding to a downfall of
15 mm/h ±5 mm/h.
When using spray, the solution must not be reused.
The device for salt spraying shall preferably be made of a number of flat spraying nozzles mounted in series on a rail or tube, in such a way that their spray patterns are partly overlapping, see figure 2.
A swaying mode of the tube member must be implemented in order to distribute the salt solution uniformly over the test area.
The spraying device shall be made of, or lined with, materials resistant to salt solution corrosion. The use of plastics material is recommended.
Recommended nozzle type: Spraying Systems Uni Jet 800050VP. C/C mounting of nozzles on the supporting tube 50-60 cm (if approx. 1 m above the test object).
A suitable design of spraying device is shown in figure 2.
2.2.2 Salt application by immersion
If a spraying facility is not available, an alternative to spraying is a complete immersion of the test objects in the salt solution.
This is a less favourable alternative than spraying, due to uncontrolled leaching and a risk of contamination of the test objects.
Alloys of magnesium and some pure aluminium alloys are very sensitive to dissolved ions of other metals (forming cathodes when reduced), and should not be immersed together with other metals.
2.3 System for drying of wet test objects
After having been sprayed until wet, all test objects shall be dried of excessive visible wetness in order to regain climate control.
The climate chamber should therefore be equipped with a system for drying in a forced airflow.
Forced drying is preferably arranged by supercooling and reheating an internal circulating flow.
Alternatively, drying may be arranged by letting a forced flow of pre-heated ambient air ventilate the chamber.
For a climate chamber of a volume of 1-2 m³, an airflow rate of 50-100 l/s is recommended.
The forced airflow shall not be pre-heated to such temperature levels that a temperature of 40 °C will be exceeded.
If wetting is achieved by immersion in salt solution, drying under ambient conditions may be permitted.
2.4 Requirements on salt solution
The test solution shall be prepared by dissolving sodium chloride in deionized water to a concentration of 1,0 % ±0,1 % (percent by weight).
The salt may contain at most the following amounts of impurities:
Copper 0,001 %
Nickel 0,01 %
Sodium iodide 0,1 %
The total level of contamination, counted on dry salt, must not exceed 0,4 percent by weight.
The 1 % NaCl solution shall be acidified to a concentration of 1×10⁻⁴ M hydrogen ions.
This is preferably done by a standard addition of sulphuric acid, e.g. 1 ml of 0,5 M H₂SO₄ to 10 l of salt solution, which yields a pH of approximately 4,2.
When using automatic spraying, the solution must not be reused.
When manual immersion is applied, the solution is preferably renewed before each immersion event.
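The acidification figures can be cross-checked with a short calculation. This sketch assumes, for simplicity, complete dissociation of both protons of H₂SO₄; on that assumption the stated addition gives exactly the target 1×10⁻⁴ M hydrogen-ion concentration (pH 4,0, of the same order as the approximately 4,2 quoted above, which is the measured value).

```python
# Sketch: hydrogen-ion concentration from adding 1 ml of 0,5 M H2SO4
# to 10 l of 1 % NaCl solution, assuming full dissociation (2 H+ per
# H2SO4 molecule). A simplified check, not a replacement for measuring pH.
import math

acid_volume_l = 0.001      # 1 ml of sulphuric acid solution
acid_molarity = 0.5        # mol/l H2SO4
solution_volume_l = 10.0   # 10 l of salt solution

moles_h2so4 = acid_volume_l * acid_molarity       # 5e-4 mol
h_plus = 2 * moles_h2so4 / solution_volume_l      # mol/l of H+
ph = -math.log10(h_plus)

print(f"[H+] = {h_plus:.1e} M, pH = {ph:.1f}")    # [H+] = 1.0e-04 M, pH = 4.0
```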
As a rule, different types of test material shall not be immersed in the same solution.
However, painted zinc-coated or cold-rolled steel may be exposed together.
This also applies when different materials are integrated in the same test object.

3 Test objects
The number and type of test objects, their shape, and their dimensions shall be selected according to the specification for the material or product being tested.
When this is not specified, these details shall be mutually agreed between the interested parties.
For each series of test objects, data records shall be kept, including the following information:
a) Specification of the material to be tested.
b) If the test specimen is subjected to intentional damage in the coating, the shape and the location of the damage shall be described, as well as how the damage was achieved. The orientation of the damage during testing shall also be specified.
c) Description of the preparation of the test object, including any cleaning applied before testing and any protection given to edges.
d) Information on the reference material or materials with which the test specimen is to be compared.
e) How the test object is to be examined and which properties are to be assessed, see section 5.
4 Procedure
4.1 Arrangement of test objects
The test objects shall be placed in the cabinet on stands with their test surfaces facing upwards.
The angle at which the surface of the test specimens is exposed in the cabinet is important.
For flat test objects, the angle at which the test surface is inclined shall preferably be 15° ±5° from the vertical.
In the case of irregular surfaces, for example entire components, this angle shall be adhered to as closely as possible.
The stands with the test objects shall be placed on the same level in the climate chamber.
The stands shall be made of inert non-metallic material, such as glass, plastics or suitably coated wood.
If it is necessary to suspend test objects, the material used shall on no account be metallic, but shall be synthetic fibre, cotton thread or other inert insulating material.
The stands shall be designed in such a way that they do not obstruct the passing airflow and at the same time enable proper drainage.
4.2 Exposure conditions of test cycle
The one-week main test cycle (figure 3a) is composed of two twelve-hour sub-cycles: one with controlled humidity cycling, sub-cycle 1 (figure 3b), the other including salt application, sub-cycle 2 (figure 3c).
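The weekly schedule just described can be written down as data and sanity-checked: each sub-cycle must total 12 h, and a full week 168 h. A sketch follows; the step timings are taken from 4.2.1 and 4.2.2 below, while the assumption that it is the first of the day's two sub-cycles that is replaced on Mondays and Fridays is mine, since the text does not say which.

```python
# Sketch of the one-week main cycle: two 12 h sub-cycles per day, with
# sub-cycle 2 (salt application) run once on Monday and once on Friday
# in place of sub-cycle 1. Timings follow steps 1:1-1:4 and 2:1-2:5.
SUB_CYCLE_1 = [            # (hours, conditions)
    (4.0, "constant 35 C / 95 % RH"),
    (2.0, "ramp to 45 C, RH 95 -> 50 %"),
    (4.0, "constant 45 C / 50 % RH"),
    (2.0, "ramp to 35 C, RH 50 -> 95 %"),
]

SUB_CYCLE_2 = (
    [(0.25, "salt spray"), (1.75, "wet hold at 35 C")] * 3  # steps 2:1-2:2, three times
    + [
        (2.0, "drying, ramp to 45 C / 50 % RH"),
        (2.0, "constant 45 C / 50 % RH"),
        (2.0, "ramp to 35 C, RH rising"),
    ]
)

def week_schedule():
    """Return [(day, [sub_cycle, sub_cycle]), ...] for one week."""
    days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
    return [
        (day, [SUB_CYCLE_2 if day in ("Mon", "Fri") else SUB_CYCLE_1,
               SUB_CYCLE_1])
        for day in days
    ]

total_hours = sum(h for _, subs in week_schedule()
                  for sub in subs for h, _ in sub)
print(total_hours)  # 168.0
```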
4.2.1 Sub-cycle 1
The main cycle is principally based on a repetition of sub-cycle 1.
Step 1:1) Constant conditions at 35 °C and 95 % RH for 4 h.
Step 1:2) Temperature increase from 35 °C to 45 °C, with a simultaneous linear reduction of relative humidity from 95 % RH to 50 % RH, over a period of 2 h.
Step 1:3) Constant conditions at 45 °C and 50 % RH for 4 h.
Step 1:4) Temperature decrease from 45 °C to 35 °C, with a simultaneous increase of relative humidity from 50 % RH to 95 % RH, over a period of 2 h.
4.2.2 Sub-cycle 2
On Mondays and Fridays, sub-cycle 1 is once replaced by sub-cycle 2.
Step 2:1) Spraying with salt solution for 15 min.
Step 2:2) Constant conditions at 35 °C for 1 h 45 min, with a relative humidity set point of 95 %-99 % RH, in such a way that the test objects remain wet.
Steps 2:1 and 2:2 are then repeated in sequence twice, to give a total period of 6 h.
Step 2:3) Drying of the test objects, at a relative humidity set point of 50 % RH and with a temperature increase from 35 °C to 45 °C, over a period of 2 h. The specified humidity level shall be reached within 2 h, leaving the test objects and chamber interior without visible wetness.
Step 2:4) Constant conditions at 45 °C and 50 % RH for 2 h.
Step 2:5) Temperature decrease from 45 °C to 35 °C, with a simultaneous increase of relative humidity.
A simpler but less favourable alternative to spraying the test objects with salt solution is the use of manual immersion of the test objects in salt solution outside of the
chamber.
In such a case it is recommended that Steps 2:1 to 2:3 are replaced by:
Step 2:1a) Remove the test objects from the climate chamber and immerse them for 15 min in the salt solution. After immersion, manually spray the test objects with salt solution in order to restore droplets on the surface.
Step 2:2a) Let excessive fluid run off and return the test objects to the test chamber at 35 °C, with the relative humidity set point at 95 %-99 % RH, for 1 h 45 min.
Steps 2:1a and 2:2a are then repeated in sequence two more times, to give a total period of wetness of 6 h.
Step 2:3a) If the climate chamber is not equipped with a system for forced-air drying, manual drying may be permitted, provided the temperature does not exceed 40 °C and the test objects are free from visible droplets within a period of 2 h.
4.3 Duration of test
The test duration shall be determined by the specification covering the material or product being tested.
When not specified, the test period shall be agreed between the orderer and the testing department.
Recommended test durations for the assessment of the corrosion resistance of different kinds of materials are given below.
In general, a six-week test should be sufficient to rank any bare metal (alloy) or a metal protected with a thin conversion coating or a metallic, inorganic or organic coating.
A twelve-week test is recommended for the ranking of high-quality coating systems.
Table 1 illustrates how the recommended periods of testing in accordance with this standard relate to two different kinds of field exposure test conditions in the case of cold-rolled carbon
steel and pure zinc (99,9 %).

Table 1: Corrosion rate as metal mass loss obtained after test (μm)

Material tested          | According to this standard            | On-vehicle test after 2 years, vertical exposure, Gothenburg | Corrosivity class C5 in ISO 9223, first-year data
Cold-rolled carbon steel | 170-190 (6 weeks), 320-380 (12 weeks) | 80-150 | 80-200
Pure zinc                | 6-8 (6 weeks), 12-16 (12 weeks)       | 5-8    | 4,2-8,4

5 Evaluation of results
Many different criteria for the evaluation of the test results may be applied to meet particular requirements, for example:
a) Appearance of the test object.
b) Change in adhesion properties.
c) Number and distribution of corrosion defects, i.e. pits, cracks, blisters, etc. These may be assessed by the methods described in ISO 1462 or ISO 4540.
d) The time elapsing until the appearance of the first signs of corrosion.
e) Change in mass, or pit depth.
f) Change in mechanical properties.
g) Observations in the scribed line.

6 Test report
The test report shall provide the following information:
a) Reference to this standard.
b) Reference to the method of salt application used, including test equipment.
c) Description of the test object.
d) Description of the preparation of the test object.
e) The number of cycles or the duration of the test.
f) Any deviations from the prescribed testing method.
g) Test results after final evaluation of the test objects, in accordance with the criteria under section 5.
Patent title: Method of accelerated test automation through unified test workflows
Inventor: Mikhail Galburt
Application number: US14133466
Filing date: 2013-12-18
Publication number: US09740596B1
Publication date: 2017-08-22
Abstract: Various embodiments describe techniques, methods, and systems for accelerated test automation, in which a first script representing a first test case of an application under test is invoked in response to a set of input data. From the first script, a plurality of generalized script elements are invoked, where each generalized script element tests a specific functionality of the application under test. A second script, representing a second test case, is executed, and at least some of the plurality of generalized script elements that were invoked by the first script are invoked by the second script. Thereafter, it is determined whether the first and second test cases have passed or failed the software testing, based on execution of the first and second scripts.
Applicant: EMC Corporation
Address: Hopkinton, MA, US
Agent: Blakely, Sokoloff, Taylor & Zafman LLP
The Software Testing Automation Framework
by C. Rankin

Software testing is an integral, costly, and time-consuming activity in the software development life cycle. As is true for software development in general, reuse of common artifacts can provide a significant gain in productivity. In addition, because testing involves running the system being tested under a variety of configurations and circumstances, automation of execution-related activities offers another potential source of savings in the testing process. This paper explores the opportunities for reuse and automation in one test organization, describes the shortcomings of potential solutions that are available "off the shelf," and introduces a new solution for addressing the questions of reuse and automation: the Software Testing Automation Framework (STAF), a multiplatform, multilanguage approach to reuse. It is based on the concept of reusable services that can be used to automate major activities in the testing process. The design of STAF is described. Also discussed is how it was employed to automate a resource-intensive test suite used by an actual testing organization within IBM.

In late 1997, the system verification test (SVT) and function verification test (FVT) organizations with which I worked recognized a need to reduce per-project resources in order to accommodate new projects in the future. To this end, a task force was created to examine ways to reduce the expense of testing. This task force focused on improvement in two primary areas, reuse and automation. For us, reuse refers to the ability to share libraries of common functions among multiple tests. For purposes of this paper, a test is a program executed to validate the behavior of another program. Automation refers to the removal of human interaction with a process and placing it under machine or program control. In our case, the process in question was software testing.
Through reuse and automation, we planned to reduce or remove the resources (i.e., hardware, people, or time) necessary to perform our testing.

To help illustrate the problems we were seeing and the solution we produced, I use a running example of one particular product for which I was the SVT lead. This product, the IBM OS/2 WARP* Server for e-Business, encompassed not only the base operating system (OS/2*—Operating System/2*) but also included the file and print server for a local area network (LAN) (known as LAN Server), Web server, Java** virtual machine (JVM), and much more. Testing such a product is a daunting, time-consuming task. Any improvements we could make to reduce the complexity of the task would make it more feasible.

For our purposes, a test suite is a collection of tests that are all designed to validate the same area of a product. I discuss one test suite in particular, known affectionately as "Ogre." This test suite was designed to perform load and stress testing of LAN Server and the base OS/2. Ogre is a notoriously resource-intensive test suite, and we were looking at automation to help reduce the hardware, number of individuals, and time necessary to execute it.

Copyright 2002 by International Business Machines Corporation. Copying in printed form for private use is permitted without payment of royalty provided that (1) each reproduction is done without alteration and (2) the Journal reference and IBM copyright notice are included on the first page. The title and abstract, but no other portions, of this paper may be copied or distributed royalty free without further permission by computer-based and other information-service systems. Permission to republish any other portion of this paper must be obtained from the Editor.

IBM Systems Journal, Vol. 41, No. 1, 2002

With a focus on reducing the complexity of creating and automating our testing, we looked at existing solutions within IBM and the test industry. None of these solutions met our needs, so we
developed a new one, the Software Testing Automation Framework (STAF). This paper explores the design of STAF, explains how STAF addresses reuse, and details how STAF was used to automate and demonstrably improve the Ogre test suite. The solution provided by STAF is quite flexible. The techniques presented here could be used by most test groups to enhance the efficiency of their testing process.

The problem

Figure 1 depicts the software testing cycle. Planning consists of analyzing the features of the product to be tested and detailing the scope of the test effort. Design includes documenting and detailing the tests that will be necessary to validate the product. Development involves creating or modifying the actual tests that will be used to validate the product. Execution is concerned with actually exercising the tests against the product. Analysis or review consists of evaluating the results and effectiveness of the test effort; the evaluation is then used during the planning stage of the next testing cycle.

Reuse is focused on improving the development, and to a lesser extent the design, portions of the testing cycle. Automation is focused on improving the execution portion of the testing cycle. Although every product testing cycle is different, generally, most person-hours are spent in execution, followed by development, then design, planning, and analysis or review. By improving our reuse and automation, we could positively influence the areas where the most effort is expended in the testing cycle. The following subsections look individually at the areas of reuse and automation and delineate the problems we faced in each of these areas.
Reuse. This subsection provides some examples from the OS/2 WARP Server for e-Business SVT team that motivated the desire for reuse. Within the team, there were numerous smaller groups that were focused on developing and executing tests for different areas of the entire project. We wanted to ensure that each of these groups could leverage common sets of testing routines. To better understand this desire for reuse, consider some of the potential problems surrounding the seemingly simple task of logging textual messages to a file from within a test. Several issues arise when this activity is left to be reinvented by each tester or group of testers, instead of using a common reusable routine. The problems are:

● Log files are stored in different places: Some groups create log routines that store the log files in the directory in which the test is run. Others create log routines that store them in a central directory. This discrepancy makes it difficult to determine where all the log files for tests run on a given system are stored. Ultimately, you have to scour the whole system looking for log files.
● Log file formats are different: Different groups order the data fields in a log record differently. This difference makes it difficult to write scripts that parse the log files looking for information.
● Message types are different: One group might use "FATAL" messages where another would use "ERROR," or one group might use "TRACE" where another would use "DEBUG." This variation makes it difficult to parse the log files. It also increases the difficulty in understanding the semantic meaning of a given log record.

None of these problems is insurmountable, and many could be handled sufficiently well through a "standards" document indicating where log files should be stored, the format of the log records, and the meaning, and intended use, of message types. Nonetheless, this list provides justification for our desire for common and consistent reusable routines. Also, additional problems exist that cannot be
addressed by adhering to standards.

[Figure 1: Software testing cycle. Stages: planning, design, development, execution, and analysis or review]

Multiple programming languages. Our testers write a wide variety of tests in a variety of programming languages. When testing the C language APIs (application programming interfaces) of the operating system, they write tests in C. When testing the command line utilities of the operating system or applications with command line interfaces, they write tests in scripting languages such as REXX (which is the native scripting language of OS/2). When testing the Java virtual machine of the operating system, they write tests in the Java language. In order for our testers to use common reusable routines to perform such tasks as logging, described above, the routines needed to be accessible from all the languages they use.

Multiple codepages. OS/2 WARP Server for e-Business was translated into 14 different languages, among them English, Japanese, and German. It is not uncommon for problems to exist in one translated version but not in another. Therefore, we were responsible for testing all of these versions. Testing multiple versions introduces additional complexities in our tests, and in particular to any set of reusable components we wanted our testers to use. One specific aspect of this situation is the use of different codepages by different translated versions. A codepage is the encoding of a set of characters (such as those used in English or Japanese) into a binary form that the computer can use. Using different codepages means that one codepage can encode the letter "A" in one binary form and another can encode it in a different binary form. Hence, care must be taken when manipulating the input and output of programs that use different codepages, a situation our testers would frequently encounter when testing across multiple translated versions of our product. If our testers were going to use a common set of routines for reading and writing
log files, those routines had to be able to handle messages not only in an English codepage, but also in the codepages used by the other 13 languages into which our product was translated.

Multiple operating systems. While we were directly testing OS/2 WARP Server for e-Business, it was essential for us to run tests on other operating systems, such as Windows** and AIX* (Advanced Interactive Executive*), to perform interoperability and compatibility testing with our product. If we wanted our testers to use common reusable routines to perform such tasks as logging, described above, the routines needed to be accessible from all the operating systems we used.

Existing automation components. As we examined the types of components that were continually being re-created by our teams, as well as those that would need to exist to support the types of automation we wanted to put in place (as described in the following subsection), we realized that we would need a substantial base of automation components. Some of these components included process execution, file transfer, synchronization, logging, remote monitoring, resource management, event management, data management, and queuing. Additionally, these components had to be available both locally and in a remote fashion across the network. If the solution did not provide these components, we would have to create them. Therefore, we wanted a solution that provided a significant base of automation components.

Automation. This subsection provides some examples, using the Ogre test suite, to motivate the need for automation. As was mentioned, this test suite was designed to test the LAN Server and base OS/2 products under conditions of considerable load and stress, where load means a sustained level of work and stress means pushing the product beyond defined limits.
The test suite consists of a set of individual tests, each focused on a specific aspect of the product (such as transferring files back and forth between the client and server). These tests are executed in a looping pseudorandom fashion on a set of client systems. The set of client systems is typically large, ranging upwards of 128 systems. The set of servers that are being tested is usually very small, typically no more than three. The test suite executes on the client systems for an extended period of time, typically 24 to 72 hours. The combination of the number and configuration of clients and servers and the amount of run time represents a scenario. If all the clients and servers are still operational after the prescribed amount of time, the scenario is considered to be successful. Multiple scenarios are executed during a given SVT cycle.

Figure 2 shows the basic procedure flow used to execute a given Ogre scenario. Note the areas in red. These areas indicate which steps in the procedure are currently done manually. The following subsections describe these areas in more detail.

Test suite execution. Our existing mechanism for starting or stopping a scenario was to have one or more individuals walk up to each client and start or stop the test suite. Given the situation of 128 clients spread throughout a large laboratory, this exercise is expensive, both in time and human resources. This method also introduces the potential of skipping one or more clients, which can have a significant impact on the scenario (such as not uncovering a defect due to insufficient load or stress). Therefore, we wanted a solution that would allow us to start and stop the scenario from a central "management console."

Test suite distribution. As new tests were created or existing tests were modified, they needed to be distributed to all the client systems. Our existing mechanism consisted of one or more individuals walking around to each client copying the tests from diskettes.
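The looping, pseudorandom execution style described above for the client systems can be sketched as follows. This is an illustration of the scheme only, not Ogre's actual code; the seedable generator is an added assumption so that a scenario run could be replayed.

```python
# Sketch of looping, pseudorandom test execution on one client:
# repeatedly pick a test at random until the scenario's time limit expires.
import random
import time


def run_scenario(tests, duration_seconds, seed=None):
    """Loop for the prescribed time, executing randomly chosen tests.

    Returns the number of test executions performed."""
    rng = random.Random(seed)  # seedable, so a scenario can be replayed
    deadline = time.monotonic() + duration_seconds
    executed = 0
    while time.monotonic() < deadline:
        test = rng.choice(tests)  # pseudorandom selection
        test()                    # a real client would also log the result
        executed += 1
    return executed
```

In a real scenario the duration would be 24 to 72 hours and the same loop would run on each of the (up to 128) client systems.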
This method was complicated by the fact that the tests did not always exist in the exact same location on each client. Like the previous problem of test suite execution, this mechanism was very wasteful of time and human resources. It also introduced another potential point of failure whereby one or more clients do not receive updated tests, resulting in false errors. Therefore, we wanted a solution that provided a mechanism for distributing our tests to our clients correctly and consistently.

Test suite monitoring. While a scenario was running, we were responsible for continually monitoring it to ensure that no failures had occurred. Our existing mechanism consisted of one or more individuals walking around to each client system to look for errors on the system screen. Such monitoring was partially alleviated by the fact that the tests would emit audible beeps when an error occurred. The beeps generally made it possible to simply walk into the laboratory and "listen" for errors. Unfortunately, we still had to monitor the scenario after standard work hours and on the weekend, which meant having individuals periodically drive into work and walk around the laboratory looking and listening for errors. Again, this method was very wasteful of time and human resources. It was also a negative morale factor, since it was considered "grunt" work. Therefore, we wanted a solution that provided a remote monitoring mechanism so that the status of the scenario could be evaluated from an individual's office or by telneting in from home.

Test suite execution dynamics. The Ogre test suite was already very configurable. An extensive list of properties was defined in a configuration file that was read during test suite initialization (and cached in environment variables for faster access). These properties manipulated many aspects of the scenario, such as which resources were available on which servers, which servers were currently off line, and the ratios defining the frequency with which the servers
were accessed relative to one another. This configurability allowed us, for example, to make a one-line change that would prevent the clients from accessing a given server (in case a problem was currently being investigated on it) or increase or decrease the stress one server received in relation to another. However, the only viable way to modify these parameters was to stop and start the entire scenario. As an example, assume that 36 hours into a 72-hour scenario, we found a problem with one of the servers. We could stop the scenario, change the configuration file to make the server unavailable, and then restart the scenario, which allowed us to exercise the remaining servers while the problem was being analyzed. Then, 12 hours later, when a fix for the problem had been created, we needed to bring the newly fixed server back into the mix. In order to do this, we had to stop and start the entire scenario, which effectively negated all of the run time we had accumulated on the other servers at that point. Similar situations arose when we needed to change server stress ratios or other configuration parameters. Therefore, we wanted a solution that would allow us to change configuration information dynamically during the execution of a scenario.

[Figure 2: Ogre scenario flow before automation. Manual tasks include installing a new build, distributing test cases, starting the scenario, monitoring the scenario, updating the configuration, and stopping the scenario; decision points cover new builds, configuration changes, changed test cases, server failures, and scenario completion]

Another long-standing issue with Ogre was that we were only able to execute one instance of the test suite at a time on any given client. It was felt that the ability to execute multiple instances of the test suite on the same client at the same time would allow us to produce equivalent stress with fewer clients. Figure 3 shows the basic procedure flow of a single instance of the Ogre test suite executing on a
given system. Note the areas in red. These areas indicate places where running multiple instances of Ogre on the same system creates conflicts. The following two subsections describe these areas in more detail.

Test suite resource management. In order to make a connection to a server, the client must specify a drive letter (in the case of a file resource) or a printer port (in the case of a printer resource) through which the resource will be accessed. When running multiple instances of the test suite, race conditions arise surrounding which drive letter or printer port to specify at any given time. Therefore, we wanted a solution that allowed us to manage the drive letter and printer port assignments among multiple instances of the test suite.

Test suite synchronization. Some of our tests have strict, nonchangeable dependencies on being the only process on the system running that particular test. When running multiple instances of the test suite, we needed a way to avoid having multiple instances executing the same test simultaneously. Therefore, we wanted a solution that allowed us to synchronize access to individual tests.

Existing solutions

Because we had two separate problems (reuse and automation), we realized we might need to find two separate solutions. However, we were hoping to find a single solution that would address both problems. Our preferences, in order, were:

1. A single solution designed to solve both problems
2. Two separate solutions designed to work together
3. A solution to reuse, which provided components designed to support automation, from which we could build an automation solution
4. Two separate, disjoint solutions

In the following subsections, I describe existing solutions that we explored, how they addressed the problems of reuse and automation, and how they related to our solution preferences.

Scripting languages. Scripting languages such as Perl, Python, Tcl, and Java (although Java would not technically be considered a scripting language, since it does require programs to
be compiled) are very popular in the programming industry as a whole, as well as within test organizations, since they facilitate a rapid development cycle.1 As programming languages, scripting languages are not intended to directly solve either reuse or automation. Additionally, they are not directly targeted at the test environment, although their generality does not preclude their use in a test environment. Despite these limitations, we felt that given the wide popularity of scripting languages and the almost fanatical devotion of their proponents, we should examine their potential for solving our problems.

[Figure 3: Single Ogre instance before multi-instance support. Flow: select random server(s), establish connection(s) to server(s), select random subtest, execute subtest, release connection(s) to server(s); the connection steps are multi-instance conflict points]

Although scripting languages are not a direct solution to reuse or automation, scripting languages do have some general applicability to the problem of reuse. To begin with, they are available on a wide variety of operating systems. They also have large, well-established sets of extensions. Although not complete from a test perspective, these extensions would provide a solid base from which to build. Additionally, some languages (notably Tcl and Java) provide support for dealing with multiple codepages. The benefits of scripting languages would clearly place them in category 3 of our preferences. Unfortunately, these benefits are only available if one is willing to standardize on one language exclusively.
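The codepage support just mentioned amounts to reading and writing text under an explicitly named encoding rather than assuming one. A minimal sketch, with Python used purely for illustration (the Shift-JIS example stands in for a Japanese translated version; the encoding choices are assumptions):

```python
# Codepage-aware log I/O: the encoding is always named explicitly,
# never assumed, so the same routines work for every translated version.


def write_log(path, text, codepage):
    """Write text to a log file under the given codepage (encoding)."""
    with open(path, "w", encoding=codepage) as f:
        f.write(text)


def read_log(path, codepage):
    """Read a log file back, decoding with the codepage it was written in."""
    with open(path, "r", encoding=codepage) as f:
        return f.read()
```

The same Japanese message encodes to different bytes under Shift-JIS and UTF-8, which is exactly why a routine that hard-codes one codepage would corrupt logs from another translated version.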
As was mentioned earlier, our testers create tests in many different programming languages, and it would have been tremendously difficult to force them to switch to one common programming language. Even if we could have convinced all of the testers on our team, we could never have convinced all the testers in our entire organization (much less those in other divisions, or at other sites), with whom we hoped to share our solution. Therefore, we were unable to rely on scripting languages for our solution.

Test harnesses. A test harness is an application that is used to execute one or more tests on one or more systems. In effect, test harnesses are designed to automate the execution of individually automated tests. A variety of different test harnesses are available. Each is geared toward a particular type of testing. For example, many typical UNIX** tests are written in shell script or the C language. These tests are generally stand-alone executables that return zero on success and nonzero on error. Harnesses such as the Open Group's Test Environment Toolkit (TET, also known as TETware) are designed to execute these types of tests on one or more systems.2 In contrast, a harness such as Sun's JavaTest leverages the underlying Java programming language to create a harness that is geared specifically to tests written in the Java language. It would not be uncommon for a test team to use both of these harnesses. Additionally, it is not uncommon for test teams to create custom harnesses geared toward specialized areas they test, such as I/O subsystems and protocol stacks.

It is clear that test harnesses have direct applicability to the problem of automation. However, as a general rule, test harnesses only solve the execution part of the automation problem. This solution still leaves areas such as test suite distribution, test suite monitoring, and test suite execution dynamics unsolved.
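A harness of the kind described above (stand-alone test executables that return zero on success and nonzero on error) reduces, at its core, to a small driver loop. The sketch below is a toy illustration of that convention, not TET or JavaTest:

```python
# Toy harness for stand-alone executable tests: exit code zero means pass,
# nonzero means fail. Command lists are supplied by the caller.
import subprocess


def run_tests(commands):
    """Run each (name, argv) pair; sort names into passed/failed by exit code."""
    passed, failed = [], []
    for name, argv in commands:
        result = subprocess.run(argv)
        (passed if result.returncode == 0 else failed).append(name)
    return passed, failed
```

A production harness adds what this loop omits: running on multiple systems, collecting logs, and reporting, which is the "execution part" the text says harnesses do solve.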
Additionally, test harnesses have no direct or general applicability to the problem of reuse. Thus, test harnesses are, at best, only part of the solution to category 4 of our preferences. That having been said, the proximity of test harnesses to the test environment made it likely that one or more test harnesses would play a role in our ultimate solution. However, we still needed to find a solution for reuse and determine which, if any, of the existing test harnesses we would use and extend to fill in the rest of the automation gaps.

CORBA. At a very basic level, CORBA** (Common Object Request Broker Architecture) is a set of industry-wide specifications that define mechanisms that allow applications running on different operating systems, and written in different programming languages, to communicate.3 CORBA also defines a set of higher-level services, sitting on top of this communication layer, that provide functionality deemed beneficial by the programming community at large (such as naming, event, and transaction services). It is important to understand that CORBA itself is not a product; it is a set of specifications. For any given set of operating systems, languages, and services, it is necessary to either find a vendor who has implemented CORBA for that environment, or, much less desirably, implement it oneself.

CORBA is not intended to directly solve the problems of reuse and automation. However, CORBA does have some general applicability to the problem of reuse.
First, CORBA is supported on a wide variety of operating systems. Second, there is CORBA support for a wide variety of programming languages. Thus, CORBA solves two of our key reuse problems. In contrast, CORBA has no direct support for multiple codepages. Additionally, the set of available CORBA services is not geared toward a test environment, which is understandable given the general applicability of CORBA to the computer programming industry as a whole.

Given the above, CORBA would clearly fit in category 3 of our preferences, although significant work would be necessary to provide the missing support in terms of multiple codepages and existing automation components. Additionally, as we mentioned above, there is no one company that produces a product called "CORBA." What this means is that for a complete solution one must frequently obtain products from multiple vendors and attempt to configure them to work together. This attempt has been notoriously difficult in the past,4 and, although the situation is improving, we would rather have avoided this layer of complication. All told, we felt that a CORBA solution was not worth the expense necessary to implement and maintain it.

The design of STAF

Having exhausted other avenues, we decided to create our own solution. We had a two-phased approach to the development of STAF. The first phase addressed the issue of reuse. This phase by itself would give us a solution that fell into category 3 of our solution preferences. The second phase tackled the problem of automation. In this phase we would build on top of the reuse solution and extend it to solve our automation problem. This two-step approach provided a solution that fell into category 1 of our solution preferences. The result of that work was the Software Testing Automation Framework, or STAF. In the subsections that follow, I present the underlying design ideas surrounding STAF and how they helped provide a reuse solution. A subsequent section
will then address how we built and extended this solution to solve the problem of automation.

Services. STAF was designed around the idea of reusable components. In STAF, we call these components services. Each service in STAF exposes a specialized set of functionality, such as logging, to users of STAF and other services. STAF, itself, is fundamentally a daemon process that provides a thin dispatching mechanism that routes incoming requests (from local and remote processes) to these services.

STAF has two "flavors" of services, internal and external. Internal services are coded directly into the daemon process and provide the core services, such as data management and synchronization, upon which other services build. External services are accessed via shared libraries that are dynamically loaded by STAF. These external libraries represent either the service itself, in the case of languages like C or C++, which ultimately generate native executable object code, or a proxy interface to other languages, such as the Java or REXX languages, which do not generate native executable object code. The differentiation of service "flavors" and proxy handling can be seen in Figure 4.

This ability to provide services externally from the STAF daemon process allowed us to keep the core of STAF very small, while allowing users to pick and choose which additional pieces they wanted. It minimizes the infrastructure necessary to run STAF. Additionally, the small STAF core makes it easy to provide support on multiple platforms, and also to port STAF to new platforms.

Request-result format. Fundamentally, every STAF request consists of three parameters, all of which are strings. The first parameter is the name of the system to which the request should be sent. This parameter is analyzed by the local STAF daemon to determine whether the request should be handled locally or should be directed to another STAF system. Once the request has made it to the system that will handle it, the second parameter is
analyzed to

[Figure 4: STAF service types. A service dispatch layer inside the STAF daemon routes an incoming request to internal services, to external C/C++ services, and, via Java and REXX service proxies, to external Java and REXX services]
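The three-string request format described above (target system, service name, and the request itself) can be illustrated with a hypothetical dispatcher. This is a sketch of the idea only, not STAF's actual API; every name in it is invented, and remote forwarding is stubbed out.

```python
# Hypothetical sketch of three-string request routing: (system, service,
# request). Requests for the local system are dispatched to a registered
# service handler; anything else would be forwarded to the remote daemon.
LOCAL_SYSTEM = "local"
services = {}  # service name -> handler taking the request string


def register_service(name, handler):
    """Register a service handler, analogous to loading a STAF service."""
    services[name] = handler


def submit(system, service, request):
    """Route a request; all three parameters are strings."""
    if system != LOCAL_SYSTEM:
        # A real daemon would forward this over the network.
        raise NotImplementedError("forward to remote daemon on " + system)
    handler = services.get(service)
    if handler is None:
        raise KeyError("no such service: " + service)
    return handler(request)
```

Keeping everything as strings is what lets such a dispatcher stay thin: it needs no knowledge of any service's request syntax, only the service name.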
Designation: D1148–07a

Standard Test Method for Rubber Deterioration—Discoloration from Ultraviolet (UV) and Heat Exposure of Light-Colored Surfaces1

This standard is issued under the fixed designation D1148; the number immediately following the designation indicates the year of original adoption or, in the case of revision, the year of last revision. A number in parentheses indicates the year of last reapproval. A superscript epsilon (e) indicates an editorial change since the last revision or reapproval.

(This test method was prepared jointly by the Society of Automotive Engineers and ASTM International.)

1. Scope

1.1 This test method covers techniques to evaluate the surface discoloration of white or light-colored vulcanized rubber that may occur when subjected to UV or UV/visible exposure from specified sources under controlled conditions of relative humidity, or moisture, and temperature.

1.2 This test method also describes how to qualitatively evaluate the degree of discoloration produced under such conditions.

1.3 The term "discoloration" applies to a color change of the rubber sample, as distinguished from staining (see Note 1), which refers to a color change of a metal finish in contact with or adjacent to the rubber specimen.

1.4 The values stated in SI units are to be regarded as the standard. The values given in parentheses are for information only.

1.5 This standard does not purport to address all of the safety concerns, if any, associated with its use. It is the responsibility of the user of this standard to establish appropriate safety and health practices and determine the applicability of regulatory limitations prior to use.

NOTE 1—Tests for staining are covered by Test Methods D925.

2. Referenced Documents

2.1 ASTM Standards:2

D925 Test Methods for Rubber Property—Staining of Surfaces (Contact, Migration, and Diffusion)
D2244 Practice for Calculation of Color Tolerances and Color Differences from Instrumentally Measured Color Coordinates
D3182 Practice for Rubber—Materials, Equipment, and Procedures
for Mixing Standard Compounds and Preparing Standard Vulcanized Sheets
D3183 Practice for Rubber—Preparation of Product Pieces for Test Purposes from Products
G151 Practice for Exposing Nonmetallic Materials in Accelerated Test Devices that Use Laboratory Light Sources
G154 Practice for Operating Fluorescent Light Apparatus for UV Exposure of Nonmetallic Materials
G155 Practice for Operating Xenon Arc Light Apparatus for Exposure of Non-Metallic Materials

3. Summary of Test Method

3.1 Specimens to be tested for discoloration are exposed to UV or UV/visible radiation. The specimens shall include one or more control specimens of known discoloration characteristics.

3.2 After exposing the specimens to actinic radiation for specified periods of time, under controlled conditions of relative humidity, or moisture, and temperature, the degree of discoloration is rated against discoloration of the control specimens, which have been exposed simultaneously with the test specimens.

4. Significance and Use

4.1 The surface of white or light-colored vulcanized rubber articles, or vulcanized rubber covered with an organic finish, may discolor when exposed to conditions of humidity, or moisture, heat, and sunlight. This change in color of light-colored rubber surfaces is objectionable to the consumer.
4.2 Results obtained should be treated only as indicating the effect of irradiance from the specified source (either UVA-340 lamps or a xenon arc with a Daylight Filter) and not as equivalent to the result of any natural exposure, unless the degree of quantitative correlation has been empirically established for the material in question.

4.3 This test method may be used for producer-consumer acceptance, referee purposes, and research and development work.

1 This test method is under the jurisdiction of ASTM Committee D11 on Rubber and is the direct responsibility of Subcommittee D11.15 on Degradation Tests. Current edition approved May 1, 2007. Published June 2007. Originally approved in …. Last previous edition approved in 2007 as D1148–07.
2 For referenced ASTM standards, visit the ASTM website, or contact ASTM Customer Service at service@. For Annual Book of ASTM Standards volume information, refer to the standard's Document Summary page on the ASTM website.

Copyright © ASTM International, 100 Barr Harbor Drive, PO Box C700, West Conshohocken, PA 19428-2959, United States.

5. Apparatus

5.1 Fluorescent UV/Condensation Apparatus (Practice G154)—Use fluorescent UV test apparatus that conforms to the requirements defined in Practices G151 and G154.

5.1.1 Unless otherwise specified, the spectral power distribution (SPD) of the fluorescent lamp shall conform to the requirements of Table 1 in Practice G154 (UVA-340 Lamp). Refer to Fig. 1.

5.2 Xenon Arc Light Apparatus (Practice G155)—Use xenon arc test apparatus that conforms to the requirements defined in Practices G151 and G155.

5.2.1 Unless otherwise specified, the spectral power distribution (SPD) of the filtered xenon lamp shall conform to the requirements of Table 1 in Practice G155 (Xenon Arc with Daylight Filter). Refer to Fig. 2.

5.3 Color meter capable of measuring tristimulus colors for amber, blue, and green with or without automatic calculation of L a b or L*a*b* values. If without automatic calculations, a computer program should be available for this
calculation.

6. Test Specimen

6.1 The test specimen shall be prepared from a vulcanized production part or from a test slab prepared in accordance with Practices D3182 and D3183. The specimen shall be rectangular in shape, 62 by 12 mm (2.4 by 0.5 in.). If a specimen of this size cannot be prepared from a production part, a modification of the size may be agreed upon between the purchaser and the seller.

6.2 An unexposed file specimen of the same compound as that stated in 6.1 shall be prepared and reserved for color comparisons without being subjected to exposure.

7. Procedure

7.1 The two procedures (Fluorescent UV/Condensation and Xenon Arc) contain different types of exposure sources and test conditions and may produce different test results. They cannot be used interchangeably without supporting data that demonstrates equivalency of the procedures for the materials tested.

7.2 Refer to Table A3.1 in Practice G151 for the allowed operational fluctuations of the specified set points for irradiance, temperature, and relative humidity. If the actual operating conditions do not comply with the maximum allowable fluctuations in Table A3.1 after the equipment has stabilized, discontinue the test and correct the cause of the problem before continuing.

7.3 Specimens should be confined to an exposure area in which the irradiance is at least 90% of the irradiance at the center of the exposure area. Unless it is known that irradiance uniformity meets this requirement, use one of the procedures described in Practice G151, Section 5.1.4, to ensure equal radiant exposure on all specimens, or to compensate for differences within the exposure chamber. If the specimens do not completely fill the racks, fill the empty spaces with blank metal panels to maintain the test conditions within the chamber. The apparatus shall be operated continuously. However, if the test needs to be interrupted to perform routine maintenance or inspection, it should be during a dry period.

7.4 Procedure for Exposure in Fluorescent
UV/Condensation Apparatus (Practice G154)—Unless otherwise specified, operate the fluorescent UV test apparatus with UVA-340 lamps in accordance with Practice G154.

7.4.1 Use the following exposure cycle: Set the irradiance level to 0.77 W/(m²·nm) at 340 nm. Expose specimens to a continuous cycle of 8 h light at 60°C uninsulated black panel temperature followed by 4 h of condensation at 50°C uninsulated black panel temperature.

7.5 Procedure for Exposure in Xenon Arc Apparatus (Practice G155)—Unless otherwise specified, use the following operating conditions:

7.5.1 The xenon arc test apparatus shall be used with a Daylight Filter and conform to the spectral power distribution specifications in Practice G155.

[Fig. 1: Spectral power distributions of the UVA-340 lamp versus daylight]

7.5.2 Set the irradiance level at 0.55 W/(m²·nm) at 340 nm. Consult the manufacturer of the apparatus for equivalent broadband irradiance levels at 300 to 400 nm and 300 to 800 nm.

7.5.3 The default exposure cycle shall be 102 min light only followed by 18 min light plus either water spray on the front surface or immersion in water (refer to Note 2). The water spray temperature is typically 21 ± 5°C, but may be lower if ambient water temperature is low and a holding tank is not used to store purified water. The immersion water is kept at a constant temperature, which shall be less than 40°C.

NOTE 2—Water spray and immersion in water frequently produce different results. In the immersion technique, the test specimens are placed in a chamber that is periodically flooded with either recirculated or running water, which completely covers the specimens. The maximum temperature attained by a black colored specimen is determined with the black standard thermometer (BST) held under water on the same plane and distance from the surface as the test specimens. The immersion system is made from corrosion-resistant materials that do not contaminate the water.

7.5.4 Set the uninsulated Black Panel
Temperature (BPT) at 63 °C during the dry period of exposure to light. Consult the manufacturer of the apparatus for the equivalent insulated black panel temperature (Black Standard Temperature (BST)).
7.5.5 Relative humidity shall be set at 60 % during the dry period of exposure to light in xenon arc apparatus that provides for control of relative humidity.
7.5.6 The chamber air temperature shall be set at 44 °C in apparatus that provides for adjustment of the chamber air temperature.
7.6 One or more control specimens using rubber material of known discoloration characteristics shall be included.
7.7 Any change in color of a test specimen in relation to the original sample shall be considered discoloration.
7.7.1 The degree of discoloration can be judged visually to be greater or less than that of the control specimen and can be given a numerical rating based on an arbitrary scale of degree of discoloration, which may be agreed upon between the purchaser and the seller.
7.7.2 The change in color can be measured instrumentally using commercial color meters (refer to Note 3) for specimens prepared according to Section 6. Calculate the difference between an unexposed specimen and the exposed specimen as follows:

ΔE = E_U − E_E   (1)
ΔE* = E_U* − E_E*   (2)

where:
ΔE = difference between unexposed and exposed specimen,
E_U = unexposed specimen, and
E_E = exposed specimen.

NOTE 3—Some color meters calculate ΔE and others calculate ΔE*, while some may calculate both. Either expression is suitable for use in this test method, but the two must not be mixed. A complete equation can be found in Practice D 2244.
8. Report
8.1 The report shall include the following information:
8.1.1 Type of radiation used, either UVA-340 or filtered xenon arc lamp;
8.1.2 Exposure time in hours;
8.1.3 Date of test;
8.1.4 Identification of test and control specimens;
8.1.5 Size and shape of specimen(s), if not in accordance with the standard shape and size;
8.1.6 An estimate of the degree of discoloration according to a visual estimation, described in
7.7.1, or an instrumental evaluation according to 7.7.2.
9. Precision and Bias
9.1 A precision and bias study has not yet been prepared.

FIG. 2 Spectral Power Distributions of Xenon Arc with Daylight Filter Versus Daylight

9.2 Bias—Bias cannot be determined because no acceptable standard weathering reference materials are available.
10. Keywords
10.1 fluorescent UV; heat discoloration; rubber products; ultraviolet; ultraviolet light discoloration; UVA-340; xenon arc

APPENDIX
(Nonmandatory Information)
X1. COLOR METER
X1.1 Set up the color meter according to the directions supplied by the manufacturer for optimum performance. This will include any necessary calibrations.
X1.1.1 If the color meter requires a standard plaque, observe any precautions for plaque storage, cleanliness, and recalibration.
X1.1.2 Obtain amber, blue, and green tristimulus reflectance readings for each specimen to be measured, including the unexposed file specimen (which was not exposed to ultraviolet light).
X1.1.2.1 Color meters present data differently; however, L a b and L* a* b* data can be obtained from any color meter, either as part of the instrumental calculations or, if not, computer programs can be used to generate the necessary data.
X1.2 To assess the discoloration of a sample as compared to the unexposed file specimen, subtract the E_E or E_E* value of the exposed specimen from the E_U or E_U* value of the unexposed specimen. This difference, ΔE or ΔE*, is the value on which the discoloration evaluation is based. See 7.7.2.
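The subtraction described in 7.7.2 and X1.2 can be illustrated numerically. Below is a minimal sketch computing the CIELAB color difference ΔE* between an unexposed and an exposed specimen (the CIE76 Euclidean form given in Practice D 2244); the L*a*b* readings are made-up illustrative values, not data from the standard.

```python
import math

def delta_e_cielab(lab_unexposed, lab_exposed):
    """CIE76 color difference: Euclidean distance between two L*a*b* triples."""
    return math.sqrt(sum((u - e) ** 2 for u, e in zip(lab_unexposed, lab_exposed)))

# Hypothetical readings: (L*, a*, b*) for the unexposed and exposed specimens.
unexposed = (85.0, 2.0, 10.0)
exposed = (80.0, 4.0, 18.0)   # discolored: darker, redder, yellower

print(round(delta_e_cielab(unexposed, exposed), 2))  # → 9.64
```

A larger ΔE* means more discoloration relative to the unexposed file specimen; identical triples give ΔE* = 0.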
可靠性维修性标准术语 (Reliability and Maintainability Standard Terms)
中华人民共和国国家标准 (National Standard of the People's Republic of China) GB/T 3178-94

产品 item
修理的产品 repaired item
不修理的产品 non-repaired item
服务 service
规定功能 required function
时刻 instant of time
时间区间 time interval
持续时间 time duration
累积时间 accumulated time
量度 measure
工作 operation
修改(对产品而言) modification (of an item)
效能 effectiveness
固有能力 capability
耐久性 durability
可靠性 reliability
维修性 maintainability
维修保障性 maintenance support performance
可用性 availability
可信性 dependability
失效 failure
致命失效 critical failure
非致命失效 non-critical failure
误用失效 misuse failure
误操作失效 mishandling failure
弱质失效 weakness failure
设计失效 design failure
制造失效 manufacture failure
老化失效;耗损失效 ageing failure; wear-out failure
突然失效 sudden failure
渐变失效;漂移失效 gradual failure; drift failure
灾变失效 cataleptic failure
关联失效 relevant failure
非关联失效 non-relevant failure
独立失效 primary failure
从属失效 secondary failure
失效原因 failure cause
失效机理 failure mechanism
系统性失效;重复性失效 systematic failure; reproducible failure; repeat failure
完全失效 complete failure
退化失效 degradation failure
部分失效 partial failure
故障 fault
致命故障 critical fault
非致命故障 non-critical fault
重要故障 major fault
次要故障 minor fault
误用故障 misuse fault
误操作故障 mishandling fault
弱质故障 weakness fault
设计故障 design fault
制造故障 manufacturing fault
老化故障;耗损故障 ageing fault; wear-out fault
程序敏感故障 programme-sensitive fault
数据敏感故障 data-sensitive fault
完全故障;功能阻碍故障 complete fault; function-preventing fault
部分故障 partial fault
持久故障 persistent fault
间歇故障 intermittent fault
确定性故障 determinate fault
非确定性故障 indeterminate fault
潜在故障 latent fault
系统性故障 systematic fault
故障模式 fault mode
故障产品 faulty item
差错 error
失误 mistake
工作状态 operating state
不工作状态 non-operating state
待命状态 standby state
闲置状态;空闲状态 idle state; free state
不能工作状态 disabled state; outage
外因不能工作状态 external disabled state
不可用状态;内因不能工作状态 down state; internal disabled state
可用状态 up state
忙碌状态 busy state
致命状态 critical state
维修 maintenance
维修准则 maintenance philosophy
维修方针 maintenance policy
维修作业线 maintenance echelon; line of maintenance
维修约定级 indenture level (for maintenance)
维修等级 level of maintenance
预防性维修 preventive maintenance
修复性维修 corrective maintenance
受控维修 controlled maintenance
计划性维修 scheduled maintenance
非计划性维修 unscheduled maintenance
现场维修 on-site maintenance; in situ maintenance; field maintenance
非现场维修 off-site maintenance
遥控维修 remote maintenance
自动维修 automatic maintenance
逾期维修 deferred maintenance
基本的维修作业 elementary maintenance activity
维修工作 maintenance action; maintenance task
修理 repair
故障识别 fault recognition
故障定位 fault localization
故障诊断 fault diagnosis
故障修复 fault correction
功能核查 function check-out
恢复 restoration; recovery
监测 supervision; monitoring
维修的实体 maintenance entity
影响功能的维修 function-affecting maintenance
妨碍功能的维修 function-preventing maintenance
减弱功能的维修 function-degrading maintenance
不影响功能的维修 function-permitting maintenance
维修时间 maintenance time
维修人时 MMH; maintenance man-hour
实际维修时间 active maintenance time
预防性维修时间 preventive maintenance time
修复性维修时间 corrective maintenance time
实际的预防性维修时间 active preventive maintenance time
实际的修复性维修时间 active corrective maintenance time
未检出故障时间 undetected fault time
管理延迟(对于修复性维修) administrative delay
后勤延迟 logistic delay
故障修复时间 fault correction time
技术延迟 technical delay
核查时间 check-out time
故障诊断时间 fault diagnosis time
故障定位时间 fault localization time
修理时间 repair time
工作时间 operating time
不工作时间 non-operating time
需求时间 required time
无需求时间 non-required time
待命时间 standby time
闲置时间 idle time; free time
不能工作时间 disabled time
不可用时间 down time
累积不可用时间 accumulated down time
外因不能工作时间 external disabled time; external loss time
可用时间 up time
首次失效前时间 time to first failure
失效前时间 time to failure
失效间隔时间 time between failures
失效间工作时间 operating time between failures
恢复前时间 time to restoration; time to recovery
使用寿命 useful life
早期失效期 early failure period
恒定失效密度期 constant failure intensity period
恒定失效率期 constant failure rate period
耗损失效期 wear-out failure period
瞬时可用度 instantaneous availability
瞬时不可用度 instantaneous unavailability
平均可用度 mean availability
平均不可用度 mean unavailability
渐近可用度 asymptotic availability
稳态可用度 steady-state availability
渐近不可用度 asymptotic unavailability
稳态不可用度 steady-state unavailability
渐近平均可用度 asymptotic mean availability
渐近平均不可用度 asymptotic mean unavailability
平均可用时间 mean up time
平均累积不可用时间 mean accumulated down time
可靠度 reliability
瞬时失效率 instantaneous failure rate
平均失效率 mean failure rate
瞬时失效密度 instantaneous failure intensity
平均失效密度 mean failure intensity
平均首次失效前时间 MTTFF; mean time to first failure
平均失效前时间 MTTF; mean time to failure
平均失效间隔时间 MTBF; mean time between failures
平均失效间工作时间 MOTBF; mean operating time between failures
失效率加速系数 failure rate acceleration factor
失效密度加速系数 failure intensity acceleration factor
维修度 maintainability
瞬时修复率 instantaneous repair rate
平均修复率 mean repair rate
平均维修人时 mean maintenance man-hours
平均不可用时间 MDT; mean down time
平均修理时间 MRT; mean repair time
P-分位修理时间 P-fractile repair time
平均实际修复性维修时间 mean active corrective maintenance time
平均恢复前时间 MTTR; mean time to restoration
故障识别比 fault coverage
修复比 repair coverage
平均管理延迟 MAD; mean administrative delay
P-分位管理延迟 P-fractile administrative delay
平均后勤延迟 MLD; mean logistic delay
P-分位后勤延迟 P-fractile logistic delay
验证试验 compliance test
测定试验 determination test
实验室试验 laboratory test
现场试验 field test
耐久性试验 endurance test
加速试验 accelerated test
步进应力试验 step stress test
筛选试验 screening test
时间加速系数 time acceleration factor
维修性检验 maintainability verification
维修性验证 maintainability demonstration
观测数据 observed data
试验数据 test data
现场数据 field data
基准数据 reference data
冗余 redundancy
工作冗余 active redundancy
备用冗余 standby redundancy
失效安全 fail safe
故障裕度 fault tolerance
故障掩盖 fault masking
预计 prediction
可靠性模型 reliability model
可靠性预计 reliability prediction
可靠性分配 reliability allocation; reliability apportionment
故障模式及影响分析 FMEA; fault modes and effects analysis
故障模式影响及危害度分析 FMECA; fault modes, effects and criticality analysis
故障树分析 FTA; fault tree analysis
应力分析 stress analysis
可靠性框图 reliability block diagram
故障树 fault tree
状态转移图 state-transition diagram
应力模型 stress model
故障分析 fault analysis
失效分析 failure analysis
维修性模型 maintainability model
维修性预计 maintainability prediction
维修树 maintenance tree
维修性分配 maintainability allocation; maintainability apportionment
老练 burn in
可靠性增长 reliability growth
可靠性改进 reliability improvement
可靠性和维修性管理 reliability and maintainability management
可靠性和维修性保证 reliability and maintainability assurance
可靠性和维修性控制 reliability and maintainability control
可靠性和维修性大纲 reliability and maintainability programme
可靠性和维修性计划 reliability and maintainability plan
可靠性和维修性审计 reliability and maintainability audit
可靠性和维修性监察 reliability and maintainability surveillance
设计评审 design review
真实的 true
预计的 predicted
外推的 extrapolated
估计的 estimated
固有的 intrinsic; inherent
使用的 operational
平均的 mean
P-分位 P-fractile
瞬时的 instantaneous
稳态的 steady state
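Several of the measures defined in the glossary are related by simple formulas; for example, steady-state availability (稳态可用度) follows directly from the mean time to failure (MTTF) and the mean time to restoration (MTTR). A minimal sketch, with made-up numbers purely for illustration:

```python
def steady_state_availability(mttf_hours, mttr_hours):
    """Steady-state availability: long-run fraction of time in the up state."""
    return mttf_hours / (mttf_hours + mttr_hours)

# Hypothetical item: mean time to failure 1000 h, mean time to restoration 10 h.
print(round(steady_state_availability(1000.0, 10.0), 4))  # → 0.9901
```

The corresponding steady-state unavailability (稳态不可用度) is simply one minus this value.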
SAE TECHNICAL PAPER SERIES 2001-01-0779
An Accelerated Test Method for Automotive Wiper Systems
Sandrine Lesterlin and Frédéric Surdel
Valeo Wiper Systems
SAE 2001 World Congress, Detroit, Michigan, March 5-8, 2001
ISSN 0148-7191
Copyright 2001 Society of Automotive Engineers, Inc.
ABSTRACT
This paper presents an accelerated testing method that has been developed for wiper systems. The main objectives are to run the product to failure in a minimal time period and to reproduce the typical life-test failures. A better understanding of the correlation between the damage (leading to the failure of the product) and the operating conditions (speed, temperature, etc.) was the basis of this test method. In this paper, the main focus is on the arm and blade sub-system. Field-test and laboratory-test data dictated the requirements for the assembly of a test bench. This test bench allows us to control the influential experimental parameters and occupies a small floor area while testing a large number of samples. The displacement (motion) of the samples is created using pneumatic actuators, and the test conditions are computer controlled and monitored. The frequency, the duration, and the shaft torque are recorded during the tests. The large number of samples, combined with computer control, allows statistical treatment of the results and reliable calculations. Two methods have been adapted to ensure the reproduction of the field conditions on the test bench: strain gages and cracking varnish.
Strain gages and cracking varnish showed the correlation between the stresses and loads on a partial car (buck) and on the test bench.

INTRODUCTION
The wiper system must clean a specific area of the car windshield according to different international specifications. When products are validated, the tests are performed according to the customer's specifications. These specifications define a simulated load condition, which simulates high-speed and low-speed wiping on wet or dry glass, as well as simulated snow. At present, accelerated tests are being developed to reach the failure of the product in less time. Durability tests are performed on bucks with a customer-specific duration, which can be more than a hundred hours. This method also gives us the ability to make more complex products, to achieve a short product development period, and to make products reliable and durable. Accelerated testing and overstress testing are developed in the R&D department.
This paper describes the schedule we established for validation of the arm and blade subsystem according to the main goal, which is to decrease the project's duration at every step of the project. The work focused on the study of a mechanical product, with strain and strength calculations and particular attention to fatigue failures. A test bench has been built, and specific methodologies have been developed and adapted for wiper systems.

ACCELERATED TESTING DEVELOPMENT
OBJECTIVES
The main objectives are to:
- Decrease the validation test duration
- Decrease the size of the test facility
- Increase the number of samples tested

SCHEDULE
Based on published results [1, 2, 3, 4, 5], a schedule has been identified to apply accelerated testing to wiper products. The chart in Figure 1 is used in this paper for the development of the arm and blade fatigue bench.

WIPER SYSTEM DESCRIPTION
The arm and blade are the visible parts of the wiper system. The related components of a wiper arm and blade sub-system are shown in Figure 2.
The metallic vertebrae ensure the rigidity of the rubber, and the assembly of primary and secondary levers distributes the pressure applied by the spring along the length of the blade.
When the products are validated according to the customer specifications, the parameters taken into account are wet and dry glass, the speed of the system, and snow tests (in which the wipe angle is shortened). Tests are made on a buck, which is placed inside a small tile-lined room, two meters high and one meter square. A supervisory computer controls the validation test, varying the simulated wipe speed (motor speed) and the simulated rain intensity (mechanical load).

ACCELERATED TEST BENCH
All types of arm and blade subsystems can be tested on this bench (see Figure 3). Eight arms and blades can be tested, with all the experimental conditions being the same for the subsystems. The bench has three axes of adjustment (X, Y, Z). A protractor allows adjustment of the attack (rotation) angle. Each blade is held rigid by two parallel metal plates. The base of the arm is held in place by a shaft with a securing bolt, located on a solid holder assembly. This ensures the proper arm and blade spring tension necessary for proper simulation. The inner surface of the two parallel plates carries metal extensions, which apply pressure to the vertebrae, which are attached via the claws to the arm and blade subsystem, which holds the blade. The torque is measured on the shafts: torque transducers (0-10 N·m and 5-50 N·m) are fixed on the shafts that hold the arms. The torque values during the tests are imposed according to shaft torque measurements made on cars.
A lateral uniform force is imposed along the eight wipers by eight pneumatic actuators. The pressure is adjustable from 1 to 5 bar, and the arm frequency is adjustable from 0.25 to 6 Hz. During the test, the torque is recorded and can be displayed.
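The adjustable ranges quoted above (actuator pressure 1 to 5 bar, arm frequency 0.25 to 6 Hz) suggest a simple setpoint check in the supervising software. The sketch below is our illustration of such a check, not Valeo's actual control code:

```python
from dataclasses import dataclass

# Adjustable ranges quoted in the paper for the arm and blade fatigue bench.
PRESSURE_RANGE_BAR = (1.0, 5.0)
FREQUENCY_RANGE_HZ = (0.25, 6.0)

@dataclass
class BenchSetpoint:
    pressure_bar: float   # pneumatic actuator pressure
    frequency_hz: float   # arm oscillation frequency

    def validate(self):
        lo, hi = PRESSURE_RANGE_BAR
        if not lo <= self.pressure_bar <= hi:
            raise ValueError(f"pressure {self.pressure_bar} bar outside {lo}-{hi} bar")
        lo, hi = FREQUENCY_RANGE_HZ
        if not lo <= self.frequency_hz <= hi:
            raise ValueError(f"frequency {self.frequency_hz} Hz outside {lo}-{hi} Hz")
        return self

# A 1 Hz, 3 bar setpoint (the paper's typical test frequency) passes validation.
BenchSetpoint(pressure_bar=3.0, frequency_hz=1.0).validate()
```

An out-of-range request (say 9 bar) raises ValueError instead of being sent to the actuators.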
The values can be adjusted from 0 to 20 N·m, which corresponds to on-car measurements in the range of 0 to 200 km/h.

BENCH VALIDATION
Evaluation of damaged wiper systems from test cars and from bucks has revealed very clearly that the main damage is caused by fatigue and wear phenomena [6]. These two damage types are treated as separate issues; this paper focuses on fatigue damage.
To validate this bench as an accelerated test, we followed the standard accelerated-testing chart (see Figure 1). The reference information chosen for the operating-life damage results is the buck results. The damage studied is fatigue failure. To ensure the reproduction of this damage on the bench, the main stresses and loads had to be identified and calculated [6]. The external causes (speed, environment, etc.) must also be identified. To answer this fundamental question, experimental methods were investigated in order to apply them to wiper systems. The two selected methods are cracking varnish (a qualitative method) and strain gages (a quantitative method).

Cracking varnish
This qualitative method is helpful in identifying the main stress direction, as well as in showing the network of stresses and the direction of the main stresses. Cracking varnish is a sensitive mix of resins that reacts according to the temperature of the test room and the intensity of the stress. A test procedure was defined in order to obtain reliable and usable results. The wiper-system simulated load is chosen to mimic operation on the buck with or without rain. The actual tests were made on wet and dry glass, at high and low speed, and under snow-test conditions. To be reliable, the cracks in the varnish must be numerous, so the area where the varnish is deposited has to be large enough. The arm shank and the arm were selected for application of the varnish.
The varnish is hot when it is deposited, and the wiper system is installed on the buck before cooling (Figure 4). When the varnish is at 25 °C the test can begin, and the wiper system is turned on. The first cracks to appear correspond to the most intense loads (Figure 5).
In all the experimental conditions, we identified the same crack directions for each part of the product tested. The cracks are perpendicular to the main load directions. Consequently, we easily identified torsion on the arm shank and horizontal bending on the arm.
On the buck, we compared dry-glass, wet-glass and snow tests. The differences appear in the quantity and/or size of the cracks and in the time necessary for the creation of the cracks. The same crack characteristics were obtained for dry-glass and snow tests, but on wet glass very few cracks appeared, not enough to draw a conclusion.
Comparing the results obtained under the same conditions on the bench and on the buck, the results obtained on the bench are very close to those obtained in dry-glass and snow-test conditions. On the arm shank, torsion was observed on the bench and on the buck in dry-glass and snow-test conditions (Figure 6). On the arm, horizontal bending was obtained on the bench and on the buck in dry-glass and snow-test conditions (Figure 7). These experiments underlined that the highest stress occurs when the blade reverses.
The qualitative results obtained with the cracking varnish are consistent with the reproduction of the operating buck conditions on the bench, but the study needed to be pursued with quantitative results. Quantitative results can be obtained with strain gages attached according to the varnish crack results.

Strain gage measurement
According to the cracking-varnish results and to previous mechanical studies, three-direction gages were bonded on the arm.
The experiment is represented in Figure 8. The most representative strain gage location was chosen [6] (Figure 9), and the graphs are presented in Figure 10.
The graphs represent the stress as a function of time. The test frequency is 1 Hz, so 1 second corresponds to one wipe of the windshield from park position to park position. The stress data are calculated in the (x, y, z) frame shown in Figure 8. These curves allow fundamental conclusions for running tests on the arm and blade fatigue bench and for giving reliable results:
- The bench was not designed to simulate wet-glass conditions, which are considered less damaging. This is confirmed by the first two spectra, where the stress range of the wet-glass conditions appears very low.
- The spectra are not shown here, but high-speed tests gave the same results as low-speed conditions.
- The values of the stress on the bench depend on the initial position of the blades; the level of σ12 can be set very easily.
The study of these curves is made by calculating the amplitude and the maximum of the stress:

                 Stress amplitude (MPa)      Mean stress (MPa)
Buck, dry test   Δσ12 = 35, Δσ11 = 10        σI = 40.3, σII = -30.3
Bench            Δσ12 = 35, Δσ11 = 20        σI = 46.4, σII = -26.4

σ22 = 0 in all three conditions; this result is in accordance with the horizontal bending and with the torsion identified with the cracking varnish. The mean stresses σ12 and σ11 are equal (or in the same range for σ11) in the two tested conditions. All these results are consistent with the reproduction on the bench of the same operating-life conditions as on the buck.

Experiments
To validate the bench as an accelerated test, it is necessary to ensure the reliability of the results on the bench and to reproduce the same failures as on the buck for specification tests.
• Test method validation:
The arm used for all the tests always has the same characteristics. The products we change and compare are the blades.
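The amplitude and mean values tabulated above follow the usual fatigue definitions: the stress range Δσ = σmax − σmin and the mean stress σm = (σmax + σmin)/2 of one load cycle. A minimal sketch with a made-up stress history (not the paper's measured signal):

```python
def stress_range_and_mean(stress_mpa):
    """Return (range, mean) of a cyclic stress history in the fatigue sense."""
    s_max, s_min = max(stress_mpa), min(stress_mpa)
    return s_max - s_min, (s_max + s_min) / 2.0

# Hypothetical sigma-12 samples over one wipe cycle (MPa), illustrative only.
cycle = [46.0, 40.0, 11.0, 15.0, 46.0]
print(stress_range_and_mean(cycle))  # → (35.0, 28.5)
```

Applied to the gage signals, these two numbers per channel are what the table compares between buck and bench.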
The reference is over-designed, and other solutions were proposed:
Product 1: reference
Product 2: technology a
Product 3: technology b
Product 4: technology a+b
• Description of the tested conditions:
The eight systems have exactly the same test conditions. The test frequency is close to high speed, 1 Hz. Crack initiation is checked every day. If a crack is detected, the test is stopped. If the product does not fail after 1 × 10^6 cycles, which is considered longer than the car's life, the product can be censored; but failures are necessary to validate the bench, so some tests were run longer than 1 × 10^6 cycles.
• Results:
Product 1: reference
  2.6 × 10^6 cycles, no failures
Product 2: technology a
  1.3 × 10^6 cycles, 4 failures of primary lever
  1.9 × 10^6 cycles, 4 censored
Product 3: technology b
  5.5 × 10^5 cycles, 5 failures of technology b
  1 × 10^6 cycles, 3 censored
Product 4: technology a+b
  6.5 × 10^5 cycles, 3 failures of technology b
  1 × 10^6 cycles, 2 failures of technology b
  1.6 × 10^6 cycles, 2 failures of technology b
  1.6 × 10^6 cycles, 1 censored
To go further in the validation, two other products were tested on the bench with two different technologies b, called b1 and b2.
Product 5: technology b1
  2.8 × 10^5 cycles, 3 failures of technology b1
  4.5 × 10^5 cycles, 1 failure of technology b1
  7.2 × 10^5 cycles, 2 failures of technology b1
  1.4 × 10^6 cycles, 1 failure of technology b1
  1.4 × 10^6 cycles, 1 censored
Product 6: technology b2 (6 products)
  2.3 × 10^5 cycles, 4 failures of technology b2
  3.7 × 10^5 cycles, 1 failure of technology b2
  1.3 × 10^6 cycles, 1 censored
In all cases, the tests produced the same failure mode found during buck validation. The diagrams in Figure 11 and the table in Figure 12 allow easy comparison of the test results.
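Cycle counts like those above, with a mix of failures and censored units, are naturally summarized by a Weibull fit, which is the life distribution the paper later uses for its acceleration-factor calculation [1]. The sketch below is our illustration, not the paper's code: it solves the standard maximum-likelihood equation for the Weibull shape parameter with right-censored data by bisection, using product 5's cycle counts.

```python
import math

def weibull_mle_censored(failures, censored, lo=0.05, hi=20.0, iters=200):
    """Fit Weibull shape (beta) and scale (eta) by MLE with right censoring.

    Bisects the standard shape equation
        sum(t^b ln t) / sum(t^b) - 1/b - mean(ln t_fail) = 0   (sums over all t),
    then sets eta = (sum(t^b) / r)^(1/b), with r the number of failures.
    """
    data = failures + censored
    mean_ln_fail = sum(math.log(t) for t in failures) / len(failures)

    def g(b):
        den = sum(t ** b for t in data)
        num = sum(t ** b * math.log(t) for t in data)
        return num / den - 1.0 / b - mean_ln_fail

    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    beta = 0.5 * (lo + hi)
    eta = (sum(t ** beta for t in data) / len(failures)) ** (1.0 / beta)
    return beta, eta

# Product 5 (technology b1): 7 failures, 1 unit censored at 1.4e6 cycles.
failures = [2.8e5, 2.8e5, 2.8e5, 4.5e5, 7.2e5, 7.2e5, 1.4e6]
censored = [1.4e6]
beta, eta = weibull_mle_censored(failures, censored)
```

The fitted shape beta and characteristic life eta (in cycles) then support the kind of comparison drawn in Figures 11 and 12.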
A classification of the resistance of the products appears clearly, and the scatter of the results is underlined, in accordance with production-sample variation characteristics, which follow statistical laws [1].

ACCELERATED TEST PARAMETERS
The failures we work on are typically fatigue failures. Under these conditions, the main parameter that can very simply decrease the test duration is the test frequency [7]. The decision taken was to increase the frequency. First, a check had to be made to ensure that increasing the frequency would not disturb the stress variations during the test. So strain gage measurements were made at the same location as for the validation, at 1, 2, 3 and 4 Hz (cf. Figure 8).
When the frequency is increased above 2 Hz (Figure 13), the inertia of the metal parts that hold the blade becomes too high in comparison with the amplitude of their travel and their speed. That can explain why the square waveform becomes triangular and the stress range decreases. At 2 Hz, no modification of the signal is observed. So, to validate an accelerated parameter, the frequency of 2 Hz was chosen and checked by tests made on product 2.
Product 2: technology a at 1 Hz
  1.3 × 10^6 cycles, 4 failures of primary lever (361 h)
  1.9 × 10^6 cycles, 4 censored (528 h)
Product 2: technology a at 2 Hz
  1.5 × 10^6 cycles, 1 failure of primary lever (208 h)
  2.2 × 10^6 cycles, 2 failures of primary lever (305 h)
  2.7 × 10^6 cycles, 1 failure of primary lever (375 h)
  3 × 10^6 cycles, 4 censored (416 h)
The same failures are obtained: increasing the frequency from 1 Hz to 2 Hz was found to have no effect on the failure modes produced. As shown in Figure 14, where the failures of product 2 at 1 and 2 Hz are plotted as a function of time, the differences in test time are not very large, but further tests have to be made to pursue the influence of frequency.
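From the product 2 durations above, a naive time-acceleration estimate is simply the ratio of mean hours to failure at the two frequencies, ignoring the censored units. This is our simplification for illustration, not the paper's calculation, and it comes out somewhat below the Weibull/Eyring value of 1.4 reported in the paper:

```python
def naive_time_acceleration(hours_at_1hz, hours_at_2hz):
    """Ratio of mean hours-to-failure at 1 Hz vs 2 Hz, ignoring censored units."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(hours_at_1hz) / mean(hours_at_2hz)

# Product 2 failure times in hours (4 failures at each frequency).
failures_1hz = [361, 361, 361, 361]
failures_2hz = [208, 305, 305, 375]
print(round(naive_time_acceleration(failures_1hz, failures_2hz), 2))  # → 1.21
```

Accounting for censoring and fitting a life-stress model, as the paper does, shifts the estimate upward.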
For the Weibull life distribution with an Eyring stress relationship, the calculated acceleration factor is 1.4 [1].

CONCLUSION
The arm and blade fatigue bench is a helpful method, first, for making comparative tests and detecting the product's weak points, used as an overstress test. The correlation of failures with buck results and the stress and strain studies have shown that, for fatigue failures, the bench can operate as an accelerated test. The next step will be the same study with free-play measurements, in order to be able to perform accelerated testing for wear phenomena.

REFERENCES
1. Wayne Nelson, "Accelerated Testing", Wiley Interscience.
2. Thermotron Industries, "Fundamentals of Accelerated Testing", 1998.
3. ASTE, "Le rôle des essais dans la maîtrise de la fiabilité".
4. SIA congress, "Fiabilité expérimentale", France, 2000 May 16.
5. Symposium 2000 tutorial notes, "Annual Reliability and Maintainability Symposium", Los Angeles, California, USA, 2000 Jan 24-27.
6. F. Surdel, Intermediate PhD report, 2000 June.
7.
Claude Bathias and Jean-Paul Baïlon, "La fatigue des matériaux et des structures", second edition, Ed. Hermès, 1997.

Figure 1: Standard accelerated testing chart
Figure 2: Schema of a wiper system
Figure 3: Arm and blade test bench
Figure 4: Installation of the experiment
Figure 5: Example of cracks underlining the reversal movement of the wiper system
Figure 6: Torsion on the bench (1) and on the buck (2) in dry-glass and snow-test conditions, arm shank (bending around the x axis)
Figure 7: Horizontal bending on the bench (1) and on the buck (2) in dry-glass and snow-test conditions, arm (horizontal flexion parallel to y)
Figure 8: Picture of the experiment on the buck
Figure 9: Strain gage location
Figure 10: Plots of wiper-system instrumentation with gages (stress in MPa versus time; panels: high speed, wet glass and high speed, dry glass)
Figure 11: Diagrams showing the repartition of the failures according to the product reference
Figure 12: Average of the products' failures in cycles:

Sample            Product 1  Product 2  Product 3  Product 4  Product 5  Product 6
1                 censored   1.30E+06   5.50E+05   6.50E+05   2.80E+05   2.30E+05
2                 censored   1.30E+06   5.50E+05   6.50E+05   2.80E+05   2.30E+05
3                 censored   1.30E+06   5.50E+05   6.50E+05   2.80E+05   2.30E+05
4                 censored   1.30E+06   5.50E+05   1.00E+06   4.50E+05   2.30E+05
5                 censored   censored   5.50E+05   1.00E+06   7.20E+05   3.70E+05
6                 censored   censored   censored   1.60E+06   7.20E+05   censored
7                 censored   censored   censored   1.60E+06   1.40E+06
8                 censored   censored   censored   censored   censored
Failed samples    0          4          5          7          7          5
Average failures             1.30E+06   5.50E+05   1.02E+06   5.90E+05   2.58E+05

Figure 13: Reference strain curves obtained on the bench at 1, 2, 3 and 4 Hz
Figure 14: Comparison of the failures at 1 and at 2 Hz on product 2