NERV a parallel processor for standard genetic algorithms
- 格式:pdf
- 大小:105.48 KB
- 文档页数:8
PANalytical X-Ray Diffractometer Standard Operating ProcedureV ersion 21 (November 2012)Training Rules:First time: watch trainer and take notesSecond time: do by yourself with supervision following your notesThird time: do by yourself with supervision without help (Dr. Bykov must approve the authorization for independent use)After the third time, you are completely independent and responsible for the instrument.Operating Rules:1. Only authorized people can use XRD.2. Y ou must sign up for time on the XRD using the online system through Forum:http://134.74.50.69/ccnycores/main.php3. All samples must be recorded in XRD logbook including the date, time, user,powder, sample name, program name, and filename.4. If you have any problems, please DO NOT troubleshoot yourself - Call: Alexey Bykov212-650-5548 (Laboratory) or646-725-0270 (cell phone)Outline:1. Turn on the instrument2. Tube breeding3. Make a new program4. Typical Operation with fixed holder5. Typical Operation with spinner holder6. Export data7. Turn off the instrumentAppendix1.Slit selection2.PDF4+ database_____________________________________________________________________1.Turn on the Instrument:1.1Press power switch on the water recirculator to turn on cooling water. (Make sure pumpselection switch is turned to right side.) Cooling water valves on the wall must be alwayson.1.2If the program is already ON, but the generator status is OFF (Tension = 0 kV and Current = 0 mA) skip steps1.2-1.5 and go to step 1.6. If the program is ON and the generator status is also ON (Tension > 20 kV, Current > 5 mA) go to step 2.3.1.31.41.5sample changer and/or rotate your sample during scanning procedure (To switch sample1.6and wait until energyreaches to 30 kV and 10 mA .2.Tube BreedingIf X-rays have been turned off for an extended period of time, tube breedingisnecessary.Check the log book first to see when last time the instrument used was.Breed at normal speed if the instrument was left idle for 100 hours or more (breed takes about 30 minutes). Breed at fast speed if the instrument was used recently (within the last 100 hours). In this case, tube breeding takes about 6 min.2.12.2Tension during breeding procedure goes up to 60 kV then down to 40 kV. After2.3Stage Flate Samples and PIXcel detector 3.To Make a New Program:3.1.3.2-----the parameters of PIXcel detector : Active length and Number of active channels--size, time per step (or scan speed). Based on our tests of standard samples we recommend for regular measurements step size ~0.05-0.01o and scan speed ~0.1-0.2o/sec. The total scan time usually should not exceed 10 min.(5 min is optimal total time for 10o-60o Two Theta range)3.3 Save experiment-For previous saved program with XRD already on you may start here4. Typical Operation with Fixed Holder4.1.LoadXRD sample slides on the fixed sample holder.4.2.appropriate.4.3.program.4.4.4.5.4.6.Reflection Transmission Spinner with SampleChanger and PIXcel Detector5.Typical Operation with Spinner Holder5.1.TThe Fixed Stage is mounted to the base by four screws. Unscrew all of them and carefully remove the stage. Take the Spinner Holder and attach to the base. Mount the holder using three screws only.-connect the instrument--Move Sample Changer from corner to center position This will be done by XRD facility manager only-When Sample Changer will be fixed on center position it’s status will be changed fromIf there is no changes in Sample Changer status occurred,5.1. Load your sample you on position 1(top) of the Sample Changer.5.2.––5.3.5.4.5.5.5.6.scanning. After copmpleting measurements the Changer will unload spinner holder placing your sample back to position 1.5To export data:1.2.3.file.4.data format, such as: RD, UDF, DAT…etc.Data saved in C:\X’Pert Data\ automatically after running instrument6To Turn off the Instrument:1.2.Do not turn off the key.3. Turn off the cooling water switch on chiller (recirculator).Do not close the software program_____________________________________________________________________ Appendix.1. Slit selection for proportional detector .For general scanning:Incident beam (left) Detector (right)For low angle scanning:_____________________________________________________________________1. PDF4+ (ICDD DD view software) XRD database2.3.4.Select the corresponding elements in the table (Ex: Zn and O)5.6.。
März 2012DEUTSCHE NORMNormenausschuss Werkzeuge und Spannzeuge (FWS) im DINPreisgruppe 9DIN Deutsches Institut für Normung e. V. · Jede Art der Vervielfältigung, auch auszugsweise, nur mit Genehmigung des DIN Deutsches Institut für Normung e. V., Berlin, gestattet.ICS 25.140.30DDIN 3129Schraubwerkzeuge –Steckschlüsseleinsätze mit Innenvierkant für Sechskantschrauben,maschinenbetätigt und Zubehör –Maße, Ausführung und PrüfdrehmomenteAssembly tools for screws and nuts –Square drive socket wrenches, power-driven and accessories –Technical specifications and test torquesOutils de manoeuvre pour vis et écrous –Douilles à carré conducteur, à machine et accessoires –Spécifications techniques et couples d’essai©Ersatz fürDIN 3129:1987-02Gesamtumfang 14 SeitenDIN 3129:2012-032Inhalt SeiteVorwort ........................................................................................................................................................... 3 1 Anwendungsbereich ........................................................................................................................ 4 2 Normative Verweisungen ................................................................................................................. 4 3 Maße, Bezeichnung .......................................................................................................................... 5 3.1 Steckschlüsseleinsätze .................................................................................................................... 5 3.2 Verbindungsstifte und O-Ringe ....................................................................................................... 8 4 Werkstoff ......................................................................................................................................... 10 5 Härte ................................................................................................................................................. 10 6 Ausführung ...................................................................................................................................... 11 7 Drehmomentprüfung ...................................................................................................................... 11 8Kennzeichnung (12)Anhang A (informativ) Erläuterungen ........................................................................................................ 13 Literaturhinweise .. (14)BilderBild 1 — Steckschlüsseleinsätze ................................................................................................................. 5 Bild 2 — Verbindungsstift ............................................................................................................................. 9 Bild 3 — O-Ring . (9)TabellenTabelle 1 — Steckschlüsseleinsätze mit in DIN ISO 272 enthaltenen Schlüsselweiten ......................... 6 Tabelle 2 — Steckschlüsseleinsätze mit in DIN ISO 272 nicht enthaltenen Schlüsselweiten ............... 8 Tabelle 3 — Verbindungsstifte und O-Ringe für Steckschlüsseleinsätze ............................................. 10 Tabelle 4 — Rockwellhärten für maschinenbetriebene Steckschlüsseleinsätze in Abhängigkeitvon den Antriebsvierkanten und Schlüsselweiten s ................................................................... 10 Tabelle 5 — Prüfdrehmoment und Höhe des Prüfbolzens ...................................................................... 11 Tabelle A.1 — Antriebszuschläge .. (13)DIN 3129:2012-033VorwortDiese Norm wurde vom Arbeitsausschuss NA 121-05-01 AA …Schraubwerkzeuge, Fügewerkzeuge“ des Normenausschusses Werkzeuge und Spannzeuge (FWS) erarbeitet. ÄnderungenGegenüber DIN 3129:1987-02 wurden folgende Änderungen vorgenommen:a) im Anwendungsbereich wurde die Referenznummer 2 2 02 01 0 nach ISO 1703 ergänzt; b) Innenvierkant der Nenngröße 16 mm wurde gestrichen;c) nicht in DIN ISO 272 enthaltene Schlüsselweiten aus Tabelle 1 wurden in eine neue Tabelle 2 überführt; d) Anforderungen an die Härte wurden neu aufgenommen;e) Festlegungen für eine Drehmomentprüfung wurden neu aufgenommen; f)die Norm wurde redaktionell überarbeitet.Frühere AusgabenDIN 3129: 1968-04, 1972-05, 1976-04, 1982-11, 1987-02DIN 3129:2012-0341 AnwendungsbereichDiese Norm gilt für maschinenbetätigte Steckschlüsseleinsätze mit Innenvierkant für Sechskantschrauben, die in ISO 1703 unter der Referenznummer 2 2 02 01 0 gelistet sind.2 Normative VerweisungenDie folgenden zitierten Dokumente sind für die Anwendung dieses Dokuments erforderlich. Bei datierten Verweisungen gilt nur die in Bezug genommene Ausgabe. Bei undatierten Verweisungen gilt die letzte Ausgabe des in Bezug genommenen Dokuments (einschließlich aller Änderungen). DIN 3121, Verbindungsvierkante für maschinenbetätigte SchraubwerkzeugeDIN ISO 691, Schraubwerkzeuge — Schlüsselweiten-Toleranzen für Schrauben - und SteckschlüsselDIN EN ISO 6508-1, Metallische Werkstoffe — Härteprüfung nach Rockwell — Teil 1: Prüfverfahren (Skalen A, B, C, D, E, F, G, H, K, N, T)ISO 1703, Assembly tools for screws and nuts — Designation and nomenclatureISO 1711-2, Assembly tools for screws and nuts — Technical specifications — Part 2: Machine-operated sockets ("impact")DIN 3129:2012-0353 Maße, Bezeichnung3.1 SteckschlüsseleinsätzeBild 1 zeigt Beispiele für Steckschlüsseleinsätze. Die Maße der Steckschlüsseleinsätze müssen Tabelle 1 oder Tabelle 2 entsprechen. Nicht angegebene Einzelheiten sind zweckentsprechend auszuführen. Die dargestellten unterschiedlichen Formen ergeben sich aus den Durchmessern d 1 und d 2.Maße in MillimeterLegende 1 Innenvierkant nach DIN 3121aEinsenkung mindestens bis auf EckenmaßBild 1 — SteckschlüsseleinsätzeBezeichnung eines Steckschlüsseleinsatzes mit Doppelsechskant (D) von Schlüsselweite 18=s mm und Innenvierkant von Nenngröße 12,5:Einsatz DIN 3129 — D 18 × 12,5Bezeichnung eines Steckschlüsseleinsatzes mit Sechskant (S) von Schlüsselweite 18=s mm und Innenvierkant von Nenngröße 12,5:Einsatz DIN 3129 — S 18 × 12,5DIN 3129:2012-036 Tabelle 1 — Steckschlüsseleinsätze mit in DIN ISO 272 enthaltenen SchlüsselweitenMaße in MillimeterDIN 3129:2012-03 Tabelle 1(fortgesetzt)7DIN 3129:2012-038 Tabelle 2 — Steckschlüsseleinsätze mit in DIN ISO 272 nicht enthaltenen SchlüsselweitenMaße in Millimeter3.2 Verbindungsstifte und O-RingeVerbindungsstifte dienen in Kombination mit O-Ringen der sicheren Verbindung der Steckschlüsseleinsätze mit dem Antriebsvierkant des motorischen Schraubers, siehe Bild 2 und Bild 3 sowie Tabelle 3.DIN 3129:2012-039Maße in Millimetera) Verbindungsstift bei Innenvierkant 6,3 bis 25b) Verbindungsstift bei Innenvierkant 40 und 63LegendeaAuflageebene des Stiftes im Einsatz am Grund der Ringnut 1 außermittiger SchwerpunktBild 2 — VerbindungsstiftSchwerpunktabstand mindestens:()()() − ⋅−+− ⋅−+⋅− ⋅−−+=141242122667267167126672112d b d d d d l d d l d b d d l l l (1)Bezeichnung eines Verbindungsstiftes von Durchmesser 37=d mm und Länge 301=l mm:Stift DIN 3129 — 3 × 30Maße in MillimeterBild 3 — O-RingBezeichnung eines O-Ringes von Durchmesser 284=d mm:Ring DIN 3129 — 28DIN 3129:2012-0310Tabelle 3 — Verbindungsstifte und O-Ringe für Steckschlüsseleinsätze4 WerkstoffSteckschlüsseleinsatz und Verbindungsstift: Legierter Stahl; Sorte nach Wahl des Herstellers. O-Ring: gummielastischer ölbeständiger Werkstoff (nach Wahl des Herstellers).5 HärteDie Härteprüfung muss nach DIN EN ISO 6508-1 durchgeführt werden.Die Steckschlüsseleinsätze und die Verbindungsstifte müssen gehärtet und vergütet sein und eine Rockwellhärte HRC nach Tabelle 4 aufweisen.Tabelle 4 — Rockwellhärten für maschinenbetriebene Steckschlüsseleinsätze in Abhängigkeit von denAntriebsvierkanten und Schlüsselweiten s6 AusführungMit Sechskant oder Doppelsechskant.7 DrehmomentprüfungDie Drehmomentprüfung muss nach ISO 1711-2 durchgeführt werden.Für Steckschlüsseleinsätze nach Tabelle 1 gelten die Prüfdrehmomente und die Höhe des Prüfbolzens nach Tabelle 5.Tabelle 5 — Prüfdrehmoment und Höhe des PrüfbolzensTabelle 5 (fortgesetzt)Für Steckschlüsseleinsätze nach Tabelle 2 sind die Prüfdrehmomente zu interpolieren und die Maße des Prüfbolzens nach Gleichung (2) bzw. nach Gleichung (3) zu berechnen.min 1,1t h ×=(2)Dabei isthdie Höhe des Prüfbolzens nach Tabelle 5;t min die Tiefe des Schlüsselprofils nach Tabelle 1 bzw. Tabelle 2.1,13min ×=s e(3)Dabei iste min das Eckenmaß des Prüfbolzens nach Tabelle 5; sdie Schlüsselweite (Nennwert nach Tabelle 1 bzw. Tabelle 2).8 KennzeichnungDie Steckschlüsseleinsätze sind mit der Schlüsselweite, der Nenngröße des Innenvierkantes und dem Namen oder Zeichen des Herstellers zu kennzeichnen. Die DIN-Nummer muss, sofern sie nicht auf dem Steckschlüsseleinsatz angebracht ist, zumindest auf der handelsüblichen kleinsten Verpackung angegeben sein.Anhang A(informativ)ErläuterungenDie maschinenbetätigten Steckschlüsseleinsätze haben ein durchgehendes Stiftloch und eine Ringnut, die einen O-Ring zum Festhalten des Verbindungsstiftes aufnehmen kann. Die Maße und Werkstoffanforderungen der Verbindungsstifte und O-Ringe wurden aus sicherheitstechnischen Gründen in diese Norm aufgenommen.Tabelle 1 dieser Norm stimmt mit Ausnahme des zusätzlich aufgenommenen Maßes d3mit der Internationalen Norm ISO 2725-2 überein. Die Innenvierkante entsprechen zugleich der Internationalen Norm ISO 1174-2 über Verbindungsvierkante für maschinenbetriebene Steckschlüsseleinsätze.Die Steckschlüsseleinsätze dürfen am Betätigungsende nur einen bestimmten Raum beanspruchen. Die Durchmesser d1sind übereinstimmend mit der internationalen Norm ISO 2725-2 rechnerisch wie folgt festgelegt:=25⋅,1asd+1(A.1) maxDabei istd1max der maximale Durchmesser d1 des Steckschlüsseleinsatzes;s die Schlüsselweite des Steckschlüsseleinsatzes;a der Antriebszuschlag nach Tabelle A.1.Tabelle A.1 — AntriebszuschlägeDie Durchmesser d2am Antriebsende sind weitgehend grob gestuft und abweichend von ISO 2725-2 als Maße mit Toleranz angegeben, um die Anzahl der zum Befestigen benötigten Verbindungsstifte und O-Ringe so klein wie möglich zu halten.Für Steckschlüsseleinsätze mit Vierkant 40 mm und 63 mm ist die Verwendung von Verbindungsstiften mit Bund und außermittiger Schwerpunktlage erforderlich, um sicherzustellen, dass sich der Stift beim Betätigen nicht löst.Die Werte für die Sechskantiefe t der Steckschlüsseleinsätze betragen dt7,0min= (d = Gewinde-Nenndurch-messer) um einer Beschädigung der Schraubenauflagefläche vorzubeugen.LiteraturhinweiseDIN 3124, Steckschlüsseleinsätze mit Innenvierkant für Schrauben mit Sechskant, handbetätigtDIN ISO 272, Mechanische Verbindungselemente — Schlüsselweiten für Sechskantschrauben und -mutternISO 1174-2, Assembly tools for screws and nuts — Driving squares — Part 2: Driving squares for power socket toolsISO 2725-2, Assembly tools for screws and nuts — Square drive sockets — Part 2: Machine-operated sockets ("impact")。
置很简单,即使是设置较大的5.1和7.1系统也只需几分钟。
(来自W i SA协会)儒卓力提供具有最高功率密度和效率的英飞凌O pti M O S T M功率M O S F ET英飞凌O p ti M O S TM3和5同类最佳(Bi C)功率M O S F E T采用节省空间的S up e r S O8封装,与先前型号相比,具有更高的功率密度和稳健性,从而降低系统成本和提升整体性能。
由于具有最低的导通电阻,这些Bi C M O S F E T 能够以良好的性价比降低损耗。
此外,通管壳的较低热阻提供了出色的散热性能,从而带来更低的满载运作温度。
较低的反向恢复电荷通过显著减小电压过冲来提高系统可靠性,从而最大限度地减少对缓冲电路的需求,同时也减少了工程成本和工作量。
(来自儒卓力)市场要闻英特尔联合百度,共同开发N e r v ana神经网络训练处理器在近日举行的百度AI开发者大会上,英特尔公司副总裁兼人工智能产品事业部总经理Na v ee n Ra o 宣布,英特尔正与百度合作开发英特尔誖Ne r v a n a TM 神经网络训练处理器(NNP-T)。
这一合作包括全新定制化加速器,以实现极速训练深度学习模型的目的。
英特尔在人工智能方面提供优越的硬件选择,并通过软件来最大化释放硬件的性能,从而帮助客户无论数据多么复杂或位于哪里都可以自如运行AI应用。
此次NNP-T是一类全新开发的高效深度学习系统硬件,能够加速大规模的分散训练。
与百度的密切合作能够确保英特尔开发部门始终紧跟客户对训练硬件的最新需求。
此外,鉴于数据安全对于用户极其重要,英特尔还与百度共同致力于打造基于英特尔软件保护扩展(SG X)技术的M esa T EE*———内存安全功能即服务(F aa S)计算框架。
(来自英特尔)Arm与中国联通成功部署物联网设备管理平台解决方案日前,A r m宣布与中国联通旗下联通物联网有限责任公司(物联网公司)的合作取得了最新进展,A r m已成功部署基于A r m Peli on设备管理平台与M b ed O S操作系统所打造的全新物联网平台,加速推进和完善中国物联网生态发展。
DIRECTIVE NUMBER: CPL 02-00-150 EFFECTIVE DATE: April 22, 2011 SUBJECT: Field Operations Manual (FOM)ABSTRACTPurpose: This instruction cancels and replaces OSHA Instruction CPL 02-00-148,Field Operations Manual (FOM), issued November 9, 2009, whichreplaced the September 26, 1994 Instruction that implemented the FieldInspection Reference Manual (FIRM). The FOM is a revision of OSHA’senforcement policies and procedures manual that provides the field officesa reference document for identifying the responsibilities associated withthe majority of their inspection duties. This Instruction also cancels OSHAInstruction FAP 01-00-003 Federal Agency Safety and Health Programs,May 17, 1996 and Chapter 13 of OSHA Instruction CPL 02-00-045,Revised Field Operations Manual, June 15, 1989.Scope: OSHA-wide.References: Title 29 Code of Federal Regulations §1903.6, Advance Notice ofInspections; 29 Code of Federal Regulations §1903.14, Policy RegardingEmployee Rescue Activities; 29 Code of Federal Regulations §1903.19,Abatement Verification; 29 Code of Federal Regulations §1904.39,Reporting Fatalities and Multiple Hospitalizations to OSHA; and Housingfor Agricultural Workers: Final Rule, Federal Register, March 4, 1980 (45FR 14180).Cancellations: OSHA Instruction CPL 02-00-148, Field Operations Manual, November9, 2009.OSHA Instruction FAP 01-00-003, Federal Agency Safety and HealthPrograms, May 17, 1996.Chapter 13 of OSHA Instruction CPL 02-00-045, Revised FieldOperations Manual, June 15, 1989.State Impact: Notice of Intent and Adoption required. See paragraph VI.Action Offices: National, Regional, and Area OfficesOriginating Office: Directorate of Enforcement Programs Contact: Directorate of Enforcement ProgramsOffice of General Industry Enforcement200 Constitution Avenue, NW, N3 119Washington, DC 20210202-693-1850By and Under the Authority ofDavid Michaels, PhD, MPHAssistant SecretaryExecutive SummaryThis instruction cancels and replaces OSHA Instruction CPL 02-00-148, Field Operations Manual (FOM), issued November 9, 2009. The one remaining part of the prior Field Operations Manual, the chapter on Disclosure, will be added at a later date. This Instruction also cancels OSHA Instruction FAP 01-00-003 Federal Agency Safety and Health Programs, May 17, 1996 and Chapter 13 of OSHA Instruction CPL 02-00-045, Revised Field Operations Manual, June 15, 1989. This Instruction constitutes OSHA’s general enforcement policies and procedures manual for use by the field offices in conducting inspections, issuing citations and proposing penalties.Significant Changes∙A new Table of Contents for the entire FOM is added.∙ A new References section for the entire FOM is added∙ A new Cancellations section for the entire FOM is added.∙Adds a Maritime Industry Sector to Section III of Chapter 10, Industry Sectors.∙Revises sections referring to the Enhanced Enforcement Program (EEP) replacing the information with the Severe Violator Enforcement Program (SVEP).∙Adds Chapter 13, Federal Agency Field Activities.∙Cancels OSHA Instruction FAP 01-00-003, Federal Agency Safety and Health Programs, May 17, 1996.DisclaimerThis manual is intended to provide instruction regarding some of the internal operations of the Occupational Safety and Health Administration (OSHA), and is solely for the benefit of the Government. No duties, rights, or benefits, substantive or procedural, are created or implied by this manual. The contents of this manual are not enforceable by any person or entity against the Department of Labor or the United States. Statements which reflect current Occupational Safety and Health Review Commission or court precedents do not necessarily indicate acquiescence with those precedents.Table of ContentsCHAPTER 1INTRODUCTIONI.PURPOSE. ........................................................................................................... 1-1 II.SCOPE. ................................................................................................................ 1-1 III.REFERENCES .................................................................................................... 1-1 IV.CANCELLATIONS............................................................................................. 1-8 V. ACTION INFORMATION ................................................................................. 1-8A.R ESPONSIBLE O FFICE.......................................................................................................................................... 1-8B.A CTION O FFICES. .................................................................................................................... 1-8C. I NFORMATION O FFICES............................................................................................................ 1-8 VI. STATE IMPACT. ................................................................................................ 1-8 VII.SIGNIFICANT CHANGES. ............................................................................... 1-9 VIII.BACKGROUND. ................................................................................................. 1-9 IX. DEFINITIONS AND TERMINOLOGY. ........................................................ 1-10A.T HE A CT................................................................................................................................................................. 1-10B. C OMPLIANCE S AFETY AND H EALTH O FFICER (CSHO). ...........................................................1-10B.H E/S HE AND H IS/H ERS ..................................................................................................................................... 1-10C.P ROFESSIONAL J UDGMENT............................................................................................................................... 1-10E. W ORKPLACE AND W ORKSITE ......................................................................................................................... 1-10CHAPTER 2PROGRAM PLANNINGI.INTRODUCTION ............................................................................................... 2-1 II.AREA OFFICE RESPONSIBILITIES. .............................................................. 2-1A.P ROVIDING A SSISTANCE TO S MALL E MPLOYERS. ...................................................................................... 2-1B.A REA O FFICE O UTREACH P ROGRAM. ............................................................................................................. 2-1C. R ESPONDING TO R EQUESTS FOR A SSISTANCE. ............................................................................................ 2-2 III. OSHA COOPERATIVE PROGRAMS OVERVIEW. ...................................... 2-2A.V OLUNTARY P ROTECTION P ROGRAM (VPP). ........................................................................... 2-2B.O NSITE C ONSULTATION P ROGRAM. ................................................................................................................ 2-2C.S TRATEGIC P ARTNERSHIPS................................................................................................................................. 2-3D.A LLIANCE P ROGRAM ........................................................................................................................................... 2-3 IV. ENFORCEMENT PROGRAM SCHEDULING. ................................................ 2-4A.G ENERAL ................................................................................................................................................................. 2-4B.I NSPECTION P RIORITY C RITERIA. ..................................................................................................................... 2-4C.E FFECT OF C ONTEST ............................................................................................................................................ 2-5D.E NFORCEMENT E XEMPTIONS AND L IMITATIONS. ....................................................................................... 2-6E.P REEMPTION BY A NOTHER F EDERAL A GENCY ........................................................................................... 2-6F.U NITED S TATES P OSTAL S ERVICE. .................................................................................................................. 2-7G.H OME-B ASED W ORKSITES. ................................................................................................................................ 2-8H.I NSPECTION/I NVESTIGATION T YPES. ............................................................................................................... 2-8 V.UNPROGRAMMED ACTIVITY – HAZARD EVALUATION AND INSPECTION SCHEDULING ............................................................................ 2-9 VI.PROGRAMMED INSPECTIONS. ................................................................... 2-10A.S ITE-S PECIFIC T ARGETING (SST) P ROGRAM. ............................................................................................. 2-10B.S CHEDULING FOR C ONSTRUCTION I NSPECTIONS. ..................................................................................... 2-10C.S CHEDULING FOR M ARITIME I NSPECTIONS. ............................................................................. 2-11D.S PECIAL E MPHASIS P ROGRAMS (SEP S). ................................................................................... 2-12E.N ATIONAL E MPHASIS P ROGRAMS (NEP S) ............................................................................... 2-13F.L OCAL E MPHASIS P ROGRAMS (LEP S) AND R EGIONAL E MPHASIS P ROGRAMS (REP S) ............ 2-13G.O THER S PECIAL P ROGRAMS. ............................................................................................................................ 2-13H.I NSPECTION S CHEDULING AND I NTERFACE WITH C OOPERATIVE P ROGRAM P ARTICIPANTS ....... 2-13CHAPTER 3INSPECTION PROCEDURESI.INSPECTION PREPARATION. .......................................................................... 3-1 II.INSPECTION PLANNING. .................................................................................. 3-1A.R EVIEW OF I NSPECTION H ISTORY .................................................................................................................... 3-1B.R EVIEW OF C OOPERATIVE P ROGRAM P ARTICIPATION .............................................................................. 3-1C.OSHA D ATA I NITIATIVE (ODI) D ATA R EVIEW .......................................................................................... 3-2D.S AFETY AND H EALTH I SSUES R ELATING TO CSHO S.................................................................. 3-2E.A DVANCE N OTICE. ................................................................................................................................................ 3-3F.P RE-I NSPECTION C OMPULSORY P ROCESS ...................................................................................................... 3-5G.P ERSONAL S ECURITY C LEARANCE. ................................................................................................................. 3-5H.E XPERT A SSISTANCE. ........................................................................................................................................... 3-5 III. INSPECTION SCOPE. ......................................................................................... 3-6A.C OMPREHENSIVE ................................................................................................................................................... 3-6B.P ARTIAL. ................................................................................................................................................................... 3-6 IV. CONDUCT OF INSPECTION .............................................................................. 3-6A.T IME OF I NSPECTION............................................................................................................................................. 3-6B.P RESENTING C REDENTIALS. ............................................................................................................................... 3-6C.R EFUSAL TO P ERMIT I NSPECTION AND I NTERFERENCE ............................................................................. 3-7D.E MPLOYEE P ARTICIPATION. ............................................................................................................................... 3-9E.R ELEASE FOR E NTRY ............................................................................................................................................ 3-9F.B ANKRUPT OR O UT OF B USINESS. .................................................................................................................... 3-9G.E MPLOYEE R ESPONSIBILITIES. ................................................................................................. 3-10H.S TRIKE OR L ABOR D ISPUTE ............................................................................................................................. 3-10I. V ARIANCES. .......................................................................................................................................................... 3-11 V. OPENING CONFERENCE. ................................................................................ 3-11A.G ENERAL ................................................................................................................................................................ 3-11B.R EVIEW OF A PPROPRIATION A CT E XEMPTIONS AND L IMITATION. ..................................................... 3-13C.R EVIEW S CREENING FOR P ROCESS S AFETY M ANAGEMENT (PSM) C OVERAGE............................. 3-13D.R EVIEW OF V OLUNTARY C OMPLIANCE P ROGRAMS. ................................................................................ 3-14E.D ISRUPTIVE C ONDUCT. ...................................................................................................................................... 3-15F.C LASSIFIED A REAS ............................................................................................................................................. 3-16VI. REVIEW OF RECORDS. ................................................................................... 3-16A.I NJURY AND I LLNESS R ECORDS...................................................................................................................... 3-16B.R ECORDING C RITERIA. ...................................................................................................................................... 3-18C. R ECORDKEEPING D EFICIENCIES. .................................................................................................................. 3-18 VII. WALKAROUND INSPECTION. ....................................................................... 3-19A.W ALKAROUND R EPRESENTATIVES ............................................................................................................... 3-19B.E VALUATION OF S AFETY AND H EALTH M ANAGEMENT S YSTEM. ....................................................... 3-20C.R ECORD A LL F ACTS P ERTINENT TO A V IOLATION. ................................................................................. 3-20D.T ESTIFYING IN H EARINGS ................................................................................................................................ 3-21E.T RADE S ECRETS. ................................................................................................................................................. 3-21F.C OLLECTING S AMPLES. ..................................................................................................................................... 3-22G.P HOTOGRAPHS AND V IDEOTAPES.................................................................................................................. 3-22H.V IOLATIONS OF O THER L AWS. ....................................................................................................................... 3-23I.I NTERVIEWS OF N ON-M ANAGERIAL E MPLOYEES .................................................................................... 3-23J.M ULTI-E MPLOYER W ORKSITES ..................................................................................................................... 3-27 K.A DMINISTRATIVE S UBPOENA.......................................................................................................................... 3-27 L.E MPLOYER A BATEMENT A SSISTANCE. ........................................................................................................ 3-27 VIII. CLOSING CONFERENCE. .............................................................................. 3-28A.P ARTICIPANTS. ..................................................................................................................................................... 3-28B.D ISCUSSION I TEMS. ............................................................................................................................................ 3-28C.A DVICE TO A TTENDEES .................................................................................................................................... 3-29D.P ENALTIES............................................................................................................................................................. 3-30E.F EASIBLE A DMINISTRATIVE, W ORK P RACTICE AND E NGINEERING C ONTROLS. ............................ 3-30F.R EDUCING E MPLOYEE E XPOSURE. ................................................................................................................ 3-32G.A BATEMENT V ERIFICATION. ........................................................................................................................... 3-32H.E MPLOYEE D ISCRIMINATION .......................................................................................................................... 3-33 IX. SPECIAL INSPECTION PROCEDURES. ...................................................... 3-33A.F OLLOW-UP AND M ONITORING I NSPECTIONS............................................................................................ 3-33B.C ONSTRUCTION I NSPECTIONS ......................................................................................................................... 3-34C. F EDERAL A GENCY I NSPECTIONS. ................................................................................................................. 3-35CHAPTER 4VIOLATIONSI. BASIS OF VIOLATIONS ..................................................................................... 4-1A.S TANDARDS AND R EGULATIONS. .................................................................................................................... 4-1B.E MPLOYEE E XPOSURE. ........................................................................................................................................ 4-3C.R EGULATORY R EQUIREMENTS. ........................................................................................................................ 4-6D.H AZARD C OMMUNICATION. .............................................................................................................................. 4-6E. E MPLOYER/E MPLOYEE R ESPONSIBILITIES ................................................................................................... 4-6 II. SERIOUS VIOLATIONS. .................................................................................... 4-8A.S ECTION 17(K). ......................................................................................................................... 4-8B.E STABLISHING S ERIOUS V IOLATIONS ............................................................................................................ 4-8C. F OUR S TEPS TO BE D OCUMENTED. ................................................................................................................... 4-8 III. GENERAL DUTY REQUIREMENTS ............................................................. 4-14A.E VALUATION OF G ENERAL D UTY R EQUIREMENTS ................................................................................. 4-14B.E LEMENTS OF A G ENERAL D UTY R EQUIREMENT V IOLATION.............................................................. 4-14C. U SE OF THE G ENERAL D UTY C LAUSE ........................................................................................................ 4-23D.L IMITATIONS OF U SE OF THE G ENERAL D UTY C LAUSE. ..............................................................E.C LASSIFICATION OF V IOLATIONS C ITED U NDER THE G ENERAL D UTY C LAUSE. ..................F. P ROCEDURES FOR I MPLEMENTATION OF S ECTION 5(A)(1) E NFORCEMENT ............................ 4-25 4-27 4-27IV.OTHER-THAN-SERIOUS VIOLATIONS ............................................... 4-28 V.WILLFUL VIOLATIONS. ......................................................................... 4-28A.I NTENTIONAL D ISREGARD V IOLATIONS. ..........................................................................................4-28B.P LAIN I NDIFFERENCE V IOLATIONS. ...................................................................................................4-29 VI. CRIMINAL/WILLFUL VIOLATIONS. ................................................... 4-30A.A REA D IRECTOR C OORDINATION ....................................................................................................... 4-31B.C RITERIA FOR I NVESTIGATING P OSSIBLE C RIMINAL/W ILLFUL V IOLATIONS ........................ 4-31C. W ILLFUL V IOLATIONS R ELATED TO A F ATALITY .......................................................................... 4-32 VII. REPEATED VIOLATIONS. ...................................................................... 4-32A.F EDERAL AND S TATE P LAN V IOLATIONS. ........................................................................................4-32B.I DENTICAL S TANDARDS. .......................................................................................................................4-32C.D IFFERENT S TANDARDS. .......................................................................................................................4-33D.O BTAINING I NSPECTION H ISTORY. .....................................................................................................4-33E.T IME L IMITATIONS..................................................................................................................................4-34F.R EPEATED V. F AILURE TO A BATE....................................................................................................... 4-34G. A REA D IRECTOR R ESPONSIBILITIES. .............................................................................. 4-35 VIII. DE MINIMIS CONDITIONS. ................................................................... 4-36A.C RITERIA ................................................................................................................................................... 4-36B.P ROFESSIONAL J UDGMENT. ..................................................................................................................4-37C. A REA D IRECTOR R ESPONSIBILITIES. .............................................................................. 4-37 IX. CITING IN THE ALTERNATIVE ............................................................ 4-37 X. COMBINING AND GROUPING VIOLATIONS. ................................... 4-37A.C OMBINING. ..............................................................................................................................................4-37B.G ROUPING. ................................................................................................................................................4-38C. W HEN N OT TO G ROUP OR C OMBINE. ................................................................................................4-38 XI. HEALTH STANDARD VIOLATIONS ....................................................... 4-39A.C ITATION OF V ENTILATION S TANDARDS ......................................................................................... 4-39B.V IOLATIONS OF THE N OISE S TANDARD. ...........................................................................................4-40 XII. VIOLATIONS OF THE RESPIRATORY PROTECTION STANDARD(§1910.134). ....................................................................................................... XIII. VIOLATIONS OF AIR CONTAMINANT STANDARDS (§1910.1000) ... 4-43 4-43A.R EQUIREMENTS UNDER THE STANDARD: .................................................................................................. 4-43B.C LASSIFICATION OF V IOLATIONS OF A IR C ONTAMINANT S TANDARDS. ......................................... 4-43 XIV. CITING IMPROPER PERSONAL HYGIENE PRACTICES. ................... 4-45A.I NGESTION H AZARDS. .................................................................................................................................... 4-45B.A BSORPTION H AZARDS. ................................................................................................................................ 4-46C.W IPE S AMPLING. ............................................................................................................................................. 4-46D.C ITATION P OLICY ............................................................................................................................................ 4-46 XV. BIOLOGICAL MONITORING. ...................................................................... 4-47CHAPTER 5CASE FILE PREPARATION AND DOCUMENTATIONI.INTRODUCTION ............................................................................................... 5-1 II.INSPECTION CONDUCTED, CITATIONS BEING ISSUED. .................... 5-1A.OSHA-1 ................................................................................................................................... 5-1B.OSHA-1A. ............................................................................................................................... 5-1C. OSHA-1B. ................................................................................................................................ 5-2 III.INSPECTION CONDUCTED BUT NO CITATIONS ISSUED .................... 5-5 IV.NO INSPECTION ............................................................................................... 5-5 V. HEALTH INSPECTIONS. ................................................................................. 5-6A.D OCUMENT P OTENTIAL E XPOSURE. ............................................................................................................... 5-6B.E MPLOYER’S O CCUPATIONAL S AFETY AND H EALTH S YSTEM. ............................................................. 5-6 VI. AFFIRMATIVE DEFENSES............................................................................. 5-8A.B URDEN OF P ROOF. .............................................................................................................................................. 5-8B.E XPLANATIONS. ..................................................................................................................................................... 5-8 VII. INTERVIEW STATEMENTS. ........................................................................ 5-10A.G ENERALLY. ......................................................................................................................................................... 5-10B.CSHO S SHALL OBTAIN WRITTEN STATEMENTS WHEN: .......................................................................... 5-10C.L ANGUAGE AND W ORDING OF S TATEMENT. ............................................................................................. 5-11D.R EFUSAL TO S IGN S TATEMENT ...................................................................................................................... 5-11E.V IDEO AND A UDIOTAPED S TATEMENTS. ..................................................................................................... 5-11F.A DMINISTRATIVE D EPOSITIONS. .............................................................................................5-11 VIII. PAPERWORK AND WRITTEN PROGRAM REQUIREMENTS. .......... 5-12 IX.GUIDELINES FOR CASE FILE DOCUMENTATION FOR USE WITH VIDEOTAPES AND AUDIOTAPES .............................................................. 5-12 X.CASE FILE ACTIVITY DIARY SHEET. ..................................................... 5-12 XI. CITATIONS. ..................................................................................................... 5-12A.S TATUTE OF L IMITATIONS. .............................................................................................................................. 5-13B.I SSUING C ITATIONS. ........................................................................................................................................... 5-13C.A MENDING/W ITHDRAWING C ITATIONS AND N OTIFICATION OF P ENALTIES. .................................. 5-13D.P ROCEDURES FOR A MENDING OR W ITHDRAWING C ITATIONS ............................................................ 5-14 XII. INSPECTION RECORDS. ............................................................................... 5-15A.G ENERALLY. ......................................................................................................................................................... 5-15B.R ELEASE OF I NSPECTION I NFORMATION ..................................................................................................... 5-15C. C LASSIFIED AND T RADE S ECRET I NFORMATION ...................................................................................... 5-16。
Impact of JIT JVM Optimizations on Java Application Performance¢K. Shiv , R. Iyer , C. Newburn , J. Dahlstedt , M. Lagergren and O. Lindholm Intel Corporation BEA Systems¢ ¢ ¢ ¡ ¡ ¡ ¡AbstractWith the promise of machine independence and efficient portability, JAVA has gained widespread popularity in the industry. Along with this promise comes the need for designing an efficient runtime environment that can provide high-end performance for Java-based applications. In other words, the performance of Java applications depends heavily on the design and optimization of the Java Virtual Machine (JVM). In this paper, we start by evaluating the performance of a Java server application (SPECjbb2000 ) on an Intel platform running a rudimentary JVM. We present a measurement-based methodology for identifying areas of potential improvement and subsequently evaluating the effect of JVM optimizations and other platform optimizations. The compiler optimizations presented and discussed in this paper include peephole optimizations and Java specific optimizations. In addition, we also study the effect of optimizing the garbage collection mechanism and the effect of improved locking strategies. The identification and analysis of these optimizations are guided by the detailed knowledge of the micro-architecture and the use of performance measurement and profiling tools (EMON and VTune) on Intel platforms.1 IntroductionThe performance of Java client/server applications has been the topic of significant interest in the recent years. The attraction that Java offers is the promise of portability across all hardware platforms. This is accomplished by using a managed runtime engine called the Java Virtual Machine (JVM) that runs a machine-independent representation of Java applications called bytecodes. The most common mode of application execution is based on a Just-InTime (JIT) compiler that compiles the bytecodes into native machine instructions. These native machine instructions are also cached in order to allow for fast re-use of the frequently executed code sequences. Apart from the JIT compilation, the JVM also performs several functions including thread management and garbage collection. This brings us to the reason for our study i.e. Java application performance depends very heavily on the efficient execution of the Java Virtual Machine (JVM). Our goal in this paper is to characterize, optimize and evaluate a JVM while running a representative Java application. Over the last few years, several projects (from academia as well as in the industry) [1,2,4,7,8,9,10,15,16,21] have studied various aspects of Java applications, compilers and interpreters. We found that R. Radhakrishnan et al. [16] cover a brief description of much of the recent work on this subject. In addition, they also provide insights on architectural implications of Java client workloads based on SPECjvm98 [18]. Overall, the published work can be classified into the following general areas of focus: (1) presenting the design of a compiler, JVM or interpreter, (2) optimizing a certain aspect of Java code execution, and (3) discussing the application performance and architectural characterization. In this paper, we take a somewhat different approach touching upon all the three aspects listed above. We present the software architecture of a commercial JVM, identify several optimizations and characterize the performance of a representative Java server benchmark through several phases of code generation optimizations carried out on a JVM. Our contributions in this paper are as follows. We start by characterizing SPECjbb2000 [17] performance on Intel platforms running an early version of BEA’s JRockit JVM [3]. We then identify various possible optimizations (borrowing ideas from literature wherever possible), present the implementation details of these optimizations in the JVM and analyze the effect of each optimization on the execution characteristics and overall performance. Our performance characterization and evaluation methodology is based on hardware measurements on Intel platforms - using performance counters (EMON) and a sophisticated profiler (VTune [11]) that allows us to characterize various regions of software execution. The code generation enhancements that we implement and evaluate include (1) code quality improvements such as peephole optimizations, (2) dynamic code optimizations, (3) parallel garbage collection and (4) fine-grained locks. The outcome of our work is the detailed analysis and breakdown of benefits based on these individual optimizations added to the JVM. The rest of this paper is organized as follows. Section 2 covers a detailed overview of the BEA JRockit JVM, the measurement-based characterization methodology and the SPECjbb2000 benchmark. Section 3 discusses the opti-¤ ¥£2 Background and MethodologyIn this section, we present a detailed overview of JRockit (the commercial JVM used) [3], SPECjbb2000 (the Java server benchmark) [17] and the optimization and performance evaluation methodology and tools.Figure 1. The SPECjbb2000 Benchmark Process2.1 Architecture of the JRockit JVMThe goal of the JRockit project is to build a fast and efficient JVM for server applications. The virtual machine should be made as platform independent as possible without sacrificing platform specific advantages. Some of the considerations included reliability, scalability, nondisruptiveness and of course, high performance. JRockit starts up differently from most ordinary Java JVMs by first JIT-compiling the methods it encounters during startup. When the Java application is running, JRockit has a bottleneck detector active in the background to collect runtime statistics. If a method is executed frequently and found to be a bottleneck, it is sent to the Optimization Manager subsystem for aggressive optimization. The old method is replaced by the optimized one while the program is running. In this way, JRockit is using adaptive optimization to improve code performance. JRockit relies upon a fast JIT-compiler for unoptimized methods, as opposed to interpretative byte-code execution. Other JVMs such as Jalapeno/Jikes [23] have used similar approaches. It is important to optimize the garbage collection mechanism in any JVM in order to avoid disruption and provide maximum performance to the Java application. JRockit provides several alternatives for garbage collection. The ”parallel collector” utilizes all available processors on the host computer when doing a garbage collection. This means that the garbage collector runs on all processors, but not concurrently with the Java program. JRockit also has a concurrent collector which is designed to run without ”stopping the world”, if non-disruptiveness is the most important factor. To complete the server side design, JRockit also contains an advanced thread model, that makes it possible to run several thousands of Java threads as light weight tasks in a very scalable fashion.The database component requirement common to three-tier workloads is emulated using binary trees of objects. The clients are similarly replaced by driver threads. Thus, the whole benchmark runs on a single computer system, and all the three tiers run within the same JVM. The benchmark process is illustrated in Figure 1. The SPECjbb2000 application is somewhat loosely based on the TPC-C [20] specification for its schema, input generation, and operation profile. However, the SPECjbb2000 benchmark only stresses the server-side Java execution of incoming requests and replaces all database tables with Java classes and all data records with Java objects. Unlike TPC-C where the database execution requires disk I/O to retrieve tables, in SPECjbb2000 disk I/O is completely avoided by holding the objects in memory. Since users do not reside on external client systems, there is no network IO in SPECjbb2000 [17]. SPECjbb2000 measures the throughput of the underlying Java platform, which is the rate at which business operations are performed per second. A full benchmark run consists of a sequence of measurement points with an increasing number of warehouses (and thus an increasing number of threads), and each measurement point is work done during a 2-minute run at a given number of warehouses. The number of warehouses is increased from 1 until at least 8. The throughputs for all the points from N warehouses to 2*N inclusive warehouses are averaged, where N is the number of warehouses with best performance. This average is the SPECjbb2000 metric.2.3 Performance Optimization and Evaluation MethodologyThe approach that we have taken is evolutionary. Beginning with an early version of JRockit, performance was analyzed and potential improvements were identified. Appropriate changes were made to the JVM and the new version of the JVM was then tested to verify that the modifications did deliver the expected improvements. The new version of the JVM was then analyzed in its turn for the next stage of performance optimizations. The types of performance optimizations that we investigated were two-fold. Changes were made to the JIT so that the quality of the generated code was superior, and changes were made to other parts of the JVM, particularly to the Garbage Collector, Object Allo-2.2 Overview of the SPECjbb2000 BenchmarkSPECjbb2000 is Java Business Benchmark from SPEC that evaluates the performance of Server Side Java. It emulates a three-tier system, with business logic and object manipulation, the work of the middle layer predominating.$ ' i7 f 8"" h g`e V (d"cb #! '6 § a X & !! ' § $ '$1 Y "! (#© W" %$V T S RE BIHG FE D A U(QP 3(#3 BC 3 9@ &! ¦ &6 4 $ $© #! (87 #(¨5$ #! 321 " # 0 $ ' ' ' "#) ( & $ #! %© $ #" ! © § ¨¦mizations - how they were identified, implemented and their performance evaluation. Section 4 summarizes the breakdown of the performance benefits and where they came from. Section 5 concludes this paper with some direction on future work in this area.Figure 2. Processor Scaling on an early JRockit JVM version2.4.2 VTune Performance Monitoring Tool Intel’s VTune performance tools provide a rich set of features to aid in performance analysis and tuning: (1) Timebased and event-based sampling, (2) Attribution of events to code locations, viewed in source and/or assembly, (3) Call graph analysis and (4) Hot spot analysis with the AHA tool, which indicates how the measured ranges of event count values compare with other applications, and which provides some guidance on what other events to collect and how to address common performance issues. One of the key tools provides the means for providing the percentage contribution of a small instruction address range to the overall program performance, and for highlighting differences in performance among versions of applications and different hardware platforms.2.4 Overview of Performance Tools - EMON and VTuneThis section describes the rich set of event monitoring facilities available in many of Intel’s processors, commonly called EMON, and a powerful performance analysis tool based on those facilities, called VTune [11].2.4.1 EMON Hardware and Events Used The event monitoring hardware provides several facilities including simple event counting, time-based sampling, event sampling and branch tracing. A detailed explanation of these techniques is not within the scope of this paper. Some of th key EMON events leveraged in our performance analysis include (1) Instructions – the number of instructions architecturally retired, (2) Unhalted cycles – the number of processor cycles that the application took to execute, not counting when that processor was halted, (3) Branches – the number of branches architecturally retired which are useful for noting reductions in branches due to optimizations, (4) Branch Mispredictions – the number of branches that experienced a performance penalty on the order of 50 clocks, due to a misprediction, (5)Locks – the number of locked cmpxchg instructions, or instructions with a lock prefix and (6) Cache misses – the number of misses and its breakdown at each level of the cache hierarchy. The reader is referred to the Pentium 4 Processor Optimization Guide [24] for more details on these events.3 JVM Optimizations and Performance ImpactIn this section we describe the various JVM improvements that we studied and document their impact on performance. We also show the analysis of JVM behavior and the identification of performance inhibitors that informed the improvements that were made.3.1 Performance Characteristics of an early JVMThe version of JRockit with which we began our experiments was a complete JVM in the sense that all of the required JVM components were functional. Unlike several other commercial JVMs though, JRockit does not include an interpreter. Instead, all application code is compiled before execution. This could slow down the start of an application slightly, but this approach enables greater performance. JRockit also included a selection of Garbage Collectors and two threading models. Figure 3 shows the performance for increasing numbers of warehouses for a 1-processor and a 4-processor system.d q y y t w q s v v u t sq cY¨ u ¥Y xYdg"rp d d ( Q Q Q Q hf ge kf ij fm l f hqr omp ncator and synchronization, to enhance the processor scaling of the system. Our experiments were conducted on a 4 processor, 1.6 GHz, Xeon platform with 4GB of memory. The processors had a 1M level-3 cache along with a 256K level-2 cache. The processors accessed memory through a shared 100 MHz, quad-pumped, front side bus. The network and disk I/O components of our system were not relevant to studying the performance of SPECjbb2000, since this benchmark does not require any I/O. Several performance tools assisted us in our experiments. Perfmon, a tool supplied with Microsoft’s operating systems, was useful in identifying problems at a higher level, and allowed us to look at processor utilization patterns, context switch rates, frequency of system calls and so on. EMON gave us insight into the impact of the workload on the underlying micro-architecture and into the types of processor stalls that were occurring, and that we could target for optimizations. VTune permitted us to dig deeper by identifying precisely the regions of the code where various processor micro-architecture events were happening. This tool was also used to study the generated assembly code. The next section describes the performance tools – EMON and VTune – in some more detail.Table 1. System Performance Characteristics for early JVMFigure 3. Performance Scaling with Increasing WarehousesThere is a marked roll-off in performance from the peak at 3 warehouses in the 4-processor case. The JVM can thus be seen to be having some difficulty with increasing numbers of threads. Data obtained using Perfmon is shown in Table 1. While the utilization of the 1 processor is quite good at 94%, the processor utilization in the 4-processor case is only 56%. It is clear that improvements are needed to increase the processor utilization. The context switch and system call rates are two orders of magnitude larger in the 4P than in the 1P. The small processor queue length indicates the absence of work in the system. These aspects along with the sharp performance roll-off with increased threads, all point to a probable issue related to synchronization. It appears likely that one or more locks are being highly contended, resulting in a large number of the threads being in a state of suspension waiting for the lock. While being fully functional, this version of JRockit (we call it early JVM) had not been optimized for performance. It thus served as an excellent test-bed for our studies. The processor scaling seen with the initial, nonoptimized early JVM is shown in Figure 2. It is obvious that we can do much better on scaling. Many other statically compiled workloads exhibit scaling of 3X or better from 1 processor to 4 processors, for instance. Q 8 ¥ ww¦ ¦ 53wQ ¦ wQQ3 ¥ w 8w ¥ Q 8 w 8w s 3u Ö º w QQ 8QwQ w8w ¥ ¦ 5 ¨w w 8¦ 5 ¨w ¨Î Í Ì Ë Ê ÈÄ Å Ç À Æ Å Ä Ã¿ ¼ Á ¿ ¾ ¼ u8É ½ W½ Y"ÂÀ 8 ¼½ g»â ÛâØ Ýá Ü à ß ÞÝ Ü ÛÚ ÙØ Q"#¨3¨Û ¨ÂQ38¨¤w× Õ Ñ w ¨ ¨# ¥ Qw¦ ¥ wQQ ¥ Q 8Q# ¦ w 8 Q 8Q# s wuw 5 8 ¦ Q ¥ Q 53¦ ¥ ¦ w¥ 8w Q¥ 8 8 ¨gw Ô¯ 3¢#¨##8xQ83#Q(w ©¹ ªª ¸ µ « 5# # #w ª 8 ( ¨© µ¯ · ¶ © Q (Q#w ¨± #3 #5° µ ° ³ ² w ( 5´5 ( ± (Qª5|35 QgQ 33wgQ¬ © ° £ ¯ ® 3 ª ¨ (© 8 « ¨ § 5¨#8 £¢ ¤wQw Q¡ #8 853#w#w #8 v x v ~}{ vt wwt 8t w Y5| xyz wus Ò Ï ÒÑ ÐwÏ Ò Ó ä Ñ QÓ Ð æä åã éä çè äë ê ÐÑ æïðíëî ì3.2 Granularity of Heap LocksThe early version of JRockit performed almost all object allocation globally with all allocating threads increasing a pointer atomically to allocate. In order to avoid this contention, thread local allocation and Thread Local Areas (TLAs) were introduced. In that scheme, each thread has its own TLA to allocate from and the atomic operation for increasing the current pointer could be removed, only the allocation of TLAs required synchronizations. A chain is never stronger than its weakest link, once a contention on a lock or an atomic operation is removed, the problem usually pops up somewhere else. The next problem to solve was the allocation of the TLAs. For each TLA that was allocated, the allocating thread had to take a ”heap lock”, find a large enough block on a free list and release the lock. The phase of object allocation that requires space to be allocated from a free list requires a lock. This lock acquisition and release showed up on all our measurements with VTune as a hot spot, marking it as a highly contended lock. One attempt was made to reduce the contention of this lock by letting the allocating thread allocate a couple of TLAs and putting them in a smaller temporary storage where they could be allocated using only atomic operations by other threads. This attempt was a dead end. Even if the thread that had the heap lock put a large amount of TLAs in the temporary storage, all threads still ended up waiting most of the time, either for the heap lock or that the holder of the heap lock would give away TLAs. The final solution was to create several TLA free lists. Each thread has a randomly allotted ”home” free list from which to allocate the TLAs it needs. If the chosen list was empty, the allocating thread tried to take the heap lock and fill that particular free list with several TLAs. After this, the thread would choose another ”home” free list randomly to allocate from. By having several lists, usually only one thread would try to take the heap lock at the same time and the contention of the heap lock was reduced dramatically. Contention was further reduced by providing a TLA cache; the thread that acquires the heap lock moves 1MB of memory into the cache. A thread that finds its TLA free list empty checks for TLAs in the cache before taking the heap lock. Figure 4 shows the marked improvement in processor scaling in the modified JVM, the JVM with the heap lock contention reduction. Scaling at 2 processors has increased from 1.08X to 1.70X, and the scaling at 4 processors has improved to 2.46X from 1.29X. The perfmon data with these changes is interesting, and is shown in Table 2. The increase in processor utilization and the decrease in system calls and context switches are all very dramatic.3.3 Garbage Collection OptimizationsThe early version of JRockit included both a single and multi generational concurrent garbage collector, designed toFigure 4. Improvement of Processor Scaling with Heap Lock contention scaling ¦ S¡ S¡ ¡¦© ¡ ¡ ¦ ¡ ¡ c h ` FHg Y 9 X W i @ f¡pF¦R y F ¦¡ ¦ ¡ F FS ¡ ¡ FS c b a dSF` Y 9 X W i @ f¡pF¦R ¡ F¡ ¡¦ ¡¡¦F © ¦ ¡ ¡ ¦ ¡ c h ` FHg Y 9 X W T f¦HV eG y F ¦¦ S¦¡¦ ¡S¡ © S¡ ¡ S¡ c b a dSF` Y 9 X W T 4¡HV UGFigure 5. Impact of Parallel Garbage Collection on Processor Scalinghave really short pause times and fair throughput. Throughput in a concurrent collector is usually not a problem since a full collection is rarely noticed, even less on a multiprocessor system. The problem occurs when objects are allocated in such a fast rate that even if the garbage collector collects all the time on one processor and lets the other processors run the program, the collector still doesn’t manage to keep up the pace. This problem started to hurt performance badly in JRockit when running 8 warehouses on 8-way systems. To solve this, the so-called ”parallel collector” was developed. The base was a normal Mark and Sweep [13] collector with one marking thread per processor. Each thread had its own marking stack, and if a stack is empty the thread could work-steal references from other stacks [5]. Normal pushing and popping required no synchronization or atomic operations, only the work-stealing required one atomic operation. Each thread also had an expandable local stack to handle overflow in the exposed marking stack. Sweeping is also done in parallel by splitting the heap in N sections and letting each thread allocate a section, sweep it, allocate a new section and so forth until all sections were swept. The sweeping algorithm focused on performance more than accuracy, creating room for fragmentation if we were unlucky. A partial compaction scheme was employed to reduce this fragmentation. These GC optimizations resulted in an increase in the re-Figure 6. Impact of Parallel Garbage Collection on SPECjbb2000 Performanceported SPECjbb2000 result in a 4P system, and improved processor scaling from 2.46 to 2.92, as illustrated in Figure 5. The benefits of this were more noticeable at higher numbers of warehouses and therefore lead to a much flatter roll-off from the peak, as shown in Figure 6.3.4 Code Quality ImprovementsSeveral code quality improvements were made during the benchmarking process. A new code generation pipeline was developed and merged into the product. This enabled us to do a lot more versatile and low-level optimizations on code than previously was possible. Based on the SPECjbb2000 characteristics measured and analyzed in the previous section, we were able to identify several patterns at the native code level that were suboptimal. The JRockit team replaced these with better code through peephole optimizations (commonly used for compiler optimizations as in [6, 14]) or more efficient code generation methodologies. While the compiler optimizations listed below are well-known and understood, the requirement here is¾ Àà ¿Ð É Ð Æ Ë Ï Ê Î Í Ì Ë Ê É È Ç Æ ¦¦¡edÉ d8U¦ed¦½S4ž ©¿Ã¾Table 2. System Performance Characteristics after Heap Lock Improvements S¼ » º º ¹ ¸ · ² ¶ µ ´ ³ ² ½S³ º ¸ |QSe2² f± ~ { } ~ { Q fdm4eUU} |z S °¤ £ ¦¡ ¡ ¯ £ £ ¢ ®y¡¬Fm¦F ¨ « ª © ¡ § ¦ ¥ ¢ ¡ 4y| £¤ ¡ ¦ ¦ © ¿ à Á ¾ Ä À Â × ÕÖ Ò Ù ØÒ ÞÒ ÜÔÝ Ù ÚÛ ÓÔ ÑÒ 3428¤©5 1) 7 6 3 1) ' 420( ¦& ¨ þ ûúõùòôøø÷õ ò Y üý YuóYöô óñ¨ ¥ k s¡©S©x©Sv¦S¡¤¡r ev e y w y f y f u w r y x w x y r s g w s ¦© © ¡f w dy ©e d x y r y k ¤ ©¡©¡x s dn Fy ©4l t s s s e r x y r u t l p s o ¦ ¤ u 4qy © dn s©e¡fS4mFy ¦Q¡s FF¡Q¦h r l t w k q j x r i ¡f w Sy ©e ©t ¡Uq s g w s d u s r w y u s r SSe©t ¡Uq y w v u s r ¡¡¡ uy ¦ e©t ¡Uq w r y x w v u s r S¡©¡r e©t ¡Uq A E B A P I G E A @ ¡¡@ S@ ¡R QHFD C B ¡09 ¨ ©£ £ §ÿ¥ ¢¡ ¢ ¤£ ¢ ¦¥ ÿ $% # !"Figure 8. An Example of Copy PropagationP £b`I00D qik jbD ih IiUtiqQ(V @ H 8 k 8 m A l G A @ A 8 H 8 X 8 h DFigure 7. A Simple Example of Peep-Hole Optimization 5Ii U5y d "y u y w v u s %iixItr @ P A @ RQ8 I8 0H G F D B A @ 8 0EC 097that the compile time overhead be kept to a minimum since it is a part of the execution time; and as such, not all known optimizations and techniques could be added. These are by no means a complete list of improvements, but give some perspective on things that were done to enhance code quality. 1. Peephole Optimizations: The new JRockit code generator made it possible to work with native code just before emission, i.e. there would be IR operations for each native code operation. Several small peephole optimizations were implemented on this. We present one example of this kind of pattern matching here: Java contains a lot of load/store patterns, where a field is loaded from memory, modified and then rewritten. Literal translation of a Java getfield/putfield sequence would result in three instructions on IA32 as shown in Figure 7(left). IA32 allows most operations to operate directly on addresses, so the above sequence could be collapsed to a single instruction as shown in Figure 7(right). 2. Better use of IA32 FPU instructions: Java has precise floating-point semantics, and works either in 32bit or 64-bit precision. This is usually a problem if one wants to use fast 80-bit floating points that there is hardware support for on IA32, but in some cases we don’t need fp-strict calculations and can use built in FPU instructions. JRockit was modified to determine when this is possible. 3. Better SSA reverse transform: Most code optimizations take place in SSA form. There were some problems with artifacts in the form of useless copies not being removed from the code when transforming back to normal form. The transform was modified to get rid of these, with good results. Register pressure dropped significantly for optimized code. 4. Faster checks: The implementation of several Java runtime checks was speeded up. Some Java runtime checks are quite complicated, such as the non-trivial case of an array store check. These were treated as special native calls, but without using all available registers. Special interference information for these simplified methods was passed to the register allocator, enabling less saves and restores of volatile registers.Table 3. Impact of Better Code Generation on Application Performance5. Specializations for common operations: Array allocation was re-implemented with specialized allocation policies for individual array element sizes. The Java ”arraycopy” function was also specialized, depending on if it was operating on primitives or references and on elements of specific sizes. Other common operations were also specialized. 6. Better Copy Propagation: The copy propagation algorithm was improved and also changed to work on the new low level IR, with all its addressing modes and operations. An example of better copy propagation is shown in Figure 8. These improvements to the JIT were undertaken to reduce the code required to execute an application. It is possible that the techniques used to lower the path length could increase the CPI of the workload, and end up hurting throughput. One example of this would be the usage of a complex instruction to replace a set of simpler instructions. However, Table 3 shows that while the efforts to reduce the path length were well rewarded with a 27% improvement for SPECjbb2000, these optimizations did not hurt the CPI in any significant way. The path length improvement resulted in a 34% boost to the reported SPECjbb2000 result.3.5 Dynamic OptimizationThe initial compile time that is tolerable limits the extent to which compiler optimizations can be applied. This implies that while JRockit provides better code in general than an interpreter, for the few functions that other JITs do choose to compile, there is a risk of under-performance. JRockit has chosen to handle this issue by providing a secondary compilation phase that can include more sophisticated optimizations, and using this secondary compilation during the application run to compile a few frequently used hot functions.© 3 § 1 ' ) ' ! © $ # ! § ¥ 24 6% ¡0%("&%"¡© ¨5¤ § ¥ © 2© ¨5¤ 8 p H h g V q0IiYfe e e ¡ e i G X V A F dcbI8 @ 5a gU q £ QG X V A DF `YW8 UT08 SfI0qQ e U § ¥ © © ¨¦¤ © 3 § 1 ' ) ' ! © $ # ! § ¥ 4 2 ¡&0("&%"¡© ¨¦¤ § ¥ © © ¨¦¤ü ¢ ÿ ú ô ó þ ý û úù ÷ õ ô 0£¡U0F|ü 2øöóë ð ï î é ä à å ä 4S©Qíß ã Qà ä ì ê è æ å ä fë dé yç pß ã áß â à ä òß ã ©fë dé yç ñáß å ì ê è æ â à。
FRAX 150Sweep Frequency Response AnalyzernHighest dynamic range and accuracy in the industrynBuilt-in PC with powerful backlit screen for use in direct sunlightnHighest possible repeatability by using reliable cable practice and high-performance instrumentation nFulfills all international standards for SFRA measurementsnAdvanced analysis and decision support built into the softwarenImports data from other FRA test setsFRAX 150Sweep Frequency Response AnalyzerDESCRIPTIONPower transformers are some of the most vital components in today’s transmission and distribution infrastructure. Transformer failures cost enormous amounts of money in unexpected outages and unscheduled maintenance. It is important to avoid these failures and make testing and diagnostics reliable and efficient.The FRAX 150 Sweep Frequency Response Analyzer (SFRA) detects potential mechanical and electricalproblems that other methods are unable to detect. Major utilities and service companies have used the FRA method for more than a decade. The measurement is easy to perform and will capture a unique “fingerprint” of the transformer. The measurement is compared to a reference “fingerprint” and gives a direct answer if the mechanical parts of the transformer are unchanged or not. Deviations indicate geometrical and/or electrical changes within the transformer.FRAX 150 detects problems such as:n Winding deformations and displacements n Shorted turns and open windings n Loosened clamping structures n Broken clamping structures n Core connection problems n Partial winding collapse n Faulty core grounds n Core movementsAPPLICATIONPower transformers are specified to withstand mechanical forces from both transportation and in-service events, such as faults and lightning. However, mechanical forces may exceed specified limits during severe incidents or when the insulation’s mechanical strength has weakened due to aging. A relatively quick test where the fingerprint response is compared to a post event response allows for a reliable decision on whether the transformer safely can be put backinto service or if further diagnostics is required.Collecting fingerprint data using Frequency Response Analysis (FRA) is an easy way to detect electro-mechanical problems in power transformers and an investment that will save time and money.1981Method BasicsA transformer consists of multiple capacitances, inductances and resistors, a very complex circuit that generates a unique fingerprint or signature when test signals are injected at discrete frequencies and responses are plotted as a curve.Capacitance is affected by the distance between conductors. Movements in the winding will consequently affect capacitances and change the shape of the curve.The SFRA method is based on comparisons between measured curves where variations are detected. One SFRA test consists of multiple sweeps and reveals if the transformer’s mechanical or electrical integrity has been jeopardized.Practical Application In its standard application, a “finger print” reference curve for each winding is captured when the transformer is newor when it is in a known good condition. These curves can later be used as reference during maintenance tests or when there is reason to suspect a problem.The most reliable method is the time based comparison where curves are compared over time on measurements from the same transformer. Another method utilizes type based comparisons between “sister transformers” with the same design. Lastly, a construction based comparison can, under certain conditions, be used when comparing measurements between windings in the same transformer.These comparative tests can be performed 1) before and after transportation, 2) after severe through faults 3) before and after overhaul and 4) as diagnostic test if you suspect potential problems. One SFRA test can detect windingproblems that requires multiple tests with different kinds of test equipment or problems that cannot be detected with other techniques at all. The SFRA test presents a quick and cost effective way to assess if damages have occurred or if the transformer can safely be energized again. If there is a problem, the test result provides valuable information that can be used as decision support when determining further action.Having a reference measurement on a mission critical transformer when an incident has occurred is, therefore, a valuable investment as it will allow for an easier and more reliable analysis.Analysis and SoftwareAs a general guideline, shorted turns, magnetization and other problems related to the core alter the shape of the curve in the lowest frequencies. Medium frequencies represent axial or radial movements in the windings and high frequencies indicate problems involving the cables from the windings, to bushings and tap changers.FRAX 150Sweep Frequency Response AnalyzerAn example of low, medium and high frequenciesThe figure above shows a single phase transformer after a serviceoverhaul where, by mistake, the core ground never got connected (red), and after the core ground was properly connected (green). This potential problem clearly showed up at frequencies between 1 kHz and 10 kHz and a noticeable change is also visible in the 10 kHz - 200 kHz range.The FRAX Software provides numerous features to allow for efficient data analysis. Unlimited tests can be open at the same time and the user has full control on which sweeps to compare. The response can be viewed in traditional Magnitude vs. Frequency and/or Phase vs. Frequency view. The user can also choose to present the data in an Impedance or Admittance vs. Frequency view for powerful analysis on certain transformer types.Test Object Browser — Unlimited number of tests and sweeps. Full user control.Quick Select Tabs — Quickly change presentation view for differentperspectives and analysis tools.Quick Graph Buttons — Programmablegraph setting lets you change views quickly and easily.Sweep/Curve Settings — Every sweep can be individually turned on or off, change color, thickness and position.Dynamic Zoom — Zoom in and move your focus to any part of the curve.Operation Buttons — All essential functions at your fingertips; select appropriate function keys on screen with mouse.Automated analysis compares two curves using an algorithm that compare amplitude as well as frequency shift and lets you know if the difference is severe, obvious, or light.Built-in-decision support is provided by using a built-inanalysis tool based on the international standard DL/T 911-2004.Considerations When Performing SFRA MeasurementsSFRA measurements are compared over time or between different test objects. This accentuates the need to perform the test with the highest repeatability and eliminates the influence from external parameters such as cables, connections and instrument performance. FRAX offers all the necessary tools to ensure that the measured curve represents the internal condition of the transformer.Good ConnectionsBad connections can compromise the test results which is why FRAX offers a rugged test clamp that ensures good connection to the bushings and solid connections to the instrument.Import and ExportThe FRAX software can import data files from other FRA instruments making it possible to compare data obtained using another FRA unit. FRAX can import and export data according to the international XFRA standard format as well as standard CSV and TXT formats.Optimized Sweep SettingThe software offers the user an unmatched feature that allows for fast and efficient testing. Traditional SFRAsystems use a logarithmic spacing of measurement points. This results in as many test points between 20Hz and200Hz as between 200KHz and 2MHz and a relatively long measurement time.The frequency response from the transformer contains a few resonances in the low frequency range but a lot of resonances at higher frequencies. FRAX allows the user to specify less measurement points at lower frequencies and high measurement point density at higher frequencies. The result is a much faster sweep with greater detail where it is needed.Variable VoltageThe applied test voltage may affect the response at lower frequencies. Some FRA instruments do not use the 10 V peak-to-peak used by major manufacturers and this may complicate comparisons between tests. FRAX standardvoltage is 10 V peak-to-peak but FRAX also allows the user to adjust the applied voltage to match the voltage used in a different test.FTB 101Several international FRA guides recommends to verify the integrity of cables and instrument before and after a test using a test circuit with a known FRA response supplied by the equipment manufacturer. FRAX comes with a field testbox FTB101 as a standard accessory and allows the user toperform this important validation in the field at any time and secure measurement quality. FRAX 150 has a built-in computer with high contrast and powerful backlit screen suitable for use in direct sunlight.Solid connections using the C-clamps and the shortest braid method to connect the shield to ground makes it possible to eliminate connection problems and cable loops that otherwise affect the measurement.Contacts made with the C-clamp guarantee good connectionsShortest Braid ConceptThe connection from the cable shield to ground has to be the same for every measurement on a given transformer. Traditional ground connections techniques have issues when it comes to providing repeatable conditions. Thiscauses unwanted variations in the measured response for the highest frequencies that makes analysis difficult.The FRAX braid drops down from the connection clamp next to the insulating discs to the ground connection atthe base of the bushing. This creates near identicalconditions every time you connect to a bushing whether it is tall or short.FRAX 150 with Built-in PCFRAX 150 has a built-in PC with a high contrast, powerful backlit screen suitable for work in direct sunlight. The cursor is controlled via the built-in joystick or using an external USB mouse and the built-in keyboard makes data entry easy.All data is stored on the built-in hard drive. The data can bemoved to any other computer using a USB memory stick.FTB 101 Field Test BoxOPTIONAL ACCESSORIESThe FRAX Demo box FDB 101 is a transformer kit that can be used for in-house training and demonstrations. The small transformer is a single-phase unit with capability to simulate normal as well as fault conditions. Open as well as shorted measurements can be performed. The unit also contains two test impedances, one of them the same as used in the FTB101 field test box.FRAX 150Sweep Frequency Response AnalyzerDYNAMIC RANGEMaking accurate measurements in a wide frequency range with high dynamics puts great demands on test equipment, test leads, and test set up. FRAX 150 is designed with these requirements in mind. It is rugged, able to filter induced interference and has the highest dynamic range and accuracy in the industry. FRAX 150 internal noise level is shown in red below with a normal transformer measurement in black. A wide dynamic range, i.e. low internal noise level, allows for accurate measurements in every transformer. A margin of about 20 dB from the lowest response to the internal noise level of the instrument must be maintained to obtain ±1 dB accuracy.SPECIFICATIONSGeneral FRA Method: Sweep frequency (SFRA) Frequency Range: 0.1 Hz - 25 MHz, user selectable Number of Points: Default 1046, User selectable up to 32,000Measurement time: Default 64 s, fast setting, 37 s (20 Hz - 2 MHz) Points Spacing: Log., linear or both Dynamic Range/Noise Floor: >130dB Accuracy: ±0.5 dB down to -100 dB (10 Hz - 10 MHz)IF Bandwidth/Integration Time: User selectable (10% default) Software: FRAX for Windows Calibration Interval: Max 3 years Standards/guides: Fulfill requirements in CigréBrochure 342, 2008Mechanical condition assessment oftransformer windings using FRA and Chinese standard DL/T 911-2004, FRA on windingdeformation of power transformers, as well as other international standards and recommendations Input Power90 - 264 V ac, 47 - 63 Hz Analog Output Channels:1Compliance Voltage: Output voltage 0.2 - 24 V p-p(open circuit)Measurement Voltage at 50 Ω: 10 V (adjustable 0.1-12 V) Output Impedance: 50 ΩProtection: Short-circuit protected Analog Input Channels: 2Sampling:Simultaneously Input Impedance: 50 Ω Sampling Rate: 100 MS/sOperating System Windows ® basedMemory1000 records in internal memory. External storage on USB stick Physical Dimensions: 305 mm x 194 mm x 360 mm(12 in. x 7.6 in. x 14.2 in.)Weight:6 kg (13 lb)EnvironmentalOperating Ambient Temp: 0° C to +50° C / +32° F to +122° F Operating Relative Humidity: < 90% non-condensingStorage Ambient Temp: -20° C to 70° C / -4° F to +158° F Storage Relative Humidity: < 90% non-condensingCE Standards:IEC61010 (LVD) EN61326 (EMC) An example of FRAX 150’s dynamic limit (red) and transformer measurement (black)FRAX 150Sweep Frequency Response AnalyzerUKArchcliffe Road, Dover CT17 9EN EnglandT +44 (0) 1 304 502101 F +44 (0) 1 304 207342******************UNITED STATES 4271 Bronze WayDallas, TX 75237-1019 USA T 1 800 723 2861 (USA only) T +1 214 333 3201 F +1 214 331 7399******************Registered to ISO 9001:2000 Cert. no. 10006.01FRAX150_DS_en_V04Megger is a registered trademark.Specifications are subject to change without notice.OTHER TECHNICAL SALES OFFICES Valley Forge USA, College Station USA, Sydney AUSTRALIA, TäbySWEDEN, Ontario CANADA, Trappes FRANCE, Oberursel GERMANY, Aargau SWITZERLAND, Kingdom of BAHRAIN, Mumbai INDIA, Johannesburg SOUTHAFRICA, and Chonburi THAILANDIncluded accessories shown above: Mains cable, ground cable, (2) ground braid sets, (2) earth/ground braid leads (insulated), (2) C-clamps, generator cable, measure cable, field test box, nylon accessory pouch, (2) earth/ground braids with clamp, and canvas carrying bag for test leadsbuttons CLOSE-UP OF FRAX 150 CONTROL PANELEnter keyINCLUDED ACCESSORIES。
开启片剂完整性的窗户日本东芝公司,剑桥大学摘要:由日本东芝公司和剑桥大学合作成立的公司向《医药技术》解释了FDA支持的技术如何在不损坏片剂的情况下测定其完整性。
太赫脉冲成像的一个应用是检查肠溶制剂的完整性,以确保它们在到达肠溶之前不会溶解。
关键词:片剂完整性,太赫脉冲成像。
能够检测片剂的结构完整性和化学成分而无需将它们打碎的一种技术,已经通过了概念验证阶段,正在进行法规申请。
由英国私募Teraview公司研发并且以太赫光(介于无线电波和光波之间)为基础。
该成像技术为配方研发和质量控制中的湿溶出试验提供了一个更好的选择。
该技术还可以缩短新产品的研发时间,并且根据厂商的情况,随时间推移甚至可能发展成为一个用于制药生产线的实时片剂检测系统。
TPI技术通过发射太赫射线绘制出片剂和涂层厚度的三维差异图谱,在有结构或化学变化时太赫射线被反射回。
反射脉冲的时间延迟累加成该片剂的三维图像。
该系统使用太赫发射极,采用一个机器臂捡起片剂并且使其通过太赫光束,用一个扫描仪收集反射光并且建成三维图像(见图)。
技术研发太赫技术发源于二十世纪九十年代中期13本东芝公司位于英国的东芝欧洲研究中心,该中心与剑桥大学的物理学系有着密切的联系。
日本东芝公司当时正在研究新一代的半导体,研究的副产品是发现了这些半导体实际上是太赫光非常好的发射源和检测器。
二十世纪九十年代后期,日本东芝公司授权研究小组寻求该技术可能的应用,包括成像和化学传感光谱学,并与葛兰素史克和辉瑞以及其它公司建立了关系,以探讨其在制药业的应用。
虽然早期的结果表明该技术有前景,但日本东芝公司却不愿深入研究下去,原因是此应用与日本东芝公司在消费电子行业的任何业务兴趣都没有交叉。
这一决定的结果是研究中心的首席执行官DonArnone和剑桥桥大学物理学系的教授Michael Pepper先生于2001年成立了Teraview公司一作为研究中心的子公司。
TPI imaga 2000是第一个商品化太赫成像系统,该系统经优化用于成品片剂及其核心完整性和性能的无破坏检测。
V2426A SeriesCompact,fanless,vibration-proof railway computersFeatures and Benefits•Intel Celeron/Core i7processor•2peripheral expansion slots for various I/O,WLAN,mini-PCIe expansionmodule cards•Dual independent DVI-I displays•2Gigabit Ethernet ports with M12X-coded connectors•1SATA connector and1CFast socket for storage expansion•M12A-coded power connector•Compliant with EN50121-4•Complies with all EN50155mandatory test items1•Ready-to-run Debian7,Windows Embedded Standard7,and Windows10Embedded IoT Enterprise2016LTSB platforms•-40to70°C wide-temperature models available•Supports SNMP-based system configuration,control,and monitoring(Windows only)CertificationsIntroductionThe V2426A Series embedded computers are based on the Intel3rd Gen processor,and feature4RS-232/422/485serial ports,dual LAN ports,3 USB2.0hosts,and dual DVI-I outputs.In addition,the V2426A Series computers comply with the mandatory test items of the EN50155standard, making them suitable for a variety of industrial applications.The dual megabit/Gigabit Ethernet ports with M12X-coded connectors offer a reliable solution for network redundancy,promising continuous operation for data communication and management.As an added convenience,the V2426A computers have6DIs and2DOs for connecting digital input/output devices.In addition,the CFast socket,SATA connector,and USB sockets provide the V2426A computers with the reliability needed for industrial applications that require data buffering and storage expansion.Moreover,the V2426A computers come with2peripheral expansion slots for inserting different communication modules(2-port CAN module,or HSDPA,GPS,or WLAN module),an8+8-port digital input/output module,and a2-port serial module,giving greater flexibility for setting up different industrial applications at field sites.Preinstalled with Linux Debian7or Windows Embedded Standard7,the V2426A Series provides programmers with a friendly environment for developing sophisticated,bug-free application software at a low cost.Wide-temperature models of the V2426A Series that operate reliably in a-40 to70°C operating temperature range are also available,offering an optimal solution for applications subjected to harsh environments.1.This product is suitable for rolling stock railway applications,as defined by the EN50155standard.For a more detailed statement,click here:/doc/specs/EN_50155_Compliance.pdfAppearanceFront View Rear ViewSpecificationsComputerCPU V2426A-C2Series:Intel®Celeron®Processor1047UE(2M cache,1.40GHz)V2426A-C7Series:Intel®Core™i7-3517UE Processor(4M cache,up to2.80GHz) System Chipset Mobile Intel®HM65Express ChipsetGraphics Controller Intel®HD Graphics4000(integrated)System Memory Pre-installed4GB DDR3System Memory Slot SODIMM DDR3/DDR3L slot x1Supported OS Linux Debian7Windows Embedded Standard7(WS7E)32-bitWindows Embedded Standard7(WS7E)64-bitStorage Slot 2.5-inch HDD/SSD slots x1CFast slot x2Computer InterfaceEthernet Ports Auto-sensing10/100/1000Mbps ports(M12X-coded)x2Serial Ports RS-232/422/485ports x4,software selectable(DB9male)USB2.0USB2.0hosts x1,M12D-coded connectorUSB2.0hosts x2,type-A connectorsAudio Input/Output Line in x1,Line out x1,M12D-codedDigital Input DIs x6Digital Output DOs x2Video Output DVI-I x2,29-pin DVI-I connectors(female)Expansion Slots2peripheral expansion slotsDigital InputsIsolation3k VDCConnector Screw-fastened Euroblock terminalDry Contact On:short to GNDOff:openI/O Mode DISensor Type Dry contactWet Contact(NPN or PNP)Wet Contact(DI to COM)On:10to30VDCOff:0to3VDCDigital OutputsConnector Screw-fastened Euroblock terminalCurrent Rating200mA per channelI/O Type SinkVoltage24to30VDCLED IndicatorsSystem Power x1Storage x1LAN2per port(10/100/1000Mbps)Serial2per port(Tx,Rx)Serial InterfaceBaudrate50bps to921.6kbpsFlow Control RTS/CTS,XON/XOFF,ADDC®(automatic data direction control)for RS-485,RTSToggle(RS-232only)Isolation N/AParity None,Even,Odd,Space,MarkData Bits5,6,7,8Stop Bits1,1.5,2Serial SignalsRS-232TxD,RxD,RTS,CTS,DTR,DSR,DCD,GNDRS-422Tx+,Tx-,Rx+,Rx-,GNDRS-485-2w Data+,Data-,GNDRS-485-4w Tx+,Tx-,Rx+,Rx-,GNDPower ParametersInput Voltage12to48VDCPower Connector M12A-coded male connectorPower Consumption 3.78A@12VDC0.96A@48VDCPower Consumption(Max.)47W(max.)Physical CharacteristicsHousing AluminumIP Rating IP30Dimensions(with ears)275x92x154mm(10.83x3.62x6.06in)Dimensions(without ears)250x86x154mm(9.84x3.38x6.06in)Weight3,000g(6.67lb)Installation DIN-rail mounting(optional),Wall mounting(standard) Protection-CT models:PCB conformal coating Environmental LimitsOperating Temperature Standard Models:-25to55°C(-13to131°F)Wide Temp.Models:-40to70°C(-40to158°F) Storage Temperature(package included)-40to85°C(-40to185°F)Ambient Relative Humidity5to95%(non-condensing)Standards and CertificationsEMC EN55032/24EMI CISPR32,FCC Part15B Class AEMS IEC61000-4-2ESD:Contact:6kV;Air:8kVIEC61000-4-3RS:80MHz to1GHz:20V/mIEC61000-4-4EFT:Power:2kV;Signal:2kVIEC61000-4-5Surge:Power:2kVIEC61000-4-6CS:10VIEC61000-4-8PFMFRailway EN50121-4,IEC60571Railway Fire Protection EN45545-2Safety EN60950-1,UL60950-1Shock IEC60068-2-27,IEC61373,EN50155Vibration IEC60068-2-64,IEC61373,EN50155DeclarationGreen Product RoHS,CRoHS,WEEEMTBFTime304,998hrsStandards Telcordia(Bellcore),GBWarrantyWarranty Period3yearsDetails See /warrantyPackage ContentsDevice1x V2426A Series computerInstallation Kit1x wall-mounting kitDocumentation1x document and software CD1x quick installation guide1x warranty cardDimensionsOrdering InformationModel Name CPU Memory(Default)OS CFast(CTO)Backup CFast(CTO)SSD/HDD Tray(CTO)PeripheralExpansionSlotsOperatingTemp.ConformalCoatingV2426A-C2Celeron1047UE4GB1(Optional)1(Optional)1(Optional)2-25to55°C–V2426A-C2-T Celeron1047UE4GB1(Optional)1(Optional)1(Optional)2-40to70°C–V2426A-C2-CT-T Celeron1047UE4GB1(Optional)1(Optional)1(Optional)2-40to70°C✓V2426A-C7Core i7-3517UE4GB1(Optional)1(Optional)1(Optional)2-25to55°C–V2426A-C7-T Core i7-3517UE4GB1(Optional)1(Optional)1(Optional)2-40to70°C–V2426A-C7-CT-T i7-3517UE4GB1(Optional)1(Optional)1(Optional)2-40to70°C✓V2426A-C2-W7E Celeron1047UE4GB8GB1(Optional)1(Optional)2-25to55°C–V2426A-C2-T-W7E Celeron1047UE4GB8GB1(Optional)1(Optional)2-40to70°C–V2426A-C7-T-W7E i7-3517UE4GB8GB1(Optional)1(Optional)2-40to70°C–Accessories(sold separately)Battery KitsRTC Battery Kit Lithium battery with built-in connectorCablesCBL-M12XMM8PRJ45-BK-100-IP67M12-to-RJ45Cat-5E UTP gigabit Ethernet cable,8-pin X-coded male connector,IP67,1mCBL-M12(FF5P)/Open-100IP67A-coded M12-to-5-pin power cable,IP67-rated5-pin female M12connector,1mA-CRF-RFQMAM-R2-50Wi-Fi Extension Cable QMA(male)to SMA(male)adapter with50cm cable x1A-CRF-QMAMSF-R2-50Cellular Extension Cable QMA(male)to SMA(female)adapter with50cm cable x1A-CRF-CTPSF-R2-50GPS Extension Cable TNC to SMA(female)adapter with50cm cable x1ConnectorsM12A-5PMM-IP685-pin male circular threaded D-coded M12USB connector,IP68M12X-8PMM-IP678-pin male X-coded circular threaded gigabit Ethernet connector,IP67M12A-5P-IP68A-coded screw-in sensor connector,female,IP68,4.05cmM12A-8PMM-IP678-pin male circular threaded A-codes M12connector,IP67-rated(for field-installation)Power AdaptersPWR-24270-DT-S1Power adapter,input voltage90to264VAC,output voltage24V with2.5A DC loadPower CordsPWC-C7AU-2B-183Power cord with Australian(AU)plug,2.5A/250V,1.83mPWC-C7CN-2B-183Power cord with two-prong China(CN)plug,1.83mPWC-C7EU-2B-183Power cord with Continental Europe(EU)plug,2.5A/250V,1.83mPWC-C7UK-2B-183Power cord with United Kingdom(UK)plug,2.5A/250V,1.83mPWC-C7US-2B-183Power cord with United States(US)plug,10A/125V,1.83mWall-Mounting KitsV2400Isolated Wall Mount Kit Wall-mounting kit with isolation protection,2wall-mounting brackets,4screwsStorage KitsFK-75125-02Storage bracket,4large silver screws,4soft washers,4small sliver bronze screws,1SATA powercable,4golden spacers(only for the V2406and V2426)Expansion ModulesEPM-DK022mini PCIe slots for wireless modules,-25to55°C operating temperatureEPM-DK03GPS receiver with2mini PCIe slots for wireless modules,-25to55°C operating temperatureEPM-30322isolated RS-232/422/485ports with DB9connectors,-40to70°C operating temperatureEPM-31122isolated CAN ports with DB9connectors,-25to55°C operating temperatureEPM-34388DIs and8DOs,with3kV digital isolation protection,2kHz counter,-40to70°C operating AntennasANT-WDB-ARM-02 2.4/5GHz,omni-directional rubber duck antenna,2dBi,RP-SMA(male)ANT-LTE-ASM-02GPRS/EDGE/UMTS/HSPA/LTE,omni-directional rubber duck antenna,2dBiANT-WCDMA-AHSM-04-2.5m GSM/GPRS/EDGE/UMTS/HSPA,omni-directional magnetic base antenna,4dBiANT-GPS-OSM-05-3M Active GPS antenna,26dBi,1572MHz,L1band antenna for GPSANT-LTEUS-ASM-01GSM/GPRS/EDGE/UMTS/HSPA/LTE,omni-directional rubber duck antenna,1dBiWireless Antenna CableA-CRF-MHFQMAF-D1.13-14.2Digital Interface Mini card internal antenna with QMA connector x1,locking washer x1,O-ring x1,nutx1Din Rail Mounting kitDK-DC50131DIN-rail mounting kit,6screwsWireless PackagesEPM-DK3G Package Gemalto PHS8-P3G mini card with digital interface,internal antenna,installation bracket,screws,locking washers,O-rings,nuts,and thermal padEPM-DK Wi-Fi Package SprakLAN WPEA-121N Wi-Fi mini card with digital interface,internal antenna,installation bracket,screws,locking washers,O-rings,nuts,and thermal padEPM-DK LTE-EU Package Gemalto PLS8-E LTE mini card with digital interface,internal antenna,installation bracket,screws,locking washers,O-rings,nuts,and thermal padEPM-DK LTE-US Package Gemalto PLS8-X LTE mini card with digital interface,internal antenna,installation bracket,screws,locking washers,O-rings,nuts,and thermal padWireless Antenna Packages3G Antenna Package3G external antenna with QMA(male)to SMA(female)adapter and50-cm cables x2,3G externalantenna with SMA connectors x2,cellular extension cableLTE-US Antenna Package LTE-US external antenna with QMA(male)to SMA(female)adapter and50-cm cables x2,LTE-USexternal antenna with SMA connector x2,cellular extension cableLTE-EU Antenna Package LTE-EU external antenna with QMA(male)to SMA(female)adapter with50-cm cables x2,LTE-EUexternal antenna with SMA connectors x2,cellular extension cableWi-Fi Antenna Package External antenna with QMA internal cable,Wi-Fi extension cableGPS Antenna Package External antenna with TNC to SMA(female)adapter and a50-cm cable,SMA antenna(26dBi,1572MHz,L1band),GPS extension cable©Moxa Inc.All rights reserved.Updated Apr16,2019.This document and any portion thereof may not be reproduced or used in any manner whatsoever without the express written permission of Moxa Inc.Product specifications subject to change without notice.Visit our website for the most up-to-date product information.。
Engineering DrawingsDrawing Frames and Text MacrosPrevious issuesVW 01014: 1971-05, 1984-03, 1992-08, 1998-04, 1998-10, 2000-09, 2001-03, 2002-06, 2003-11,2006-01, 2007-01, 2008-03, 2009-04, 2010-05, 2010-12, 2011-05, 2011-12ChangesThe following changes have been made compared with VW 01014: 2011-12:–Technical responsibility changes–Section 1 "Scope of application": the note concerning the application in section 6 has been re‐moved. It now appears as NOTE 3 in section 1–Section 2.3 "PDM drawing frame": English legal notice updated and table of existing PDM draw‐ing frame formats in KVS added.–Section 3.7 "Volkswagen AG Know-How Protection": text macro NO-A12 added ContentsPageScope .........................................................................................................................4Drawing frames ..........................................................................................................5Drawing frame for Design Engineering (series-production drawing), see Figure 1....................................................................................................................................5Type approval drawing frame, see Figure 2 ...............................................................6PDM drawing frame, see Figure 3 .............................................................................7Drawing frames for operating equipment ...................................................................8Basic drawing frame for operating equipment, see Figure 4 ......................................8Drawing frame for method plan, see Figure 5 ............................................................9Text macros .............................................................................................................10Basic title block .. (10)122.12.22.32.42.4.12.4.233.1Group StandardVW 01014Issue 2012-09Class. No.:02115Descriptors:drawing frames, text macro, standard frame, drawingVerify that you have the latest issue of the Standard before relying on it.This electronically generated Standard is authentic and valid without signature.The English translation is believed to be accurate. In case of discrepancies, the German version is alone authoritative and controlling.Page 1 of 43Confidential. All rights reserved. No part of this document may be provided to third parties or reproduced without the prior consent of the Standards Department of a Volkswagen Group member.This Standard is available to contracting parties solely via the B2B supplier platform .© Volkswagen AktiengesellschaftVWNORM-2011-08gTitle blocks for drawings with restrictions on use .....................................................11Title block for layout drawings (ENT) > A0 ...............................................................12Symbol for European projection method ..................................................................13Change block for formats > A0 .................................................................................13Tolerancing principle as per VW 01054 ...................................................................13Volkswagen Group know-how protection .................................................................13Drawing field ............................................................................................................14Lower left corner of drawing for formats > A0 ..........................................................14Left drawing edge for formats > A0 ..........................................................................14Explanation of parenthesized dimensions for formats > A0 (lower left corner ofdrawing field) ............................................................................................................14References for formats > A0 ....................................................................................15Migration from CATIA V4 to CATIA V5 ....................................................................15Parts marking ...........................................................................................................15Part number assignment drawn / symmetrically opposite ........................................15Note on utilization of scrap material .........................................................................16NO-F1 Drawings with multiple sheets ......................................................................16Repeating and unchanging notes, mostly on body components ..............................16Drawing only for the company stated .......................................................................16Note on parts which are subject to build sample approval (BMG) ...........................17Notes on testing as per Technical Supply Specifications (TL) .................................17Note on type approval ..............................................................................................17Note on undimensioned design models in the data record ......................................17Note on open-air weathering ....................................................................................17Note on model approval ...........................................................................................17Note on master model ..............................................................................................18Note on second original, font size 7 mm ..................................................................18Note on second original, font size 3,5 mm ...............................................................18Note on heavy-duty component ...............................................................................18Note on mandatory type approval ............................................................................19Note on avoidance of hazardous substances ..........................................................19Note on other relevant drawings ..............................................................................19Note on undimensioned bend and trim radii ............................................................19Note on simplified representation .............................................................................19Note on flawless condition of surfaces .....................................................................19Note on material for form tool in grain area ..............................................................20Table for RPS ...........................................................................................................20Note on emission behavior .......................................................................................20Note on length dimensions to be measured up to relevant functional datum plane ..................................................................................................................................20Note on related tolerances for nominal dimension ranges up to relevant functional datum plane .............................................................................................................21Note on tolerances of surfaces as compared to the data record and defined RPS..................................................................................................................................21Note on tolerances of marked surfaces as compared to the data record anddefined RPS .............................................................................................................21Note on tolerances of marked and limited surfaces as compared to the datarecord and defined RPS ...........................................................................................21Note on tolerances of marked edges as compared to the data record and defined RPS ..........................................................................................................................21Note on alternative materials and surface protection types .....................................22Note on color and grain .. (22)3.23.33.43.53.63.744.14.24.34.44.54.64.74.84.94.104.114.124.134.144.154.164.174.184.194.204.214.224.234.244.254.264.274.284.294.304.314.324.334.344.354.364.374.38Page 2VW 01014: 2012-09Note on temperature resistance ...............................................................................22Note on color consistency ........................................................................................22Note on lightfastness ................................................................................................22Note on fixing, clamping and contact surface ..........................................................23Note on related finished part drawing ......................................................................23Note on material specifications, complete ................................................................23Note on material specifications, subdivided .............................................................24Note on optional welding technology .......................................................................24Note on flammability features ...................................................................................24Note on table containing gear tooth data .................................................................25Note on weight indication .........................................................................................25Note on amine emission of foam parts .....................................................................25Note on cleanliness requirements for engine components ......................................25Countersinks for internal threads .............................................................................26Testing of rolled bushings ........................................................................................26Table for limit dimensions ........................................................................................26Detail drawing for radius under screw head, mostly for standard part drawings (27)Test specification for disk wheels .............................................................................27Test specification for brake drums ...........................................................................28General tolerances for castings ...............................................................................28General tolerances for forgings ................................................................................29Coordinate dimensioning for tubes and bars ...........................................................30Bill of materials for layout drawings (ENT) ...............................................................30Distribution list for layout drawings (ENT) ................................................................31Text macros for operating equipment ......................................................................31Title block for individual part .....................................................................................31Note on pass direction, left .......................................................................................32Note on pass direction, right ....................................................................................32Title block for operating equipment label .................................................................32General tolerances for nominal dimensions without tolerance specification ............32Note on simplified drawing specifications on surface roughnesses .........................33Permissible deviations for nominal sizes without tolerance specification onweldments ................................................................................................................33Permissible deviations for nominal dimensions without tolerance specificationson flame-cut parts ....................................................................................................33Note on parts used ...................................................................................................34Note on rolled flame-cutting template plots ..............................................................34Note on "Add ½ kerf" ................................................................................................34Note on "designed" and "symmetrical opposite" ......................................................34Text macros for the "3D drawingless process" (3DZP – German abbreviation) ......35VW copyright ............................................................................................................35Note on restriction on use ........................................................................................35Note on type approval documentation and type approval number ...........................35Draft number ............................................................................................................36Note on engineering project number ........................................................................36Note on safety documentation .................................................................................36Recycling requirements as per VW 91102 ...............................................................36All dimensions apply to the finished part including surface protection .....................36Surface roughness as per VW 13705 and VDA 2005 ..............................................36Surface roughness as per VW 13705 and VDA 2005 (reference without symbol) (37)4.394.404.414.424.434.444.454.464.474.484.494.504.514.524.534.544.554.564.574.584.594.604.614.6255.15.25.35.45.55.65.75.85.95.105.115.1266.16.26.36.46.56.66.76.86.96.9.1Page 3VW 01014: 2012-09Surface roughness as per VW 13705 and VDA 2005 (reference with symbol) .......37Surface roughness as per VW 13705 and VDA 2005 (reference with symbol,collective specification 1) .........................................................................................38Surface roughness as per VW 13705 and VDA 2005 (reference with symbol,collective specification 2) .........................................................................................39Workpiece edges as per VW 01088 .........................................................................39Workpiece edges as per VW 01088 (reference without symbol) .............................40Workpiece edges as per VW 01088 (reference with symbol) ..................................40Workpiece edges as per VW 01088 (reference with symbol, collectivespecification 1) .........................................................................................................41Workpiece edges as per VW 01088 (reference with symbol, collectivespecification 2) .........................................................................................................42Applicable documents ..............................................................................................426.9.26.9.36.9.46.106.10.16.10.26.10.36.10.47ScopeThis standard applies to the computer-aided graphical representation and presentation of drawing templates, standard frames and text macros for drawings within the Volkswagen Group.NOTE 1 The standardized text macros are subject to drawing standard regulations and are centrally managed by the "Virtual Systems and Standardization" department.NOTE 2 All drawing frames and text macros shown here are available in the appropriate standard system environment of the CAD systems CATIA and Creo Elements/Pro (formerly PRO/E). The PDM drawing frames are also available as IsoDraw and Excel templates in the KVS, and also as Catia V5templates.NOTE 3 The text macros shown in section 6 are for the drawingless process only. The creator and the user of the data must agree whether their process chain allows for the use of documents as per the 3DZP method, and whether this is permissible.1Page 4VW 01014: 2012-09Drawing framesDrawing frame for Design Engineering (series-production drawing), see Figure 1Figure 1 – Drawing frame for Design Engineering (series-production drawing)2 2.1Page 5VW 01014: 2012-09Type approval drawing frame, see Figure 2Figure 2 – Type approval drawing frame2.2 Page 6VW 01014: 2012-09PDM drawing frame, see Figure 3Figure 3 – PDM drawing frame2.3 Page 7VW 01014: 2012-09Drawing frames for operating equipmentBasic drawing frame for operating equipment, see Figure 4Figure 4 – Basic drawing frame for operating equipment2.4 2.4.1Page 8VW 01014: 2012-09Drawing frame for method plan, see Figure 5Figure 5 – Drawing frame for method plan2.4.2 Page 9VW 01014: 2012-09Text macrosBasic title blockFigure 6 – Code no: NO-A1Basic title block for formats > A03 3.1Page 10VW 01014: 2012-09Title blocks for drawings with restrictions on useFigure 7 – Code no: NO-A7 A3The title block may only be used if supplier original drawings are used as modified finished part drawings.Notes on the usage of these title blocks see VW 01058.3.2Title block for layout drawings (ENT) > A0Figure 8 – Code no: NO-A3ENT = Draft3.3Symbol for European projection methodFigure 9 – Code no: NO-A5Change block for formats > A0Figure 10 – Code no: NO-A6Tolerancing principle as per VW 01054Figure 11 – Code no: NO-A11Volkswagen Group know-how protectionFigure 12 – Code no: NO-A123.4 3.5 3.6 3.7Drawing fieldLower left corner of drawing for formats > A0Figure 13 – Code no: NO-B1Left drawing edge for formats > A0Figure 14 – Code no: NO-B3Explanation of parenthesized dimensions for formats > A0 (lower left corner of drawingfield)Figure 15 – Code no: NO-B644.1 4.2 4.3References for formats > A0Figure 16 – Code no: NO-B7Migration from CATIA V4 to CATIA V5Figure 17 – Code no: NO-B8Parts markingFigure 18 – Code no: NO-E2Part number assignment drawn / symmetrically oppositeFigure 19 – Code no: NO-E54.4 4.5 4.6 4.7Note on utilization of scrap materialFigure 20 – Code no.:NO-F1 Drawings with multiple sheetsFigure 21 – Code no: NO-F2Repeating and unchanging notes, mostly on body componentsFigure 22 – Code no: NO-F3Drawing only for the company statedFigure 23 – Code no: NO-F4 (do not use for new designs!)4.8 4.9 4.10 4.11Note on parts which are subject to build sample approval (BMG)Figure 24 – Code no: NO-F5Notes on testing as per Technical Supply Specifications (TL)Figure 25 – Code no: NO-F6Note on type approvalFigure 26 – Code no: NO-F7Note on undimensioned design models in the data recordFigure 27 – Code no: NO-F8Note on open-air weatheringFigure 28 – Code no: NO-F9Note on model approvalFigure 29 – Code no: NO-F104.12 4.13 4.14 4.15 4.16 4.17Note on master modelFigure 30 – Code no: NO-F11Note on second original, font size 7 mmFigure 31 – Code no: NO-F12Note on second original, font size 3,5 mmFigure 32 – Code no: NO-F13Note on heavy-duty componentFigure 33 – Code no: NO-F144.18 4.19 4.20 4.21Note on mandatory type approvalFigure 34 – Code no: NO-F15Note on avoidance of hazardous substancesFigure 35 – Code no: NO-F16Note on other relevant drawingsFigure 36 – Code no: NO-F17Note on undimensioned bend and trim radiiFigure 37 – Code no: NO-F18Note on simplified representationFigure 38 – Code no: NO-F19Note on flawless condition of surfacesFigure 39 – Code no: NO-F204.22 4.23 4.24 4.25 4.26 4.27Note on material for form tool in grain areaFigure 40 – Code no: NO-F22Table for RPSFigure 41 – Code no: NO-F23Note on emission behaviorFigure 42 – Code no: NO-F24Note on length dimensions to be measured up to relevant functional datum planeFigure 43 – Code no: NO-F254.28 4.29 4.30 4.31Note on related tolerances for nominal dimension ranges up to relevant functional datumplaneFigure 44 – Code no: NO-F26Note on tolerances of surfaces as compared to the data record and defined RPSFigure 45 – Code no: NO-F27Note on tolerances of marked surfaces as compared to the data record and defined RPSFigure 46 – Code no: NO-F28Note on tolerances of marked and limited surfaces as compared to the data record anddefined RPSFigure 47 – Code no: NO-F29Note on tolerances of marked edges as compared to the data record and defined RPSFigure 48 – Code no: NO-F304.32 4.33 4.34 4.35 4.36Note on alternative materials and surface protection typesFigure 49 – Code no: NO-F31Note on color and grainFigure 50 – Code no: NO-F32Note on temperature resistanceFigure 51 – Code no: NO-F33Note on color consistencyFigure 52 – Code no: NO-F35Note on lightfastnessFigure 53 – Code no: NO-F364.37 4.38 4.39 4.40 4.41Note on fixing, clamping and contact surfaceFigure 54 – Code no: NO-F37Note on related finished part drawingFigure 55 – Code no: NO-F38Note on material specifications, completeFigure 56 – Code no: NO-F394.42 4.43 4.44Note on material specifications, subdividedFigure 57 – Code no: NO-F40Note on optional welding technologyFigure 58 – Code no: NO-F41Note on flammability featuresFigure 59 – Code no: NO-F424.45 4.46 4.47Note on table containing gear tooth dataFigure 60 – Code no: NO-F43Note on weight indicationFigure 61 – Code no: NO-F44Note on amine emission of foam partsFigure 62 – Code no: NO-F45Note on cleanliness requirements for engine componentsFigure 63 – Code no: NO-F464.48 4.49 4.50 4.51Countersinks for internal threadsFigure 64 – Code no: NO-G0Testing of rolled bushingsFigure 65 – Code no: NO-G1Table for limit dimensionsFigure 66 – Code no: NO-G24.52 4.53 4.54Detail drawing for radius under screw head, mostly for standard part drawingsFigure 67 – Code no: NO-G4Test specification for disk wheelsFigure 68 – Code no: NO-G64.55 4.56Test specification for brake drumsFigure 69 – Code no: NO-G7General tolerances for castingsFigure 70 – Code no: NO-G84.57 4.58General tolerances for forgingsFigure 71 – Code no: NO-G94.59Coordinate dimensioning for tubes and barsFigure 72 – Code no: NO-G10Bill of materials for layout drawings (ENT)Figure 73 – Code no: NO-H14.60 4.61Distribution list for layout drawings (ENT)Figure 74 – Code no: NO-H2Text macros for operating equipmentTitle block for individual partFigure 75 – Code no: R001 individual part4.62 55.1Note on pass direction, leftFigure 76 – Code no: R002 pass direction, leftNote on pass direction, rightFigure 77 – Code no: R003 pass direction, rightTitle block for operating equipment labelFigure 78 – Code no: R004 operating equipment labelGeneral tolerances for nominal dimensions without tolerance specificationFigure 79 – Code no: R005 machining operation5.2 5.35.45.5Note on simplified drawing specifications on surface roughnessesFigure 80 – Code no: R006 surfacesPermissible deviations for nominal sizes without tolerance specification on weldmentsFigure 81 – Code no: R007 welded partsPermissible deviations for nominal dimensions without tolerance specifications on flame-cut partsFigure 82 – Code no: R008 flame-cut parts5.6 5.75.8Note on parts usedFigure 83 – Code no: R009 parts usedNote on rolled flame-cutting template plotsFigure 84 – Code no: R010 flame-cutting templateNote on "Add ½ kerf"Figure 85 – Code no: R011 kerfNote on "designed" and "symmetrical opposite"Figure 86 – Code no: R012 symmetrical opposite5.9 5.105.115.12Text macros for the "3D drawingless process" (3DZP – German abbreviation)The following text macros are not created in CAD systems, but only in the PDM system KVS.The design engineer must add the necessary parameters to the text macros.VW copyrightFigure 87 – Code no: NOZ-01Note on restriction on useLegend P01Company nameFigure 88 – Code no: NOZ-02Note on type approval documentation and type approval numberLegend P01Type approval doc. and type approval numberFigure 89 – Code no: NOZ-036 6.16.26.3Draft numberLegend P01Draft numberFigure 90 – Code no: NOZ-04Note on engineering project numberLegend P01Engineering project numberFigure 91 – Code no: NOZ-05Note on safety documentationLegend P01TLD number (technical guideline for documentation – German abbreviation)Figure 92 – Code no: NOZ-06Recycling requirements as per VW 91102Figure 93 – Code no: NOZ-07All dimensions apply to the finished part including surface protectionFigure 94 – Code no: NOZ-08Surface roughness as per VW 13705 and VDA 2005The design engineer must add the required parameters to the symbols shown here (e.g., Rz value).Two types of text macros (with and without graphical representation) have been defined. Variant NOZ-09 is a reference to Standard VW 13705, additional information possible, but restricted. Variants NOZ-09-01 a to f are reserved for the main surface roughness value. Due to system restrictions,identical symbols cannot be used more than once. For this reason, the symbols in section 6.9.3 and6.4 6.56.66.76.86.9section 6.9.4 must be used for cases of multiple use. If surface roughness values are added as a note, the text macros are placed beneath each other instead of beside each other. This deviating representation has been released for the 3DZP drawingless process.Surface roughness as per VW 13705 and VDA 2005 (reference without symbol)Figure 95 – Code no: NOZ-09Surface roughness as per VW 13705 and VDA 2005 (reference with symbol)Figure 96 – Code no: NOZ-09-01-aFigure 97 – Code no: NOZ-09-01-bFigure 98 – Code no.: NOZ-09-01-cLegend P01Machining allowance (numerical value in mm)P02Production processP03Surface parameter and numerical valueP04if applicable, additional requirement as per VDA 2005P05if applicable, additional requirement as per VDA 2005P06if applicable, second requirement on surface texture (surface parameter,numerical value)P07Specification of the surface groovesLegend P01Letter for simplified drawing specification. Method defined in section "simplified specifi‐cation" in VDA 2005Figure 99 – Code no: NOZ-09-01-d6.9.16.9.2Figure 100 – Code no: NOZ-09-01-e Figure 101 – Code no: NOZ-09-01-fSurface roughness as per VW 13705 and VDA 2005 (reference with symbol, collectivespecification 1)Figure 102 – Code no: NOZ-09-02-aFigure 103 – Code no: NOZ-09-02-bFigure 104 – Code no: NOZ-09-02-cLegend P01Machining allowance (numerical value in mm)P02Production processP03Surface parameter and numerical valueP04if applicable, additional requirement as per VDA 2005P05if applicable, additional requirement as per VDA 2005P06if applicable, second requirement onsurface texture (surface parameter,numerical value) P07Specifica‐tion of thesurface groovesCode no.:Legend P01Letter for simplified drawing specification. Method defined in section "simplified specifi‐cation" in VDA 2005Figure 105 – NOZ-09-02-dFigure 106 – Code no: NOZ-09-02-e Figure 107 – Code no: NOZ-09-02-f6.9.3。
蒙特利尔认知评估替代版(MoCA v7.2)使用指导和评分手册蒙特利尔认知评估(the Montreal Cognitive Assessment, MoCA)旨在快速筛查轻度认知障碍。
MoCA评估不同的认知领域:注意与专注、执行功能、记忆、语言、视结构技能、概念思维、计算和定向。
完成MoCA测试约需要10分钟。
测试最高得分为30分,大于等于26分为正常水平。
[译者注:使用MoCA原版v7.1对同一受试者进行多次评定时,可能因学习效应而难以准确评估受试者认知功能的变化。
因此,MoCA作者设计了两个替代版本v7.2和v7.3。
本手册配合MoCA 替代版本v7.2使用。
除v7.1、v7.2和v7.3中不同的测试内容外,本版译文将3个评分手册中的其他部分保持一致,以保证良好的测试基准。
考虑到文化和语言差异,本次对原版中的5词记忆、词语流畅性、抽象三个部分做了少量调整。
本译文参考了之前各中译本,并由北京(国家康复辅具研究中心)-广州(中山大学附属第三医院)两地协作完成,以更好地符合不同地区的语言习惯。
]1.交替连线:使用指导:检查者对受试者介绍如下:“数字和汉字都有自己的顺序。
请您画一条线,从一个数字连到一个汉字,要按照数字和汉字的顺序来连接。
从这里开始[指着数字①],画一条线连到汉字(甲),然后连到数字②,这样连下去。
最后连到这里[指着汉字(戊)]。
”评分:如果受试者能够按照“①−甲−②−乙−③−丙−④−丁−⑤−戊”的次序顺利完成画线,而且连线之间没有交叉,则评为1分。
如果出现任何错误,并且没有立即自行改正,则评为0分。
2.视结构技能(长方体):使用指导:检查者指着MoCA测试表上的长方体图案,并进行如下介绍:“把这个图案画下来,要尽量画得一模一样。
请画在这个图案下面空白的地方。
”评分:如能正确完成图案绘制,则评为1分。
• 画出的图案必须是三维图形• 要画出长方体所有的线• 没有增加额外的线• 所有水平的线基本平行• 画出的图案必须明显地是一个长方体,也就是说,较短的垂直边的长度不能超过较长的水平边长度的3/4如果违反了上述标准的任何一条,则评为0分。
NER V:A Parallel Processor for Standard Genetic Algorithms∗R.Hauser,R.M¨a nner M.MakhaniokLehrstuhl f¨u r Informatik V Institute for Engineering Cybernetics Universit¨a t Mannheim Belarus Academy of SciencesD–68131Mannheim220012Minsk,Rep.BelarusAbstractThis paper describes the implementation of a stan-dard genetic algorithm(GA)on the MIMD multipro-cessor system NERV.It discusses the special features of the NERV hardware which can be utilized for an ef-ficient implementation of a GA without changing the structure of the algorithm.1IntroductionIn recent years genetic algorithms(GAs)[1,2]have found considerable interest as a means of solving op-timization problems.They do this by exploiting ideas which are drawn from natural evolution.The basic idea is tofirst choose a representation for a solution to a given optimization problem.In the following we will assume that the representation will be in the form of a bit string although other representations are pos-sible.Then several operators are iteratively applied to a set of solutions.This improves their quality.The terminology of GAs is mostly drawn from biology,so the set of solutions is called the population,the quality of a given solution thefitness,one solution is called an individual etc.The basic operators of GA are modeled after their natural counterpart and consist of selection, crossover and mutation.1.1SelectionThefitness of each individual in the population is evaluated.Thefitness of each individual relative to the mean value of all other individuals gives the prob-ability with which this individual is reproduced in the next generation.Therefore the frequency h i of an in-dividual in the next generation is given by∗This work has been supported by the Deutsche Forschungs-gemeinschaft(DFG)under grants Ma1150/8–1and436WER 113–1–3.h i∝f i¯f(1) where f i is thefitness of individual i and¯f the average over allfitness values.The effect of this procedure is that individuals with a higher–than–averagefitness become more frequent in the population.Individuals with worsefitness will be reproduced with a smaller probability and therefore vanish from the population.1.2CrossoverThe crossover operator takes two individuals from the population and combines them to a new one.The most general form is uniform crossover from which the so called one–point crossover and two–point crossover can be derived.First two individuals are selected.The strategy for this selection can again vary.A popular one is to select thefirst individual according to itsfit-ness and the second one by random.Then a crossover mask M j,j=1,...,L,where L is the length of the chromosome,is generated randomly.A new individual is generated which takes its value at position j from thefirst individual if M j=1and from the second one if M j=0.One gets,e.g.,the usual one–point crossover operator if M j=1for j=1,...,k,and M j=0for j=k+1,...,L.The crossover operator is applied with probability P C.The old individual is simply copied to the new population if no crossover happens.The reasoning behind this operator is that it might happen that two individuals each have found an optimum in different subspaces and the combina-tion of these solutions gives also a good solution in the combined subspaces.1.3MutationEach bit of an individual is changed(e.g.inverted) with probability P M.This probability is a parameter of the algorithm.One important motivation for thisoperator is the case where all individual of a popu-lation have bit k set to zero.Neither selection nor crossover is able to change to this bit to the other value.If the optimal solution happens to lie in the subspace of the configuration space where bit k=1 then this optimum can never be reached.All three steps above are iterated for a given num-ber of generations(or until one can no longer expect a better solution).2Parallel Genetic Algorithms It has long been noted that genetic algorithms are well suited for parallel execution.Several parallel implementations of GAs have been demonstrated on a variety of multiprocessor systems.This includes MIMD machines with global shared memory[11]as well as message passing systems like transputers[3,4] and hypercubes[8]as well as SIMD architectures[6,7] like the Connection Machine.It is easy to see that the following steps in the al-gorithm can be trivially parallelized:1.Evaluation offitness function Thefitness of eachindividual can be computed independently from all others.This could give a linear speedup with the number of processing elements.The maxi-mum speedup can be achieved if the number of processing elements is equal to the number of in-dividuals in the population.It might be possible, of course,that e.g.thefitness evaluation of each individual can itself be parallelized.2.Crossover If we choose to generate each individualof the next generation by applying the crossover operator,we can do this operation in parallel for each new individual.The alternative would be to apply crossover and to put the resulting in-dividual in the existing population where it re-places e.g.an individual with a badfitness.This would obviously introduce race conditions when applied concurrently to several individuals.One processing element may start with an old individ-ual which is replaced by a new one during pro-cessing.For these reasons we will use thefirst variant.Again a linear speedup with the number of processing elements can be achieved as long as the number of processing elements is less than or equal to the number of individuals.3.Mutation The mutation operation can be appliedto each bit of each individual independently.Be-sides from the bit value the only information needed is the global parameter P M.It should be noted that it is usually not possible to gain a larger speedup for steps1)and2)because of data dependencies between the different steps of the algorithm.This can be seen e.g.for step2:If the crossover operation selects one of the parents it does this according to its relativefitness.However this can only be done if thefitness values of all other individuals are already computed so that the mean value is available.Therefore the three steps will in general be done one after each other.Up to now we did not assume any concrete imple-mentation for our parallel processing system.We did not mention howfine grained our parallel processing can be and how the memory system is organized.Es-pecially the last point can have an enormous impact on the resulting performance of the system.The problem is that we always assumed that all data are available in a global shared memory.The crossover procedure e.g.takes two arbitrary individuals to create a new one.Therefore it must have access to the whole pop-ulation.If we have a multiprocessor system with local memory only,we mustfirst transfer the whole popula-tion to all processing elements before we can proceed. In the following we will point out what kind of data each processing element must access to perform the different steps of the algorithm:1.Fitness evaluation:Each processing element musthave access only to those individuals whosefit-ness it is going to compute.In the optimal case (number of processing elements=number of indi-viduals)this means one individual.However the result of this computation is needed by all other processing elements since it is used in computing the mean value of all function evaluations which is needed in step2.2.Crossover:Each processing element which createsa new individual must have access to all otherindividuals since each one may be selected as a parent.Furthermore to make this selection the procedure needs allfitness values from step1. 3.Mutation:As in step1each processing elementneed only the individual(s)it deals with.As men-tioned above the parallelization could be even morefine grained as in steps1and2,in which case each processing element would need only one bit of each individual.This could usually only be achieved by a SIMD style machine.Unfortunately a multiprocessor system with a glob-ally shared memory using e.g.a bus system is usually restricted to only a few processing elements.Other-wise the bus will become a serious bottleneck.Other interconnection networks like a crossbar switch do not scale up very well with the number of processing ele-ments.Therefore many systems with a large number of processors use only local memory instead and pro-vide communication e.g.via message passing.Algo-rithms where the processing elements are only loosely coupled perform well on these machines and the num-ber of processing elements can often be scaled to sev-eral hundreds of processors.The drawback of these systems when applied to genetic algorithms is that some of the data must be passed to all processors.This data transfer may take an enormous amount of time,especially if there is no direct connection between two arbitrary processors but the data must be passed on by several interme-diate nodes.This is the case e.g.for a transputer system where a processor may be connected to only four neighboring processing elements.This seems to be the reason that many implemen-tors of parallel genetic algorithms have decided to change the standard algorithm in several ways.One popular approach is the partitioning of the population into several subpopulations[5,9].The evo-lution of each subpopulation is handled independently from each other.From time to time there is however some interchange of genetic material between different subpopulations.Sometimes a topology is introduced on the population,so that individuals can only inter-act with nearby chromosomes in their neighborhood [3,6,10,12,13].All these methods obviously reduce the coupling between different processing elements. Therefore an efficient implementation on multiproces-sor systems with local memory is possible.Some authors argue that their changes to the origi-nal algorithm actually improve the performance.E.g. the splitting in different subpopulation allows each subpopulation to evolve to a different suboptimum without interference from other subpopulations.The danger that the whole population evolves into a sub-optimal solution is greatly reduced.The combination of two different subpopulation with good suboptimal solutions may result in further improvement.The restriction to use only individuals taken from a given neighborhood can be justified by biological reasons.Here an individual is obviously not able to choose an arbitrary individual from the whole popula-tion.Furthermore the separation of subpopulations is often considered as an essential point for the evolution of new species.As long as there is a constantflow of genetic material the two populations will not evolve in two different directions.Despite these arguments we consider it as a draw-back that not the original standard GA could be ef-ficiently implemented.The reason is that a genetic algorithm is often a computational intensive task.It often depends critically on the given parameters used for the simulation(e.g.P M and P C).There are some theoretical results about how to choose these parame-ters or the representation of a given problem,but most of them deal with the standard GA only.Even then one often has to try several possibilities to adjust the parameters optimally.Therefore it is desirable that the standard GA can be parallelized and simulated efficiently.If one changes the algorithm itself in the process of paral-lelization the theoretical assumptions will usually no longer apply.Such a simulation e.g.cannot be directly used to support some theoretical results.Of course this does not speak against the changed algorithms, it simply argues that for a fast simulation system it should be possible to parallelize the standard version of the algorithms so that the results can be directly compared to a single processor version.In the following we present some results of such a parallelization on the multiprocessor system NERV. We will show that only a small number of properties are required to get an efficient parallel program which implements the standard GA.3The NER V multiprocessor system The NERV multiprocessor[14]is a system which has been originally designed for the efficient simula-tion of neural networks.The general layout can be seen in Fig.1.It is based on a standard VMEbus system[15]which has been extended to support sev-eral special functions.Each processing elements con-sists of a MC68020processor with static local memory (currently512kB).Each VME board contains several processor boards.The NERV system can therefore be considered as a MIMD machine,since each proces-sor may run a different program in its local memory. However usually the system is run in a SIMD style mode,which means that the same program is down-loaded to each processing element,while the data to be processed are distributed among the boards.The whole multiprocessor is connected to a work-station via a parallel interface.Programs can be trans-parently downloaded and run from each workstationVMEboardsub-board Figure1:General layout of the NERV multiprocessorglobal() which modifies the address in such a way that it now is part of the broadcast address region.The pointer returned by this function can now be used to access the array.Whenever we read a value from the array we simply get the local value.No other processors or the bus are involved.However if we write into any element of this array a broadcast will automatically be initiated since the address is part of the broadcast region.Therefore this element will be updated on all other processors.Note however that there is no ex-plicit synchronization between the processors.If two processors update the same element,the last one will win.This will not happen if e.g.each processor is only allowed to update a certain range of array elements.Here is an example C fragment:int vector[100];int*p;int a;p=mk_global(vector);/*p is now pointer intothe broadcast addressregion*/a=p[10];/*this is a read fromlocal memory*/p[50]=5;/*this is an implicitbroadcast transfervector[50]on allprocessors will nowcontain the value5*/ If we can restrict our communication to broadcast transfers only,we have a very efficient way for updat-ing global information,although the information itself will be duplicated on each processor’s local memory. The last point solves the bottleneck problem usually associated with a single global shared memory.The first one reduces the communication time between the processing elements.Note that a broadcast transfer facility cannot be implemented with such efficiency in a system without a global bus.processor elements 1..N4 GB3 GB1 GB2 GB4 GB 3 GB 1 GB 02 GBFigure 2:Address space of a NERV processor moduleglobal()so that they both pointinto the broadcast region.The same holds for an ar-ray which contains the fitness values of all individuals.After each generation the two population pointers are simply exchanged.Let N be the number of processing elements in the system.The general strategy will be to distribute the computational load equally amongall processing elements by assigning PN individuals to each processor.Chromosome pop1[POP_SIZE],pop2[POP_SIZE];Chromosome *population,*newPopulation;population =mk_global(pop1);newPopulation =mk_global(pop2);The parallelization of each GA operator is now straightforward.4.1Fitness evaluationEach processor evaluates the fitness of the individ-uals it has been assigned.No interaction is required between different processors.The fitness values are simply written into the mentioned array which will automatically initiate a broadcast.Since each pro-cessor is responsible for another set of individuals no overlap will occur.After this step is finished P broad-cast transfers have occurred and the fitness array on each processor contains the up–to–date values.int fitness_values[POP_SIZE];int *fitness;fitness =mk_global(fitness_values);for (i ="first individual";i <="lastindividual",i++)fitness[i]=eval(i);synchronize();Note that the evaluation function uses only the lo-cal copy of the population.The access to fitness[i]is the(implicit)broadcast.After the synchronize() call the processors can continue,e.g.by computing the mean value of all function evaluations.The computa-tion of thefirst and last individual for each processor is simple if P mod N=0.Otherwise it may hap-pen that some processors have been assigned more chromosomes than the rest.Since these details are not important for the algorithm itself they have been omitted.4.2CrossoverAs already mentioned we decided to make the next generation by looping over all individuals of the new population and either copying an individual from the old one or create a new one by crossover from two parents.Again each processor will be responsible for a part of the population.In the case of one–point crossover,the general algorithm looks like this:for(i="first individual";i<="lastindividual";i++){ offspring=&newPopulation[i];parent1=select();parent2=random_select();if(random(CROSSOVER_PROB)<CROSSOVER_PROB){k=random(CHROM_LENGTH);for(j=0;j<k;j++)offspring[j]=parent1[j];for(j=k;j<CHROM_LENGTH;j++)offspring[j]=parent2[j];}else/*copy individual*/for(j=0;j<CHROM_LENGTH;j++)offspring[j]=parent1[j];}synchronize();The function select()selects an individual accord-ing to its relativefitness(ing a roulette wheel algorithm),random select()selects an individual by random.Each of these functions uses only local infor-mation.offspring is a pointer to the new individual. Since it gets its value from the newPopulation pointer it will also point into the broadcast region.This means that each access to offspring in the inner for–loops will be a broadcast.Again each element in the new-Population array will only be written by exactly one processor,so no conflicts will arise.After this step P·L elements will have been broad-casted(assuming that we encode e.g.each bit in a separate character)and each processing element will have a complete copy of the new population.4.3MutationThe mutation operator is parallelized in the same fashion as the other operators.Again each processor handles P chromosomes and broadcasts the results. for(i="first individual";i<="lastindividual";i++){ individual=&newPopulation[i];for(j=0;j<CHROM_LENGTH;j++)if(random(MUTATE_PROB)<MUTATE_PROB)individual[j]=!individual[j];}synchronize();Each bit changed by mutation must again be broad-casted to all other processors.This is done by the assignment to individual[j].Note that the right hand side of this assignment will only access local memory since it is a read access.After the synchronization the pointer to the old and the new population can be exchanged and the next generation can be computed.The program will transfer Pfitness values(from step1)and P·L bits for the new population(from step2)over the common bus.In addition it must transfer the bits which are changed during mutation which may vary in each generation.This is all commu-nication which will occur.All other values are usually fetched from local memory.A broadcast facility is the most efficient way to implement this since it does not depend on the number of processors.If we increase the number of processing elements we will decrease the time needed for each step while the communica-tion overhead will stay constant.From the consideration above we should expect a linear speedup with the number of processing ele-ments.However this is not entirely true since if sev-eral processors want to broadcast at the same time only one request can be satisfied.This is due to the one–at–a–time property of a single bus.In practice this will lead to a serialization of the program.How-ever the time for a single transfer is usually very small compared to the rest of the computations required, e.g.thefitness evaluation or the selection of a parent chromosome.One appealing property of this implementation is that it behaves exactly the same if it is run on a sin-gle processor or a multiprocessor system(if we assume that our random number generators are initialized ap-propriately).The synchronization points take care that no data will be used by any processor before it is generated by another one.In fact for most parts the program looks exactly like its serial counterpart and without explanation one would not expect that the program may run in several processors while implic-itly updating other processor’s memory with broad-casts.Is it indeed possible to write some dummy rou-tines for the special hardware procedures(mk global(), synchronize())and then run the same program on a workstation(although the actual implementation his-tory was the other way round:additions were made to a serial implementation to take care of the special NERV features).Therefore one can immediately compare the timing of the multiprocessor version with the single processor version.One implementation of the above algorithm tried tofind a solution for the Quadratic Assignment Problem(QAP)which is known to be NP–hard.This special problem required several changes to the algo-rithm which have been omitted for the sake of clarity.E.g.the chromosomes were not a string of bits but a given permutation of the natural numbers1to L. Each solution was required to be such a permutation which puts constraints on the crossover and mutation operators.However all of these modification were only local and did not change e.g.the communication be-havior of the program.The resulting program was run on a Macintosh IIci with A/UX as operating system. This machine uses an MC68030processor so that the comparison of the execution times should be reason-able.The NERV system was running with one,two, or six processors respectively.Since there was no pro-filing tool on the NERV side the output of the UNIX time command is given.A NERV system with one processor is used as a reference point.Only the real times are shown,since there is no meaningful interpretation of the user and sys times for the NERV system.Thefirst part is for a program version which outputs several information after each generation(best value,mean value etc.).This is often desirable if one wants to look at the behavior of the algorithm during the run.The NERV system is how-ever badly prepared for small outputs of data.Since it has no local storage it uses the mass storage of the workstation.Each printf()for example requires that the NERV system is stopped and waits for the host to handle the output transaction.One can see that the speedup is only a factor2.2in this case.If the output is either disabled or handled in a dif-ferent way,e.g.by collecting all data and outputting them at the end of the program with a single fwrite() command the NERV system performs much better.It achieves a speedup of a factor5.2.This is still less than the maximal speedup of6,partly due to the reasons explained above.Anyway one should keep in mind that two such different system are not directly com-parable.The point is that the single processor version of the program could be mostly taken unchanged and put on the multiprocessor system with a significant speedup in time.The measurements for the host above were taken on a workstation with a processor similar to the one in the NERV system.Today’s workstations however are usually equipped with much faster processors.A typical RISC workstation(e.g.a Sparcstation2)can easily outperform the NERV system with6processors. At the time of this writing a redesign of the NERV sys-tem is nearlyfinished.It uses a MC68040processor with25MHz and16MByte of dynamic RAM for each module.The total system may contain up to40pro-cessing elements.Therefore one can again expect a significant decrease in computing time when a GA is implemented on such a system.5ConclusionsWe have shown how to implement a standard ge-netic algorithm on a multiprocessor system.The speedup which has been achieved is proportional to the number of processors in the system.Putting in more processing elements reduces the computation time while the communication time remains constant.The system circumvents the problems of a global shared memory by using a copy of all relevant data on every processor.The update of data is implemented by a broadcast facility.This ensures that all processor will immediately get a copy of any changed data.By using the broadcast facility and a global bus this can be accomplished much faster than with any message passing system.Since each changeable datum is as-signed to a certain processor which is responsible for the update,no other hardware mechanisms are nec-essary to control exclusive access.Synchronization is only necessary after each application of an operator and is also efficiently supported by hardware.Table1:Absolute times for GA simulation on different processors.Problem:Quadratic Assignment Problem,Problem Size=30Population Size=120,Number of Generations=250 without output184.9s99.3s35.01s229.03sTable2:Relative performance compared to an one–processor NERV system.1processor2processors6processors Mac IIci with output 1.0 1.56 2.20.97without output 1.0 1.86 5.20.81References[1]J.H.Holland,Adaption in Natural and ArtificialSystems(The University of Michigan Press,Ann Arbor,1975)[2]D.E.Goldberg,Genetic Algorithms in Search,Optimization and Machine Learning,(Addison–Wesley,Reading,1988)[3]M.Gorges–Schleuter,ASPARAGOS:An Asyn-chronous Parallel Genetic Optimization Strat-egy,Proc.3rd Intl.Conf.on Genetic Algorithms (1989)422–427[4]T.Fogarty,Implementing the Genetic Algorithmon Transputer Based Parallel Processing Sys-tems,Parallel Problem Solving from Nature1 (1991)145–149[5]J.P.Cohoon,W.N.Martin, D.S.Richards,AMulti–population Genetic Algorithm for Solving the K–Partition Problem on Hyper–cubes,Proc.4th Intl.Conf.on Genetic Algorithms(1991)244–248[6]R.J.Collins, D.R.Jefferson,Selection in Mas-sively Parallel Genetic Algorithms,Proc.4th Intl.Conf.on Genetic Algorithms(1991)249–256 [7]P.Spiessens,B.Manderick,A Massively Paral-lel Genetic Algorithm,Proc.4th Intl.Conf.on Genetic Algorithms(1991)279–285[8]C.C.Pettey,M.R.Leuze,A Theoretical Investi-gation of a Parallel Genetic Algorithm,Proc.3rd Intl.Conf.on Genetic Algorithms(1989)398–405[9]R.Tanese,Distributed Genetic Algorithms,Proc.3rd Intl.Conf.on Genetic Algorithms(1989)434–439[10]M.G.A.Verhoeven,E.H.L.Aarts,E.van de Sluis,Parallel Local Search and the Travelling Salesman Problem,Parallel Problem Solving from Nature2 (1992)543–552[11]Tsutomu Maruyma,Akihiko Konagaya,KoichiKonishi:An Asynchronous Fine–Grained Paral-lel Genetic Algorithm,Parallel Problem Solving from Nature2(1992)563–572[12]Hisashi Tamaki,Yoshikazu Nishikawa:A Par-allel Genetic Algorithm based on a Neighbor-hood Model and Its Application to the Jobshop Scheduling,Parallel Problem Solving from Nature 2(1992)573–582[13]H.M¨u hlenbein,Parallel Genetic Algo-rithms,Population Genetics and Combinato-rial Optimization;in J.D.Becker,I.Eisele,F.W.M¨u ndemann(Eds.):Parallelism,Learn-ing,Evolution,Lect.Notes in Comp.Sci.565, (Springer,Berlin,1991)398–406[14]R.Hauser,H.Horner,R.M¨a nner,M.Makhan-iok,Architectural Considerations for NERV—A General Purpose Neural Network Simulation Sys-tem;in J.D.Becker,I.Eisele,F.W.M¨u ndemann (Eds.):Parallelism,Learning,Evolution,Lect.Notes in Comp.Sci.565,(Springer,Berlin,1991) 183–195[15]The VMEbus Specification,Rev.C,VMEbus In-ternational Trade Association(1987)[16]R.M.Stallman,Using and Porting GNU CC,FreeSoftware Foundation(1992)。