Near-optimal conversion of hardness into pseudo-randomness
- Format: pdf
- Size: 185.30 KB
- Pages: 10
Module 7, Video 12: What are particle-reinforced composites?

Hello! Welcome to Introduction to Materials. Today we are going to talk about particle-reinforced composites, also called particle or particulate composites.

Particle composites consist of reinforcing particles of one or more materials suspended in a matrix of a different material. As with nearly all materials, structure determines properties, and so it is with particle composites. This figure illustrates the geometrical and spatial characteristics of the particles, such as their concentration, size, shape, distribution and orientation. All of these contribute to the properties of these materials.
Stainless steel hardness testing methods and related standards (GB, USA, Japan)

By delivery form, stainless steel products can be classified into plate, strip, tube, bar and wire. By metallographic structure, they can be divided into five types: austenitic, ferritic, austenitic-ferritic (duplex), martensitic, and precipitation-hardening stainless steel. Stainless steel materials are supplied in different heat-treatment states: annealed, quenched and tempered, solution-treated, or quenched (or tempered).

In a hardness test, a hard indenter is slowly pressed into the surface of the specimen under specified conditions, and the depth or size of the resulting indentation is then measured to determine the hardness of the material. The hardness test is the simplest, fastest and most convenient way to assess the mechanical properties of a material. It is nondestructive, and there is an approximate conversion relationship between hardness and tensile strength. Since tensile testing is less convenient and hardness values convert readily to strength, it is increasingly common to test only the hardness of a material rather than its strength. Moreover, continual advances in hardness-tester technology have made it possible to test directly materials that formerly could not be tested, such as stainless steel plate, tube, wire and thin strip.
There is therefore a tendency for hardness tests to gradually replace tensile tests. Stainless steel standards generally specify three hardness test methods, Brinell, Rockwell and Vickers, giving HB, HRB (or HRC) and HV values; although three values are specified, testing only one of them is sufficient.

A prominent feature of the hardness tests in US metal-material standards is that the Rockwell test is given priority, complemented by the Brinell test; the Vickers test is rarely used, the US view being that Vickers testing should mainly be applied to thin and small metal parts. Chinese and Japanese standards specify all three tests simultaneously, and users may choose one of them according to the thickness and condition of the material and their own equipment. The tensile-test and hardness-test provisions in the Japanese stainless steel standards take the same form as the corresponding Chinese standards, and the specified values are similar.

For stainless steel hardness testing, the Rockwell tester is the preferred instrument: the equipment is simple, operation is easy, the hardness can be read directly without specialist inspectors, and testing is efficient, making it well suited to factory use. For Rockwell testing of stainless steel, the standards generally specify only the HRC and HRB scales. For annealed stainless steel, the standard for each grade generally requires the hardness to be no greater than a certain HRB value, typically in the range 88–96 HRB.
For quenched-and-tempered martensitic stainless steel, the standard for each grade generally requires the hardness to be no less than a certain HRC value, typically in the range 32–46 HRC. Stainless steel standards specify only the Rockwell tester with the HRB and HRC scales. In fact, the superficial Rockwell tester can also be applied to stainless steel: its principle is exactly the same as that of the ordinary Rockwell tester, but it uses a smaller test force, and its readings are easily converted to HRB, HRC, Brinell HB or Vickers HV values. The corresponding conversion tables, derived from the American ASTM standard or the international ISO standard, can be found on our company's website. For thin-walled stainless steel tube, thin sheet and strip, and fine wire, the superficial Rockwell tester is very convenient. In particular, our company's newly developed portable superficial Rockwell hardness testers can test stainless steel strip as thin as 0.05 mm and stainless steel tube as fine as 4.8 mm quickly and accurately, making measurement problems that were long difficult to solve in China straightforward.

Hardness test of stainless steel plate and strip

Stainless steel plate includes hot-rolled plate and cold-rolled sheet. Plate or strip thicker than 1.2 mm is tested with a Rockwell tester on the HRB or HRC scale. Plate or strip 0.2–1.2 mm thick is tested with a superficial Rockwell tester on the HRT or HRN scale. For stainless steel plate or strip less than 0.2 mm thick:
a superficial Rockwell tester fitted with a diamond anvil is used, measuring hardness on the HR30Tm scale. For annealed stainless steel plate and strip 0.3–13 mm thick, a Webster hardness tester may also be used; it is very fast and simple, and well suited to rapid conformity inspection of annealed stainless steel.

Hardness test of stainless steel tube

Stainless steel tube includes welded tube and cold-drawn tube. Tube with an inner diameter greater than 30 mm and a wall thickness greater than 1.2 mm is tested with a Rockwell tester on the HRB or HRC scale. Tube with an inner diameter greater than 30 mm and a wall thickness of 1.2 mm or less is tested with a superficial Rockwell tester on the HRT or HRN scale. Tube with a diameter less than 30 mm but greater than 4.8 mm is tested with a Rockwell tester specially adapted for tubes, on the HR15T scale. When the inner diameter of the tube is greater than 26 mm, the hardness of the inner wall can also be measured with a Rockwell or superficial Rockwell tester. Annealed stainless steel tube with a diameter greater than 6.0 mm and a wall thickness under 13 mm can be tested with a W-B75 Webster hardness tester; this is very fast and simple, and suitable for rapid, nondestructive conformity inspection of stainless steel tube.

Hardness test of stainless steel bar

For stainless steel bar with a diameter less than 50 mm, a Rockwell tester can be used on the HRB or HRC scale.

Hardness test of stainless steel wire

For stainless steel wire with a diameter greater than 2.0 mm, a superficial Rockwell tester can be used on the HRT or HRN scale.
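The plate, strip, bar and wire rules above amount to a small decision table. A minimal sketch of that table in Python (the function name and string labels are invented for illustration; the tube rules, which also depend on wall thickness, are omitted for brevity):

```python
def hardness_scale(product, thickness_mm=None, diameter_mm=None):
    """Pick a Rockwell hardness scale per the rules in the text (illustrative only)."""
    if product in ("plate", "strip"):
        # plate/strip: scale chosen by thickness
        if thickness_mm > 1.2:
            return "HRB/HRC"          # ordinary Rockwell tester
        if thickness_mm >= 0.2:
            return "HRT/HRN"          # superficial Rockwell tester
        return "HR30Tm"               # superficial Rockwell, diamond anvil
    if product == "bar":
        return "HRB/HRC" if diameter_mm < 50 else "see standard"
    if product == "wire":
        return "HRT/HRN" if diameter_mm > 2.0 else "see standard"
    raise ValueError(f"unknown product form: {product}")
```

For example, a 0.5 mm cold-rolled strip falls in the 0.2–1.2 mm band and would be tested on the HRT or HRN scale.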
75%-Efficiency blue generation from an intracavity PPKTP frequency doubler

R. Le Targat, J.-J. Zondy, P. Lemonde*

BNM-SYRTE, Observatoire de Paris, 61 Avenue de l'Observatoire, 75014 Paris, France

Received 23 July 2004; received in revised form 8 November 2004; accepted 22 November 2004

Optics Communications 247 (2005) 471–481. doi:10.1016/j.optcom.2004.11.081

*Corresponding author. Tel.: +33 1 40 51 22 24; fax: +33 1 43 25 55 42. E-mail address: pierre.lemonde@obspm.fr (P. Lemonde).

Abstract

We report on high-efficiency 461 nm blue light conversion from external cavity-enhanced second-harmonic generation of a 922 nm diode laser with a quasi-phase-matched KTP crystal (PPKTP). By choosing a long crystal (L_C = 20 mm) and twice looser focusing (w_0 = 43 μm) than the "optimal" one, thermal lensing effects due to blue power absorption are minimized while still maintaining near-optimal conversion efficiency. A stable blue power of 234 mW, with a net conversion efficiency of η = 75% at an input mode-matched power of 310 mW, is obtained. The intracavity measurements of the conversion efficiency and temperature tuning bandwidth yield an accurate value d_33(461 nm) = 15 (±5%) pm/V for KTP and provide a stringent validation of some recently published linear and thermo-optic dispersion data of KTP. © 2004 Elsevier B.V. All rights reserved.

PACS: 42.65.Ky; 42.79.Nv; 42.70.Mp

Keywords: Second harmonic generation; PPKTP; Strontium; Thermal effects

1. Introduction

Continuous-wave (CW) high-power blue light generation is a key issue for many applications such as laser printing, RGB color display, or the spectroscopy and cooling of atomic species. Owing to the limited power and tunability of gas lasers (Ar+, HeCd) or of newly developed blue diode laser sources in the blue-UV spectrum [1], the usual procedure is to upconvert near-IR solid-state or semiconductor diode lasers either internal to the laser resonator [2] or in external enhancement resonators [3,4]. In the latter scheme, the efficiency of the upconversion is usually measured in terms of the ratio η = P_2ω/P_ω^in of the generated second-harmonic (SH) power to the fundamental-field (FF) power mode-matched to the resonator.

Several nonlinear materials can upconvert such lasers, using either temperature-tuned or angular birefringence phase-matching. The most widely used one is the large-nonlinearity (d_eff ≈ 18 pm/V) potassium niobate (KNbO3) crystal [5–9]. For temperature-tuned noncritical phase-matching, the major drawback of KNbO3 is the occurrence of a phase transition leading to repoling near T = 185 °C [5,6], which restricts upconversion to laser wavelengths longer than λ ≈ 920 nm. For laser cooling of the 1S0–1P1 strontium line at 461 nm, which is the target of the present work, the use of non-critically phase-matched KNbO3 at T ≈ 150 °C is hence not recommended. For shorter FF wavelengths, critical phase-matching at room temperature is possible but results in a deleterious beam walk-off (ρ ≈ 1°) of the blue wave, leading to an elliptical beam shape or even higher-order transverse patterns [9], combined with a narrow temperature bandwidth (ΔT ≈ 0.5 °C) [8]. As an additional drawback, KNbO3 is subject to blue-induced photochromic damage known as BLIIRA (blue-induced infrared absorption [10]). This nonlinear loss mechanism, together with the associated thermal lensing, has limited the highest reported conversion efficiency to η ≈ 80%, yielding 500 mW of blue power at 473 nm from a Nd:YAG laser power P_ω^in ≈ 800 mW [8].

Alternative widely used materials are LiB3O5 (LBO) or β-BaB2O4 (BBO) [11–13], but the low nonlinear coefficients of oxoborate crystals (d_eff ≤ 1 pm/V) are not suited to the frequency conversion of low-power sources, because they require tight control of the round-trip intracavity loss, down to ≤1%. Recently, we employed a critically phase-matched KTP in doubly resonant sum-frequency generation (SFG) of a Nd:YAG laser and a low-power AlGaAs diode laser at 813 nm to produce 120 mW of 461 nm light, but the conversion
efficiency (η < 30%) was limited by the strong power imbalance of the two pump sources [14]. To allow more efficient cooling of atomic strontium, a new, powerful and more convenient blue source based on direct second-harmonic generation (SHG) of a master-oscillator power-amplifier (AlGaAs MOPA) delivering an output power of 450 mW was then constructed. But unlike the experiment in [9], which uses a critically phase-matched KNbO3 semi-monolithic resonator to generate the same wavelength, our choice fell on periodically poled potassium titanyl phosphate (PPKTP). Electric-field-poled quasi-phase-matched (QPM) oxide ferroelectrics such as PPLN (periodically poled lithium niobate) and PPKTP have recently superseded the earlier birefringence phase-matched materials for visible light generation [15–17], owing to their much higher effective nonlinearities (d_eff(PPLN) ≈ 17–18 pm/V and d_eff(PPKTP) ≈ 7–9 pm/V). Furthermore, QPM materials are intrinsically free of walk-off. For blue generation, PPKTP is preferred to PPLN, which exhibits strong photorefractive damage when used at room temperature. In [15], an intracavity PPKTP frequency-doubled Nd:YAG laser yielded an output power of 500 mW at 473 nm with an internal efficiency of 5.5%. Green light power of 268 mW was generated by Juwiler et al. [16] from a CW Nd:YAG laser with an efficiency of 70% in a standing-wave resonator. In [17], a MOPA diode laser (0.5 W) similar to ours generated 200 mW of blue light at 461 nm in a ring enhancement cavity, a value comparable to that obtained using a similar semiconductor laser, at identical wavelength, in a semi-monolithic standing-wave KNbO3 resonator [9].

However, due to the lower UV bandgap energy of KTP, linear absorption becomes an issue at wavelengths shorter than 500 nm. A detailed absorption measurement of flux-grown and hydrothermally grown KTP in the spectral range from the bandgap wavelength (365 nm) to 600 nm revealed a large fluctuation from sample to sample [18]. At 473 nm, for instance, values of α ranging from 0.034 to 0.085 cm⁻¹ were reported. In a recent, closely related PPKTP–SHG experiment pumped at 846 nm, a value α(423 nm) = 0.10 cm⁻¹ was measured [19]. Strong thermal lensing effects [20] arising from the blue absorption were the main limitation of their power-efficiency scaling in genuine CW operation (η = 60%, corresponding to 225 mW of 423 nm power for 375 mW of mode-matched Ti:sapphire laser power). Higher blue power (400 mW) could be obtained for the same circulating power of P_c = 5.5 W only in a pulsed fringe-scanning mode that allows more efficient heat dissipation within the PPKTP crystal. Such a severe limitation in CW operation actually arose from the tight focusing used (w_0 = 17 μm, corresponding to the theoretical optimum of the single-pass efficiency for the PPKTP crystal length L_C = 10 mm [23,24]) and the associated thermal lens power, which scales as w_0⁻² [20]. In true CW operation, the thermal focal length (which can be as short as a few centimeters, see Section 6) experienced by the circulating fundamental power impedes efficient mode- and impedance-matching of the input beam. For a symmetric linear resonator, for instance, thermal lensing has been shown to be responsible for the clamping of the circulating power to a low critical value P_c^crit corresponding to the collapse of the secondary thermally induced waists [20].

In the present experiment, we deliberately avoid optimal single-pass focusing to circumvent these thermal lensing effects. In Section 2 we show that, owing to the large nonlinearity of PPKTP, one does not require extremely low intracavity linear losses to maintain the conversion efficiency constant over a wide range of focusing parameters.
The focusing parameter is defined by L = L_C/z_R, where L_C is the PPKTP length and z_R = k_ω w_0²/2 is the Rayleigh range of the cavity mode. We find that loose focusing to w_0 = 40 μm in a 20-mm-long PPKTP crystal still results in a large single-pass efficiency Γ = P_2ω/P_c² ≈ 2.3 × 10⁻² W⁻¹ and in stable CW operation of the resonant cavity at the maximum available mode-matched power of P_ω^in = 310 mW, with no evidence of serious lensing effects. Using such a strategy, blue power scaling to half a watt should be possible with ≈0.7 W of mode-matched input power.

In Section 3 we briefly describe the experimental setup, highlighting some measurement procedures aimed at an accurate determination of important parameters such as the mode-matching factor j or the circulating power P_c. Section 4 is devoted to the intracavity measurement of the conversion efficiency that determines the final enhancement efficiency, taking profit of the TEM00 resonator mode-filtering of the fundamental MOPA laser non-Gaussian beam and making use of an accurate evaluation of the Gaussian-beam SHG focusing function h [24]. From these measurements, we derive a consistent value of the d_33 nonlinear tensor element of KTP. The comparison of the recorded temperature tuning curve with the functional dependence given by two of the most recently published linear and thermo-optic dispersion relations of KTP [25,26] shows a perfect agreement between theory and experiment, thus providing a stringent validation test of those dispersion relations for PPKTP in the blue/near-IR spectrum. Having determined all the relevant parameters, we then present (Section 5) the resonant enhancement results, which are in good agreement with the theoretical expectation, with a record efficiency of η = 75%. The paper ends with a brief thermal-effects analysis (Section 6), which supports that the experiment is indeed not limited by thermal effects.

2. Analysis of singly resonant SHG efficiency versus focusing

We start by investigating the dependence of the power conversion efficiency on the focusing parameter L. At zero cavity detuning, the internal circulating FF power P_c in a singly resonant ring resonator is given by [3,27]

P_c/P_ω^in = T_1 / [1 − √((1 − T_1)(1 − ε)(1 − Γ P_c))]²,   (1)

where T_1 = 1 − R_1 is the transmission factor of the input coupler, ε is the distributed round-trip passive fractional loss (excluding T_1), and Γ, expressed in W⁻¹, is the depletion due to nonlinear effects. It can be written as the sum of two terms,

Γ = Γ_eff + Γ_abs,   (2)

where Γ_eff is the conversion efficiency, P_2ω = Γ_eff P_c², and Γ_abs is the efficiency of the second-harmonic absorption process, which cannot be neglected here: P_abs = Γ_abs P_c². The net power conversion efficiency η calculated from (1) obeys the implicit equation [7]

√η [2 − √(1 − T_1) (2 − ε − Γ √(η P_ω^in / Γ_eff))]² − 4 T_1 √(Γ_eff P_ω^in) = 0.   (3)

Given Γ, which depends on the focusing and the crystal length, and given ε and the maximum available mode-matched P_ω^in, η in Eq. (1) can be optimized against T_1 to yield

T_1^opt = ε/2 + √(ε²/4 + Γ P_ω^in).   (4)

The conversion efficiency Γ_eff can be evaluated using the undepleted-pump SHG theory taking linear absorption into account [23,24]. For a waist located at the centre of the crystal, it can be written as [24,28]

Γ_eff = P_2ω/P_c² = [2 ω² d_eff² / (π ε_0 c³ n_ω² n_2ω)] L_C k_ω exp(−α_2ω L_C) h(a, L, σ),   (5)

h(a, L, σ) = (1/2L) ∫∫_{−L/2}^{+L/2} ds ds′ exp[−a(s + s′ + L) − iσ(s − s′)] / [(1 + is)(1 − is′)].   (6)

In Eqs. (5) and (6), k_ω = 2π n_ω/λ_ω is the FF wavevector internal to the medium, α_nω (n = 1, 2) are the linear absorption coefficients, a = (α_ω − α_2ω/2) z_R, L = L_C/z_R is the focusing parameter (which differs by a factor of 2 from the definition given by Boyd and Kleinman [23]) and σ = Δk·z_R is the normalized wavevector mismatch
given by

Δk(T) = k_2ω(T) − 2 k_ω(T) − 2π/Λ(T).   (7)

Λ(T) is the PPKTP grating period, whose temperature dependence can be calculated with published thermal expansion coefficients of KTP [22,29]. We note that, when diffraction is considered, the value of Δk that optimizes the focusing function h versus σ is not nil, as it is for a plane wave [24].

The second-harmonic absorption efficiency Γ_abs is more difficult to express, except in two limits:

- for a plane wave, one can easily deduce the profile P_2ω(z, α_2ω) for 0 ≤ z ≤ L_C from Eqs. (5) and (6), and thus evaluate the absorbed power:

P_abs = α_2ω ∫_0^{L_C} P_2ω(z, α_2ω) dz;   (8)

- when the beam is tightly focused, the conversion occurs only at the center of the crystal, and then

P_abs = (e^{α_2ω L_C/2} − 1) Γ_eff P_c².   (9)

With our crystal we measured α_2ω = 0.14 cm⁻¹, which leads to Γ_abs/Γ_eff = 0.1 (resp. 0.13) in the plane-wave (resp. tight-focusing) limit. Given the small difference between both values, and since we are mainly interested in focusing below optimal, we take the plane-wave value for the whole analysis.¹
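The plane-wave ratio quoted here can be checked numerically. For a plane wave with SH absorption, the standard undepleted-pump result gives an internal SH power growing as P_2ω(z) ∝ [(2/α_2ω)(1 − e^{−α_2ω z/2})]², and the absorbed fraction then follows from Eq. (8) by quadrature. A minimal sketch (the closed-form profile is a textbook result, not taken from this paper):

```python
import math

alpha = 0.14e2  # SH absorption coefficient, m^-1 (0.14 cm^-1, as measured)
Lc = 20e-3      # crystal length, m

def P2(z):
    # plane-wave SH power profile (arbitrary units), undepleted pump with SH loss
    return ((2 / alpha) * (1 - math.exp(-alpha * z / 2))) ** 2

# Eq. (8): absorbed SH power by trapezoidal quadrature over the crystal
n = 10_000
zs = [i * Lc / n for i in range(n + 1)]
integral = sum((P2(zs[i]) + P2(zs[i + 1])) / 2 * (Lc / n) for i in range(n))

ratio_plane = alpha * integral / P2(Lc)  # Gamma_abs / Gamma_eff, plane-wave limit
print(round(ratio_plane, 3))             # ~0.097, consistent with the quoted 0.1
```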
We take ε = 0.02, lumping the FF crystal linear loss (absorption + AR coating) and the mirror reflection loss, and a crystal length of L_C = 20 mm. Our measured value for the polar tensor element of KTP, d_33 = 15 pm/V, is slightly smaller than [30] and yields d_eff = 9.5 pm/V, with the refractive indices n_ω ≡ n_Z(922 nm) = 1.8364 and n_2ω ≡ n_Z(461 nm) = 1.9188 at T = 30 °C [26]. The mode-matched power is P_ω^in = 310 mW.

Fig. 1 displays the efficiency curve η(T_1^opt, L) for a beam-waist range 18 μm ≤ w_0 ≤ 100 μm. It corresponds to perfect impedance matching (T_1 = T_1^opt, see Eq. (4)) for each Γ(L). It is clearly seen that η is practically constant over the range 20 μm ≤ w_0 ≤ 50 μm, meaning that it is not necessary to set the cavity waist at the optimal single-pass conversion, as commonly believed, in spite of the twofold reduction of Γ at the loose-focusing end of the range. At w_0^opt = 23.6 μm, the optimal circulating FF power is P_c = 2.56 W (yielding P_2ω = 236 mW), whereas at w_0 = 50 μm, P_c = 3.4 W (P_2ω = 220 mW). For these nearly identical blue powers, the thermal lens power is 4 times larger at optimal focusing than at the looser focusing.
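Eqs. (1)–(4) can be checked numerically with the parameters of this section (ε = 0.02, Γ_eff ≈ 0.023 W⁻¹ at w_0 = 43 μm, Γ ≈ 1.1 Γ_eff using the plane-wave absorption ratio, P_ω^in = 310 mW, T_1 = 12%). The sketch below solves the implicit Eq. (1) for P_c by damped fixed-point iteration and evaluates T_1^opt from Eq. (4); the numbers are an illustrative consistency check, not the paper's exact fit:

```python
import math

eps = 0.02       # round-trip passive loss
G_eff = 0.023    # conversion efficiency, W^-1 (w0 = 43 um, Section 4)
G = 1.1 * G_eff  # total nonlinear depletion: Gamma_eff + Gamma_abs (ratio ~0.1)
P_in = 0.310     # mode-matched input power, W
T1 = 0.12        # input coupler transmission (12%)

# Eq. (1): damped fixed-point iteration for the circulating power P_c
Pc = 1.0
for _ in range(200):
    rhs = T1 * P_in / (1 - math.sqrt((1 - T1) * (1 - eps) * (1 - G * Pc))) ** 2
    Pc = 0.5 * Pc + 0.5 * rhs          # 50% damping for convergence

eta = G_eff * Pc ** 2 / P_in                            # net efficiency P_2w / P_in
T1_opt = eps / 2 + math.sqrt(eps ** 2 / 4 + G * P_in)   # Eq. (4)

print(Pc, eta, T1_opt)  # P_c ~3.1 W, eta ~0.73, T1_opt ~0.10
```

These values are consistent with the circulating power, the ~75% efficiency, and the T_1^opt = 10% quoted in the text.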
Hence, to avoid thermal effects, a cavity waist between 40 and 50 μm is recommended (L ≤ 2), with an expected efficiency η > 70%.

¹ The predictions of the model will then be slightly too optimistic for large L.

This insensitivity of η as a function of w_0 is due to the large nonlinear efficiency, which dominates the round-trip passive losses even for the not-so-small ε = 0.02 chosen here. Experimentally, we indeed found that nearly optimal η was measured over a broad range of cavity-waist values, the best trade-off giving tolerable thermal effects being at w_0 = 43 μm. In the same way, the value of T_1 is not critical. Our input coupler exhibits T_1 = 12%, which causes a loss smaller than 1% on the generated blue power in comparison with T_1^opt = 10%, which would optimize η for w_0 = 43 μm (Eq. (4)).

3. Experimental setup and measurement procedures

The frequency-doubling setup is sketched in Fig. 2. A conventional unidirectional ring cavity is chosen. The commercial MOPA pump laser is made of a grating-tuned extended-cavity master diode laser in the Littrow configuration, injecting a tapered semiconductor amplifier (Toptica Photonics AG). After a −70 dB Faraday isolation stage, the MOPA provides a useful single-longitudinal-mode power P_ω = 450 mW at λ_ω = 922 nm, with a short-term linewidth of less than 1 MHz. The transverse beam shape is far from a fundamental Gaussian mode and depends strongly on the tapered-amplifier injection current. A system of lenses mode-matches the input FF beam to the larger waist of the bow-tie ring resonator, located between the two plane mirrors M1 and M2. The folding angle of the ring resonator is set to 11°, leading to a negligible astigmatism introduced by the two off-axis curved mirrors M3 and M4 (radius of curvature 100 mm). The meniscus shape of M4 (M3) does not introduce any additional divergence of the transmitted FF or SH beam. This allows an accurate measurement (to ±5%) of the smaller waist w_0, located at the center of the PPKTP crystal, from a z-scan measurement of the TEM00 diverging FF beam leaking out from M4. The diverging beam diameter is measured at different z-locations with a commercial rotating-knife-edge device, and the cavity waist is retrieved from standard Gaussian-optics laws.

The pump beam is coupled in through the partial reflector M1. Mirrors M2–M3 are high reflectors at 922 nm, and M4 is dichroically coated at ω, 2ω with R_ω > 99.9% and T_2ω = 98%. Side 2 of all optics is dual-band AR-coated with R ≤ 0.5%. The dual-band AR-coated PPKTP crystal (Raicol Crystals Ltd.) has dimensions 2 × 1 × 20 mm³, with the z-propagation direction corresponding to the X principal axis and the 1-mm-thick sides oriented along the Z polar axis. A first-order (50%-duty-cycle) QPM periodic grating is patterned along the X axis, with a period Λ_0 ≈ 5.5 μm for temperature quasi-phase-matching around 30 °C.
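The quoted grating period is consistent with first-order QPM for the indices given in Section 2, since 2π/Λ_0 must supply the mismatch k_2ω − 2k_ω, i.e. Λ_0 = λ_ω/[2(n_2ω − n_ω)]. A quick back-of-envelope check (values from the text; the small diffraction correction to Δk is ignored):

```python
lam = 922e-9                 # fundamental wavelength, m
n_w, n_2w = 1.8364, 1.9188   # n_Z at 922 nm and 461 nm, T = 30 C (Section 2)

# first-order QPM: 2*pi/Lambda0 = k_2w - 2*k_w  =>  Lambda0 = lam / (2 * dn)
Lambda0 = lam / (2 * (n_2w - n_w))
print(round(Lambda0 * 1e6, 2))  # grating period in microns, ~5.6
```

This reproduces the quoted Λ_0 ≈ 5.5 μm to within the precision of the indices used.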
The PPKTP chip is mounted in a small copper holder attached to a thermoelectric Peltier element with which we servo the crystal temperature to better than ±10 mK. The crystal Z axis is matched to the direction of the electric-field polarization of the MOPA laser. The blue absorption coefficient α_2ω of this crystal at 461 nm was measured with a second blue source, yielding α_2ω = 0.14 (±5%) cm⁻¹. This value is larger than the one (α_2ω = 0.10 cm⁻¹) measured at 423 nm by Goudarzi et al. [19] on a 1 cm PPKTP sample from a different manufacturer, confirming the large dispersion of the values from one sample to another.

Finally, because no suitable λ/2 plate was available in front of the cavity for varying the input power at constant MOPA output power, P_ω is varied with the PA injection current. As mentioned previously, such a power variation results in a substantial transverse-mode change and, in turn, in a variation of the mode-matching factor j. The coefficient j(P_ω) was hence calibrated in the absence of blue conversion (by tuning the temperature to a zero-conversion regime) and used to rescale the mode-matched power. For the range 300 < P_ω < 450 mW, the mode-matching factor is found to be practically constant and equal to j ≈ 0.7.

In the following section, the conversion efficiency Γ_eff(w_0) (Eq. (5)) will be measured internal to the cavity. For the measurement of the circulating FF power P_c, the transmissivity of mirror M4 at 922 nm was accurately calibrated to 1.2 × 10⁻⁵ ± 10%. The FF and SH powers were calibrated with a thermal powermeter with an uncertainty below 5%.

4. Measurement of PPKTP effective nonlinearity and tuning curve

To model the experimental singly resonant conversion efficiency, an accurate knowledge of the experimental Γ_eff(w_0) is needed. We chose not to use the standard single-pass method for this measurement, since the poor beam quality of the MOPA output would contradict the Gaussian-pump assumption of Eq. (6). By placing the PPKTP inside the resonator, the mode-filtering effect of the cavity instead provides a pure TEM00 pump beam, with waists accurately known from the measurement method outlined in the previous section. Provided that pump depletion can be neglected, the intracavity power is considered constant all along the crystal and is defined as the solution of Eq. (1).

Several values of Γ_eff were measured for a range of P_c and three waist values w_0 = 56, 43, 36 μm, yielding, respectively, Γ_eff = 0.017, 0.023, 0.028 (±10%) W⁻¹. The data points of P_2ω versus P_c² were excellently fitted by a linear function, meaning that the low-depletion assumption holds for all achievable P_c, which is confirmed by an a posteriori consistency check of the upper-bound value Γ P_c < 0.08 ≪ 1. In Fig. 3, the resulting Γ_eff(w_0) are plotted against the focusing parameter L along with the theoretical curve (Eq. (5)). The only parameter adjusted to match the experimental points to the curve is the effective nonlinear coefficient d_eff = (2/π) d_33 for a first-order QPM, which is found to be d_eff = 9.5 (±5%) pm/V. Such a value is slightly higher than those measured elsewhere (5–8 pm/V) from OPO threshold measurements or difference-frequency generation [25,30], even when wavelength dispersion is accounted for. This value yields for KTP d_33(461 nm) = 15 (±5%) pm/V, which is the commonly reported value of this polar χ(2) tensor element [30–32] and matches exactly the value (14.8 pm/V) reported by the manufacturer [33]. Accounting for the Miller dispersion law, this value is also in excellent agreement with the one measured at 541 nm on a crystal from the same manufacturer [34]. This result shows that the grating quality of our crystal is extremely high. We believe that the perfect match of d_eff with its maximum theoretical value stems from the pure Gaussian-beam measurement and analysis taking diffraction and absorption effects into account, which is not always the case for some of the reported lower values, even accounting for grating periodicity defects.

We have tried tight focusing close to the optimal waist shown in Fig. 3 without any improvement in the singly resonant conversion efficiency, despite the larger Γ_eff, confirming the prediction of Section 2. At w_0 = 36 μm, the conversion efficiency is identical to that at 43 μm. As expected, increasing thermal effects occurred with smaller waists, which could be assessed from a broadened triangular bistable shape of the FF and blue fringes as the cavity length is swept on the contracting-length side of the voltage ramp (see e.g. Fig. 10 of [19] and the thermal-effects analysis in Section 6). On this side of the fringe, passive self-stabilization of the optical path length of the resonator tends to maintain the cavity in resonance with the incoming FF frequency. Such opto-thermal dynamics of thermally loaded resonators has been analyzed in detail in [20]. Furthermore, the active locking of the cavity to the top of the distorted fringe, a bistable operating point [20], becomes problematic, as reported in [19].

The temperature tuning curve measured internal to the cavity is shown in Fig. 4 for the final waist choice of w_0 = 43 μm, corresponding to L = 1.73. The oscillatory fine-structure pattern at the wings is clearly seen. The solid line is the conversion efficiency computed from Eqs. (5)–(7), where the wavevector mismatches Δk(T) have been computed with the Sellmeier relations at room temperature given in [25], the thermo-optic dispersion relation of [26] and the thermal expansion coefficient a_X = 6.8 × 10⁻⁶/°C reported in [29]. Given the short grating period, a strikingly excellent agreement is seen with the data points (sidelobe amplitudes and positions) when the h(σ) focusing function is used instead of the plane-wave formula.
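The quoted focusing parameter follows directly from the definitions of Section 2, L = L_C/z_R with z_R = k_ω w_0²/2 and k_ω = 2π n_ω/λ_ω. A quick check with the paper's numbers:

```python
import math

lam = 922e-9   # fundamental wavelength, m
n_w = 1.8364   # n_Z at 922 nm (Section 2)
w0 = 43e-6     # cavity waist, m
Lc = 20e-3     # crystal length, m

k_w = 2 * math.pi * n_w / lam   # FF wavevector inside the medium
z_R = k_w * w0 ** 2 / 2         # Rayleigh range of the cavity mode
L = Lc / z_R                    # focusing parameter
print(round(z_R * 1e3, 1), round(L, 2))  # z_R ~11.6 mm, L ~1.73
```

This reproduces the L = 1.73 used for the tuning-curve measurement.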
Apart from a lateral temperature shift of the calculated tuning curve in Fig. 4, we stress again that no fit was made to obtain such an agreement, which not only provides a stringent validation of the dispersion data used, at least for PPKTP produced by our manufacturer, but also highlights the excellent quality of the first-order periodic grating over the whole PPKTP length. The FWHM temperature tuning bandwidth is ΔT = 1.1 °C.

5. Resonant enhancement efficiency

The final cavity dimensions yielding the least thermal effects, while providing the best power conversion efficiency when the cavity length is servo-controlled, correspond to cavity waists (w_0, w_1) = (43, 163 μm), given by an M3–M4 spacing of ≈130 mm and a total ring-cavity round-trip length L_cav = 569 mm. For genuine CW stable operation, an active electronic servo based on an FM-to-AM fringe-modulation technique was preferred to the optically phase-sensitive Hänsch–Couillaud method, which was reported to fail when thermal effects arise [8,19].

The round-trip intracavity fractional loss ε is measured by fitting, at the QPM temperature, the total losses p = ε + Γ P_c of the cavity as a function of P_c. Otherwise, p is related to the contrast C of the cavity reflection fringes:

p = C P_ω/P_c.   (10)

This relationship is very reliable since it does not depend on the mode-matching coefficient j. The fit yields ε = 0.021 (±5%), which is further checked against the value of the cavity finesse, F ≈ 40, in the absence of nonlinear conversion. This also gives a measurement of Γ consistent with the value quoted above. When the cavity is close to impedance matching, the reflection contrast becomes nearly constant and is then equal to the mode-matching coefficient; we measured j(P_ω^max) = 73%. For intermediate values of the pump power, the evaluation is more difficult: j can be deduced from the contrast of the reflected fringes at zero conversion, once ε is known.

The generated blue power and net power efficiency η are plotted
against the mode-matched power in Figs. 5 and 6. The solid lines are computed from Eq. (3) using the experimentally measured parameters. The experimental dots match this curve well, meaning that thermal effects are not a problem up to the maximum available laser power. The small offset between the experimental points and the theoretical curve from 100 to 250 mW is probably due to the less accurate evaluation of the mode-matched power in this range. At maximum power, the finesse drops to F ≈ 30 due to the nonlinear loss (see the inset of Fig. 5, which shows the reflected FF fringe with a contrast of ~73%). The conversion efficiency is independent of whether the measurement is made under pulsed scanning mode or under cavity-locked operation. In the latter case, however, a slight adjustment of the PPKTP temperature has to be performed to cancel the temperature-induced phase mismatch under CW operation (Section 6). At the maximum mode-matched input power P_ω^in = 310 mW (P_c = 3.2 W is the corresponding circulating power), one obtains P_2ω = 234 mW, corresponding to η = 75%. If one takes into account the infrared laser power which is not in the TEM00 mode, the global doubling efficiency of the system is 52%.

6. Blue-induced thermal effects analysis

The inset of Fig. 6 shows a slight fringe asymmetry observed on both the FF and SH fringes, reminiscent of the onset of thermal bistability.
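The loss analysis around Eq. (10) amounts to converting fringe contrast into a round-trip loss and then fitting the linear model p = ε + Γ P_c. A minimal sketch, using synthetic placeholder data rather than the paper's measurements:

```python
# Sketch of the Eq. (10) loss analysis: p = C * P_omega / P_c, then a
# linear fit p = eps + Gamma * P_c. Data below are synthetic placeholders.
def round_trip_loss(contrast: float, p_in: float, p_circ: float) -> float:
    return contrast * p_in / p_circ

# synthetic (P_c [W], p) pairs generated from eps = 0.021, Gamma = 0.003 /W
pairs = [(pc, 0.021 + 0.003 * pc) for pc in (0.5, 1.0, 2.0, 3.0)]

# ordinary least-squares line fit, pure Python
n = len(pairs)
sx = sum(x for x, _ in pairs); sy = sum(y for _, y in pairs)
sxx = sum(x * x for x, _ in pairs); sxy = sum(x * y for x, y in pairs)
gamma = (n * sxy - sx * sy) / (n * sxx - sx * sx)
eps = (sy - gamma * sx) / n
print(round(eps, 3), round(gamma, 3))  # recovers 0.021 and 0.003
```

The intercept of the fit is the linear loss ε and the slope is the nonlinear conversion coefficient Γ, which is why the paper can cross-check ε against the cold-cavity finesse.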
On the contracting cavity-length scan (solid line), the fringe is broadened because the opto-thermal dynamics on this side is characterized by a self-stabilizing effect of the optical path to the laser frequency, whereas on the expanding-length side (dotted curve) the feedback is positive [20,21]. In the PPKTP experiment of [19], the thermal effects were so prominent that the fringe shape broadened over several cold-cavity linewidths to acquire a triangular shape. In our case, the broadening is comparatively very modest. It is expected that heating is due to both the residual FF absorption and the SH absorption. To quantify further the role of residual thermal effects, we use the radial heat diffusion model detailed in [20], assuming an equivalent crystal rod radius equal to the half-thickness of the PPKTP (r0 = 0.5 mm). The temperature rise due to the FF absorption can then be written as ΔT ≡ T − T0 = ΔT0 − q r², where T0 is the nominal phase-matching temperature in the absence of heating and r is the radial coordinate. The uniform temperature shift ΔT0 expresses as

ΔT0 = [α_ω P_c / (4π K_C)] [0.57 + ln(2 r0²/w0²)] ≡ k P_c.    (11)

The quantity α_ω P_c denotes the absorbed power per unit length. The quadratic-term coefficient in ΔT, responsible for lensing, is q = α_ω P_c/(π K_C w0²), with a corresponding thermal lens power

p_th = 1/f_th = [P_ω^abs/(π w0²)] (dn_ω/dT)/K_C,    (12)

where P_ω^abs = α_ω L_C P_c is the total FF absorbed power (P_ω^abs = 19.5 mW at P_c = 3.25 W for α_ω = 0.3% cm⁻¹). The quantity in parentheses, f = (dn_ω/dT)/K_C, is the thermal figure of merit of KTP, with K_C = 3.3 W/(m °C) [22] and dn_ω/dT = 1.53 × 10⁻⁵ K⁻¹ [26]. The normalized FF fringe lineshape under an adiabatic length scan, y(δ) = P_c(δ)/P_c^m, where P_c(δ = 0) is the maximum intracavity power given by Eq. (1), is described by

y(δ) = 1 / {1 + [δ − Δ y(δ)]²},    (13)

where δ = (ν_L − ν_cav)/γ is the normalized cavity detuning, γ being the cold-fringe half-linewidth: γ = c/{2F [L_cav + L_C(n_ω − 1)]} = 8.5 MHz. In Eq. (13), the Airy function has been approximated by a Lorentzian in the vicinity of the resonance. For Δ ≠ 0, the cavity resonance is seen to move
adiabatically as the crystal is thermally loaded. The FWFM of the thermally broadened resonance is given, in units of γ, by Δ = α_ω F L_C P_c f/(2π λ_ω) [20]. For a sufficiently strong thermal load (Δ ≫ 1) the fringe shape presents a hysteresis behavior, the saddle-node point corresponding to the top of the fringe. For P_c = 3.25 W, one finds ΔT0 = 0.14 °C and f_th = 65 mm. Such a short focal length means that significant FF lensing still occurs despite the looser focusing used and the small value of the FF absorption coefficient. Only a rigorous hot ring-cavity waist analysis, similar to the one developed for a standing-wave symmetric resonator in [20], can give an indication of the influence of this thermal lens on the cold-cavity waist. For a symmetric resonator, the length must be reduced to the minimum allowed space in order to contain the thermal lens effect. Due to ΔT0, the FF hot fringe is shifted from the cold-fringe position by Δ ≈ 0.47 half-linewidth only, given F ≈ 30.

To evaluate the heating due to the SH absorbed power, Eqs. (11) and (12) cannot be used as they are, because the blue power is not uniform along the crystal (P_2ω = 0 at z = 0 and maximum at z = L_C). It is, however, possible to estimate the total blue absorbed power P_2ω^abs = Γ_abs P_c² from Eq. (8), which yields P_2ω^abs = 25 mW, with Γ_abs = 2.4 × 10⁻³ W⁻¹. This heating power can be distributed uniformly if we define an effective absorption
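The quoted numbers can be checked against Eqs. (11) and (12). Note the caveat: the 1/f_th expression is our reading of a garbled source equation, chosen because it reproduces the quoted 65 mm, so treat it as an assumption.

```python
# Numerical check of Eqs. (11)-(12) as reconstructed above (the 1/f_th
# form is our reading of the garbled source; treat it as an assumption).
import math

alpha = 0.3       # FF absorption: 0.3 %/cm = 0.003 /cm = 0.3 /m
P_c = 3.25        # circulating FF power, W
K_C = 3.3         # thermal conductivity of KTP, W/(m K)
r0 = 0.5e-3       # equivalent rod radius, m
w0 = 43e-6        # FF waist, m
P_abs = 19.5e-3   # total FF absorbed power quoted in the text, W
dndT = 1.53e-5    # thermo-optic coefficient, 1/K

dT0 = alpha * P_c / (4 * math.pi * K_C) * (0.57 + math.log(2 * r0**2 / w0**2))
f_th = 1.0 / (P_abs / (math.pi * w0**2) * dndT / K_C)

# roughly 0.145 degC and 64 mm (the paper quotes 0.14 degC and 65 mm)
print(round(dT0, 3), round(f_th, 3))
```

The agreement to within rounding supports the reconstruction of both formulas.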
SAE Technical Standards Board Rules provide that: "This report is published by SAE to advance the state of technical and engineering sciences. The use of this report is entirely voluntary, and its applicability and suitability for any particular use, including any patent infringement arising therefrom, is the sole responsibility of the user." SAE reviews each technical report at least every five years, at which time it may be reaffirmed, revised, or cancelled. SAE invites your written comments and suggestions.

Copyright 2000 Society of Automotive Engineers, Inc.

2.2 Related Publications—The following publications are provided for information purposes only and are not a required part of this document. Additional information concerning gray iron castings, their properties, and use can be obtained from:

1. Metals Handbook, Vol. 1, 10th Edition, ASM International, Materials Park, OH
2. Cast Metals Handbook, American Foundrymen's Society, Des Plaines, IL
3. 1981 Iron Castings Handbook, Iron Castings Society, Inc., Cleveland, OH
4. H.D. Angus, "Physical and Engineering Properties of Cast Iron," British Cast Iron Research Association, Birmingham, England, 2nd Edition, 1976
5. "Gray, Ductile, and Malleable Iron Castings Current Capabilities," STP-455, American Society for Testing and Materials, 100 Barr Harbor Drive, West Conshohocken, PA 19428-2959
6. G.N.J. Gilbert, "Engineering Data on Grey Cast Iron," BCIRA (1977), Alvechurch, Birmingham, England
7. "Tables for Normal Tolerance Limits, Sampling Plans and Screening," R.E. Odeh and D.B. Owen, Marcel Dekker, Inc., New York and Basel, 1980
8. "Fatigue Properties of Gray Cast Iron," L.E. Tucker and D.R. Olberts, SAE Paper 690471

3. Grade Definition and Designation

3.1 Iron Grade—Gray iron grades, defined by their minimum test bar t/h ratio, are designated by the letter G followed by a number equaling the defining minimum test bar t/h ratio multiplied by 100.
The units used for this purpose are MPa for both tensile strength and hardness. The t/h ratio is dimensionless.

EXAMPLE—G10 designates a gray iron having minimum test bar t/h = 0.100.

3.2 Hardness Grade—Hardness grades, defined by the minimum hardness exhibited in castings, are designated by the letter H followed by a number equaling the minimum casting hardness divided by 100. The casting hardness unit used for this purpose is the MPa.

EXAMPLE—H18 designates a minimum casting hardness of 1800 MPa.

3.3 Casting Grade—SAE gray iron casting grades are defined and designated by combining the iron grade and the hardness grade designations.

EXAMPLE—G10H18 designates iron in castings with minimum test bar t/h of 0.100 MPa/MPa and minimum casting hardness of 1800 MPa.

3.4 Special Requirements—Special requirements, defined for special applications, are designated by a lowercase suffix letter placed at the end of the casting grade designation.

EXAMPLE—G11H20b designates iron meeting special requirements of special-service brake drums.

3.5 Equivalency and Conversion—Equivalency information for engineering purposes, between this and other standards, is provided in A.4.1, A.4.6, and A.4.7. Grades of this document can have multiple equivalents with grades of previous SAE and most other standards, as exemplified by grades G3000 and G4000. Determination of the current grade equivalent for castings established in production under previous SAE or other documents shall be by the producer, in accordance with 5.5.3, based on historical or current test data from the established process, and reported to and approved by the purchaser.
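The designation arithmetic of 3.1–3.3 can be sketched as follows (illustrative helper, not part of the standard):

```python
# Sketch (not part of the SAE standard): decoding the G/H designation
# arithmetic of 3.1-3.3 for a casting grade string such as "G10H18".
import re

def decode_grade(designation: str):
    """Return (min t/h ratio, min casting hardness in MPa, suffix)."""
    m = re.fullmatch(r"G(\d+)H(\d+)([a-z]?)", designation)
    if m is None:
        raise ValueError(f"not an SAE casting grade: {designation!r}")
    iron_no, hard_no, suffix = int(m.group(1)), int(m.group(2)), m.group(3)
    t_h_min = iron_no / 100.0       # 3.1: iron grade number = 100 * min t/h
    hardness_min = hard_no * 100.0  # 3.2: hardness grade number = min MPa / 100
    return t_h_min, hardness_min, suffix

print(decode_grade("G10H18"))   # (0.1, 1800.0, '')
print(decode_grade("G11H20b"))  # (0.11, 2000.0, 'b')
```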
When the producer does not have access to the applicable historical data, grade determination shall be based on samples provided by the producer and approved by the purchaser.

4. Grades

4.1 Iron Grades—Iron grades and their t/h lower limit requirements are shown in Table 1.

4.2 Hardness Grades—Hardness grades and their required lower hardness limits are shown in Table 2.

4.3 Special Requirements—Special additional requirements for particular applications and service conditions, and their lowercase letter designators, are shown in Table 3. Special additional requirements shall not change test bar t/h ratio or casting hardness requirements.

TABLE 1—IRON GRADES

Grade   t/h Ratio Lower Limit(1)   t/h Ratio Lower Limit(1)
        MPa/MPa(2)                 psi/HB(3)(4)
G7      0.070                      100
G9      0.090                      128
G10     0.100                      142
G11     0.110                      156
G12     0.120                      171
G13     0.130                      185

1. Statistically defined.
2. Both tensile and hardness in MPa units.
3. For reference only. The MPa/MPa SI metric values are primary. See Section 1.
4. Units of HB are kgf per mm².

TABLE 2—HARDNESS GRADES

Grade   Casting Hardness Lower Limit(1)   Casting Hardness Lower Limit(1)
        MPa(2)                            HB(3)
H10     1000                              102
H11     1100                              112
H12     1200                              122
H13     1300                              133
H14     1400                              143
H15     1500                              153
H16     1600                              163
H17     1700                              173
H18     1800                              184
H19     1900                              194
H20     2000                              204
H21     2100                              214
H22     2200                              224
H23     2300                              235
H24     2400                              245

1. Statistically defined.
2. Hardness in MPa = HB multiplied by 9.80665.
3. Units of HB are kgf per mm².

4.4 Casting Grades—Combination of iron grade, hardness grade, and special requirement designation, if any, defines the casting grade. A partial list of casting grades in common production and use, identified as reference grades and considered standard, is given in Table 4 with current and previous SAE designations.
Other combinations of iron grade and hardness grade which are established in production and use, or become so in the course of application development, or in accordance with 3.5 and 5.5.3, are also considered standard.

NOTE—For castings successfully established in production and use under previous designations, the current SAE casting grade shall be determined by the producer and approved by the purchaser (see 3.5).

TABLE 3—SPECIAL REQUIREMENTS

Designator  Application                              Requirements
a           Brake Drums and Discs and Clutch         1. Total Carbon 3.4% minimum.
            Plates for Special Service               2. Microstructure: Lamellar Pearlite. Ferrite < 15%(1)
b           Brake Drums and Discs and Clutch         1. Total Carbon 3.4% minimum.
            Plates for Special Service               2. Microstructure: Lamellar Pearlite. Ferrite or carbide < 5%(1)
c           Brake Drums and Discs and Clutch         1. Total Carbon 3.5% minimum.
            Plates for Special Service               2. Microstructure: Lamellar Pearlite. Ferrite or carbide < 5%(1)
d           Alloy Hardenable Gray Iron               1. Chromium shall be 0.85 to 1.50%(3)
            Automotive Camshafts(2)                  2. Molybdenum shall be 0.40 to 0.60%(3)
                                                     3. Microstructure of cam nose: extending to 45 degrees on both
                                                        sides of the cam nose centerline and to a minimum depth of
                                                        3.2 mm from the surface, shall consist of primary carbide
                                                        (cellular and/or acicular) and graphite in a matrix of fine
                                                        pearlite.
                                                     4. The amount of carbide in the cams and the method of checking
                                                        shall be specified by the purchaser.
                                                     5. Casting hardness check location shall be on a bearing surface.

1. See ASTM E 562.
2. As-cast requirements. Camshafts may be flame or induction hardened to specified hardness and depth on cam surfaces.
3. Ranges for specific castings shall be within the ranges shown.

TABLE 4—REFERENCE GRADES(1)

1. Established in production and use and having near equivalents with previous SAE designations.

SAE Casting Grade    Previous SAE Designation(2)

2. Equivalency based on tensile strength in 30 mm diameter test bars.
See Table A4.

G9H12      G1800
G9H17      G2500
G10H18     G3000
G11H18     G3000
G11H20     G3500
G12H21     G4000
G13H19     G4000
G7H16 c    G1800 h(3)
G9H17 a    G2500 a
G10H21 c   G3500 c
G11H20 b   G3500 b
G11H24 d   G4000 d

3. The h suffix was previously used to designate both t/h and carbon requirements for this grade.

5. Tensile Strength to Hardness Ratio, Hardness, and Casting Tensile Strength

5.1 Tensile strength values for the t/h ratio determination shall be obtained, as shown in Figure 1, from separately cast 30 mm test bars (type "B") in accordance with ASTM A 48, except that sampling frequency shall be as needed for statistical analysis to determine conformance of the t/h ratio with the requirements of this document. Test specimens shall be at room temperature, defined as between 10 and 35 °C, during tensile testing.

FIGURE 1—TEST BAR HARDNESS LONGITUDINAL TEST ZONE IN RELATION TO TENSILE SPECIMEN

5.2 Test bar hardness for the t/h ratio determination shall be taken on the tensile test bar between the bar center and the midpoint of the as-cast radius, and between 50 and 75 mm from the as-cast bar end, as shown in Figures 1 and 2.

FIGURE 2—TEST BAR HARDNESS RADIAL TEST ZONE

5.3 Brinell hardness is considered standard for test bars and production castings and shall be determined according to ASTM E 10 after sufficient material has been removed from the casting surface to ensure representative hardness readings. The 10 mm ball and 3000 kgf load shall be used unless physically precluded by specimen dimensions as given in ASTM E 10.
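As background for 5.3, the Brinell number behind an ASTM E 10 reading follows the standard geometric relation HB = 2F/(πD(D − √(D² − d²))), with the load F in kgf and the ball and indentation diameters D and d in mm. A minimal sketch:

```python
# Sketch (the general Brinell relation, not text from this standard):
# HB = 2F / (pi*D*(D - sqrt(D^2 - d^2))), F in kgf, D and d in mm.
import math

def brinell_hb(load_kgf: float, ball_mm: float, indent_mm: float) -> float:
    """Brinell hardness number (kgf/mm^2) from indentation diameter."""
    D, d = ball_mm, indent_mm
    return (2.0 * load_kgf) / (math.pi * D * (D - math.sqrt(D * D - d * d)))

# 10 mm ball, 3000 kgf, 4.00 mm indentation -> about HB 229
print(round(brinell_hb(3000, 10.0, 4.00)))
```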
Test specimens shall be at room temperature, defined as between 10 and 35 °C, during hardness testing.

5.3.1 When a hardness test other than the Brinell test with a 10 mm ball and 3000 kgf load must be used, conversion to the 3000 kgf, 10 mm ball equivalent shall be by the applicable conversion table in SAE J417 or by on-site calibration using standard Brinell bars.

5.4 A non-destructive casting hardness test location on the casting for monitoring conformance to grade limits shall be established by agreement between purchaser and producer, or determined by the producer. It should be readily accessible for convenience in performing the test, to ensure adequate quantity, consistency, and accuracy of accumulated data for statistical validity in service of general variance control. Targeting of hardness measurement at service-function-related locations shall not be considered a requirement unless specified in accordance with 5.4.1.

5.4.1 In special cases, casting hardness at particular casting locations considered critical by the designer, but difficult to access or requiring casting destruction, may be specified by the purchaser with producer agreement. In such cases, hardness grade conformance may be established directly by hardness readings so obtained, or indirectly by hardness readings at an accessible location using an agreed method of correlation.

5.5 The foundry shall exercise the necessary controls and inspection techniques to ensure compliance with the specified hardness and t/h ratio minimums. When samples exhibit normal variance patterns, conformance with grade requirements for t/h and casting hardness shall be determined by long-term analysis of production samples using normal-curve statistical methods. For sample sizes less than 30, the lower limit shall be taken as 3 standard deviations below the mean.
For sample sizes larger than 30, the lower limits for t/h and casting hardness control may optionally be taken as the lower 3-standard-deviation limit or the lower 99% population limit of the one-sided normal distribution at 95% confidence, calculated by the confidence interval method (see A.1.5).

5.5.1 Test bar samples to confirm test bar t/h ratio conformance shall be random samples. Frequency of sampling may be specified by the purchaser or determined by the producer. Minimum frequency per grade shall be 1 per 8 h shift. The sample period may be any time interval, or accumulation of time intervals, in which the targeted mean t/h of the producer's process control specifications is unchanged.

5.5.2 Casting samples to confirm casting hardness conformance shall be random samples. Frequency of sampling may be specified by the purchaser or determined by the producer. Minimum frequency shall be the lesser of 5 per 8 h shift or 100% of production. The sample period may be any time interval, or accumulation of time intervals, during which the targeted mean casting hardness of the producer's process control specifications is unchanged.

5.5.3 Parts successfully established in production and use under previous SAE or other standards shall be reclassified under this document, without change in mean test bar t/h or mean casting hardness, by appropriate selection of the iron grade from Table 1, the casting hardness grade from Table 2, and the casting hardness range under 5.6.

5.5.4 Casting t/h data obtained by casting hardness tests as described in 5.4 or 5.4.1 and casting tensile tests as described in 5.7 shall be considered informational only and shall not be used for grade conformance assessment.

5.5.5 When casting hardness and/or test bar t/h variance patterns have too much skewness or otherwise do not support normal-curve methods of analysis, an alternate method shall be established by agreement of purchaser and producer which achieves population limit control equivalent to that described in 5.5.

5.6 Casting hardness range may be
specified by the purchaser to provide a non-statistical upper limit for machinability control. The standard range shall be 600 MPa or 60 HB, taken above the required grade minimum, and this shall be the assumed range when not specified. Purchasers shall not specify narrower ranges than this without prior agreement of the producer. Producers shall not exceed this range without prior agreement of the purchaser.

5.7 A minimum value for tensile strength determined by destructive testing at specified locations in castings may be specified as an additional, part-number-specific conformance requirement, by agreement between purchaser and producer on the applicable lower limit and statistical definition, sampling rate, and any special testing methods required. The agreed minimum shall be obtained with a standard grade as defined in this document. Information for estimating and experimentally determining the tensile minimum which can be expected for a given grade at specific locations in castings, for purposes of design and development, is given in Section A.4.

5.8 A statistical lower limit for tensile/hardness ratio determined by destructive testing at specified locations in castings may be specified as an additional, part-number-specific conformance requirement, by agreement between purchaser and producer on the applicable lower limit and statistical definition, sampling rate, and any special testing methods required. The agreed minimum shall be obtained with a standard grade as defined in this document. Information for estimating and experimentally determining the tensile/hardness ratio minimum which can be expected for a given grade at specific locations in castings, for purposes of design and development, is given in Section A.4.

6. Heat Treatment

6.1 Castings of hardness grades H10 through H17 may be annealed to meet hardness requirements.
Castings of grades H21 through H24 may be quenched and tempered to meet hardness requirements.

6.2 Appropriate heat treatment for removal of residual stresses, or to improve machinability or wear resistance, may be specified. Heat-treated castings must meet the hardness requirements of the grade.

7. Microstructure

7.1 Unless otherwise specified, gray iron covered by this document shall be substantially free of primary cementite and/or massive steadite and shall consist of flake graphite in a matrix of ferrite or pearlite or mixtures thereof.

7.2 Unless otherwise specified, the graphite structure shall be primarily type A in accordance with ASTM A 247.

8. Castings for Special Applications with Controlled Composition and Microstructure

8.1 Heavy-Duty Brake Drums and Clutch Plates

8.1.1 These castings are considered as special cases and are covered in Tables 3 and 4.

8.2 Alloy Iron Automotive Camshafts

8.2.1 These castings are considered as special cases and are covered in Tables 3 and 4.

9. General Requirements

9.1 Castings furnished to this document shall be representative of good foundry practice and shall conform to dimensions and tolerances specified on the casting drawing.

9.2 Approval by the purchaser of the location on the casting and the method to be used is required for any casting repair.

9.3 Additional casting requirements, such as vendor identification, other casting information, and special testing, may be agreed upon by purchaser and supplier. These should appear as product specifications on the casting or part drawing.

10. Notes

10.1 Marginal Indicia—The change bar (l) located in the left margin is for the convenience of the user in locating areas where technical revisions have been made to the previous issue of the report.
An (R) symbol to the left of the document title indicates a complete revision of the report.

PREPARED BY THE SAE IRON AND STEEL TECHNICAL COMMITTEE DIVISION 9—AUTOMOTIVE IRON AND STEEL CASTINGS OF THE SAE IRON AND STEEL TECHNICAL EXECUTIVE COMMITTEE

APPENDIX A

NOTE—Information in the Appendix is for reference only and does not constitute requirements.

A.1 Definition and Control of Gray Iron

A.1.1 Gray iron is a cast iron in which the graphite is present in flake form instead of the nodules or spheroids found in malleable or ductile iron. Because its graphite has this flake structure, gray iron exhibits much greater sensitivity of mechanical properties to carbon content than malleable or ductile iron. As in malleable and ductile iron, the metallic matrix in which the graphite of gray iron resides is normally either eutectoid or hypo-eutectoid silicon steel, with a working range of hardness of about 150 to 600 HB (1.5 to 6 GPa). In special cases, the matrix may be martensitic or hyper-eutectoid, with working hardness up to about 800 HB (8 GPa).

A.1.2 Gray iron naturally divides into a family or series of grades having different tensile strength to hardness (t/h) ratios, uniformly regulated by eutectic graphite content up to the eutectic composition, as shown in Figure A1 with carbon equivalent (CE) as the graphite parameter. The decline in t/h ratio continues as CE increases above the eutectic, but at a much smaller and less predictable rate. Constant-t/h lines of this figure are essentially lines of constant graphite effect on mechanical properties. Properties sensitive to both graphite and matrix, such as bulk tensile strength and bulk hardness, vary in constant proportionality to each other and to their matrix counterparts (matrix tensile strength and matrix hardness) along constant-t/h lines. Elastic modulus and damping capacity vary mainly with graphite only and are therefore highly constant along the constant-t/h lines.
Since these lines are also lines of constant eutectic graphite and CE, the most important castability parameters, they are logical grade lines for foundry control as well as for mechanical property control.

FIGURE A1—CHARACTERISTIC t/h RATIOS OF GRAY IRONS

A.1.3 Specification control of gray iron, since it is a composite material, requires joint classification by at least two property parameters, of which one should be mainly graphite-microstructure related and the other mainly a function of the matrix microstructure. The limited effectiveness of control by a single bulk property is illustrated in Figures A2 and A3. Figure A2 exemplifies grading by tensile strength alone: any given grade so defined is seen to traverse a wide range of possible hardness minimums. Likewise, in Figure A3, hardness is used as a single defining property and a wide range of possibilities exists for the tensile minimum. In both cases, t/h ratio and, therefore, elastic modulus, damping capacity, and castability are undefined. Figure A4 illustrates the improved control obtainable by jointly specifying two property parameters. In this example, t/h ratio and hardness are the joint control parameters. A tensile minimum is now defined and, in general, all properties including castability are effectively controlled.

FIGURE A2—GRADING BY TENSILE
FIGURE A3—GRADING BY HARDNESS
FIGURE A4—GRADING BY t/h RATIO AND HARDNESS

A.1.4 The control parameters used to classify gray iron in this document are test bar t/h ratio and casting hardness, selected because they meet the criteria cited in A.1.3 and are well-established, widely used tests. The t/h ratio in this document is dimensionless, reflecting long-established practice in the metric countries, where identical units have historically been used for both tensile strength and hardness. Hardness units will be in kgf/mm² when reported as HB and are multiplied by g = 9.80665 to convert to MPa and form the dimensionless ratio with tensile strength in MPa units.
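Forming the dimensionless ratio described in A.1.4 can be sketched as follows (illustrative helper with example numbers, not part of the standard):

```python
# Sketch (illustrative): forming the dimensionless t/h ratio of A.1.4 from
# a tensile strength in MPa and a Brinell reading in kgf/mm^2.
G_N = 9.80665  # kgf -> N factor used by the standard

def hb_to_mpa(hb: float) -> float:
    """Brinell hardness (kgf/mm^2) expressed in MPa."""
    return hb * G_N

def t_h_ratio(tensile_mpa: float, hb: float) -> float:
    """Dimensionless tensile-strength-to-hardness ratio (MPa/MPa)."""
    return tensile_mpa / hb_to_mpa(hb)

# e.g. a 200 MPa bar at HB 184 (about 1804 MPa) gives t/h close to 0.11
print(round(hb_to_mpa(184), 1), round(t_h_ratio(200, 184), 3))
```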
For a number of purposes, it is useful to know the matrix hardness. Examples of its use are process control of the hardness property, simplification of bivariate statistical analysis of hardness and tensile strength, and engineering selection of iron grade for best wear resistance or fatigue life in strain-limited loading. The matrix hardness can be estimated with sufficient accuracy for most purposes from the bulk hardness and t/h ratio with the relation:

H_matrix = H_bulk / [1 − k (1 − (t/h ratio)/0.35)]    (Eq. A1)

in which k is a graphite-structure-related constant with a usual range in sand-cast gray iron of 0.60 to 0.65.

A.1.5 With the continuous production processes used for automotive casting production, conformance to specification control limits can be assessed by analysis of periodic samples using the confidence interval method. This method predicts population limits of parent production in standard deviation units, at various confidence levels, as multiples of the sample standard deviation measured from the sample mean. Tabulations of such multipliers versus sample size are widely published (one of many possible references is given in 2.2). The curve of Figure A5 is a plot of such a tabulation showing how the multiplier typically varies with sample size. The curve of Figure A5 is drawn for 99% population limits of a one-sided normal distribution at 95% confidence. For a sample size of about 300 bars, the −2.5 sigma limit of the sample would be the 99% population limit for the parent production.

FIGURE A5—CONFIDENCE INTERVAL TOLERANCING MULTIPLIERS (NUMBER OF SIGMAS)

A.2 Chemical Composition

A.2.1 Typical base composition ranges generally employed for the iron grades are shown in Table A1.
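Eq. A1 can be evaluated as below. Note the hedge: the equation was reconstructed from a garbled source, so verify the form against the official SAE document before relying on it.

```python
# Sketch of Eq. A1 (reconstructed from the surrounding text; verify against
# the official SAE document): matrix hardness from bulk hardness and t/h,
# with k a graphite-structure constant (usual range 0.60-0.65).
def matrix_hardness(h_bulk: float, t_h: float, k: float = 0.62) -> float:
    """H_matrix = H_bulk / [1 - k*(1 - (t/h)/0.35)]."""
    return h_bulk / (1.0 - k * (1.0 - t_h / 0.35))

# At t/h = 0.35 the graphite correction vanishes and H_matrix == H_bulk;
# lower t/h (more graphite) implies a matrix harder than the bulk reading.
print(matrix_hardness(1800.0, 0.35))
print(round(matrix_hardness(1800.0, 0.10), 1))
```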
The base composition does not include alloys such as Cu, Cr, Mo, Ni, or others which may be added for hardness or t/h control, or to meet mandatory composition limits of the special irons given in Table 3 of the main body of this document.

A.2.2 Typical base composition ranges may vary for specific grades depending on casting section size or metallurgical factors such as trace element content, or to satisfy mandatory composition requirements of special irons as given in Table 3.

A.2.3 Typical composition ranges, including typical alloy content, for camshaft iron, grade G11H24d, are shown in Table A2.

A.3 Microstructure

A.3.1 The as-cast microstructure of gray iron covered by this document consists of a mixture of flake graphite in a matrix consisting of ferrite, ferrite and pearlite, or pearlite, as described in Table A3. The quantity of flake graphite and the size of the flakes vary with iron grade. The amount and fineness of pearlite vary with the hardness grade. The pearlite is usually lamellar but may be partially spheroidal in slowly cooled sections or where heat treatment has been applied.

TABLE A1—TYPICAL BASE COMPOSITIONS

Iron Grade  Previous Designation  Carbon  Silicon  Manganese  Sulfur Max.  Phosphorus Max.  C.E.(1) (Approx.)

1. C. E.
(Carbon Equivalent) = %C + (1/3) %Si.

G7    G1800h   3.50 - 3.70   2.30 - 2.80   0.60 - 0.90   0.14   0.25   4.35 - 4.55
G9    G2500    3.40 - 3.65   2.10 - 2.50   0.60 - 0.90   0.12   0.25   4.15 - 4.40
G10   G3000    3.35 - 3.60   1.90 - 2.30   0.60 - 0.90   0.12   0.20   4.05 - 4.30
G11   G3000    3.30 - 3.55   1.90 - 2.20   0.60 - 0.90   0.12   0.10   4.00 - 4.25
G12   G3500    3.25 - 3.50   1.90 - 2.20   0.60 - 0.90   0.12   0.10   3.95 - 4.20
G13   G4000    3.15 - 3.40   1.80 - 2.10   0.70 - 1.00   0.12   0.08   3.80 - 4.05

TABLE A2—TYPICAL CHEMICAL COMPOSITION OF ALLOY GRAY IRON AUTOMOTIVE CAMSHAFTS, GRADE G11H24d (PREVIOUS 4000d)

Constituent     Wt %
Total Carbon    3.10 to 3.60
Silicon         1.95 to 2.40
Manganese       0.60 to 0.90
Phosphorus      0.10 max
Sulfur          0.15 max
Chromium        0.85 to 1.50
Molybdenum      0.40 to 0.60
Nickel          0.20 to 0.45
Copper          Residual

A.3.2 The size and distribution of graphite flakes in gray iron depend upon chemistry, liquid metal treatment (inoculation), and cooling rate during solidification. The primary, but not sole, chemical determinant is carbon equivalent, defined as C + Si/3.

A.3.2.1 Alloying elements used for pearlite hardness control have small but non-negligible effects on graphite size. Since some elements operate as coarsening agents and others as refining agents, combinations can be used for a neutral effect.

A.3.2.2 When alloying elements are used to produce a mixed structure of primary carbide and graphite, as in the cams of alloy-hardenable gray iron automotive camshafts, eutectic graphite is reduced and significant flake refinement results.

A.3.2.3 The graphite microstructure of gray iron cannot be changed by heat treatment.

A.3.3 The hardness of the ferrite in the gray iron matrix is unaffected by cooling rate but is affected by alloy elements in solid solution, the most noticeable being silicon, which increases ferrite hardness about 35 HB for each 1% of silicon present.
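The carbon-equivalent relation from footnote 1 of Table A1 can be sketched and checked against a row of the table (illustrative, not part of the standard):

```python
# Sketch (illustrative): carbon equivalent CE = %C + %Si/3 from footnote 1
# of Table A1, checked against the midpoints of the G10 composition row.
def carbon_equivalent(pct_c: float, pct_si: float) -> float:
    return pct_c + pct_si / 3.0

# G10 base composition midpoints: C 3.35-3.60, Si 1.90-2.30
ce = carbon_equivalent((3.35 + 3.60) / 2, (1.90 + 2.30) / 2)
print(ce)  # about 4.175, inside the tabulated 4.05-4.30 approximate range
```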
Heat treatment is required to decompose all pearlite and produce a fully ferritic structure.

A.3.4 The amount and hardness of pearlite depend jointly on cooling rate and alloy chemistry, which are balanced in the foundry to control pearlite amount and hardness and, consequently, casting hardness. Both the amount and hardness of pearlite can be altered by heat treatment.

A.3.5 In special cases, such as alloy-hardenable iron camshafts, alloy is also used to obtain controlled percentages of carbides, detracting from graphite, in cam and valve-lifter surfaces where maximum contact stress occurs. The as-cast matrix structure in these cases is pearlite; in the contact surfaces, the matrix is transformed to tempered martensite by surface heat treatment.

A.3.6 Gray iron castings can be through-hardened by liquid quenching or selectively surface-hardened by either flame or induction methods.

TABLE A3—TYPICAL MICROSTRUCTURES OF REFERENCE GRADES

SAE Casting Grade  Previous Designation  Graphite Microstructure(1)     Matrix Microstructure
G9H12              G1800                 Type VII A & B                 Ferritic - Pearlitic
G9H17              G2500                 Type VII A & B                 Pearlitic - Ferritic
G10H18             G3000                 Type VII A                     Pearlitic
G11H18             G3000                 Type VII A                     Pearlitic
G11H20             G3500                 Type VII A                     Pearlitic
G12H21             G4000                 Type VII A                     Pearlitic
G13H19             G4000                 Type VII A                     Pearlitic
G7H16 c            G1800 h               Type VII A, B, & C size 1-3    Lamellar Pearlite
G9H17 a            G2500 a               Type VII A size 2-4            Lamellar Pearlite
G10H21 c           G3500 c               Type VII A size 3-5            Lamellar Pearlite
G11H20 b           G3500 b               Type VII A size 3-5            Lamellar Pearlite
G11H24 d           G4000 d               Type VII A & E size 4-7(1)     Pearlitic - Carbidic(2)

1. See ASTM A 247.
2. In cam nose. As cast.
Matrix pearlite in the cam may be transformed to tempered martensite by subsequent flame or induction hardening.

A.4 Mechanical Properties of Castings for Design

A.4.1 The calculated tensile strength minima shown in Table A4 for 30 mm diameter test bars assume normal-curve statistics with foundry-industry-typical variance levels and are in good agreement with typical production data. Values are also given in the table for a quantity called the Casting Strength Index, which is defined as the multiple of the statistical grade minima of test bar t/h ratio and casting hardness. Since the iron grade number equals the t/h ratio times 100 and the hardness grade number equals the hardness (in MPa) divided by 100, the Casting Strength Index also equals the product of the iron grade number and the hardness grade number and is also in MPa. Casting hardness is specified as a direct measure on the casting and controlled in common foundry practice by ladle alloy additions as needed to offset section size effects. The t/h ratio in castings is subject to section sensitivity but in a given section has a parallel relationship with the t/h ratio in the test bar. For these reasons, with uniform statistical definition, the Casting Strength Index, defined as the product of the statistical minima of casting hardness and test bar t/h, is a valid relative measure of casting strength for design purposes. When the section sensitivity of the t/h ratio is quantitatively known, this index can also be used to make a first working estimate of the absolute value of casting tensile strength. Both test bar tensile strength and Casting Strength Index values can be used to determine tensile equivalency with iron graded by other specifications and to optimize SAE grade choice.
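The grade-number arithmetic behind the Casting Strength Index can be sketched as follows (illustrative helper, not part of the standard):

```python
# Sketch (illustrative): the Casting Strength Index of A.4.1, equal to the
# iron grade number times the hardness grade number (result in MPa).
import re

def casting_strength_index(designation: str) -> float:
    m = re.fullmatch(r"G(\d+)H(\d+)[a-z]?", designation)
    if m is None:
        raise ValueError(designation)
    iron_no, hard_no = int(m.group(1)), int(m.group(2))
    # iron_no = 100 * min t/h and hard_no = min hardness / 100, so their
    # product equals (min t/h) * (min hardness in MPa).
    return float(iron_no * hard_no)

print(casting_strength_index("G10H18"))  # 180.0, as in Table A4
print(casting_strength_index("G9H12"))   # 108.0
```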
A.4.1.1 The method of defining the Casting Strength Index as minimum casting hardness multiplied by minimum test bar t/h, and its relationship to the statistical limits of tensile strength and hardness, are shown graphically in Figure A6.

TABLE A4 — TENSILE STRENGTH CHARACTERISTICS AND TENSILE EQUIVALENTS OF SAE REFERENCE GRADES (1)

SAE Casting   Former SAE   Non-SAE Tensile Grade   Casting Strength Index (4)   Theoretical Tensile Strength Minimum,
Grade         Grade (2)    SI (3)    inch-lb (3)   MPa      ksi                 30 mm Dia. Test Bars (5): MPa    ksi
G9H12         G1800        -         -             108      15.7                124      18.0
G9H17         G2500        175       25            153      22.2                170      24.6
G10H18        G3000        200       30            180      26.1                198      28.7
G11H18        G3000        225       30            198      28.7                217      31.5
G11H20        G3500        250       35            220      31.9                239      34.7
G12H21        G4000        275       40            252      36.5                272      39.4
G13H19        G4000        275       40            247      35.8                268      38.9
G7H16c        G1800h (6)   -         -             112      16.2                127      18.4
G9H17a        G2500a       175       25            153      22.2                170      24.6
G10H21c       G3500c       225       35            210      30.5                228      33.1
G11H20b       G3500b       250       35            220      31.9                239      34.7
G11H24d       G4000d       275       40            264      38.3                284      41.2

Notes:
(1) Established in production and use and having near equivalents in previous SAE standards and test bar tensile strength equivalents in other standards.
(2) Former SAE grades having near equivalence with t/h and hardness requirements, and theoretical test bar tensile strength minimums, of the current SAE casting grades.
(3) Grades of standards based solely on test bar tensile strength, such as ASTM A 48 and A 48M, ISO 185, EN 1561, and others, having near equivalence with the theoretical test bar tensile strength minimums of the current SAE casting grades.
(4) Multiple of the test bar t/h ratio and casting hardness minimum of the current SAE casting grade. Numerically equal to the iron grade number multiplied by the casting hardness grade number.
(5) 99% population lower limit of the SAE casting grade at 95% confidence, one-sided normal distribution, 300-bar sample (–2.5 σ). Hardness and t/h minimums at –3 σ; hardness range 500 MPa; t/h range 0.35 for iron grades 7 to 11 and 0.30 for iron grades 12 to 13.
(6) The h suffix was previously used to designate both the t/h and carbon requirements of this grade.
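Each strength value in Table A4 appears in both MPa and ksi, related by the standard conversion 1 ksi = 6.8948 MPa. A quick sanity check of the table's pairing:

```python
MPA_PER_KSI = 6.8948  # standard conversion factor between ksi and MPa

def mpa_to_ksi(mpa: float) -> float:
    """Convert a strength value from MPa to ksi."""
    return mpa / MPA_PER_KSI

# Spot-check two MPa/ksi pairs from Table A4:
print(round(mpa_to_ksi(220), 1))  # 31.9 (G11H20 test bar minimum)
print(round(mpa_to_ksi(153), 1))  # 22.2 (G9H17 Casting Strength Index)
```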
Introduction

Steel, an alloy primarily composed of iron and carbon, has been the backbone of modern civilization for centuries, playing a pivotal role in infrastructure development, transportation, energy production, and countless other industrial applications. However, not all steel is created equal. This essay delves into the multifaceted nature of premium steel, examining the stringent standards it must meet to be classified as such and the various attributes that distinguish it from its lower-grade counterparts. It is through these rigorous criteria and exceptional qualities that premium steel emerges as the material of choice for projects demanding utmost reliability, durability, and performance.

1. Material Composition and Microstructure

At the core of premium steel's superior characteristics lies its meticulously controlled composition and microstructure. The precise balance of elements, including carbon, manganese, silicon, chromium, nickel, molybdenum, and others, is crucial for achieving the desired mechanical properties, corrosion resistance, and weldability. While carbon content is fundamental for hardness and strength, alloying elements serve specific purposes: chromium enhances corrosion resistance, nickel improves toughness and ductility, and molybdenum increases strength at high temperatures.

Moreover, the microstructure of premium steel, which refers to the arrangement of its constituent phases (e.g., ferrite, pearlite, martensite, and austenite), is carefully tailored through controlled heating and cooling processes, such as annealing, quenching, and tempering. These treatments influence the grain size, phase distribution, and dislocation density within the steel, ultimately dictating its mechanical behavior, toughness, and fatigue resistance. A well-designed microstructure ensures that premium steel exhibits an optimal combination of strength, ductility, and toughness, enabling it to withstand diverse service conditions without failure.

2.
Mechanical Properties and Performance

Premium steel is characterized by exceptional mechanical properties that surpass those of standard grades. Key indicators include:

a) Yield Strength: The minimum stress required to initiate permanent deformation in the material. Premium steels often exhibit yield strengths well above 500 MPa, providing robust structural integrity and minimizing deformation under load.

b) Tensile Strength: The maximum stress the material can withstand before fracturing. High tensile strengths (exceeding 800 MPa in some cases) enable premium steel to endure heavy loads and dynamic stresses without failure.

c) Ductility and Toughness: The ability of the material to deform plastically without fracturing and to absorb energy during impact or deformation. Enhanced ductility and toughness in premium steels reduce the risk of brittle fractures and ensure better performance under cyclic loading and in low-temperature environments.

d) Fatigue Resistance: The capacity to withstand repeated or cyclic loads without cracking or fracturing. Premium steels, with their optimized microstructures and low residual stresses, demonstrate excellent fatigue resistance, ensuring long-term reliability in applications subjected to fluctuating stresses.

3. Corrosion and Wear Resistance

By incorporating corrosion-resistant alloys like stainless steel or applying protective coatings, premium steels offer enhanced resistance to oxidation, pitting, crevice corrosion, and stress corrosion cracking.
This attribute is particularly critical in harsh environments, such as the marine, chemical processing, or food processing industries, where prolonged exposure to aggressive chemicals or moisture can rapidly degrade lower-grade materials. Additionally, premium steels may incorporate hardening elements or surface treatments to increase their wear resistance, making them suitable for applications involving friction, abrasion, or erosion, such as in mining equipment, cutting tools, or heavy machinery components.

4. Weldability and Fabrication

Premium steels are designed with weldability in mind, ensuring that they can be joined using various welding techniques while maintaining their inherent mechanical properties and corrosion resistance. This is achieved through careful control of alloying elements, carbon equivalent, and microstructure. Good weldability reduces the risk of defects, such as cracks, porosity, or lack of fusion, during fabrication and contributes to the overall structural integrity of welded assemblies.

5. Certifications, Standards, and Testing

For a steel product to be deemed 'premium,' it must adhere to stringent international or industry-specific standards, such as those set by ASTM, AISI, EN, DIN, JIS, or API. These standards encompass material composition, mechanical properties, fabrication processes, non-destructive testing (NDT), and quality control measures. Compliance with these standards provides assurance to end-users that the steel will perform as intended in its designated application.

Furthermore, premium steel manufacturers typically subject their products to rigorous testing, including chemical analysis, mechanical property tests (tensile, impact, hardness), corrosion tests (salt spray, cyclic polarization), non-destructive examinations (ultrasonic testing, magnetic particle inspection), and sometimes even full-scale prototype testing.
Such comprehensive evaluations verify that the steel meets or exceeds the specified requirements and performs reliably under real-world conditions.

Conclusion

Premium steel represents the epitome of metallurgical expertise, combining meticulously controlled composition, optimized microstructure, exceptional mechanical properties, enhanced corrosion and wear resistance, and excellent weldability. By adhering to stringent international and industry-specific standards and undergoing rigorous testing, premium steel guarantees unparalleled performance, durability, and reliability in even the most demanding applications. As industries continue to push the boundaries of innovation and efficiency, premium steel remains a cornerstone material, offering designers and engineers a trusted solution for realizing their visions while ensuring safety, longevity, and sustainability.
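To make the weldability criterion from Section 4 concrete: the carbon equivalent it mentions is commonly estimated with the well-known IIW formula, CE = C + Mn/6 + (Cr + Mo + V)/5 + (Ni + Cu)/15, with all contents in weight percent. The sketch below uses that formula; the sample composition is chosen arbitrarily for illustration:

```python
def carbon_equivalent_iiw(c, mn=0.0, cr=0.0, mo=0.0, v=0.0, ni=0.0, cu=0.0):
    """IIW carbon equivalent from elemental contents in weight percent.

    Lower CE generally indicates better weldability; a common rule of
    thumb treats CE below roughly 0.45 as weldable without special
    precautions such as preheat.
    """
    return c + mn / 6 + (cr + mo + v) / 5 + (ni + cu) / 15

# An arbitrary low-alloy composition, for illustration only:
ce = carbon_equivalent_iiw(c=0.18, mn=1.2, cr=0.2, ni=0.3)
print(round(ce, 2))  # 0.44
```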
INTERNATIONAL JOURNAL OF COMMUNICATION SYSTEMS
Int. J. Commun. Syst. 2013; 26:1054–1073
Published online 27 January 2012 in Wiley Online Library (). DOI: 10.1002/dac.1399

A unified enhanced particle swarm optimization-based virtual network embedding algorithm

Zhongbao Zhang1, Xiang Cheng1, Sen Su1,*,†, Yiwen Wang1, Kai Shuang1 and Yan Luo2

1 State Key Laboratory of Networking and Switching Technology, Beijing University of Posts and Telecommunications, 10 Xi Tu Cheng Road, Beijing, China
2 Electrical and Computer Engineering, University of Massachusetts Lowell, One University Ave, Lowell, MA 01854, USA

SUMMARY

Virtual network (VN) embedding is a major challenge in network virtualization. In this paper, we aim to increase the acceptance ratio of VNs and the revenue of infrastructure providers by optimizing VN embedding costs. We first establish two models for VN embedding: an integer linear programming model for a substrate network that does not support path splitting and a mixed integer programming model when path splitting is supported. Then we propose a unified enhanced particle swarm optimization-based VN embedding algorithm, called VNE-UEPSO, to solve these two models irrespective of the support for path splitting.
In VNE-UEPSO, the parameters and operations of the particles are well redefined according to the VN embedding context. To reduce the time complexity of the link mapping stage, we use the shortest path algorithm for link mapping when path splitting is unsupported and propose a greedy k-shortest paths algorithm for the other case. Furthermore, a large to large and small to small preferred node mapping strategy is proposed to achieve better convergence and load balance of the substrate network. The simulation results show that our algorithm significantly outperforms previous approaches in terms of the VN acceptance ratio and long-term average revenue. Copyright © 2012 John Wiley & Sons, Ltd.

Received 24 June 2011; Revised 12 October 2011; Accepted 27 November 2011

KEY WORDS: network virtualization; virtual network embedding; integer linear programming; mixed integer programming; metaheuristic; particle swarm optimization

1. INTRODUCTION

The Internet has only been improved incrementally since its inception. In the past, fundamental changes in network architectures have faced strong resistance from realistic experimentation and deployment [1–4]. In recent years, network virtualization has emerged to serve as the foundation of the future Internet, allowing multiple heterogeneous virtual networks (VNs) to coexist on a shared network substrate and providing adequate flexibility for network innovations.

In the network virtualization environment, infrastructure providers (InPs) and service providers (SPs) play two decoupled roles: InPs manage the physical infrastructure, whereas SPs create VNs and offer end-to-end services [1, 3, 5]. Embedding VN requests of the SPs, with both node and link constraints, into the substrate network (also known as VN embedding) is non-deterministic polynomial-time hard (NP-hard). Even if all the virtual nodes are mapped, it is still NP-hard to embed the virtual links into substrate paths without violating the bandwidth constraints [6]. Thus,

* Correspondence to: Sen Su, State Key Laboratory of Networking and
Switching Technology, Beijing University of Posts and Telecommunications, 10 Xi Tu Cheng Road, Beijing, China.
† E-mail: susen@

to reduce the hardness of the VN embedding problem and enable efficient heuristics, early studies restrict the VN embedding problem space as follows:

- Assume that the VN requests are known in advance (i.e., an offline version) [7–9].
- Ignore one or more types of resource constraints of the VN request (e.g., CPU, bandwidth, or location) [6–11].
- Perform no admission control when the resource of the substrate network is insufficient [7, 8, 10].
- Focus only on the backbone-star topology [8].

Considering all these aforementioned issues, when path splitting is supported by the substrate network, the authors in [12] formulate a mixed integer programming (MIP) model for the VN embedding problem and propose several MIP-based online VN embedding algorithms to coordinate the node and link mapping stages. However, when path splitting is not supported by the substrate network, their MIP formulation is no longer appropriate; consequently, the corresponding algorithm they proposed suffers from poor performance. Besides, the linear programming relaxation and rounding techniques adopted by their algorithms can result in time-consuming and infeasible VN embedding solutions. Even if a feasible solution can be obtained, it may still be far from optimal [13].

To address this issue, we first note that different VN embedding solutions could result in different total resource costs to the substrate network, and reducing this cost may help increase the possibility of accepting more future VN requests; thus, we present an integer linear programming (ILP) formulation and a MIP formulation for the VN embedding problem to minimize this cost when path splitting is unsupported and supported by the substrate network, respectively.

Solving ILP and MIP is a well-known NP-hard problem [13]. Traditional exact algorithms such as branch and bound and cutting plane are guaranteed to find an
optimal solution. These algorithms, however, incur exponential running time, so only instances of moderate size can be solved in practice [14]. We therefore turn our attention to finding a feasible solution that is near optimal. The technique of metaheuristics has been shown useful in practice, including genetic algorithms [15], simulated annealing [16], evolutionary programming [17], and particle swarm optimization (PSO) [18]. These are iterative search techniques inspired by biological and physical phenomena, which have been successfully applied to a wide range of optimization problems. In particular, PSO is a population-based stochastic global optimizer that can generate a better optimal solution in less computing time with stable convergence [19] than other population-based methods. It is also easy to implement, with a small number of adjustable parameters. We are motivated to leverage the benefits of PSO to conceive an efficient online VN embedding algorithm. More specifically, we consider the position of each particle in PSO as a possible VN embedding solution; each particle can then adjust its position toward a better one according to the individual and global optimal information, and finally an approximate optimal solution of VN embedding can be obtained through the evolution process of the particles. However, before we employ PSO to solve our problem, there are still three imperative challenges that must be conquered first: (i) Standard PSO only deals with continuous optimization problems, so it is not directly applicable to the optimal VN embedding problem, which is a discrete optimization problem. (ii) Because the formulations of the VN embedding problem are different, our PSO-based VN embedding algorithm needs to be well designed to deal with the optimal VN embedding problem irrespective of whether path splitting is supported by the substrate network or not. Besides, previous work [6] uses the k-shortest path (KSP) algorithm for virtual link mapping when path splitting is unsupported by
the substrate; otherwise, it uses a multicommodity flow (MCF) algorithm instead. However, if we adopted the same virtual link mapping algorithms, our algorithm could become very time-consuming because of the iteration processes of PSO. (iii) The randomness of PSO may result in slow convergence for solving our problem. In addition, it may also leave the substrate network resources fragmented and hinder the substrate network from accepting larger VN requests.

Toward these ends, we present a unified enhanced PSO-based VN embedding algorithm, referred to as VNE-UEPSO, to solve the optimal VN embedding problem. In VNE-UEPSO, we conquer the aforementioned challenges as follows:

- We redefine the parameters and operations of the particles (such as position, velocity, and updating operations) according to our problem. Moreover, because the virtual link mapping algorithm is coupled with the features the substrate network supports, we encode the position vector only as the virtual node mapping solution and leave the virtual link mapping solution to be determined in a position feasibility check procedure. This procedure can therefore adopt the proper virtual link mapping algorithm according to the features the substrate network supports.
- To reduce the time complexity of the virtual link mapping stage while maintaining efficiency, we apply the shortest path algorithm for link mapping when path splitting is unsupported and propose a novel link mapping algorithm called greedy k-shortest paths (GKSP) for the other case.
- Furthermore, we propose a large to large and small to small (L2S2) preferred local selection strategy for the position initialization and update of the particles, to achieve better convergence and load balance of the substrate network.

The simulation results show that our proposed algorithm significantly outperforms existing approaches in terms of long-term average revenue and VN request acceptance ratio while decreasing the substrate resource cost, irrespective
of whether path splitting is supported by the substrate network or not.

The rest of this paper is organized as follows. In Section 2, we present the network model, the definition of the VN embedding problem, and its common objectives. The ILP and MIP models of the VN embedding problem are presented in Section 3. In Section 4, we describe the details of the VNE-UEPSO algorithm. Our VN embedding algorithm is evaluated in Section 5. An overview of related work is discussed in Section 6. Section 7 concludes this paper.

2. NETWORK MODEL AND PROBLEM DESCRIPTION

In this section, we will first model the substrate network of InPs and the VN of SPs and then give the VN embedding problem description, followed by the definition of objectives.

2.1. Network model

We denote the topology of the substrate network by a weighted undirected graph G_s = (N_s, L_s, A_s^n, A_s^l), where N_s is the set of substrate nodes and L_s is the set of substrate links. The notations A_s^n and A_s^l denote the attributes of the substrate nodes and links, respectively. The attributes of a node could be processing capacity, storage, and location. The attributes of a link could be bandwidth and delay. In this paper, we consider the available CPU capacity and a location constraint (e.g., particular geographic regions) as the node attributes and the available bandwidth as the link attribute. All loop-free paths of the substrate network are denoted by P_s.

Similarly, the topology of a VN can also be denoted by a weighted undirected graph G_v = (N_v, L_v, R_v^n, R_v^l), where N_v is the set of virtual nodes, L_v is the set of virtual links, and R_v^n and R_v^l denote the CPU requirements and location constraints on virtual nodes and the bandwidth requirements on virtual links, respectively. Each VN request can be denoted by the quad VNR(i) = (G_v, t_a, t_d, W), where the variables t_a and t_d denote the arrival time of the VN request and the duration of the VN's stay in the substrate network, respectively. When the i-th VN request arrives, the
substrate network should allocate resources to the VN that satisfy the constraints of the virtual nodes and links. If there are not enough substrate resources, the VN request should be rejected or postponed. The allocated substrate resources are released when the VN departs. Similar to the work in [12], here W is a non-negative value expressing how far a virtual node n_v ∈ N_v can be placed from the location specified by Loc(n_v).

Figure 1(b) presents a substrate network, where the numbers in rectangles are the available CPU resources at the nodes and the numbers over the links represent available bandwidths. Figures 1(a) and 1(c) present two VN requests with node and link constraints.

2.2. Virtual network embedding problem description

The VN embedding problem is defined by a mapping M: G_v(N_v, L_v) → G_s(N_s', P_s') from G_v to a subset of G_s, where N_s' ⊆ N_s and P_s' ⊆ P_s. The mapping can be decomposed into two steps:

- Node mapping places the virtual nodes on different substrate nodes that satisfy the node resource constraints. As shown in Figures 1(a) and 1(b), the node mapping solution of VN request 1 is {a → B, b → C, c → F, d → E}.
- Link mapping assigns the virtual links to loop-free paths on the substrate that satisfy the link resource requirements. The link mapping solution in Figures 1(a) and 1(b) is {(a, b) → (B, C), (a, c) → (B, F), (b, d) → (C, E), (c, d) → (F, E)}.

After the node and link mapping stages of VN request 1, the residual capacities of the substrate nodes and links are shown in Figure 1(d). Figures 1(c) and 1(d) show another VN embedding solution, for VN request 2. Note that virtual nodes of different VN requests can be mapped onto the same substrate node, but virtual nodes in the same VN request cannot share the same substrate node.

2.3. Objectives

Long-term average revenue. From the InPs' point of view, an efficient and effective online VN embedding algorithm should maximize the revenue of InPs and accept more VN requests in the long run. Similar to the previous work
in [6, 7, 12], we first give the revenue definition of accepting a VN request at time t by the following equation:

R(G_v, t) = Σ_{n_v ∈ N_v} CPU(n_v) + Σ_{l_v ∈ L_v} BW(l_v),    (1)

where CPU(n_v) and BW(l_v) are the CPU and bandwidth requirements of the virtual node n_v and the virtual link l_v, respectively. Then, as in the previous work [6], the long-term average revenue is given by:

lim_{T→∞} [ Σ_{t=0}^{T} R(G_v, t) ] / T.    (2)

VN request acceptance ratio. It can be defined by the following equation:

lim_{T→∞} [ Σ_{t=0}^{T} VNR_s ] / [ Σ_{t=0}^{T} VNR ],    (3)

where VNR_s denotes the VN requests successfully accepted by the substrate network.

Long-term revenue to cost (R/C) ratio. We first define the cost of accepting a VN request at time t as the sum of the total substrate resources allocated to that VN:

C(G_v, t) = Σ_{n_v ∈ N_v} CPU(n_v) + Σ_{l_v ∈ L_v} Σ_{l_s ∈ L_s} BW(f^{l_v}_{l_s}, l_v),    (4)

where f^{l_v}_{l_s} ∈ {0, 1}, with f^{l_v}_{l_s} = 1 if substrate link l_s allocated bandwidth resource to virtual link l_v and f^{l_v}_{l_s} = 0 otherwise, and BW(f^{l_v}_{l_s}, l_v) is the amount of bandwidth that l_s allocated to l_v. We use a modified version of Equation (4) as the objective function of our ILP and MIP models, which will be presented in the next section. We then introduce the long-term R/C ratio to quantify the efficiency of substrate resource use, defined as follows:

lim_{T→∞} [ Σ_{t=0}^{T} R(G_v, t) ] / [ Σ_{t=0}^{T} C(G_v, t) ].    (5)

In this paper, we consider the long-term average revenue as the main objective of the online VN embedding algorithm, in addition to the VN request acceptance ratio and the long-term R/C ratio. If the long-term average revenues of two VN embedding solutions are nearly the same, a higher VN request acceptance ratio and long-term R/C ratio are preferred.

3. ILP AND MIP FORMULATIONS FOR OPTIMAL VN EMBEDDING

In this section, we first give the motivation behind our ILP and MIP formulations for the VN embedding problem and then provide the details of these formulations.

3.1. Motivation

For one VN request, different VN embedding solutions may have different
substrate resource costs. Let us reconsider the VN embedding examples presented in Section 2 (Figure 1). Assuming that substrate nodes B and F can satisfy all the requirements of virtual nodes b and c in VN request 2, we can construct another VN embedding solution for VN request 2 in which the node mapping is {a → A, b → B, c → F} and the link mapping is {(a, b) → (A, B), (a, c) → (A, F)}. Obviously, this VN embedding solution consumes fewer substrate network resources than the solution shown in Figure 1(d) and increases the possibility of accepting more future VN requests. This observation motivates us to establish an optimal model for the VN embedding problem that minimizes this cost.

3.2. Notation

We first summarize in Table I the notations that will be used throughout this paper.

Table I. Notations.

i, j          Substrate nodes
u, v          Virtual nodes
x^u_i         A binary variable such that x^u_i = 1 if virtual node u is mapped to substrate node i and 0 otherwise
f^{uv}_{ij}   A binary variable that is 1 if virtual link l_{uv} is routed on physical link l_{ij} and 0 otherwise
CPU(u)        The CPU value of node u
BW(l_{uv})    The BW value of link l_{uv}

3.3. Resource cost modeling

For a VN request, because the CPU cost of different VN embedding solutions is a constant value, we only consider the bandwidth resource cost in Formula (6):

Σ_{(u,v) ∈ L_v} Σ_{(i,j) ∈ L_s} f^{uv}_{ij} BW(l_{uv}).    (6)

3.4. Capacity constraints modeling

There are two kinds of capacity constraints: node constraints and link constraints. For the node constraints, the CPU capacity of the substrate node i must satisfy the CPU request of the virtual node u, and its location must be within the range specified by W, which indicates how far the virtual node can be placed from the location specified by Loc(u). The distance function Dis denotes the Euclidean distance between two nodes. For example, suppose node n_1 is located at (x_1, y_1) and node n_2 at (x_2, y_2); then the value of Dis(Loc(n_1), Loc(n_2)) is
equal to √((x_1 − x_2)² + (y_1 − y_2)²). For the virtual link constraints, each substrate link (i, j) used for the virtual link (u, v) must meet its bandwidth requirement. Constraints (7) and (8) specify the node constraints and link constraints, respectively:

∀u ∈ N_v, ∀i ∈ N_s:  x^u_i CPU(u) ≤ CPU(i),  x^u_i Dis(Loc(i), Loc(u)) ≤ W    (7)

∀(i, j) ∈ L_s, ∀(u, v) ∈ L_v:  f^{uv}_{ij} BW(l_{uv}) ≤ BW(l_{ij}).    (8)

3.5. Connectivity constraints modeling

Constraint (9) is the flow conservation constraint for routing one unit of traffic from the substrate node corresponding to u to the substrate node corresponding to v. It requires that equal amounts of flow due to virtual link (u, v) enter and leave each substrate node that does not correspond to the source u or the destination v. Furthermore, the node u has an exogenous input of 1 unit of traffic that has to find its way to the substrate node corresponding to v:

∀i ∈ N_s, ∀(u, v) ∈ L_v:
Σ_{(i,j) ∈ L_s} f^{uv}_{ij} − Σ_{(j,i) ∈ L_s} f^{uv}_{ji} =  1 if x^u_i = 1;  −1 if x^v_i = 1;  0 otherwise.    (9)

3.6. Variable constraints modeling

Constraint (10) ensures that each virtual node is mapped to exactly one substrate node (and that each substrate node hosts at most one virtual node of the request), and constraint (11) gives the domain constraints for the variables f^{uv}_{ij} and x^u_i. If path splitting is not supported, f^{uv}_{ij} is a binary variable in {0, 1} (the model is an ILP); otherwise it is a continuous variable in [0, 1] (the model is a MIP):

∀i ∈ N_s: Σ_{u ∈ N_v} x^u_i ≤ 1;  ∀u ∈ N_v: Σ_{i ∈ N_s} x^u_i = 1    (10)

∀i ∈ N_s, ∀u ∈ N_v: x^u_i ∈ {0, 1};  ∀(i, j) ∈ L_s, ∀(u, v) ∈ L_v: f^{uv}_{ij} ∈ [0, 1] if path splitting, {0, 1} otherwise.    (11)

3.7. Problem formulation

The goal of the VN embedding problem in this paper is to minimize the resource cost of embedding each VN request; thus, we have the following optimization problem:

Min Σ_{(u,v) ∈ L_v} Σ_{(i,j) ∈ L_s} f^{uv}_{ij} BW(l_{uv})    (12)

subject to Equations (7)–(11), where the model is an ILP if path splitting is not supported and a MIP otherwise. In the next section, we propose our VNE-UEPSO algorithm to solve this optimal VN embedding problem.

4. PROPOSED VNE-UEPSO ALGORITHM

In this section, we first give a
brief introduction to PSO in Section 4.1. When employing PSO to solve our optimal VN embedding problem, there are still some challenges, as pointed out in Section 1. Sections 4.2–4.4 describe these challenges in detail and how they are addressed. In Section 4.5, we present the algorithmic details of VNE-UEPSO.

4.1. Basic concepts of PSO

Particle swarm optimization is an emerging population-based optimization method, first introduced by Eberhart and Kennedy in 1995, that is inspired by the flocking behavior of many species, such as birds or schools of fish, in their food hunting. It is a kind of random search algorithm that simulates the natural evolutionary process and performs well in solving some difficult optimization problems. In PSO, a swarm of particles represent potential solutions, flying through the problem space by following the current optimum particles. Each particle i is associated with two vectors, namely the position vector X_i = [x^1_i, x^2_i, ..., x^D_i] and the velocity vector V_i = [v^1_i, v^2_i, ..., v^D_i], where D denotes the dimension of the solution space. The position and velocity of each particle can be initialized randomly within the corresponding ranges. During the evolutionary process, the velocity and position of particle i on dimension d are updated as follows:

v^d_i = w v^d_i + c_1 r^d_1 (pBest^d_i − x^d_i) + c_2 r^d_2 (gBest^d − x^d_i),    (13)

x^d_i = x^d_i + v^d_i,    (14)

where w is the inertia weight, c_1 is the cognition weight, c_2 is the social weight, and r^d_1 and r^d_2 are two random values uniformly distributed in the range [0, 1] for the d-th dimension. pBest_i is the position with the best fitness found so far by the i-th particle, and gBest is the best position in the swarm.

4.2. Discrete PSO for VN embedding

Because the basic PSO can only handle continuous optimization problems, the parameters and operations of the particles in PSO must be redefined to make it suitable for the optimal VN embedding problem, considering its discrete
characteristic. Although there are some variants of PSO for discrete optimization problems, such as [20] and [21], they are problem specific and also cannot be used directly to solve the optimal VN embedding problem. Therefore, we propose a discrete version of PSO for our problem. Label the virtual nodes and substrate nodes, respectively, and redefine the position and velocity parameters for discrete PSO as follows:

Position (X): Suppose the position vector X_i = [x^1_i, x^2_i, ..., x^D_i] of a particle denotes a possible VN embedding solution, where x^d_i is the order number of the substrate node selected from the candidate node list of the d-th virtual node. Here, D denotes the total number of virtual nodes in the VN request. Note that the position vector only represents the node mapping solution; whether the link mapping can be satisfied is unknown. In other words, the feasibility of the position of the particle still needs to be checked. Therefore, we introduce a feasibility check procedure for the position, presented in the next subsection.

Velocity (V): The velocity vector V_i = [v^1_i, v^2_i, ..., v^D_i] of the particle is used to guide the current VN embedding solution toward an even better solution, where v^d_i is a binary value: if v^d_i = 0, the corresponding virtual node mapping decision in the current VN embedding solution should be adjusted by reselecting another substrate node from its candidate node list; otherwise, the current choice is kept.

The subtraction, addition, and multiplication operations of the particles are redefined as follows:

Subtraction (⊖): X_i ⊖ X_j indicates the differences between the two VN embedding solutions X_i and X_j. The result value in a dimension is 1 if X_i and X_j have the same value at that dimension; otherwise, it is 0. For example, (1, 2, 3, 4, 5) ⊖ (1, 5, 3, 4, 6) = (1, 0, 1, 1, 0).

Addition (⊕): P_i V_i ⊕ P_j V_j indicates the result of the formula that keeps V_i with probability P_i and
keeps V_j with probability P_j in each dimension, where P_i + P_j = 1. For example, 0.1 (1, 0, 0, 1, 1) ⊕ 0.9 (1, 0, 1, 0, 1) = (1, 0, *, *, 1), where * denotes a value that is uncertain to be 0 or 1. In this example, the first * is equal to 0 with probability 0.1 and to 1 with probability 0.9.

Multiplication (⊗): x^d_i ⊗ v^d_i indicates the position update process of the particles. The result of this operation is a new position that corresponds to a new VN embedding solution. The operating rule is as follows: if the value of V_i in dimension d equals 1, then the value of X_i in the corresponding dimension is kept; otherwise, the value of X_i in the corresponding dimension should be adjusted by reselecting another substrate node from its candidate list. Taking (1, 2, 4, 3, 8) ⊗ (1, 0, 1, 0, 1) as an example, the second and fourth virtual node embedding solutions should be adjusted.

On the basis of the aforementioned redefinitions, the velocity and position of particle i on dimension d are determined according to the following velocity and position update equations:

v^d_i = P_1 v^d_i ⊕ P_2 (pBest^d_i ⊖ x^d_i) ⊕ P_3 (gBest^d ⊖ x^d_i),    (15)

x^d_i = x^d_i ⊗ v^d_i,    (16)

where P_1 is the inertia weight, and P_2 and P_3 can be seen as the cognition and social weights, respectively. Typically, P_1, P_2, and P_3 are set to constant values and satisfy the inequality P_1 ≤ P_2 ≤ P_3 (P_1 + P_2 + P_3 = 1).

4.3. Feasibility check procedure for the position of a particle

Because the position vector only represents the node mapping solution, it is just a possible VN embedding solution; whether the capacity and connectivity constraints presented in Equations (8) and (9) between the mapped virtual nodes can be satisfied still needs to be checked. Here, we introduce a procedure to check the feasibility of the current particle's position, which corresponds to the link mapping stage of the VN embedding process. If the position is feasible, we obtain its link mapping solution and calculate its fitness value by Formula (6); otherwise, its fitness value
is set to infinity.

Checking the feasibility of the position of a particle is equivalent to finding a link-mapping solution for the current VN request. Previous work [6] uses KSP for link mapping when path splitting is unsupported by the substrate; otherwise, the MCF algorithm is used. However, if we adopted the same link-mapping algorithms, our algorithm could become more time consuming as a result of the iteration processes of PSO. Thus, we devise alternative link-mapping algorithms for the following two conditions: (i) Without the path-splitting feature in the substrate network, instead of searching the k-shortest paths for increasing values of k, we find a single path that has enough bandwidth to map the corresponding virtual link. We first remove the links whose bandwidth cannot satisfy the virtual link's bandwidth constraint, and then use the shortest path to find a link solution, as shown in Algorithm 1. (ii) When path splitting is supported by the substrate network, we propose GKSP in the link-mapping stage: we find a corresponding shortest path and make use of it (the substrate network's bandwidth resource is updated accordingly), irrespective of whether it alone can satisfy the virtual link. This procedure repeats until enough bandwidth has been found, as shown in Algorithm 2.
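Condition (i), pruning under-provisioned links and then taking a shortest path, can be sketched roughly as follows (hop-count Dijkstra; the adjacency format and names are assumptions, not the paper's Algorithm 1):

```python
import heapq

def map_virtual_link(adj, src, dst, bw_demand):
    """Find a substrate path for one virtual link: skip substrate links whose
    free bandwidth is below the demand, then run Dijkstra on hop count.
    adj: {node: [(neighbor, free_bandwidth), ...]}"""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:                      # reconstruct path back to src
            path = [dst]
            while path[-1] != src:
                path.append(prev[path[-1]])
            return path[::-1]
        if d > dist.get(u, float("inf")):
            continue
        for v, bw in adj[u]:
            if bw < bw_demand:            # prune links with insufficient bandwidth
                continue
            nd = d + 1
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return None  # infeasible: the particle's fitness is set to infinity
```

Returning `None` for an unreachable destination mirrors setting the fitness value to infinity in the feasibility check.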
Our GKSP link-mapping algorithm can be solved in O(M + N log N + k) time [22] in a substrate network with N nodes and M links, while the time complexity of the MCF algorithm is approximately O(M² + kN) [23].

4.4. L2S2 preferred local selection strategy

For the basic PSO, it is common to generate or update the position parameter of the particles randomly within the corresponding ranges with equal probability during the evolutionary process. However, taking the context of the VN embedding problem into account, if we overuse the bottleneck resources of the substrate network, the substrate network's resources may become unbalanced and fragmented, hindering the substrate network from accepting larger VN requests. Besides, because the possible VN embedding solutions are encoded by the position parameter without considering the link-mapping stage, the connectivity constraints may be violated in the link-mapping stage, thereby slowing the convergence of our algorithm. Therefore, we develop an L2S2 preferred local selection strategy for the position initialization and update processes of the particles, in order both to achieve quick convergence and to balance the substrate network loads.

Embedding a virtual node onto a substrate node with more bandwidth resource increases the possibility of satisfying the virtual node's connectivity constraints in the link-mapping stage, because the more bandwidth a network node has, the higher its degree is likely to be. When the node-mapping stage is taken into consideration, a node with more CPU resource is also preferred.
Therefore, similar to previous work [6], a network-node resource measure that reflects both the CPU resource and the bandwidth resource of a node at the same time is introduced, given by the following:

NR(u) = CPU(u) · Σ_{l ∈ L(u)} BW(l),  (17)

where, on a substrate network, L(u) is the set of all adjacent links of u, CPU(u) is the remaining CPU resource of u, and BW(l) is the unoccupied bandwidth resource of link l. For a virtual node u, CPU(u) and BW(l) are the capacity constraints of this node.

The main principle of the L2S2 preferred local selection strategy is that a virtual node with larger resource requirements has a higher probability of being mapped to a substrate node with larger available resources. The benefits of such a strategy are twofold: it helps to satisfy the resource requirement of the current VN request and consequently accelerates the convergence of our algorithm, and it balances the substrate network loads in the long run. For a VN request containing n virtual nodes, the L2S2 preferred local selection strategy for position initialization (position update) is presented in Algorithm 3.

4.5. VNE-UEPSO algorithm description

The VNE-UEPSO algorithm, shown in Algorithm 4, takes the substrate network and a VN embedding request as input, uses Formula (6) as the fitness function f(X), and outputs an approximately optimal VN embedding solution. The flowchart of the VNE-UEPSO algorithm is also presented in Figure 2.
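Equation (17) and the NR-proportional selection behind L2S2 might be sketched as follows; the roulette-wheel choice is an assumption about how "higher probability" is realized, not the paper's Algorithm 3:

```python
import random

def node_resource(cpu, adjacent_bw):
    # NR(u) = CPU(u) * sum of unoccupied bandwidth on adjacent links, Eq. (17).
    return cpu * sum(adjacent_bw)

def l2s2_pick(candidates, rng=random):
    """Pick a substrate node with probability proportional to NR(u), so that
    virtual nodes with large demands tend to land on resource-rich nodes.
    candidates: list of (node, NR_value) pairs."""
    total = sum(nr for _, nr in candidates)
    r = rng.uniform(0, total)
    acc = 0.0
    for node, nr in candidates:
        acc += nr
        if r <= acc:
            return node
    return candidates[-1][0]  # guard against floating-point rounding
```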
How Steel Is Made: An Introduction in English

The Making of Steel: A Metaphorical Journey.

The creation of steel, a material that has come to symbolize strength and resilience, is a remarkable process that involves transformation and refinement. Just as steel transforms from a fragile state of iron ore into a durable and useful alloy, so too does the human spirit undergo a similar transformation through challenges and perseverance.

The initial stages of steel production begin with the mining of iron ore, a raw material found in various parts of the world. This ore, in its natural state, is weak and brittle, unable to withstand the rigors of daily use. However, with the application of heat and pressure, along with the addition of carbon and other elements, this iron ore undergoes a radical transformation.

The first step in the steelmaking process is smelting, where the iron ore is heated in a blast furnace along with a source of carbon, usually coke. This process reduces the iron oxide to its elemental state, releasing carbon dioxide as a byproduct. The resulting molten iron, known as pig iron, is then poured into molds to form ingots.

But pig iron is not yet steel. It contains too much carbon and other impurities to be truly strong and ductile. To convert pig iron into steel, it must undergo a purification process known as refining. This usually involves reducing the carbon content to a specific level through a series of treatments, such as oxidation with oxygen or the addition of alloying elements.

One of the most critical steps in steelmaking is rolling. The purified steel is heated to a high temperature and then passed through a series of rollers, which shape it into bars, sheets, or plates. This rolling process not only gives steel its desired shape but also enhances its mechanical properties by aligning its grain structure.

The final stage of steel production involves heat treatment, which further modifies the properties of the metal.
Processes like annealing, quenching, and tempering alter the microstructure of the steel, improving its hardness, toughness, and resistance to corrosion.

The journey of steel from a fragile ore to a robust alloy is analogous to the growth and development of a human being. Just as steel requires heat, pressure, and purification to reach its full potential, so too does a person need challenges, difficulties, and self-improvement to become stronger and more resilient.

Life's challenges, whether they be academic, professional, or personal, serve as the heat and pressure that shape us. Through these experiences, we learn to persevere, to adapt, and to grow. Just as impurities are removed from steel during the refining process, so too do we purge ourselves of negative habits and attitudes, replacing them with positive ones that make us stronger.

The rolling process of steel, which shapes it into useful forms, can be likened to the challenges we face in life that mold us into the people we become. Just as steel is shaped and strengthened by the rollers, so too are our characters forged by the obstacles we overcome.

Finally, the heat treatment of steel, which enhances its properties, parallels the continuous self-improvement and learning that we engage in throughout our lives. Just as steel requires annealing, quenching, and tempering to achieve optimal performance, so too do we need to reflect, reevaluate, and rededicate ourselves to personal growth and development.

In conclusion, the making of steel is not just a physical transformation but also a metaphor for the transformation of the human spirit. Just as steel is forged from weak iron ore into a strong and useful material, so too can we, through challenges, perseverance, and continuous self-improvement, transform ourselves into stronger and more resilient individuals.
Abstract

In many specialized industrial fields, aluminum and its alloys are required to meet particular performance demands, but the low surface hardness, poor corrosion and wear resistance, and poor thermal-shock resistance of aluminum and its alloys restrict their application. Surface treatment can improve the overall performance of aluminum alloys. Micro-arc oxidation is an emerging surface-treatment technology developed on the basis of anodic oxidation. Micro-arc oxidation coatings offer high hardness, good insulation, good corrosion and wear resistance, high thermal-shock resistance, and strong adhesion between the oxide film and the substrate, greatly extending the range of applications of aluminum alloys.

In this work, the surface of an aluminum alloy was strengthened by micro-arc oxidation. An orthogonal experimental design was used to optimize the test scheme: an orthogonal table was constructed for 4 factors (Na2SiO3 concentration, KOH concentration, H3BO3 concentration, and micro-arc oxidation voltage) at 3 levels, and the micro-arc oxidation experiments were arranged accordingly so as to optimize the process conditions. The range method was used to rank the primary and secondary influence of each factor on the hardness and thickness of the ceramic coating and to estimate the likely optimal levels.

The results show that the hardness and thickness of the micro-arc oxidation ceramic coating on the aluminum alloy are significantly affected by the factor levels, with the Na2SiO3 concentration having the greatest influence on both indices. Under the optimal process conditions (Na2SiO3 concentration 6 g/L, H3BO3 concentration 1.5 g/L, KOH concentration 0.5 g/L, micro-arc oxidation voltage 360 V), the total thickness of the dense layer of the ceramic coating is about 200 μm.

Keywords: aluminum alloy; micro-arc oxidation; orthogonal experimental design; surface treatment; ceramic oxide film
Abstract

In many industries, aluminum and its alloys must be used to meet special performance requirements, but their low surface hardness, poor corrosion and wear resistance, and poor thermal-shock resistance restrict the application of aluminum alloys. Surface treatment can improve the comprehensive performance of aluminum alloys. Micro-arc oxidation is a new surface-treatment technology developed on the basis of the anodic oxidation process. A micro-arc oxidation film has high hardness, good insulation, corrosion resistance and wear resistance, high thermal-shock resistance, and strong bonding between the oxide film and the substrate; these advantages greatly extend the applications of aluminum alloys. Micro-arc oxidation was applied to strengthen the surface of an aluminum alloy. Optimal experiments were designed by the orthogonal experimental method, and an orthogonal table was obtained for four factors (concentration of Na2SiO3, concentration of KOH, concentration of H3BO3, and micro-arc oxidation voltage) at three levels. The micro-arc oxidation experiments were carried out accordingly in order to optimize the process conditions, and the range method was used to estimate the probable optimal levels and to evaluate the primary and secondary order of the effects of the factors on the hardness and thickness of the ceramic coating. The results show that the hardness and thickness of the micro-arc oxidation ceramic coating on the aluminum alloy are significantly affected by each factor, especially by the concentration of Na2SiO3.
Under the optimal process conditions (6 g/L Na2SiO3, 1.5 g/L H3BO3, 0.5 g/L KOH, 340 V micro-arc oxidation voltage), the ceramic coating can reach approximately 1700 HV in hardness and 200 μm in thickness.

Keywords: aluminum alloy; micro-arc oxidation; orthogonal experimental design; surface treatment; ceramic oxide film

Contents
Abstract
Chapter 1. Introduction
1.1 Micro-arc oxidation surface treatment
1.1.1 Mechanism of the micro-arc oxidation process
1.1.2 Film formation during micro-arc oxidation
1.1.3 Influence of micro-arc oxidation process parameters
1.1.4 Characteristics of micro-arc oxidation technology
1.1.5 Application prospects of micro-arc oxidation technology
1.2 Purpose and significance of this research
1.2.1 Purpose of this research
1.2.2 Significance of the research
1.3 Research approach
1.3.1 Basic principles of orthogonal experimental design
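The range (extreme-difference) analysis used in the abstract to rank factor influence can be sketched as follows; the L9(3^4) array is the standard one, while any response data fed to it here would be hypothetical, not the thesis measurements:

```python
# Standard L9(3^4) orthogonal array: 9 runs, 4 factors, levels coded 0..2.
L9 = [
    [0, 0, 0, 0], [0, 1, 1, 1], [0, 2, 2, 2],
    [1, 0, 1, 2], [1, 1, 2, 0], [1, 2, 0, 1],
    [2, 0, 2, 1], [2, 1, 0, 2], [2, 2, 1, 0],
]

def range_analysis(array, results):
    """Range method: for each factor, average the response at each level;
    a larger max-min range means a stronger influence, and the level with
    the highest mean is the candidate optimal level."""
    n_factors = len(array[0])
    ranges, best_levels = [], []
    for f in range(n_factors):
        means = []
        for level in range(3):
            vals = [r for row, r in zip(array, results) if row[f] == level]
            means.append(sum(vals) / len(vals))
        ranges.append(max(means) - min(means))
        best_levels.append(means.index(max(means)))
    return ranges, best_levels
```

Sorting the factors by their range value reproduces the "primary and secondary order" ranking described in the abstract.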
Near-Optimal Conversion of Hardness into Pseudo-Randomness

Russell Impagliazzo
Computer Science and Engineering
UC San Diego
9500 Gilman Drive
La Jolla, CA 92093-0114
russell@cs.ucsd.edu

Ronen Shaltiel
Department of Computer Science
Hebrew University
Jerusalem, Israel
ronens@cs.huji.ac.il

Avi Wigderson
Department of Computer Science
Hebrew University
Jerusalem, Israel
avi@cs.huji.ac.il

Abstract

Various efforts ([3,5,6,9]) have been made in recent years to derandomize probabilistic algorithms using the complexity-theoretic assumption that there exists a problem in E = DTIME(2^{O(n)}) that requires circuits of size s(ℓ) (for some function s). These results are based on the NW-generator [7]. For the strong lower bound s(ℓ) = 2^{Ω(ℓ)}, [6], and later [9], get the optimal derandomization, BPP = P. However, for weaker lower-bound functions s(ℓ), these constructions fall far short of the natural conjecture for optimal derandomization, namely that BPTIME(t) ⊆ DTIME(2^{O(s^{-1}(t))}). The gap in these constructions is due to an inherent limitation on efficiency in NW-style pseudo-random generators.

In this paper we are able to get derandomization in almost optimal time using any lower bound s(ℓ). We do this by using the NW-generator in a new, more sophisticated way. We view any failure of the generator as a reduction from the given "hard" function to its restrictions on smaller input sizes. Thus, either the original construction works (almost) optimally, or one of the restricted functions is (almost) as hard as the original. Any such restriction can then be plugged into the NW-generator recursively. This process generates many "candidate" generators, and at least one is guaranteed to be "good". Then, to perform the approximation needed for derandomizing two-sided error algorithms, a "tournament" between the candidates yields a correct approximator.

1. Introduction

Hardness vs. randomness tradeoffs use circuit lower bounds to replace the power of randomized computation. Examples of hardness vs. randomness tradeoffs based on worst-case circuit complexity assumptions may be found in [3,5,6,9] (see also Table 1). Our contribution is a construction that gives a better tradeoff between the quality of the simulation and the strength of the assumed lower bound. The improvement is especially noticeable for "mid-level" lower-bound functions, for example s(ℓ) = 2^{√ℓ}.

1.1. Derandomization via
generators

Following [12], the task of derandomizing (two-sided error) probabilistic algorithms reduces to the problem of deterministically approximating the fraction of inputs that a given circuit accepts. Define an approximator as a deterministic algorithm that gets as input a circuit and approximates the fraction of inputs accepted by the circuit. An approximator can be used to deterministically simulate a probabilistic algorithm on a given input in the obvious way. The running time of the simulation is roughly the running time of the approximator, and our goal becomes constructing efficient (in terms of running time) approximators.

Previous hardness vs. randomness tradeoffs constructed efficient approximators via pseudo-random generators. A pseudo-random generator is a family of functions G_n : {0,1}^{m(n)} → {0,1}^n, which is computable in time 2^{O(m)}, and has the property that for every n, the set of all outputs of G_n can be used to approximate the fraction of inputs accepted by any circuit of size n. Intuitively, the generator "stretches" a short seed of m random bits into a long string of n pseudo-random bits which "fool" every circuit of size n. A pseudo-random generator is sufficient to construct an approximator: simply run the given circuit on all outputs of the pseudo-random generator. Thus the running time of this approximator is exponential in the generator's seed size.
(One has to go over 2^m seeds and activate the generator, which runs in time 2^{O(m)}.)

If one settles for derandomizing one-sided error probabilistic algorithms, the object needed is a hitting set. A hitting set (for circuits of size n) is a (multi)set of strings in {0,1}^n with the property that for every circuit of size n which accepts at least half of its inputs, there exists an element of the set which the circuit accepts. A hitting-set generator is a family of functions G_n : {0,1}^{m(n)} → {0,1}^n, which is computable in time 2^{O(m)}, and has the property that for every n, the set of all outputs of G_n is a hitting set for circuits of size n. Every pseudo-random generator is also a hitting-set generator: it takes a seed of m bits and produces n bits.

Previous results using worst-case assumptions ([3,5,6,9]) focused on "hardness amplification", that is, showing that the distributional-complexity hardness assumption follows from the worst-case circuit complexity assumption (with some relation between the two bounds). Hardness vs. randomness tradeoffs then follow by using hardness amplification and activating the NW-generator. Recently, [9] came up with an almost optimal hardness-amplification scheme. Informally speaking, they show that the two assumptions are equivalent. More precisely, they show that given a function that meets the worst-case circuit complexity assumption, one can construct a function which meets the requirements of the distributional-complexity hardness assumption with essentially the same hardness. This is optimal for our purposes, since we are indifferent to polynomial differences between the two bounds.

Having pushed the hardness-amplification phase to the limit, any remaining inefficiency in the derandomization process is caused by the NW-generator. Indeed, there are some inherent limits to the NW-generator that make these derandomizations sub-optimal for lower-bound functions that are not exponential. When assuming the s(ℓ)-worst-case circuit complexity assumption, one may hope to get a generator

Table 1. Result comparison (reference vs. conclusion, for [3], [6]^a, this paper^b, and optimal^c). All results assume the s(ℓ)-worst-case circuit complexity assumption.

^a Impagliazzo and Wigderson state their result only for s(ℓ) = 2^{Ω(ℓ)}, and
their result puts BPP in P for such a lower bound.
^b Our result is a bit better, but we cannot state it in this notation.
^c The best we can hope for with current techniques; see Section 6.

with seed length O(s^{-1}(n)) that fools circuits of size n (see Section 6). However, the best result using the NW-generator takes a larger seed: with that seed it fools only circuits smaller than size-n circuits². The second loss is that rather than constructing a single generator, we construct many candidate generators, at least one of which is a "good" generator. We don't know how to find the good generator in the huge collection. Nevertheless, a trivial observation is that this collection may be used to construct a hitting-set generator, because the set of outputs of all candidate generators is a small hitting set. This suffices for one-sided error probabilistic algorithms. As for two-sided error probabilistic algorithms, we show that this collection of candidate generators can be used to construct an approximator that runs in almost optimal time.

The main lemma of [7] says that if the NW-generator is not a pseudo-random generator, then the function it is built from is easy. More precisely, the statement is that if there exists a circuit of size n which "catches" the generator, then there exists a somewhat larger circuit which approximates the function. Put differently, this means that when used with seed size m, the NW-generator can only fool circuits smaller than one may hope for. To get a better seed, one needs more. Since the quality of that lemma hinges on the circuit complexity of some functions on fewer bits, we distinguish between two cases: The optimistic case is that these functions can be computed by much smaller circuits. In this case the NW-lemma shows that the construction is a good generator, and we don't lose the extra factor. In the second case the pessimistic bound applies and the NW-construction may fail to produce a correct generator. However, in this case we get a function which is harder than the original one we started with. This puts us in a better position, as the NW-generator works better when given a harder function.

To be more precise, we make the following observation: when used with some function, the NW-generator specifies a family of functions
over fewer input bits (which are restrictions of the given function). Either the construction is a good generator, or one of the specified functions requires large circuit size. [7] choose the restricted input size small enough to ensure that the latter case is impossible, using the fact that every function on ℓ bits has a circuit of size at most roughly 2^ℓ. We instead choose a larger restricted input size and consider the following two cases:

1. All the functions specified by the NW-generator have "small" circuits. In such a case we get that the NW-generator is indeed good for circuits of size n, and we don't lose the extra factor.

2. At least one of the specified functions cannot be computed by a small circuit. In this case the NW-generator may be bad. However, we have at hand a function on many fewer bits than the original hard function, that requires roughly the same circuit size. From the point of view of the ratio between hardness and input size, this function is harder than the one we started with. We can "plug" it into the NW-generator and enjoy the better lower bound. (Recall that the NW-generator is more efficient with stronger lower bounds.) This approach can be used recursively until we end up with a function which requires circuits of size exponential in the length of its input. On such a function we can afford to use the old proof.

The construction: While this may seem very appealing, there are still some problems. We don't know which of the two cases happened, and even worse, in case 2, we don't know which of the specified functions is the hard function. Thus, we try all possibilities. We construct candidate generators from the initial function and all its specified functions. We continue this recursively until we are sure that one of the functions we consider is "hard" but all its specified functions are "easy". This can be shown to happen after a bounded number of recursion levels, and at this point we have a bounded number of candidates. This process involves some loss³. At this point we know that at least one candidate is a good generator. From here we may continue in two different paths. It
is easy to see that the union of the outputs of all candidate generators is a small hitting set for circuits of size n. This makes our construction a hitting-set generator and suffices for one-sided error probabilistic algorithms. We are not able to spot the good generator. However, in order to derandomize two-sided error probabilistic algorithms it is enough to construct an approximator. [1] showed how to construct an efficient approximator given a hitting-set generator. Our hitting-set generator has special properties that make the proof simpler. Recall that to construct an approximator, we need to approximate the fraction of inputs accepted by a given circuit. The idea is to hold a "tournament" between all the candidate generators. The winner of this tournament is not necessarily a good generator. However, it is certain to give a good approximation of the fraction of inputs accepted by the circuit we are given as input.

A disperser a la Trevisan: Recently, Trevisan [11] used the NW-generator to construct an extractor. An extractor is an efficiently computable function Ext : {0,1}^n × {0,1}^m → {0,1}^k, such that for all distributions X on {0,1}^n having sufficient min-entropy⁴, the distribution obtained by sampling x according to X, sampling y uniformly from {0,1}^m, and computing Ext(x, y), is statistically close to the uniform distribution on k bits. Trevisan's extractor works by treating x as a function over log n bits, "amplifying" its hardness, and applying the NW-generator. We focus our attention on constructing extractors with minimal seed length m. Trevisan's construction suffers from the same inefficiency of the NW-generator which we treat here. As suggested by our choice of letters, in the extractor terminology the min-entropy takes the role of "hardness". Indeed, Trevisan's extractor uses a seed longer than necessary. As in the case of pseudo-random generators, the optimal⁵ seed size is O(log n). We don't get an improved extractor, since our construction does not give a pseudo-random generator. Instead we get the information-theoretic analog of a hitting-set generator, which is called a disperser. The exact definition appears in Section 5. Unlike
extractors, there exists an explicit construction of optimal dispersers by [10]. Our construction is slightly inferior to the optimal one, but involves totally different methods. The technique used in this paper (combined with some more ideas) can be used to construct an almost optimal pseudo-random generator and therefore an almost optimal extractor. We delay this to a future paper.

Organization of the paper: Section 2 includes definitions and cites the previous results needed for our construction. Section 3 includes the main theorem and the main construction. Section 4 shows how to use the main construction to give an approximator. Section 5 concerns using Trevisan's method with our result and constructs a disperser. Section 6 includes a construction of hard functions from hitting-set generators and explains what we mean by optimal derandomization.

2. Definitions and History

2.1. Hard functions

We start by defining "hardness" in both worst-case complexity and distributional complexity settings.

Definition 1. For a function f : {0,1}^ℓ → {0,1}, we define:
1. its worst-case complexity: the size of the smallest circuit that computes f correctly on every input;
2. its distributional complexity: how well f can be approximated by circuits of a given size;
3. the hardness needed for the NW-generator: when invoking the NW-generator against circuits of size n, one needs a function f whose hardness exceeds the size of the circuits to be fooled.

2.2. Generators, discrepancy sets, hitting sets and approximators

In this section we define pseudo-random/hitting-set generators, and algorithms we call approximators. In the following definitions we use the same parameter n both for the size of the circuit and for the size of the input given to the circuit. A circuit of size n takes as input at most n bits, and in case the circuit takes fewer bits we assume it reads a prefix of the bits that we prepare in advance.

Definition 2. For a circuit C of size n, define Pr[C] as the fraction of n-bit inputs that C accepts.

Definition 3. An (n, ε)-discrepancy set is a multiset S ⊆ {0,1}^n such that for all circuits C of size n, the fraction of elements of S accepted by C is within ε of Pr[C].

Definition 4. An n-hitting set is a multiset S ⊆ {0,1}^n such that for every circuit C of size n, if C accepts at least half of its inputs then there exists x ∈ S such that C(x) = 1.

An easy observation is that a discrepancy set is also a hitting set, while the converse need not be true. We proceed and define
pseudo-random/hitting-set generators. In both cases, for the purpose of derandomizing probabilistic algorithms, generators may be allowed to run in time exponential in their input.

Definition 5. An (m(n), n)-pseudo-random generator (resp. hitting-set generator) is a family of functions G_n : {0,1}^{m(n)} → {0,1}^n such that:
1. For all n, the set of all outputs of G_n is a discrepancy set (resp. hitting set) for circuits of size n.
2. G_n is computable in time 2^{O(m)} (exponential in the size of the input).

From now on we drop the "pseudo-random" when talking about pseudo-random generators.

The existence of "good" generators implies a non-trivial deterministic simulation of two-sided error probabilistic algorithms. However, the proof works by building the following device (which appears implicitly since [12] and is also used implicitly in other efforts to derandomize, such as [1]).

Definition 6. An (n, ε)-approximator is a deterministic algorithm that takes as input a circuit C of size n and outputs an approximation of Pr[C], that is, a number p with |p − Pr[C]| ≤ ε.

The following two implications are standard:

Lemma 1 ([12]).
1. If there exists an (m(n), n)-generator, then there exists an approximator that (on a circuit of size n) runs in time 2^{O(m(n))}.
2. If there exists an efficient approximator, then two-sided error probabilistic algorithms can be deterministically simulated within a related time bound.

Proof (sketch): Having a generator, one can run the given circuit on all possible outputs of the generator; this is indeed an efficient approximator. Having an approximator, and given a probabilistic algorithm A(x, r) (where x is the input and r is the random string), simply construct the circuit C_x(r) = A(x, r) and approximate its acceptance probability.

As seen from Lemma 1, the task of derandomizing two-sided error probabilistic algorithms reduces to constructing efficient generators (where efficiency means the smallest possible seed size). A similar argument shows that efficient hitting-set generators suffice for one-sided error probabilistic algorithms.

2.3. The NW-generator

In this section we present the NW-generator, its best known consequences for derandomization, and explain its inherent inefficiency when used with a sub-exponential lower bound.

Theorem 2 [7] (Construction of
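Definitions 2–6 and the first part of Lemma 1 can be illustrated by a brute-force sketch, where a "circuit" is modeled simply as a Python predicate on bit-strings (an assumption made for illustration; real circuits are size-bounded Boolean circuits):

```python
from fractions import Fraction
from itertools import product

def acceptance(circuit, n):
    # Pr[C] (Definition 2): fraction of all n-bit inputs accepted, by brute force.
    hits = sum(circuit("".join(b)) for b in product("01", repeat=n))
    return Fraction(hits, 2 ** n)

def is_hitting_set(points, circuit, n):
    # Definition 4: whenever the circuit accepts at least half of its inputs,
    # some element of the (multi)set must be accepted.
    if acceptance(circuit, n) < Fraction(1, 2):
        return True  # the condition is vacuous for this circuit
    return any(circuit(p) for p in points)

def approximate_acceptance(circuit, generator, seed_bits):
    # Lemma 1, part 1: run the circuit on the generator's output for every
    # possible seed and average; time is exponential in the seed length only.
    outputs = [generator("".join(s)) for s in product("01", repeat=seed_bits)]
    return Fraction(sum(circuit(o) for o in outputs), len(outputs))
```

The exhaustive loop over `product("01", repeat=seed_bits)` is exactly the "go over 2^m seeds" cost noted in Section 1.1.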
nearly disjoint sets). There exists an algorithm that, given numbers ℓ ≤ m ≤ n, constructs sets S_1, …, S_n ⊆ {1, …, m} such that:
1. |S_i| = ℓ for every i;
2. |S_i ∩ S_j| ≤ c log n for every i ≠ j, for some constant c;
3. the running time of the algorithm is exponential in m.

Definition 7 (The NW-generator [7]). Given some function f : {0,1}^ℓ → {0,1} and parameters as above, the NW-generator works by building an (m, ℓ)-design S_1, …, S_n. It takes as input a seed of m bits, and outputs the n bits obtained by evaluating f on the restriction of the seed to each set of the design.

The thing to do now is to prove that if one "plugs" a hard enough f into the NW-generator, it fools circuits of some size.

Lemma 2 [7]. Fix an (m, ℓ)-design, and let k be the promised bound on the intersection size. Let f be a function whose distributional hardness exceeds the size of the circuits to be fooled; then the resulting generator fools those circuits.

The drawback in Lemma 2 is that the seed size m must be increased to roughly ℓ²/log n, which fools circuits of size n only for a seed much larger than ℓ. We may still expect to have an optimal generator, that is, one with seed size O(ℓ) which fools circuits of size n. This cannot be achieved by improving the design, as the next lemma shows that the current construction of designs is optimal.

Lemma 3. If S_1, …, S_n ⊆ {1, …, m} with |S_i| = ℓ for all i and |S_i ∩ S_j| ≤ k for all i ≠ j, then m cannot be much smaller than ℓ²/k.

For more general lower-bound functions, the above equation does not seem to have a nice closed-form solution. However, since the derandomization takes time exponential in the seed anyway, we can pick the parameters accordingly. This gives:

Theorem 5. Let f be a function computable in time 2^{O(ℓ)} such that f requires circuits of size s(ℓ) for all ℓ; then there exists an approximator running in almost optimal time. This approximator should be compared to the one of [9], which is constructed by applying Theorem 3 and Lemma 1 in sequence; [9]'s approximator runs in a larger time bound.

Let S_1, …, S_n be the (m, ℓ)-design promised by Theorem 2, and consider the set of outputs of the resulting generator. If this set is not a discrepancy set, then there exist some i and a partial assignment to the seed coordinates outside S_i such that the corresponding restricted function is well approximated by a small circuit. If, moreover, none of the restricted functions requires large circuit complexity, then combining the circuits which compute the restricted functions with the distinguisher, and using (1), we get a circuit of size smaller than s(ℓ) that computes f, a contradiction.

Parameters for the construction:
- m: the seed length;
- n: the length of the "pseudo-random" string (which is also the size of the circuit we want to fool);
- ε: a bound on the error of the generator;
- ℓ: an input length on which f is hard;
- s: the lower bound known on the circuit complexity of f (that is, a number such that circuits of size s cannot compute f).

The construction works by recursively calling the procedure
construct(ℓ, s, f) (where ℓ and s are integers and f is a function from {0,1}^ℓ to {0,1}, represented as a truth table). (n and ε are also inputs, but are left unchanged in recursive calls.) The first call is to construct with the original hard function and its lower bound.

construct(ℓ, s, f):
1. Use Theorem 1 to create an amplified function f', such that if f meets the worst-case assumption, then f' meets the distributional one.
2. Use Theorem 2 to create an (m, ℓ)-design S_1, …, S_n.
3. Output the set of outputs of the NW-generator built from f' and the design as a candidate.
4. For every set of the design and every partial assignment to the remaining seed coordinates, call construct on the corresponding restricted function.

Claim 2. The process described can be performed in time exponential in the seed length.
Proof: We have already fixed the parameters. The work done in each instantiation of construct can be done in time exponential in the seed length. We will bound the size of the recursion tree. The degree of the recursion tree at each level is bounded, using the fact that the number of restrictions considered at each level is bounded. This means that the total number of instantiations is bounded as claimed.

Claim 3. The depth of the recursion tree is bounded.
Proof: We simply have to estimate the number of levels after which the restricted functions become exponentially hard in their input length. Using Lemma 4, we get that if the candidate set produced at the current instantiation of construct is not a discrepancy set, then there exists a restriction (for some choice of design set and partial assignment) such that the restricted function is nearly as hard as the current one, and the recursion descends one level in the tree. From Claim 4 we get that if none of the candidate sets in the levels above the last is a discrepancy set, then one of the functions in the last level is exponentially hard in its input length, and so the corresponding candidate is a discrepancy set, using the fact that at the last level the old proof applies⁶.

⁶Actually, [1] constructs an approximator from a hitting-set generator.
Thus we could be done by citing this result. However, we can give a much simpler proof for this particular setup. The approximator tabulates, for each candidate, its estimate of the acceptance probability against every other candidate, and picks a row such that all the numbers in it lie on a short interval. It then returns the middle of that interval. Such a row exists, because the row of a good generator has that property. For every qualifying row, all the numbers in it are at a small distance from the true acceptance probability, and from this the returned midpoint is a good approximation. By applying Theorems 6 and 7 in sequence we get the approximator, and prove Theorem 4.

5. An information-theoretic analog a la Trevisan

Recently, Trevisan [11] used the NW-generator to construct an extractor. Trevisan's extractor suffers from the same inefficiency of the NW-generator. In this section we use our technique to build a disperser.

Definition 9. A function D : {0,1}^n × {0,1}^m → {0,1}^k is called a disperser if, for every set X ⊆ {0,1}^n that is large enough, the set of elements z ∈ {0,1}^k for which there exist x ∈ X and y ∈ {0,1}^m with D(x, y) = z covers most of {0,1}^k.

Unlike extractors, dispersers with small seed size have already been constructed by [10]. Our construction achieves a totally different, almost optimal disperser.

Theorem 8. For every admissible setting of the parameters there exists an almost optimal disperser.

6. Hard functions from hitting-set generators

Given a hitting-set generator, there exists a function f such that:
1. f is in E;
2. for all ℓ, f requires circuits of essentially the size fooled by the generator.

Proof: The function f is defined as follows: On an input of size ℓ, construct the hitting set. Accept if the input does not appear as a prefix of an element of the hitting set. One can easily compute f by constructing the hitting set (which takes exponential time) and comparing the given input to all strings in it. This puts f in E. Since the hitting set is small, if one looks at the first ℓ bits of its elements, one encounters at most half of the possible patterns. This means that f accepts at least half of the inputs in {0,1}^ℓ. If f were computed by some small circuit, that circuit would accept half of the possible inputs in {0,1}^ℓ.
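The row-selection rule in the tournament proof above can be sketched as follows, assuming a hypothetical matrix `estimates[i][j]` of pairwise acceptance estimates and a tolerance `eps` (the names and the exact interval width are illustrative assumptions):

```python
def tournament(estimates, eps):
    """Pick a row whose entries all fit in an interval of length 2*eps and
    return the interval's midpoint. The row of a good generator qualifies,
    because all its entries are close to the true acceptance probability;
    any qualifying midpoint is therefore a good approximation, even if the
    winning row does not belong to a good generator."""
    for row in estimates:
        lo, hi = min(row), max(row)
        if hi - lo <= 2 * eps:
            return (lo + hi) / 2
    return None  # cannot happen when at least one candidate is good
```

Note that the winner need not be identified as the good generator; the proof only needs the returned midpoint to be close to the true value.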
We may view this circuit as one that takes n inputs and ignores all but the first ℓ of them. Since the generator's output is a hitting set, there must be an element in it which the circuit accepts, but this contradicts the definition of f.

As a consequence we get that any construction superior to the one we list as "optimal"⁷ would be a proof that circuit lower bounds for functions in E could be automatically boosted to give even larger lower bounds. This would show a gap in the possible circuit complexities of functions complete for E. Since proving such a gap seems quite difficult, we believe progress in derandomization by current techniques is limited to matching the "optimal" bounds listed. Putting this in terms of simulation of probabilistic algorithms, if we start from a function with lower bound s(ℓ), the best result we can hope for is BPTIME(t) ⊆ DTIME(2^{O(s^{-1}(t))}).

Acknowledgements

We thank Oded Goldreich for a conversation⁸ that started us working on this paper, and for many helpful comments. We thank Ido Bregman for reading a preliminary version of this paper and for helpful comments.

References

[1] A. E. Andreev, A. E. F. Clementi, and J. D. P. Rolim. Hitting sets derandomize BPP. In F. Meyer auf der Heide and B. Monien, editors, Automata, Languages and Programming, 23rd International Colloquium, volume 1099 of Lecture Notes in Computer Science, pages 357–368, Paderborn, Germany, 8–12 July 1996. Springer-Verlag.