A 3D RGB Axis-based Color-oriented Cryptography
Chapter 1 Test
1. In February 2020, 11 national ministries and commissions jointly issued the "Intelligent Vehicles Innovation and Development Strategy" (《智能汽车创新发展战略》), which defines an intelligent vehicle as a new generation of vehicle that is equipped with advanced sensors and other devices, applies new technologies such as ( ), possesses autonomous driving capability, and is gradually becoming an intelligent mobile space and application terminal.
A: Internet of Vehicles  B: High-precision maps  C: Industrial control computers  D: Artificial intelligence  Answer: D
2. In 2004 DARPA held its Grand Challenge in an off-road environment. Although no entrant completed the course, the event was still a milestone. ( )
A: True  B: False  Answer: A
3. SAE divides automated driving into five levels: driver assistance, partial automation, conditional automation, high automation, and full automation. ( )
A: False  B: True  Answer: B
4. In terms of human involvement, L1 gradually frees the hands and feet, while L2 frees the hands and feet entirely but does not free the driver's attention. ( )
A: False  B: True  Answer: B
5. The second architecture introduced in the video divides the autonomous driving system into three parts: a perception layer, a decision layer, and an execution layer. At the decision layer it introduces ( ).
A: Deep learning  B: Internet of Vehicles  C: 3D high-precision maps  D: Big data  Answer: BC
6. Among the four autonomous driving test vehicles below, the one whose technical route differs from the other three is ( ).
A:  B:  C:  D:  Answer: A
1. In the early stage of research on driverless vehicles, manned vehicles were mainly converted into driverless vehicles by adding external mechanisms, chiefly by fitting external actuators for ( ) to achieve unmanned operation.
A: Gear shifting  B: Braking  C: Steering  D: Throttle  Answer: ABCD
2. The integrated design of a driverless vehicle means comprehensively considering the interconnection and mutual influence between the vehicle's perception of the driving environment, its decision-making, and the vehicle's dynamic characteristics, organically combining vehicle dynamics with environmental perception and decision-making. ( )
A: True  B: False  Answer: A
3. Automotive steering systems have developed through several stages: purely mechanical steering, hydraulic power steering, electro-hydraulic power steering, electric power steering, active front-wheel steering, and steer-by-wire. ( )
A: False  B: True  Answer: B
4. The advantage of electro-hydraulic power steering is that the amount of assistance can be adjusted in real time according to the steering-wheel input torque and the vehicle speed, reducing energy consumption and improving road feel. ( )
A: True  B: False  Answer: A
5. Compared with a traditional steering system, the advantages of steer-by-wire include ( ).
A: Improved ride comfort.
B: Improved vehicle safety.
PmodJSTK2™ Reference Manual
Revised July 19, 2016
This manual applies to the PmodJSTK2 rev. C

Overview
The Digilent PmodJSTK2 (Revision C) is a versatile user input device that can be easily incorporated into a wide variety of projects. It combines a two-axis joystick with a center button, a trigger button, and a programmable RGB LED capable of 24-bit color.
Features include:
• Factory Calibrated Two Axis Resistive Joystick
• Center Joystick Button
• Trigger Style Push Button
• 24-bit RGB LED
• 6-pin Pmod connector with SPI interface
• Library and example code available in resource center
The PmodJSTK2.

1 Functional Descriptions
The PmodJSTK2 utilizes two potentiometers oriented orthogonally to one another, which are manipulated by moving the joystick in the X and Y directions. As the joystick moves, the voltage output at the sweep pin of each potentiometer changes and is measured by the 10-bit ADC on the embedded PIC16F1618 microcontroller. The raw measured data is stored at a rate of 100 Hz as a 16-bit right-justified variable in RAM with the upper 6 bits masked with zeros. Additionally, each successive measurement also produces two 8-bit values representing the joystick's physical location with respect to each axis. Note that if inversion of either of the 8-bit position axes is set, the values will not change until the data has been re-collected by the PIC16 at the 100 Hz rate.

2 Specifications

Parameter                        Min    Typical¹   Max    Units
Recommended Operating Voltage    3.1    3.3        3.5    V
Maximum Supply Voltage           -      -          5.5    V
Power Supply Current²            -      4.85       -      mA
Power Supply Current³            -      17.6       -      mA

Parameter                  Value    Units
Maximum Joystick Angle     25       Degrees
Communication Protocol     SPI      -

Note¹: Data in the Typical column uses VCC at 3.3 V unless otherwise noted.
Note²: Normal operation with the RGB LED off and no buttons pressed.
Note³: Normal operation with the RGB LED set to white and both buttons pressed.

3 Interfacing with the Pmod
The PmodJSTK2 communicates with the host board via the SPI protocol. With the PmodJSTK2, there are two types of data packet protocols: the standard data packet of 5 bytes and an extended data packet with 6 or 7 bytes in total. With the standard 5-byte protocol, users may reuse their old PmodJSTK code without any syntax errors. The 5-byte packet structure is provided in the image below:
As noted in the standard data packet structure, users may either send a zero followed by a series of 4 dummy bytes to receive the standard 5 bytes of data, or they may send a single command byte with up to 4 parameters in the four following bytes to set internal values such as the joystick calibration or the on-board RGB LED.
The extended data protocol allows additional data to be obtained from the device during a communication session after the standard 5 bytes of information, such as normalized 8-bit positional data for each axis. Users may also obtain the current calibration values and the status of the module through this method.

3.1 SPI Timing Requirements
The embedded PIC16F1618 imposes certain SPI timing requirements for successful communication. When the Chip Select line is brought low, users must wait at least 15 µs before sending the first byte of data. An interbyte delay of at least 10 µs is required when transferring multiple bytes.
When the Chip Select line is brought high after the last byte has been transferred, at least 25 µs is required before users may bring the Chip Select line low again to initiate another communication session.

3.2 Calibrating the Module
The PmodJSTK2 has a set of factory-loaded calibration values that are used to calculate the 8-bit position values for each axis. Users may enter calibration mode to recalculate all of those values by rotating the joystick around so that the embedded PIC16 can record the maximum and minimum samples for the two axes. The on-board blue LED flashes to indicate that the calibration sequence is taking place. When the embedded microcontroller detects that the joystick has not changed for an entire second, allowing it to presume that the most recent set of measurements corresponds to the joystick's center position, the blue LED stops flashing and the green LED flashes twice to indicate that the calibration procedure was successful. However, if 10 seconds pass without the PIC16 detecting the center position, the blue LED stops flashing and the red LED flashes twice, indicating that the calibration procedure was not successful.
Once the Chip Select pin goes high after the calibration command has been processed, the PmodJSTK2 will not accept any new commands until the calibration procedure has finished. Users may still poll the status register to determine the current status of the device during this time.

3.3 Using the High-endurance Flash

3.4 Command Summary
3.4.1 Get Commands
3.4.2 Set Commands
3.4.3 Other Commands

4 Pinout Description Table
A pinout table of the PmodJSTK2 is provided below.
Although users are welcome to create their own interface code for the PmodJSTK2 if they so desire, pre-constructed libraries that provide functions for initializing the module, reading in values, and adjusting calibration values exist. They are available on the PmodJSTK2 example code page.
Any external power applied to the PmodJSTK2 must be within 2.95 V and 5.5 V; however, it is recommended that the Pmod be operated at 3.3 V.

5 Physical Dimensions
The pins on the pin header are spaced 100 mil apart.
The PCB is 1.875 inches long on the sides parallel to the pins on the pin header, 0.9375 inches long on the sides perpendicular to the pin header, and 1.75 inches tall. With the 3-D printed housing, the module is 1.875 inches long on the sides parallel to the pins on the pin header, 1.125 inches long on the sides perpendicular to the pin header, and 1.75 inches tall.
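For readers who want to exercise the interface described in section 3, the following is a minimal sketch (not Digilent's library) of a standard 5-byte exchange from a Python-capable SPI host such as a Raspberry Pi. It assumes the `spidev` and `RPi.GPIO` packages, a chip-select line driven manually from a hypothetical GPIO pin, and the legacy PmodJSTK byte order (X low/high, Y low/high, button byte); verify the byte order and any command values against the packet-structure figure and the command summary tables in the full manual.

```python
import time
import spidev            # assumed host-side SPI driver
import RPi.GPIO as GPIO  # assumed GPIO library for a manually driven CS pin

CS_PIN = 25              # hypothetical GPIO wired to the Pmod's CS line

spi = spidev.SpiDev()
spi.open(0, 0)
spi.max_speed_hz = 250000
spi.mode = 0

GPIO.setmode(GPIO.BCM)
GPIO.setup(CS_PIN, GPIO.OUT, initial=GPIO.HIGH)

def read_standard_packet(cmd=(0, 0, 0, 0, 0)):
    """Send five bytes (a command or all zeros) and return the five reply bytes.

    Timing follows section 3.1: >= 15 us after CS falls, >= 10 us between bytes,
    and >= 25 us with CS high before the next session.
    """
    GPIO.output(CS_PIN, GPIO.LOW)
    time.sleep(20e-6)                    # >= 15 us before the first byte
    reply = []
    for b in cmd:
        reply.append(spi.xfer2([b])[0])  # one byte per transfer
        time.sleep(15e-6)                # >= 10 us interbyte delay
    GPIO.output(CS_PIN, GPIO.HIGH)
    time.sleep(30e-6)                    # >= 25 us before the next session
    return reply

def parse_packet(pkt):
    # Assumed layout (legacy PmodJSTK convention): 10-bit X and Y samples,
    # right justified, followed by a button-state byte.
    x = pkt[0] | (pkt[1] << 8)
    y = pkt[2] | (pkt[3] << 8)
    return x, y, pkt[4]

if __name__ == "__main__":
    x, y, buttons = parse_packet(read_standard_packet())
    print(f"X={x} Y={y} buttons=0b{buttons:08b}")
```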
3D Human Activity Recognition Based on RGB-Depth Features (in English)
Authors: ZHAO Yang; LIU Zicheng; CHENG Hong
Journal: China Communications (English edition), 2013, (7)
Abstract: We study the problem of human activity recognition from RGB-Depth (RGBD) sensors when the skeletons are not available. The skeleton tracking in the Kinect SDK works well when the human subject is facing the camera and there are no occlusions. In surveillance or nursing-home monitoring scenarios, however, the camera is usually mounted higher than the human subjects, and there may be occlusions. The interest-point based approach is widely used in RGB-based activity recognition, and it can be used in both the RGB and depth channels. Whether we should extract interest points independently in each channel or extract interest points from only one of the channels is discussed in this paper. The goal of this paper is to compare the performance of different methods of extracting interest points. In addition, we have developed a depth-map based descriptor and built an RGBD dataset, called RGBD-SAR, for senior activity recognition. We show that the best performance is achieved when we extract interest points solely from the RGB channels and combine the RGB-based descriptors with the depth-map based descriptors. We also present a baseline performance on the RGBD-SAR dataset.
Pages: 11 (pp. 93-103)
Keywords: RGB; recognition; action; 3D; interest points; human activity; skeleton tracking; descriptors
Affiliations: School of Automation, University of Electronic Science and Technology of China; Microsoft Research
Full-text language: Chinese
CLC classification: TP391.41; TP13
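The configuration the abstract reports as best — interest points detected only in the RGB channel, with RGB-based and depth-map based descriptors combined at those points — can be outlined schematically. The snippet below is an illustrative sketch only, not the authors' method: the ORB detector/descriptor and the `depth_patch_descriptor` helper are stand-ins chosen to show the fusion step, and a real pipeline would use spatio-temporal interest points over video plus a bag-of-words or classifier stage.

```python
import numpy as np
import cv2  # OpenCV, used here only as a convenient keypoint/descriptor source

def depth_patch_descriptor(depth, kp, size=16):
    # Hypothetical depth-map descriptor: a normalized patch around the keypoint.
    x, y = int(kp.pt[0]), int(kp.pt[1])
    h = size // 2
    patch = depth[max(y - h, 0):y + h, max(x - h, 0):x + h].astype(np.float32)
    patch = cv2.resize(patch, (size, size)).ravel()
    return (patch - patch.mean()) / (patch.std() + 1e-6)

def fused_descriptors(rgb, depth):
    """Detect interest points in RGB only, then describe each point in both channels."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(nfeatures=200)
    kps, rgb_desc = orb.detectAndCompute(gray, None)
    if rgb_desc is None:
        return np.empty((0, 0))
    depth_desc = np.stack([depth_patch_descriptor(depth, kp) for kp in kps])
    # Concatenate per-point RGB and depth descriptors for downstream
    # bag-of-words / classification stages (not shown).
    return np.hstack([rgb_desc.astype(np.float32), depth_desc])
```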
Fresnel incoherent correlation hologram-a reviewInvited PaperJoseph Rosen,Barak Katz1,and Gary Brooker2∗∗1Department of Electrical and Computer Engineering,Ben-Gurion University of the Negev,P.O.Box653,Beer-Sheva84105,Israel2Johns Hopkins University Microscopy Center,Montgomery County Campus,Advanced Technology Laboratory, Whiting School of Engineering,9605Medical Center Drive Suite240,Rockville,MD20850,USA∗E-mail:rosen@ee.bgu.ac.il;∗∗e-mail:gbrooker@Received July17,2009Holographic imaging offers a reliable and fast method to capture the complete three-dimensional(3D) information of the scene from a single perspective.We review our recently proposed single-channel optical system for generating digital Fresnel holograms of3D real-existing objects illuminated by incoherent light.In this motionless holographic technique,light is reflected,or emitted from a3D object,propagates througha spatial light modulator(SLM),and is recorded by a digital camera.The SLM is used as a beam-splitter of the single-channel incoherent interferometer,such that each spherical beam originated from each object point is split into two spherical beams with two different curve radii.Incoherent sum of the entire interferences between all the couples of spherical beams creates the Fresnel hologram of the observed3D object.When this hologram is reconstructed in the computer,the3D properties of the object are revealed.OCIS codes:100.6640,210.4770,180.1790.doi:10.3788/COL20090712.0000.1.IntroductionHolography is an attractive imaging technique as it offers the ability to view a complete three-dimensional (3D)volume from one image.However,holography is not widely applied to the regime of white-light imaging, because white-light is incoherent and creating holograms requires a coherent interferometer system.In this review, we describe our recently invented method of acquiring incoherent digital holograms.The term incoherent digi-tal hologram means that incoherent light beams reflected or emitted from real-existing objects interfere with each other.The resulting interferogram is recorded by a dig-ital camera and digitally processed to yield a hologram. This hologram is reconstructed in the computer so that 3D images appear on the computer screen.The oldest methods of recording incoherent holograms have made use of the property that every incoherent ob-ject is composed of many source points,each of which is self-spatial coherent and can create an interference pattern with light coming from the point’s mirrored image.Under this general principle,there are vari-ous types of holograms[1−8],including Fourier[2,6]and Fresnel holograms[3,4,8].The process of beam interfering demands high levels of light intensity,extreme stability of the optical setup,and a relatively narrow bandwidth light source.More recently,three groups of researchers have proposed computing holograms of3D incoherently illuminated objects from a set of images taken from differ-ent points of view[9−12].This method,although it shows promising prospects,is relatively slow since it is based on capturing tens of scene images from different view angles. 
Another method is called scanning holography[13−15],in which a pattern of Fresnel zone plates(FZPs)scans the object such that at each and every scanning position, the light intensity is integrated by a point detector.The overall process yields a Fresnel hologram obtained as a correlation between the object and FZP patterns.How-ever,the scanning process is relatively slow and is done by mechanical movements.A similar correlation is ac-tually also discussed in this review,however,unlike the case of scanning holography,our proposed system carries out a correlation without movement.2.General properties of Fresnel hologramsThis review concentrates on the technique of incoher-ent digital holography based on single-channel incoher-ent interferometers,which we have been involved in their development recently[16−19].The type of hologram dis-cussed here is the digital Fresnel hologram,which means that a hologram of a single point has the form of the well-known FZP.The axial location of the object point is encoded by the Fresnel number of the FZP,which is the technical term for the number of the FZP rings along the given radius.To understand the operation principle of any general Fresnel hologram,let us look on the difference between regular imaging and holographic systems.In classical imaging,image formation of objects at different distances from the lens results in a sharp image at the image plane for objects at only one position from the lens,as shown in Fig.1(a).The other objects at different distances from the lens are out of focus.A Fresnel holographic system,on the other hand,as depicted in Fig.1(b),1671-7694/2009/120xxx-08c 2009Chinese Optics Lettersprojects a set of rings known as the FZP onto the plane of the image for each and every point at every plane of the object being viewed.The depth of the points is en-coded by the density of the rings such that points which are closer to the system project less dense rings than distant points.Because of this encoding method,the 3D information in the volume being imaged is recorded into the recording medium.Thus once the patterns are decoded,each plane in the image space reconstructed from a Fresnel hologram is in focus at a different axial distance.The encoding is accomplished by the presence of a holographic system in the image path.At this point it should be noted that this graphical description of pro-jecting FZPs by every object point actually expresses the mathematical two-dimensional (2D)correlation (or convolution)between the object function and the FZP.In other words,the methods of creating Fresnel holo-grams are different from each other by the way they spatially correlate the FZP with the scene.Another is-sue to note is that the correlation should be done with a FZP that is somehow “sensitive”to the axial locations of the object points.Otherwise,these locations are not encoded into the hologram.The system described in this review satisfies the condition that the FZP is depen-dent on the axial distance of each and every objectpoint.parison between the Fresnel holography principle and conventional imaging.(a)Conventional imaging system;(b)fresnel holographysystem.Fig.2.Schematic of FINCH recorder [16].BS:beam splitter;L is a spherical lens with focal length f =25cm;∆λindicates a chromatic filter with a bandwidth of ∆λ=60nm.This means that indeed points,which are closer to the system,project FZP with less cycles per radial length than distant points,and by this condition the holograms can actually image the 3D scene properly.The FZP is a sum 
of at least three main functions,i.e.,a constant bias,a quadratic phase function and its complex conjugate.The object function is actually corre-lated with all these three functions.However,the useful information,with which the holographic imaging is real-ized,is the correlation with just one of the two quadratic phase functions.The correlation with the other quadratic phase function induces the well-known twin image.This means that the detected signal in the holographic system contains three superposed correlation functions,whereas only one of them is the required correlation between the object and the quadratic phase function.Therefore,the digital processing of the detected image should contain the ability to eliminate the two unnecessary terms.To summarize,the definition of Fresnel hologram is any hologram that contains at least a correlation (or convolu-tion)between an object function and a quadratic phase function.Moreover,the quadratic phase function must be parameterized according to the axial distance of the object points from the detection plane.In other words,the number of cycles per radial distance of each quadratic phase function in the correlation is dependent on the z distance of each object point.In the case that the object is illuminated by a coherent wave,this correlation is the complex amplitude of the electromagnetic field directly obtained,under the paraxial approximation [20],by a free propagation from the object to the detection plane.How-ever,we deal here with incoherent illumination,for which an alternative method to the free propagation should be applied.In fact,in this review we describe such method to get the desired correlation with the quadratic phase function,and this method indeed operates under inco-herent illumination.The discussed incoherent digital hologram is dubbed Fresnel incoherent correlation hologram (FINCH)[16−18].The FINCH is actually based on a single-channel on-axis incoherent interferometer.Like any Fresnel holography,in the FINCH the object is correlated with a FZP,but the correlation is carried out without any movement and without multiplexing the image of the scene.Section 3reviews the latest developments of the FINCH in the field of color holography,microscopy,and imaging with a synthetic aperture.3.Fresnel incoherent correlation holographyIn this section we describe the FINCH –a method of recording digital Fresnel holograms under incoher-ent illumination.Various aspects of the FINCH have been described in Refs.[16-19],including FINCH of re-flected white light [16],FINCH of fluorescence objects [17],a FINCH-based holographic fluorescence microscope [18],and a hologram recorder in a mode of a synthetic aperture [19].We briefly review these works in the current section.Generally,in the FINCH system the reflected incoher-ent light from a 3D object propagates through a spatial light modulator (SLM)and is recorded by a digital cam-era.One of the FINCH systems [16]is shown in Fig.2.White-light source illuminates a 3D scene,and the reflected light from the objects is captured by a charge-coupled device (CCD)camera after passing through a lens L and the SLM.In general,we regard the system as an incoherent interferometer,where the grating displayed on the SLM is considered as a beam splitter.As is com-mon in such cases,we analyze the system by following its response to an input object of a single infinitesimal point.Knowing the system’s point spread function (PSF)en-ables one to realize the system operation for any general object.Analysis of a beam 
originated from a narrow-band infinitesimal point source is done by using Fresnel diffraction theory [20],since such a source is coherent by definition.A Fresnel hologram of a point object is obtained when the two interfering beams are two spherical beams with different curvatures.Such a goal is achieved if the SLM’s reflection function is a sum of,for instance,constant and quadratic phase functions.When a plane wave hits the SLM,the constant term represents the reflected plane wave,and the quadratic phase term is responsible for the reflected spherical wave.A point source located at some distance from a spher-ical positive lens induces on the lens plane a diverging spherical wave.This wave is split by the SLM into two different spherical waves which propagate toward the CCD at some distance from the SLM.Consequently,in the CCD plane,the intensity of the recorded hologram is a sum of three terms:two complex-conjugated quadratic phase functions and a constant term.This result is the PSF of the holographic recording system.For a general 3D object illuminated by a narrowband incoherent illumination,the intensity of the recorded hologram is an integral of the entire PSFs,over all object intensity points.Besides a constant term,thehologramFig.3.(a)Phase distribution of the reflection masks dis-played on the SLM,with θ=0◦,(b)θ=120◦,(c)θ=240◦.(d)Enlarged portion of (a)indicating that half (randomly chosen)of the SLM’s pixels modulate light with a constant phase.(e)Magnitude and (f)phase of the final on-axis digi-tal hologram.(g)Reconstruction of the hologram of the three characters at the best focus distance of ‘O’.(h)Same recon-struction at the best focus distance of ‘S’,and (i)of ‘A’[16].expression contains two terms of correlation between an object and a quadratic phase,z -dependent,function.In order to remain with a single correlation term out of the three terms,we follow the usual procedure of on-axis digital holography [14,16−19].Three holograms of the same object are recorded with different phase con-stants.The final hologram is a superposition of the three holograms containing only the desired correlation between the object function and a single z -dependent quadratic phase.A 3D image of the object can be re-constructed from the hologram by calculating theFresnelFig.4.Schematics of the FINCH color recorder [17].L 1,L 2,L 3are spherical lenses and F 1,F 2are chromaticfilters.Fig.5.(a)Magnitude and (b)phase of the complex Fres-nel hologram of the dice.Digital reconstruction of the non-fluorescence hologram:(c)at the face of the red dots on the die,and (d)at the face of the green dots on the die.(e)Magnitude and (f)phase of the complex Fresnel hologram of the red dots.Digital reconstruction of the red fluorescence hologram:(g)at the face of the red dots on the die,and (h)at the face of the green dots on the die.(i)Magnitude and (j)phase of the complex Fresnel hologram of the green dots.Digital reconstruction of the green fluorescence hologram:(k)at the face of the red dots on the die,and (l)at the face of the green dots on the position of (c),(g),(k)and that of (d),(h),(l)are depicted in (m)and (n),respectively [17].Fig.6.FINCHSCOPE schematic in uprightfluorescence microscope[18].propagation formula.The system shown in Fig.2has been used to record the three holograms[16].The SLM has been phase-only, and as so,the desired sum of two phase functions(which is no longer a pure phase)cannot be directly displayed on this SLM.To overcome this obstacle,the quadratic phase function has been displayed 
randomly on only half of the SLM pixels,and the constant phase has been displayed on the other half.The randomness in distributing the two phase functions has been required because organized non-random structure produces unnecessary diffraction orders,therefore,results in lower interference efficiency. The pixels are divided equally,half to each diffractive element,to create two wavefronts with equal energy.By this method,the SLM function becomes a good approx-imation to the sum of two phase functions.The phase distributions of the three reflection masks displayed on the SLM,with phase constants of0◦,120◦and240◦,are shown in Figs.3(a),(b)and(c),respectively.Three white-on-black characters i th the same size of 2×2(mm)were located at the vicinity of rear focal point of the lens.‘O’was at z=–24mm,‘S’was at z=–48 mm,and‘A’was at z=–72mm.These characters were illuminated by a mercury arc lamp.The three holo-grams,each for a different phase constant of the SLM, were recorded by a CCD camera and processed by a computer.Thefinal hologram was calculated accord-ing to the superposition formula[14]and its magnitude and phase distributions are depicted in Figs.3(e)and (f),respectively.The hologram was reconstructed in the computer by calculating the Fresnel propagation toward various z propagation distances.Three different recon-struction planes are shown in Figs.3(g),(h),and(i).In each plane,a different character is in focus as is indeed expected from a holographic reconstruction of an object with a volume.In Ref.[17],the FINCH has been capable to record multicolor digital holograms from objects emittingfluo-rescent light.Thefluorescent light,specific to the emis-sion wavelength of variousfluorescent dyes after excita-tion of3D objects,was recorded on a digital monochrome camera after reflection from the SLM.For each wave-length offluorescent emission,the camera sequentially records three holograms reflected from the SLM,each with a different phase factor of the SLM’s function.The three holograms are again superposed in the computer to create a complex-valued Fresnel hologram of eachflu-orescent emission without the twin image problem.The holograms for eachfluorescent color are further combined in a computer to produce a multicoloredfluorescence hologram and3D color image.An experiment showing the recording of a colorfluo-rescence hologram was carried out[17]on the system in Fig. 4.The phase constants of0◦,120◦,and240◦were introduced into the three quadratic phase functions.The magnitude and phase of thefinal complex hologram,su-perposed from thefirst three holograms,are shown in Figs.5(a)and(b),respectively.The reconstruction from thefinal hologram was calculated by using the Fresnel propagation formula[20].The results are shown at the plane of the front face of the front die(Fig.5(c))and the plane of the front face of the rear die(Fig.5(d)).Note that in each plane a different die face is in focus as is indeed expected from a holographic reconstruction of an object with a volume.The second three holograms were recorded via a redfilter in the emissionfilter slider F2 which passed614–640nmfluorescent light wavelengths with a peak wavelength of626nm and a full-width at half-maximum,of11nm(FWHM).The magnitude and phase of thefinal complex hologram,superposed from the‘red’set,are shown in Figs.5(e)and(f),respectively. 
The reconstruction results from thisfinal hologram are shown in Figs.5(g)and(h)at the same planes as those in Figs.5(c)and(d),respectively.Finally,an additional set of three holograms was recorded with a greenfilter in emissionfilter slider F2,which passed500–532nmfluo-rescent light wavelengths with a peak wavelength of516 nm and a FWHM of16nm.The magnitude and phase of thefinal complex hologram,superposed from the‘green’set,are shown in Figs.5(i)and(j),respectively.The reconstruction results from thisfinal hologram are shown in Figs.5(k)and(l)at the same planes as those in Fig. 5(c)and(d),positions of Figs.5(c), (g),and(k)and Figs.5(d),(h),and(l)are depicted in Figs.5(m)and(n),respectively.Note that all the colors in Fig.5(colorful online)are pseudo-colors.These last results yield a complete color3D holographic image of the object including the red and greenfluorescence. While the optical arrangement in this demonstration has not been optimized for maximum resolution,it is im-portant to recognize that even with this simple optical arrangement,the resolution is good enough to image the fluorescent emissions with goodfidelity and to obtain good reflected light images of the dice.Furthermore, in the reflected light images in Figs.5(c)and(m),the system has been able to detect a specular reflection of the illumination from the edge of the front dice. Another system to be reviewed here is thefirst demon-stration of a motionless microscopy system(FINCH-SCOPE)based upon the FINCH and its use in record-ing high-resolution3Dfluorescent images of biological specimens[18].By using high numerical aperture(NA) lenses,a SLM,a CCD camera,and some simplefilters, FINCHSCOPE enables the acquisition of3D microscopic images without the need for scanning.A schematic diagram of the FINCHSCOPE for an upright microscope equipped with an arc lamp sourceFig.7.FINCHSCOPE holography of polychromatic beads.(a)Magnitude of the complex hologram 6-µm beads.Images reconstructed from the hologram at z distances of (b)34µm,(c)36µm,and (d)84µm.Line intensity profiles between the beads are shown at the bottom of panels (b)–(d).(e)Line intensity profiles along the z axis for the lower bead from reconstructed sections of a single hologram (line 1)and from a widefield stack of the same bead (28sections,line 2).Beads (6µm)excited at 640,555,and 488nm with holograms reconstructed (f)–(h)at plane (b)and (j)–(l)at plane (d).(i)and (m)are the combined RGB images for planes (b)and (d),respectively.(n)–(r)Beads (0.5µm)imaged with a 1.4-NA oil immersion objective:(n)holographic camera image;(o)magnitude of the complex hologram;(p)–(r)reconstructed image at planes 6,15,and 20µm.Scale bars indicate image size [18].Fig.8.FINCHSCOPE fluorescence sections of pollen grains and Convallaria rhizom .The arrows point to the structures in the images that are in focus at various image planes.(b)–(e)Sections reconstructed from a hologram of mixed pollen grains.(g)–(j)Sections reconstructed from a hologram of Convallaria rhizom .(a),(f)Magnitudes of the complex holograms from which the respective image planes are reconstructed.Scale bars indicate image size [18].is shown in Fig. 
6.The beam of light that emerges from an infinity-corrected microscope objective trans-forms each point of the object being viewed into a plane wave,thus satisfying the first requirement of FINCH [16].A SLM and a digital camera replace the tube lens,reflec-tive mirror,and other transfer optics normally present in microscopes.Because no tube lens is required,infinity-corrected objectives from any manufacturer can be used.A filter wheel was used to select excitation wavelengths from a mercury arc lamp,and the dichroic mirror holder and the emission filter in the microscope were used to direct light to and from the specimen through an infinity-corrected objective.The ability of the FINCHSCOPE to resolve multicolor fluorescent samples was evaluated by first imaging poly-chromatic fluorescent beads.A fluorescence bead slidewith the beads separated on two separate planes was con-structed.FocalCheck polychromatic beads(6µm)were used to coat one side of a glass microscope slide and a glass coverslip.These two surfaces were juxtaposed and held together at a distance from one another of∼50µm with optical cement.The beads were sequentially excited at488-,555-,and640-nm center wavelengths(10–30nm bandwidths)with emissions recorded at515–535,585–615,and660–720nm,respectively.Figures7(a)–(d) show reconstructed image planes from6µm beads ex-cited at640nm and imaged on the FINCHSCOPE with a Zeiss PlanApo20×,0.75NA objective.Figure7(a) shows the magnitude of the complex hologram,which contains all the information about the location and in-tensity of each bead at every plane in thefield.The Fresnel reconstruction from this hologram was selected to yield49planes of the image,2-µm apart.Two beads are shown in Fig.7(b)with only the lower bead exactly in focus.Figure7(c)is2µm into thefield in the z-direction,and the upper bead is now in focus,with the lower bead slightly out of focus.The focal difference is confirmed by the line profile drawn between the beads, showing an inversion of intensity for these two beads be-tween the planes.There is another bead between these two beads,but it does not appear in Figs.7(b)or(c) (or in the intensity profile),because it is48µm from the upper bead;it instead appears in Fig.7(d)(and in the line profile),which is24sections away from the section in Fig.7(c).Notice that the beads in Figs.7(b)and(c)are no longer visible in Fig.7(d).In the complex hologram in Fig.7(a),the small circles encode the close beads and the larger circles encode the distant central bead. Figure7(e)shows that the z-resolution of the lower bead in Fig.7(b),reconstructed from sections created from a single hologram(curve1),is at least comparable to data from a widefield stack of28sections(obtained by moving the microscope objective in the z-direction)of the same field(curve2).The co-localization of thefluorescence emission was confirmed at all excitation wavelengths and at extreme z limits,as shown in Figs.7(f)–(m)for the 6-µm beads at the planes shown in Figs.7(b)((f)–(i)) and(d)((j)–(m)).In Figs.7(n)–(r),0.5-µm beads imaged with a Zeiss PlanApo×631.4NA oil-immersion objective are shown.Figure7(n)presents one of the holo-grams captured by the camera and Fig.7(o)shows the magnitude of the complex hologram.Figures7(p)–(r) show different planes(6,15,and20µm,respectively)in the bead specimen after reconstruction from the complex hologram of image slices in0.5-µm steps.Arrows show the different beads visualized in different z image planes. 
The computer reconstruction along the z-axis of a group offluorescently labeled pollen grains is shown in Figs. 8(b)–(e).As is expected from a holographic reconstruc-tion of a3D object with volume,any number of planes can be reconstructed.In this example,a different pollen grain was in focus in each transverse plane reconstructed from the complex hologram whose magnitude is shown in Fig.8(a).In Figs.8(b)–(e),the values of z are8,13, 20,and24µm,respectively.A similar experiment was performed with the autofluorescent Convallaria rhizom and the results are shown in Figs.8(g)–(j)at planes6, 8,11,and12µm.The most recent development in FINCH is a new lens-less incoherent holographic system operating in a syn-thetic aperture mode[19].Synthetic aperture is a well-known super-resolution technique which extends the res-olution capabilities of an imaging system beyond thetheoretical Rayleigh limit dictated by the system’s ac-tual ing this technique,several patternsacquired by an aperture-limited system,from variouslocations,are tiled together to one large pattern whichcould be captured only by a virtual system equippedwith a much wider synthetic aperture.The use of optical holography for synthetic apertureis usually restricted to coherent imaging[21−23].There-fore,the use of this technique is limited only to thoseapplications in which the observed targets can be illu-minated by a laser.Synthetic aperture carried out by acombination of several off-axis incoherent holograms inscanning holographic microscopy has been demonstratedby Indebetouw et al[24].However,this method is limitedto microscopy only,and although it is a technique ofrecording incoherent holograms,a specimen should alsobe illuminated by an interference pattern between twolaser beams.Our new scheme of holographic imaging of incoher-ently illuminated objects is dubbing a synthetic aperturewith Fresnel elements(SAFE).This holographic lens-less system contains only a SLM and a digital camera.SAFE has an extended synthetic aperture in order toimprove the transverse and axial resolutions beyond theclassic limitations.The term synthetic aperture,in thepresent context,means time(or space)multiplexing ofseveral Fresnel holographic elements captured from vari-ous viewpoints by a system with a limited real aperture.The synthetic aperture is implemented by shifting theSLM-camera set,located across thefield of view,be-tween several viewpoints.At each viewpoint,a differentmask is displayed on the SLM,and a single element ofthe Fresnel hologram is recorded(Fig.9).The variouselements,each of which is recorded by the real aperturesystem during the capturing time,are tiled together sothat thefinal mosaic hologram is effectively consideredas being captured from a single synthetic aperture,whichis much wider than the actual aperture.An example of such a system with the synthetic aper-ture three times wider than the actual aperture can beseen in Fig.9.For simplicity of the demonstration,the synthetic aperture was implemented only along thehorizontal axis.In principle,this concept can be gen-eralized for both axes and for any ratio of synthetic toactual apertures.Imaging with the synthetic apertureis necessary for the cases where the angular spectrumof the light emitted from the observed object is widerthan the NA of a given imaging system.In the SAFEshown in Fig.9,the SLM and the digital camera movein front of the object.The complete Fresnel hologramof the object,located at some distance from the SLM,isa mosaic of three holographic elements,each of which 
is recorded from a different position by the system with the real aperture of size Ax × Ay. The complete hologram, tiled from the three holographic Fresnel elements, has a synthetic aperture of size 3Ax × Ay, which is three times larger than the real aperture along the horizontal axis. The method used to eliminate the twin image and the bias term is the same as that used before [14,16−18];
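The two computational steps that recur throughout this review — superposing three phase-shifted holograms into a single complex-valued Fresnel hologram, and refocusing it by numerical Fresnel propagation — can be prototyped in a few lines of NumPy. The sketch below is illustrative rather than a reproduction of the authors' code: it uses one standard three-step phase-shifting combination for the SLM phase constants of 0°, 120°, and 240° (the exact superposition formula used in the papers is the one cited as Ref. [14]) and an FFT-based Fresnel transfer-function propagator; the wavelength, pixel pitch, and reconstruction distance are placeholders.

```python
import numpy as np

def complex_hologram(h1, h2, h3, t1=0.0, t2=2*np.pi/3, t3=4*np.pi/3):
    # Three-step phase-shifting superposition; this combination is one common
    # way to cancel the bias and twin-image terms of the recorded intensities.
    return (h1 * (np.exp(1j*t3) - np.exp(1j*t2)) +
            h2 * (np.exp(1j*t1) - np.exp(1j*t3)) +
            h3 * (np.exp(1j*t2) - np.exp(1j*t1)))

def fresnel_reconstruct(hologram, wavelength, z, dx):
    # Numerical Fresnel propagation of the complex hologram to a distance z,
    # using the FFT-based transfer-function (paraxial) approach.
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    H = np.exp(1j * 2*np.pi/wavelength * z) * \
        np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(hologram) * H)

# Usage (values illustrative only): refocus 24 mm behind the hologram plane.
# img = np.abs(fresnel_reconstruct(complex_hologram(h1, h2, h3),
#                                  wavelength=632.8e-9, z=24e-3, dx=8e-6))
```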
MFG124360Exploring Toolpath and Probe Path Definition and Visualization in VR/ARZhihao CuiAutodeskDescriptionVisualizing and defining 3D models on a 2D screen has always been a challenge for CAD and CAM users. Tool paths and probe paths add other levels of complexity to take into consideration, as the user cannot fully appreciate the problem on a 2D viewer. Imagine yourself trying to define a tool axis on a complex shape—it’s very hard to take every single aspect of the shape into account, except by guessing, calculating, and retrying repeatedly. With augmented reality (AR) and virtual reality (VR) technologies, the user gains the ability to inspect and define accurate 3D transformations (position and rotation) for machine tools in a much more natural way. We will demonstrate one potential workflow to address this during the class, which includes how to export relevant models from PowerMill software or PowerInspect projects; how to reconstruct, edit, and optimize models in PowerShape software and 3ds Max software; and eventually how to add simple model interactions and deploy them in AR/VR environments with game engines like Stingray or Unity.SpeakerZhihao is a Software Engineer in Advanced Consulting team within Autodesk. His focus for AR and VR technologies is in manufacturing industry and he wishes to continuously learn and contribute to it.Data PreparationThe first step of the journey to AR or VR is generating the content to be visualized. Toolpath and probe path need to be put into certain context to be meaningful, which could be models for the parts, tools, machines or even the entire factory.PowerMill ExportFigure 1 Typical PowerMill ProjectPartsExporting models of the part is relatively simple.1. Choose the part from the Explorer -> Models -> Right click on part name -> ExportModel…2. Follow the Export Model dialog to choose the name with DMT format1.1DGK file is also supported if additional CAD modification is needed later. See Convert PowerMill part mesh on page 7Figure 2 PowerMill – Export ModelToolTool in PowerMill consists three parts – Tip, Shank and Holder.To export the geometry of the tool, type in the macro commands shown in Figure 3, which would generate STL files2 contains the corresponding parts. Three lines of commands3 are used instead of exporting three in one file (See Figure 11), or one single mesh would be created instead of three which will make the coloring of the tool difficult.EDIT TOOL ; EXPORT_STL TIP "powermill_tool_tip.stl"EDIT TOOL ; EXPORT_STL SHANK "powermill_tool_shank.stl"EDIT TOOL ; EXPORT_STL HOLDER "powermill_tool_holder.stl"Figure 3 PowerMill Macro - Export ToolToolpathToolpath is the key part of the information generated by a CAM software. They are created based on the model of the part and various shapes of the tool for different stages (e.g. roughing, polishing, etc.). Toolpaths are assumed to be fully defined for visualization purposes in this class, and other classes might be useful around toolpath programming, listed on page 15. Since there doesn’t exist a workflow to directly stream data into AR/VR environment, a custom post-processor4 is used to extract minimal information needed to describe a toolpath, i.e. tool tip position, normal direction and feed rate (its format is described in Figure 17).The process is the same way as an NC program being generated for a real machine to operate. Firstly, create an NC program with the given post-processor shown in Figure 4. 
Then grab and drop the toolpath onto the NC program and write it out to a text file shown in Figure 5.2DDX file format can also be exported if geometry editing is needed later in PowerShape3 The macro is also available in addition al material PowerMill\ExportToolMesh.mac4 The file is in additional material PowerMill\simplepost_free.pmoptzFigure 4 PowerMill Create NC ProgramFigure 5 PowerMill Insert NC ProgramPowerInspect ExportFigure 6 Typical PowerInspect OMV ProjectCADCAD files can be found in the CAD tab of the left navigation panel. The model can be re-processed into a generic mesh format for visualization using PowerShape, which is discussed in Section Convert PowerMill part mesh on page 7.Figure 7 Find CAD file path in PowerInspectProbeDefault probe heads are installed at the following location:C:\Program Files\Autodesk\PowerInspect 2018\file\ProbeDatabaseCatalogueProbes shown in PowerInspect are defined in Catalogue.xml file and their corresponding mesh files are in probeheads folder. These files will be used to assemble the probe mentioned in Section Model PowerInspect probe on page 9.Probe toolAlthough probe tool is defined in PowerInspect, they cannot be exported as CAD geometries to be reused later. In Model PowerInspect probe section on page 9, steps to re-create the probe tool will be introduced in detail based on the stylus definition.Probe pathLike toolpath in PowerMill, probe path can be exported using post processor5 to a generic MSR file format, which contains information of nominal and actual probing points, measuring tolerance, etc.This can be achieved from Run tab -> NC Program, which is shown in Figure 8.Figure 8 Export Probe Path from PowerInspect5 The file is in additional materialPowerInspect\MSR_ResultsGenerator_1.022.pmoptzModelling using PowerShapeConvert PowerMill part meshDMT or DGK files can be converted to mesh in PowerShape to FBX format, which is a more widely adopted format.DMT file contains mesh definition, which can be exported again from PowerShape after color change and mesh decimation if needed (discussed in Section Exporting Mesh in PowerShape on page 10).Figure 9 PowerShape reduce meshDGK file exported from PowerMill / PowerInspect is still parametric CAD model not mesh, which means further editing on the shape is made possible. Theoretically, the shape of the model won’t be changed since the toolpath is calculated based on the original version, but further trimming operations could be carried here to keep minimal model to be rendered on the final device. For example, not all twelve blades of the impeller may be needed to visualize the toolpath defined on one single surface. It’s feasible to remove ten out of the twelve blades and still can verify what’s going on with the toolpath defined. After editing the model, PowerShape can convert the remaining to mesh and export to FBX format as shown below.Figure 10 Export FBX from PowerShapeModel PowerMill toolImport three parts of the tool’s STL files into PowerShape, and change the color of individual meshes to match PowerMill’s color scheme for easier recognition.Figure 11 PowerShape Model vs PowerMill assembly viewBefore exporting, move the assembled tool such that the origin is at the tool tip and oriented z-axis upwards, which saves unnecessary positional changes during AR/VR setup. Then follow Figure 10 to export FBX file from PowerShape to be used in later stages.Model PowerInspect probeTake the example Probe OMP400. 
OMP400.mtd file6 contains where the mesh of individual components of the probe head are located and their RGB color. For most of the probe heads, DMT mesh files will be located in its subfolder. They can be dragged and dropped into PowerShape in one go to form the correct shape, but all in the same color (left in Figure 14). To achieve similar looking in PowerInspect, it’s better to follow the definition fi le, and import each individual model and color it according to the rgb value one by one (right in Figure 14).<!-- Head --><machine_part NAME="head"><model_list><dmt_file><!-- Comment !--><path FILE="probeheads/OMP400/body.dmt"/><rgb R="192"G="192"B="192"/></dmt_file>Figure 12 Example probe definition MTD fileFigure 13 PowerShape apply custom colorFigure 14 Before and after coloring probe headFor the actual probe stylus, it’s been defined in ProbePartCatalogue.xml file. For theTP20x20x2 probe used in the example, TP20 probe body, TP20_STD module and M2_20x2_SS stylus are used. Construct them one by one in the order of probe body, module and stylus, and each of them contains the definition like the below, which is the TP20 probe body.6C:\Program Files\Autodesk\PowerInspect 2018\file\ProbeDatabaseCatalogue<ProbeBody name="TP20"from_mounting="m8"price="15.25"docking_height="0"to_mounting="AutoMagnetic"length="17.5"><Manufacturer>Renishaw</Manufacturer><Geometry><Cylinder height="14.5"diameter="13.2"offset="0"reference_length="14.5" material="Aluminium"color="#C8C8C8"/><Cylinder height="3.0"diameter="13.2"offset="0"reference_length="3.0" material="Stainless"color="#FAFAFA"/></Geometry></ProbeBody>Figure 15 Example TP20 probe body definitionAlmost all geometries needed are cylinder, cone and sphere to model a probing tool. Start with the first item in the Geometry section, and use the parameters shown in the definition to model each of the geometries with solid in PowerShape and then convert to mesh. To make the result look as close as it shows in PowerInspect, color parameter can also be utilized (Google “color #xxx” to convert the hex color).Figure 16 Model PowerInspect ToolU nlike PowerMill tool, PowerInspect probe’s model origin should be set to the probe center instead of tip, which is defined in the MSR file. But the orientation should still be tuned to be z-axis facing upwards.DiscussionsExporting Mesh in PowerShapeIn PowerShape, there are different ways that a mesh can be generated and exported. Take the impellor used in PowerMill project as an example, the end mesh polycount is 786,528 if it’s been converted from surfaces to solid and then mesh with a tolerance set to 0.01. However, if the model was converted straight from surface to mesh, the polycount is 554,630, where the 30% reduce makes a big impact on the performance of the final AR/VR visualization.Modifying the tolerance could be another choice. For visualization purposes, the visual quality will be the most impactable factor of choosing the tolerance value. If choosing the value is set too high, it may introduce undesired effect that the simulated tool is clipped into the model in certain position. However, setting the tolerance too small will quickly result in a ridiculous big mesh, which will dramatically slow down the end visualization.Choosing the balance of the tolerance here mainly depends on what kind of end devices will the visualization be running on. If it will be a well-equipped desktop PC running VR, going towards a large mesh won’t ne cessarily be a problem. 
On the other hand, if a mobile phone is chosen for AR, a low-polycount mesh will be a better solution, or the mesh can be omitted entirely and used only as a placeholder, as discussed in Section On-machine simulation on page 12.
Reading data
The same set of model and path data can be used in multiple ways on different devices. The easiest way to achieve this is through game engines like Stingray or Unity 3D, which have built-in support for rendering in VR environments like HTC Vive and mobile VR, and AR environments like HoloLens and mobile AR.
Most of the setup in the game engine will be the same across platforms, such as the models and paths to be displayed. A small proportion will need to be implemented differently for each platform because of the different user interactions available. For example, on HoloLens AR the user will mainly control the application with voice and gesture commands, while on mobile phones it makes more sense to offer on-screen controls.
For part and tool models, FBX files can be imported directly into the game engines without problems. The unit of the model can be an issue here: exports from PowerShape are usually in millimetres, while units in game engines are normally metres. The resulting scale error of a factor of one thousand may leave the user seeing nothing when running the application.
For toolpath data, three sets of information are exported from PowerMill with the given post-processor: tool tip position, tool normal vector, and feed rate. They can be read line by line; the positions are used to create the toolpath lines, and together with the normal vectors and feed rates an animation of the tool head can be created (a short parsing sketch is given after the MSR file format appendix).

Position(x,y,z) Normal(i,j,k) Feed rate
33.152,177.726,52.0,0.713,-0.208,0.67,3000.0
Figure 17 Example toolpath output from PowerMill

For probe path data, a similar concept applies with one additional piece of information7 – the actual measured point – which means that not only can the nominal probe path be simulated ahead of time, but the actual measured result can also be visualized with the same setup.
7 See page 14 for the MSR file specification.

START
G330 N0 A0.0 B0.0 C0.0 X0.0 Y0.0 Z0.0 I0 R0
G800 N1 X0 Y0 Z25.0 I0 J0 K1 O0 U0.1 L-0.1
G801 N1 X0.727 Y0.209 Z27.489 R2.5
END
Figure 18 Example probe path output from PowerInspect

Use cases
On-machine simulation
When running a new NC program on a machine tool, it is common to see the machine operator turning down the feed rate and carefully watching through the glass to see what is happening inside the enclosure. After several levels of collision checking in the CAM software and the machine-code simulator, why do they still not have enough confidence to run the program?
Figure 19 Toolpath simulation with AR by Hans Kellner @ Autodesk
One potential solution to this problem is using AR at the machine. Because fixturing is still largely a manual job constrained by the operator's experience, the many possible fixture variations make the setup very hard to verify ahead of the machining process. Before hitting the start button for the NC program, the operator could start the AR simulation on the machine bed, with fixtures and part held in place. Checking for collisions between the virtual tool and the real part and fixtures then becomes an intuitive task for the operator.
Furthermore, a virtual simulation of the tool head running three seconds ahead can be shown during the machining process to significantly increase confidence, allowing the machine to keep running at full speed and ultimately increasing process efficiency.
Toolpath programming assistance
Programming a toolpath in CAM software can be a long, iterative trial-and-error process, since the user has to imagine how the tool will move given the input parameters. Especially with multi-axis capability, the user is often asked to provide not only basic parameters like step-over values but also a 3D coordinate or direction for the calculation to start from. Determining these 3D values on a screen becomes increasingly difficult when other surfaces surround the area to be machined. Although there are various ways to let the user navigate to those positions through hiding and sectioning, such workarounds are rarely ideal and are time-consuming. As shown in Figure 20, there is no easy and intuitive way to analyze the clearance around the tool within a tight space, which is only one of several locations that must be considered.
Figure 20 Different angles of PowerMill simulation for a 5-axis toolpath in a tight space
Taking the example toolpath in PowerMill, the user has to recalculate the toolpath after each modification of the tool-axis point and then verify whether the result has actually improved, balancing sufficient clearance8 against a better machining result. This workflow can be changed entirely if the user can intuitively determine the position in VR: the tool can be attached to the surface and freely moved by hand in 3D, which helps determine the position in one go.
Post probing verification
Probing commonly follows a milling process on a machine tool, making sure the manufactured result is within the desired tolerance. Generating an examination report in PowerInspect is one of several ways to control quality. What often happens, however, is that when an out-of-tolerance position is detected, the quality engineer goes back and forth between the PC screen and the actual part to decide on the best treatment, depending on the physical appearance.
8 Distance between the tool and the part
Figure 21 Overlay probing result on to a physical part
Overlaying the probing result with AR could dramatically increase efficiency by avoiding this back and forth. The same color-coded probed points can be positioned exactly at the place of occurrence, so that the surrounding area can be assessed in place. The same technique could also be applied to scanning results, as shown in Figure 22.
Figure 22 Overlaying scanning result on HoloLens by Thomas Gale @ Autodesk
Appendix
Reference Autodesk University classes
PowerMill
MFG12196-L: PowerMILL Hands on - Multi Axis Machining by GORDON MAXWELL
MP21049: How to Achieve Brilliant Surface Finishes for CNC Machining by JEFF JAJE
MSR File format9
G330  Orientation of the probe
G800  Nominal values
G801  Measured values
N     Item number
A     Rotation about the X axis
B     Rotation about the Y axis
C     Rotation about the Z axis
XYZ   Translations along the X, Y and Z axes (these are always zero)
U     Upper tolerance
L     Lower tolerance
O     Offset
I and R (in G330)  just reader values
R (in G801)        probe radius
9 Credit to Stefano Damiano @ Autodesk
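To make the "Reading data" step concrete, here is a minimal, non-authoritative Python sketch for parsing the two text formats shown in Figure 17 and Figure 18 / the MSR appendix. The file names are placeholders, only the fields shown in the examples are handled, and a real importer (for example a C# equivalent inside Unity) would add error handling and the millimetre-to-metre unit conversion discussed above.

```python
import re

def parse_toolpath(path):
    """Read the post-processed toolpath text: x,y,z,i,j,k,feed per line (Figure 17)."""
    points = []
    with open(path) as f:
        for line in f:
            parts = line.strip().split(',')
            if len(parts) != 7:
                continue  # skip the header or blank lines
            x, y, z, i, j, k, feed = map(float, parts)
            points.append({'pos': (x, y, z), 'axis': (i, j, k), 'feed': feed})
    return points

_WORD = re.compile(r'([A-Z])(-?\d+(?:\.\d+)?)')

def parse_msr(path):
    """Group MSR records by item number N: G800 = nominal point, G801 = measured point."""
    items = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not (line.startswith('G800') or line.startswith('G801')):
                continue
            code = line[:4]
            fields = {k: float(v) for k, v in _WORD.findall(line[4:])}
            n = int(fields.pop('N'))
            key = 'nominal' if code == 'G800' else 'measured'
            items.setdefault(n, {})[key] = fields
    return items

# Example: deviation along Z for each probed item (hypothetical file name).
# for n, rec in parse_msr('probe_results.msr').items():
#     if 'nominal' in rec and 'measured' in rec:
#         print(n, rec['measured']['Z'] - rec['nominal']['Z'])
```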
General
EN ISO 10360-2. CMM with moving bridge. Measuring systems along with air bearing guiding the three axes.
Measuring volume (X/Y/Z): machine version 454: 460 x 510 x 420 mm; version 474: 460 x 710 x 420 mm
Software: TESA-REFLEX MH3D
Resolution: 0,001 mm or 0.00001 in
Manual or motorised (RC version only)
Light alloy machine base; measuring table in granite.
Opto-electronic measuring systems based on incremental glass scales, 0,039 µm (system)
Displacement speed: manual version 760 mm/sec.; RC: 1 µm/step, 10 or 20 mm/sec.
Control panel: 154 x 116 mm display field with illuminated background; 7-decade display (digits) plus sign for the measured values; icon-based graphic user's interface; RS232
MPE: (3 + 4 L/1000) µm (MPE*, L in mm)

Main Features
• Motorised displacement in the three axes X/Y/Z at a selectable speed of 1 µm/step, 10 mm/sec. or 20 mm/sec.
• Manual displacement in the three coordinate axes at the speed of 760 mm/sec.
• Fine adjust device.
• TESASTAR-i probe head, indexable.
• TESA Reflex software learned in a few hours.
• Joystick with integrated ZMouse.

Features
Overall dimensions (W x D x H): machine version 454: 600 x 750 x 430 mm; version 474: 600 x 990 x 430 mm
Maximum weight: versions 454 / versions 474
Features
Mass (W x D x H): machine version 454, manual: 970 x 930 x 1620 mm; machine version 474, manual: 970 x 1130 x 1730 mm
Net weight: versions 454/474 = 210/315 kg (granite tables included). Gross weight: 300/445 kg.
Air pressure: 3,9 bars (60 to 120 psi). Air absorption: 60 Nl/min.
Power supply: 115 to 230 Vac ± 10%, 50 to 60 Hz
Temperature: 20° ± 1°C (13°C to 35°C)
Shipping box (W x D x H): version 454, manual: 1100 x 1150 x 2200 mm; versions 474, manual + RC: 1580 x 1400 x 2200 mm
Inspection report

• ZMouse for significant time savings.
• Manual displacement in the 3 coordinate axes.
• Automatic reproduction of the manual machine displacement.
• Displacement speed in automatic mode: 200 mm/sec.
• TESA-REFLEX Recorder application software.
• TESASTAR-i indexable probe head.

General
EN ISO 10360-2. CMM with moving bridge. Measuring systems along with air bearing guiding the three axes.
Measuring volume (X/Y/Z): 440 x 490 x 390 mm
Software: TESA-REFLEX Recorder
Resolution: 0,001 mm or 0.00001 in
Manual probing movements. Manual or motorised execution of a part programme.
Light alloy machine base; granite measuring table.
Opto-electronic measuring systems based on incremental glass scales, 0,039 µm (system)
Displacement speed: manual version 760 mm/sec.; motorised version: 200 mm/sec.
Control panel: 154 x 116 mm display field with illuminated background; 7-decade display (digits) plus sign for the measured values; icon-based graphical user interface; RS232
Manual mode: MPE (3 + 4 L/1000) µm, MPEP = 3 µm; motorised mode

Features
Overall dimensions (W x D x H): 600 x 750 x 430 mm
Maximum weight: 227 kg
CMM-oriented features
Mass (W x D x H): 1030 x 1100 x 1680 mm
Net weight: 225 kg (granite table included). Table alone: 99 kg.
Air pressure: 3,9 bars (60 to 120 psi). Air absorption: 60 Nl/min.
Power supply: 115 to 230 Vac ± 10%, 50 to 60 Hz; 0,3 to 0,7 A
Temperature: 20°C ± 1°C (13°C to 35°C)
Shipping box (W x D x H): 1350 x 1350 x 2200 mm
Inspection report EN ISO 10360-2
TESASTAR probe head
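Both configurations quote the same length-dependent accuracy limit, MPE = (3 + 4·L/1000) µm with the measured length L in mm. As a quick worked example (my arithmetic, not a value quoted in the datasheet), the snippet below evaluates it:

```python
def mpe_um(length_mm):
    # Maximum permissible error from the datasheet formula: (3 + 4*L/1000) um, L in mm.
    return 3 + 4 * length_mm / 1000

# e.g. a 500 mm measured length gives 3 + 2 = 5 um
print(mpe_um(500.0))
```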
10進数 decimal numeral 十进制数10進表記 decimal notation 十进制数法10進法 decimal 十进制16bit/チャンネル 16位/通道16進数 hexadecimal numeral 十六进制数16進表記 hexadecimal notation 十六进制数法16進法 hexadecimal 十六进制180°180度1ギガヘルツ 1G赫兹1つずつ取り出し一个个地取出1バイト single byte 单字节1バイトコード体系 ***CS(Single Byte Code System) 单字码系统1週間用 1周用2000年問題计算机两千年问题2階調化阈值2次キャッシュ secondary cache 辅助高速缓冲存储器2次記憶 secondary storage 辅助存储器2進化10進数 BCD (Binary Coded Decimal) 2进制编码的10进制表示法2進数 binary numeral 二进制数值2進数字 binary digit 二进制数位2進表記 binary notation 二进制数法2進法 binary 二进制2値論理 two valued logic 2值伦理3Dトランスフォーム 3D trance form 3D变换3D画像 three-dimension image 三维图像3D枠 3D框8bit/チャンネル 8位/通道90°(時計回り) 90度(顺时针)90°(反時計回り) 90度(逆时针)abstract クラス AbstractClass 抽象类ADCチェックプログラム ADCcheckprogram 模拟信息变数字信息的转换器检验程序ADSG Application Development Standard Guide 应用程序开发标准向导ADSL Asymmetrical Digital Subscriber Loop 非对称数字用户环线AD値 AD值Agilent 安捷伦(一种模拟器)AI artificial intelligence 人工智能Altキー Alt键API Application Programming Interface 应用编程接口ARP Address Resolution Protocol 地址解析协议ASCII American Standard Code for Information Interchange 美国信息交换标准码ASIC Application Specific Integrated Circuit 特定用途集成电路ASP Application Specific Integrated Circuit 程序开发语言AT&T AT&T 美国电话电报公司ATM Automatic Teller Machine 自动取款机B2B Business to Business 商家対商家B2C Business to Custom 商家对客户BASIC Beginner's All-purpose Symbolic Instruction Code 初学者通用符号指令码BBS Bulletin Board System 电子布告栏系统BCC 密件抄送Big5コード Big5 code 大五码BIOS Basic Input Output System 标准输入输出系统BMP Bitmap 位图BOOTP Bootstrap Protocol 网络引导协议Borland 美国著名软件开发公司bps bits per second 波特Bフレッツ Bflets Bflets宽带技术CAD Computer Aided Design 计算机辅助设计CAI Computer Assisted Instruction 计算机辅助教学catch ブロック catch block catch块CC 抄送CD-R compact disc-record 可录光盘CD-R/RWドライブ compact disc-record/read-write 刻录机CD-ROM compact disc read-only memory 光盘CD-ROMドライブCD-ROMDrive 光驱CD-RW compact disc read-write 可擦写光盘CD-RWドライブ CD-RW drive CD-RW驱动器CD-RWレコーダー CD-RW recorder CD-RW刻录器CD-Rドライブ CD-R drive CD-R驱动器CD-Rレコーダー CD-R recorder CD-R刻录器CDオーディオトラックの再生播放CD乐曲CDからコピー从CD复制CDドライブ compact disc driver 光驱CGI Common Gateway Interface 公共网关接口CHTML 简化超文本标记语言CIF Common Intermediate Format 公共媒介格式CMOS complementary metal-oxide-semiconductor 互补金属氧化物半导体CMS Conversational Monitor System 会话式监督系统CMM 能力成熟度模型CMMI 能力成熟度模型集成CMYKカラー CMYK color CMYK颜色COMポート COM port 串行通讯端口Cooksの削除删除CooksCPU central processing unit 中央处理器CRT cathode-ray tube 显像管CSS cascading style sheet 层叠式样式表CSVファイル CSV file CSV文件Ctrlキー Ctrl键Cue cue 快速提示Cue cue 热点Cコンパイラ C语言编译器DBA Data Base Administrator 数据库管理员DB更新情報 DB更新信息DD data dictionary 数据字典DDB distributed database 分布式数据库DDK Device Drivers Kit 设备驱动程序开发包DDL Data Definition Language 数据定义语言DDNS Dynamic DNS 动态DNSDHCP Dynamic Host Configuration Protocol 动态主构造协议DLL Dynamic Link Library 动态链接库DMA Direct Memory Access 直接存储器访问DML data manipulation language 数据操作语言DNS Domain Name Server 域名服务器DOM Document Object Model 文档对象模型DOS Disk Operating System 磁盘操作系统DOSモードDOS mode DOS模式DSL digital subscriber line 数字用户线路DSN data source name 数据源名DSP Digital Signal Processing 数字信号处理DVD Digital Video Disk 数字化视频光盘DVD/CD-ROMドライブ DVD/CD-ROM drive DVD/CD-ROM驱动器DWDM Dense Wavelength Division Multiplexing 密集波分复用技术e-book electronic-book 电子图书(E)メール mail 邮件ebXML electronic business XML 电子商务XMLe-mail e-mail 伊妹儿e-mailアドレス e-mail address 电子邮件地址EMS Express Mail Service 邮政特快专递EMS extended memory standard 扩展内存规范E-R図EntityRelationship数据库表关联图Exchangeフォルダ Exchange folder Exchange文件夹EXE 可执行程序扩展名Eコマース electronic-commerce 电子商务Eメール E-mail 电子邮件e-ラーニング e-running 网上教程FAQ Frequently Asked Questions 常见问题Faxの宛先传真收件人FEM計算 FEM计算FEP front end processor 前端处理器File名文件名final 変数 final variable 
final变量FTP File Transfer Protocol 文件传输协议FW fire wall 防火墙GBコード GB code 国标码GB拡張(GBK)コード GB extension code 国标扩展码GIF Graphic Interexchange Fromate 可交换图像文件格式GPS Global Positioning System 全球定位系统GUI Graphical User Interface 图形用户界面HDML Handheld Device Markup Language 手持设备标记语言Hi-loプログラマー Hi-lo programmer 河洛烧写器HSM High-Speed Memory 高速存储器HTML HypeText Markup Language 超文本链接标示语言HTMLエディタ HTML editor HTML编辑器HTTP Hypertext Transfer Protocol WWW服务所使用的协议(超文本传输协议)HUB 集线器I love you ウィルスI love you virus 爱虫病毒I/F InterFace 界面IBM International Business Machines 美国国际商用机器公司ID identifier 身分IDE 集成电路设备IE Internet Explorer 微软WEB浏览器IIS Internet Information Services 互联网信息服务inaction ブロック inaction block 静块INI 初始化设置文件扩展名IP internet protocol 网际协议IPアドレス IP address IP地址IRDA 红外线ISDN Integrated Services Digital Network 综合服务数字网ISP Internet Service Provider 互联网服务提供者IT Information Technology 信息技术Iモード I mode I模式JAN Digital Equipment Corporation 美国数字设备公司JAVA JAVA语言JDK Java Development Kit JAVA开发工具包JPEG Joint Photographic Expert Group 联合图像专家组KPA 关键过程域Labカラー Lab color Lab颜色LAN Local Area Network 局域网LANの設定局域网设置LDAP Lightweight Directory Access Protocol 轻量级目录访问协议LDB logic database 逻辑数据库LIPS 页面描述语言LPR Line Printer 打印协议Lucent Lucent 美国郎讯公司mainメソッド主方法MAN Metropolitan Area Network 城域网Mbps Mega bits per second 兆比特每秒MDEファイルの作成生成MDE文件MDI Multiple Document Interface 多文档界面程序MFC Microsoft Foundation Class ++的类库Microsoft Webページ Microsoft Web page Microsoft网站MIDI Musical Intrument Data Interface 乐器数字接口MIPS Million Instructions Per Second 每秒百万条指令MMC Microsoft Managemet Console 微软管理控制台MO Magnet Optical disk 磁光盘MOドライブ MO drive 磁光盘驱动器MP3プレイヤー Mpeg Audio Layer3 player MP3播放器MPEG Motion Picture Expert Group 运动图像专家组MSDE MicroSoft Database Engine 微软数据库引擎MSP Moduler System Program 摸块化系统程序MSS Mass Storage System 海量存储系统MTA Mail Transfer Agent 邮件传输代理NEC Nippon Electric Company 日本电气公司Netscapeユーザーのために Netscape用户NIC network interface card 网络接口卡Nimda 尼达NIS Network Information Services 网络信息服务NNTP Network News Transfer Protocol 网络新闻传输协议NS Netscape 网景WEB浏览器NTSCカラー NTSC color NTSC颜色NTT Nippon Telegram and Telephone Corp 日本电信电话公司NTTコム NTT Com NTT ComNTT子局 NTT子局NTT接続局NTT连接局OA Office Automation 办公自动化OA化办公自动化OCR Optical Character Recognition 光学字符识别ODBC Open DataBase Connectivity 开放式数据库连接OFFする设为OFFOLE Object Linking and Embedding 对象链接和嵌入OLE DB OLE DataBase OLE数据库OLTP OnLine Transaction Processing 联机事务处理OOP Object Oriented Programming 面向对象程序设计Oracle Oracle数据库管理系统OS OS (Operating System) 操作系统PCI Peripheral Component Interconnect 周边元件扩展接口PCX 图形文件PCコマンド PC command PC命令PCとの同PC的PC側 PC端PDA Personal Digital Assistant 个人数字助理PDF Portable Document Format adobe的可移植文档格式Perl 后端服务器程序语言PG名 PG名PHP 新型CGI网络程序开发语言PL/2 programming language 程序设计语言PLC 通用次序控制器POP Post Office Protocol 邮件处理协议POPメール POP mail POP邮件POSIX Portable Operating System Interface 便携式操作系统接口PostgreSQL 一种数据库系统PPP Peer-Peer Protocol 点对点协议QCIF Quarter CIF 四分之一CIFQMS Quality management system 质量管理系统QQVGA Quarter Quarter Vga 十六分之一VGAQVGA Quarter Vga 四分之一VGARAM Random Access Memory 随机存储器RDBMS Relation DataBase Manage System 关系型数据库管理系统RDF Report Document Format 报表文档格式RGB red,green,blue 三原色(红绿蓝)RGBカラー RGB color RGB颜色ROM read-only memory 只读存储器ROM焼き烧入ROMRPC Remote Procedure Call 远程过程调用RPM Redhat Package Manager 红帽软件包管理器RTC real time clock 实时时钟RTCP Real-time Transport Control Protocol 实时传输控制协议RTP Real-time Transport Protocol 实时传输协议SA2100 SA2100SCSI通信 SCSI通信SDI Single Document Interface 单文档界面程序SDK Software Development Kit 软件开发工具包SE system engineer 系统工程师SEPG software engineer process group 过程改善推进组SMTP Simple Mail Transfer Protocol 
简单邮件传输协议SNMP Simple Network Management Protocol 简单网络管理协议SNTP Simple Network Time Protocol 简单网络时间协议SOAP simple object access protocol 简单对象访问协议Spaceキー Space键SPI 软件过程改进SQA 软件质量保证SQL Structured Query Language 结构化查询语言SQL文 SQL statement SQL语句SRS WOWエフェクト SRS WOW effect SRS WOW效果SSL Secure Sockets Layer 加密套接字协议层SSLの状態のクリア清除SSL状态ST System Test 系统测试STS System Test Spec 系统测试式样书SVGA Super VGA 超级VGATCP/IP Transmission Control Protocol/Internet Protocol 传输控制协议/网际协议TDG test data generator 测试数据生成器TIF Tagged Image File 标签图像文件格式UDDI universal description, discovery and integration 通用发现、描述和集成UDP User Datagram Protocol 用户数据报协议UML unification modeling language 统一建模语言UPS Uninterrupt Power Supply 不间断电源URL Uniform Resource Locator 统一资源定位器URLを追加添加URLU*** Universal Serial Bus Intel公司开发的通用串行总线架构U***コントローラ U*** controller 通用串行总线控制器U***ポート Universal Serial Bus U***接口VCD Video Compact Disc 视频高密光盘VCDプレーヤー VCD player 影机VGA Video Graphic Array 视频图形阵列VIP Very Important Person 贵宾VIPカード VIP card 贵宾卡VPN Virtual Private Network 虚拟个人网络VSS Visual Source Safe 开发环境VxWorks VxWorks 一种操作系统W4C World Wide Web Consortium 3W协会WAN Wide Area Network 广域网WAP wireless application protocol 无线应用协议WDM技術 wavelength division multiplexing 波分复用技术Webコンポーネント Web component Web组件Webサーバー Web Server 网络服务器Webサービス記述言語 web Services Description Language web服务描述语言Webサイト Web site 网站Webストレージ Web Storage 网络存储器Webツール Web tool Web工具箱Webディスカッション Web discussion Web讨论Webの検索搜索Webwebページ web page 网页Webページとして保存另存为Web页Webレイアウト Web layout Web版面Web上のツール网上工具Web設定のリセット重新设置WebWindows XPツアー Windows XP tour 漫游Windows XP Windows2000 Windows2000 Windows2000WLAN Wireless LAN 无线局域网WSDL web Services Description Language web服务描述语言WWW WWW (World Wide Web) 环球网WWW World Wide web 万维网XML extensible markup language 扩展标示语言Yahoo 雅虎Yコマンド Y command Y命令アーカイバ archiver 档案库存储器アーカイバー archiver 档案库存储器アーカイビング archiving 归档アーカイブ archive 档案文件/存档アーキ Archi Archiアーキテクチャー architecture 体系结构アーギュメント argument 参数アークコサイン arc cosine 反余弦アークタンジェント arc tangent 反正切アーチ形 arch 弧アーティスティック艺术效果アービトレーションフェーズ arbitration phase 裁决状态アイク IKE 光电摄像管アイコン icon 图标,图符アイコンの自動整列自动排列アイコンの整列排列图标あいさつ文问候语アイデア idea 主意アイテム item 项,项目アイドル時間 idle time 空闲时间アイリス iris 光圈アウェイ away 客场アウトソーシング out-sorcing 外加工アウトプット output 输出アウトライン outline 轮廓,大纲アウトラインからスライド幻灯片(从大纲)アウトラインのクリア清除分级显示アウトラインの自動作成自动建立分级显示アウトルック outllook 邮件收发软件アカウント帐户名アカウントロック accountlock 帐户锁アカウント名 account name 帐户名アカデミック academic 教学工具アキュムレータaccumulator 累加器アクション action 行为,动作アクセサリ accessory Windows中的附件アクセシビリティ accessibility 可接近性アクセス access 存取,访问アクセスアドレス access address 存取地址アクセスカウンター access counter 访问计数器アクセスセキュリティ access seculity 访问安全アクセスナンバー access number 访问号アクセスビット access bit 访问位アクセスポイント access point 接入点アクセスメソッド access method 访问方法アクセスログ access log 访问日志アクセス許可 access permission 存取许可アクセス権 access authority,access right 存取权限アクセス権限 access permission 访问权限アクセス識別子 access identifier 访问修饰符アクセス制限ステータス access control status 访问限制状态アクセス制御修飾子访问控制修饰符アクセス頻度 access frequency 访问频率アクセス優先度 access priority 存取优先级アクセッサ accessor 存取器(MSS中使用)アクセプタ Acceptor 受主アクセプター acceptor 接收器アクセル accerelator 加速装置アクセンチュア accenture Accenture公司アクター actor 行动器アクチュエータ actuator 传动器アクティビティ図 activity diagram 活动图アクティブ active 活动アクティブデスクトップ active desktop 活动桌面アクティブウィンドウ active window 活动窗口アクティブエクス ActiveX 网络化多媒体对象技术アクティブにする active 激活アクティブファイル active file 活动文件アクティブフィルター active filter 有源滤波器アクティブユーザー active user 现时用户アクティブレポート active report 活动报表アグレッシブ aggressive 攻击性的アゴラコール agora call 公共调用アサーション assertion 断言アサイン assign 分配アジェンダ agenda 议程アシスタント assistant 助手,助理アシスタントを表示する显示帮助アスキーコード ASCII code ASCII码アスタリスク asterisk 星号アスペクト比 aspect ratio 纵横比アセス assess 评估版アセンブラ 
assembler 装配工アセンブラー assembler 汇编程序アセンブラ言語 assembler 汇编语言アセンブリ assembly 程序集アセンブリー言語 assembly language 汇编语言アタッチ attach 附加,连接アダプタ adapter 适配器アダプタシミュレータ adapter simulator 适配模拟器アダプタビリティ adaptability 适应性アダプテック Adaptec Adaptec公司アッセンブラー assembler 汇编程序,汇编语言アッセンブリー assembly 程序集アップ(ロード) up(lord) 上传アップグレード upgrade 升级アップグレード版 upgrade version 升级版アップサイジングウィザード upsizing wizard 升迁向导アップダウンキー up down key 上下键アップデート/更新 update 更新アップリンク uplink 上行路线,向上传输アップル apple 美国苹果公司アップローディング up-loading 加载アップロード upload 上传アドイン add-in 内插附件アドイン add-in 加载宏アドインマネージャ add-in manager 加载项管理器アドバンスト Advanced 高级的アドバンストパワーマネージメント APM 高级电源管理アドバンテージ advantage 优点,有利,长处アドビ adobe Adobe公司アドホックマネジメント ad hoc manegement 特别管理アドミニストレーター administrator 管理员アドミン admin 管理アトリビュート Attribute 属性アドレス/番地 address 地址アドレスグループ address group 地址组アドレスバー address bar 地址栏アドレスバス address bus 地址总线アドレスフィールド address field 地址段アドレスマルチプレクサー address multiplexer 地址复用混合器アドレスライン address line 地址线アドレス演算子 address 地址运算符アドレス元 address 源地址アドレス先 address 目的地址アドレス帳 address book 地址簿アドレス領域 address area 地址区域アドレッシング addressing 寻址方法、编址技术アナウンス announcement 宣告アナライザ analyser 分析器アナリストanalyst 分析者アナログ analog 模拟量,模拟アナログディジタル変換器 analog-digital converter 模拟-数字转换器アナログデータ analog data 模拟数据アナログベースクラス analog base class 虚基类アナログ回路 analog circuit 模拟电路アナログ出力 analog output 模拟输出アナログ信号 analog signal 模拟信号アナログ制御 analog control 模拟控制アナログ値 analog value 模拟值アナログ入力 analog input 模拟输入アナログ入力設定模拟输入设定期アニメ anime 动画アニメーション animation 动画アニメーションコントロール animation control 动画控件アニメーションの設定自定义动画アニメリソース anime resource 动画资源アニュアル annual 年鉴アノード anode 阳极,正极アノテーション annotation 注释アパッチ Apache 一种程序开发语言アプウィザード Appwizard 应用程序向导工具アブストラクト abstract 摘要アフターサービス after-service 售后服务アフタファイブ after five 5点之后アプライアンス Appliance 器具アプリ application 应用程序アプリケーション application 应用(程序)アプリケーションシリーズ application series 应用程序系列アプリケーションソフトウェア application software 应用软件アプリケーションの自動修復检测并修复アプリケーションヒープ application heap 应用程序堆アプリケーションプログラム application program 应用程序アプリケーション配置应用程序配置アプリケーション配置ドキュメント程序配置文档アプリケーション名应用程序名アプレット applet 小程序アプレットビューアー applet viewer 小程序阅读器アプレット状態遷移 applet状态迁移アプローチ approach 探讨,接近アベイラビリティ availability 可用性アペンド append 附加アボート abort 异常终止,异常中断アボリジン aborigine 土著人アミューズメント amusement 消遣アラート alert 警告アラーム alarm 报警アラームメッセージ alarm message 警告信息アラームリスト alarm list 警报列表アライアンス alliance 联合アライン aline 排列あらかじめ预先アラジンドロップスタッフ aladdin dropstuff 阿拉延开发工具あらまし概要アリアス alias 别名アルガス Algas Algasアルカリ alkali 碱アルゴリズム algorithm 运算法则アルゴリズム algorithm 算法アルバム album 影集アルファベット alphabet 字母表アルファベット alphabet 字母アレイ array 数组。
3M™ Advanced Light Control Film AutomotiveALCF-A2+ALCF-A3+ALCF-A5+39ALCF-A5+39HProduct Description3M™ Advanced Light Control Film (ALCF-A2+,ALCF-A3+ and A5+39(H)) constrains light exitingfrom liquid crystal displays and interior light sources.This effect minimizes reflections on the windscreen orside glass.Construction/Performance3M Display Materials & Systems DivisionTechnical Data 2021Automotive Environmental Testing*for 3M ALCF-A5+39 and 3M ALCF-A5+39HAutomotive Display – Recommended Solutions3M™ Dual Brightness Enhancement Film (DBEF)3M™ Enhanced Specular Reflector (ESR)*example data shown in plot from ALCF-A3+3M™ Brightness Enhancement Film (BEF)3M™ Advanced Light Control Film (ALCF)3M™ Advanced Light Control Film ComparisonFull Instrument Cluster or Center Stack Sample Performance (Single BEF)559.2 > 745.1 373.3 187.3 < 1.4559.3> 745.1373.4187.6< 1.8602.7> 803.1402.4202< 1.6Note: Samples measured on a 12.3" IPS display with White Reflector/Light Guide/Diffuser/BEF3-t-155n/ALCF-AX+ film stack Axial Gain with reference diffuser only.Partial Instrument Cluster Sample PerformanceALCF-A2+ALCF-A3+ALCF-A5+39H661 > 881 441 221 < 1631.9> 842.1421.6211.4< 1.2711.3> 948.1474.5237.7< 0.9Note: Samples measured on a 12.3" IPS display with White Reflector/Light Guide/Diffuser/Crossed BEF3-t-155n/ALCF-AX+ film stack Axial Gain with reference diffuser only.3M Demo – Example Angular Luminance DataMeasurement100-80-60-40-20204060800-80-60-40-2020406080200300400500600700800900100200300400500600700800900A2+A3+A5+39A2+A3+A5+39Vertical Cross Section Horizontal Cross Section Optical Stack = Diffuser Sheet/3M™ BEF3-T-155n/Light Control Film Sample Optical Stack = Diffuser Sheet/3M™ BEF3-T-155n/Light Control Film SampleStorage and HandlingProduct must be stored flat, in its original 3M packaging, out of sunlight, and in a clean, dry (relative humidity between 30% and 60%) area that is maintained at a temperature between 10°C and 30°C. Avoid applying pressure to, or resting objects on, the product to prevent marking, denting or deforming. Hold product by the edges to prevent soiling of the view area and wear gloves to prevent fingerprints or nail marks. Do not slide the product.Limited Warranty3M warrants that each Product will conform to our Customer Quality Standard (the “3M Warranty”) for12 months after 3M’s Product shipment (“Warranty Period”). Any engineering or technical information, recommendations, installation instructions, jumbo delivery standards, certificates of analysis, and other information or materials relating to Products (“Other Product Information”) is provided for the Product buyer’s convenience and is not warranted. 3M will have no obligations under the 3M Warranty with respect to any Product that has been: (a) not stored, handled, or transported in accordance with this Warranty or Other Product Information; or (b) modified or damaged by anyone other than 3M.3M Display Materials & Systems Division3M Center, Building 235-1E-54 St. Paul, MN 55144-1000 U.S.A. Phone 1-800-3M HELPS Web /displayfilms 3M is a trademark of 3M Company.All other trademarks herein are the property of their respective owners.© 3M 2021. All rights reserved. dz29888Measurement (continued)。
A 3D RGB Axis-based Color-oriented Cryptography

Document version: 1.0
Document author: Kirti Chawla
Document signature: 07072005_1023:20914

Introduction

In this document, a formal approach to encrypting, decrypting, transmitting and receiving information using colors is explored. A piece of information consists of a set of symbols with a definite property imposed on the generating set. The symbols are usually encoded using the ASCII scheme.

A linear-to-3D transformation is presented. The change of axis from the traditional [x, y, z] to [r, g, b] is highlighted and its effects are studied. A point in this new axis is then represented as a unique color, and a vector or matrix is associated with it, making it amenable to standard vector or matrix operations. A formal notion of hybrid cryptography is introduced, as the algorithm lies on the boundary of symmetric and asymmetric cryptography. No discussion is complete without reference to the communication aspects of "secure" information in a channel, so a transmission scheme using light as the carrier is introduced and studied. Key exchange does not come under the scope of this document.

Information and Encoding Scheme

In this section, a basic notion of information and an encoding scheme is introduced. Perceivable information is created from a set of symbols having a pre-defined property imposed on all set members. The scope of information is limited to a set of symbols consisting of numbers, English characters (small and capital) and a few special symbols. These symbols together form a set called the ASCII set. There are 256 symbols in this set. The total number of bits required to represent the whole set is log2[256] bits, or 8 bits. Let us introduce a formal notion to represent this set as a list, where members can be accessed using indices. This can be seen as under:

    S = {x | ((x ∈ A1) ∨ (x ∈ A2) ∨ (x ∈ A3) ∨ (x ∈ A4)) ∧ (∆(x) ≤ L)}

Where, the symbols have the following meaning:
S = A set which has ASCII characters as members
x = A set member
A1 = A sub-set of whole numbers limited to the range [0, 9]
A2 = A set of small English characters
A3 = A set of capital English characters
A4 = A set of special characters
∆ = Number-of-bits operator
L = log2[256] or 8

The members of this set can be accessed using indices, as in an ordered list. This notion is introduced as under:

    Member(1)   = x(1)
    Member(2)   = x(2)
    ...
    Member(2^L) = x(n)

where [x(1) ... x(n)] represent symbols with indices in the range [1, 256]. Each member has a unique L-bit code in this list. It can be seen as a "linear" sequence of bits or as an (L-1)-degree polynomial. For example, a member with the L-bit code [10110100] can be seen as f(x) = x^7 + x^5 + x^4 + x^2. Operations like addition and multiplication are always modulo-2^L so as to wrap around in case of overflow; hence the multiplication of [x(170)] = [10101010] and [x(241)] = [11110001] is given as [x(10)] = [00001010]. This also gives the notion of a closure property over set members. Alternative base systems like hexadecimal or octal can be used for the sake of brevity, hence [x(170)] may be written as [AA] in hexadecimal or [252] in octal. The property can be extended over other operators as well, and the notion of modulo-2^L can also be extended over other bases.
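The encoding and the wrap-around arithmetic can be sketched in a few lines of Python; the helper names below (code, to_bits, mul_mod) are illustrative and not part of the original scheme, and serve only to reproduce the modulo-2^L behaviour described above.

    L = 8                      # bits per symbol, log2[256]
    MOD = 1 << L               # 2^L = 256

    def code(ch):
        """Return the L-bit ASCII code of a symbol as an integer."""
        return ord(ch) % MOD

    def to_bits(n):
        """Render an integer as an L-bit binary string."""
        return format(n, '0{}b'.format(L))

    def mul_mod(a, b):
        """Modulo-2^L multiplication: wraps around on overflow."""
        return (a * b) % MOD

    print(to_bits(code('A')))          # 01000001
    print(to_bits(mul_mod(170, 241)))  # 00001010, i.e. [x(10)]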
Linear-to-3D Transformation

In this section, the notion of the linear-to-3D transformation is introduced. The notion of a linear sequence of bits was introduced in the previous section. The transformation which maps a linear sequence of bits to 3D is given as under:

              G012 ; X-axis
    T[x(i)] = G345 ; Y-axis
              G678 ; Z-axis

Where, the symbols have the following meaning:
T = Linear-to-3D transformation
x(i) = A set member of set S with index {i}
G012 = B0B1B2 {Bits 0-2 of x(i), where bit-0 is added for symmetry purposes}
G345 = B3B4B5 {Bits 3-5 of x(i)}
G678 = B6B7B8 {Bits 6-8 of x(i)}

The linear sequence is divided into groups of 3 bits (one extra bit is prepended for symmetry purposes and is always 0) and mapped onto 3D as a point P[x, y, z], where [x], [y] and [z] consist of G012, G345 and G678 respectively. Each of [x], [y], [z] has N bits inclusive of its respective group bits, where N > 3. Using more bits in each of [x], [y] and [z] results in more points in the space. Extending the notion of modulo-2^L to an N-bit-per-dimension space results in modulo-2^N based operations. Each modulo-2^N operation (addition, subtraction, multiplication ...) can be applied individually per dimension. The whole concept can be visualized as the following illustration.

{Illustration: a symbol mapped to the point P[x, y, z] on the X-, Y- and Z-axes}

Axis Transformation

In this section, the traditional 3D axis is transformed into a different axis with the same number of dimensions. Individually, [x], [y] and [z] are mapped onto the [r], [g] and [b] axes respectively, so a point P[x, y, z] in the traditional space is transformed to P[r, g, b]. This also introduces a subtle property called color: a unique point P[r, g, b] represents a unique color. Increasing the number of bits per dimension in the RGB axis results in more points and hence more colors. Extending the modulo-2^N operations here results in a definite color per transformation. The following illustration shows the whole concept:

{Illustration: the point P[x, y, z] relabelled as P[r, g, b] in the RGB axis}

The transformation described above is an implied operation. Even though the change of axis denotes a point P in the RGB axis, the output is a set of bits. These bits are used to influence the color encoding in a beam of light, which acts as the carrier. The total number of points or colors in an RGB axis with N bits per dimension is given as under:

    C = 2^(3N)

Where, the symbols have the following meaning:
C = Total number of colors or points in the RGB axis
N = Total number of bits in each dimension {r}, {g} and {b}

A point P in the RGB axis represents a vector or matrix of dimensions (1 x 3). This vector has to be homogeneous in nature for operations to be consistent across all dimensions. This introduces the notion of an identity element. The transformed vector or matrix hence can be seen as under:

    V = P[r, g, b, 1]

Various operations can now be applied to this transformed vector or matrix. For the sake of brevity, P is dropped from the equation and it is simply stated as V = [r, g, b, 1]. The symbol [x(i)] represents a unique point, a unique vector or matrix and a unique color in the RGB axis for a unique index [i].
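A minimal Python sketch of the linear-to-3D transformation and the change of axis follows; the function names are illustrative, and the 3-bit grouping matches the G012/G345/G678 split described above.

    def linear_to_3d(ch):
        """Prepend a 0 bit to the 8-bit ASCII code and split the 9 bits into
        the groups G012, G345, G678, giving a homogeneous point [r, g, b, 1]."""
        bits = format(ord(ch), '09b')          # 9 bits, B0 is the prepended 0
        r = int(bits[0:3], 2)                  # G012 -> R-axis
        g = int(bits[3:6], 2)                  # G345 -> G-axis
        b = int(bits[6:9], 2)                  # G678 -> B-axis
        return [r, g, b, 1]

    def total_colors(n):
        """C = 2^(3N) points (colors) with N bits per dimension."""
        return 2 ** (3 * n)

    print(linear_to_3d('A'))     # [1, 0, 1, 1]
    print(total_colors(8))       # 16777216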
Point Operations

In this section, various vector or matrix operations that can be applied are explored and their effects are studied. Continuing from the previously introduced notion of the vector or matrix V in the RGB axis, extensions are now presented to operate on this vector or matrix. Let V be denoted as [V] for the sake of completeness. The dimension of [V] is (1 x 4), inclusive of the identity element, which is required when performing homogeneous operations. Any binary operation (addition, subtraction, multiplication ...) can be applied with the use of a dimensionally matching vector or matrix. A few basic operations are given as under:

1. Addition: In this operation, the dimension of the second vector or matrix [K] has to be equal to that of [V]. The symbolic representation of the addition operation is given as under:

    [O](1 x 4) = [V](1 x 4) + [K](1 x 4)

2. Subtraction: In this operation, the dimension of the second vector or matrix [K] has to be equal to that of [V]. The symbolic representation of the subtraction operation is given as under:

    [O](1 x 4) = [V](1 x 4) - [K](1 x 4)

3. Multiplication: In this operation, the number of rows in the second vector or matrix [K] has to be equal to the number of columns of [V]. The symbolic representation of the multiplication operation is given as under:

    [O](1 x j) = [V](1 x 4) x [K](4 x j)

Binary operations, as described above, can be performed any number of times. A general notation for all such operations is given as under:

    [O](1 x aN) = (([V](1 x a1) x [K1](a1 x a2)) x ... x [KN-1](aN-1 x aN))

This essentially means that a given point in the RGB axis can be manipulated as a vector or matrix by applying a series of operations. The effect of these operations is the potential relocation of the point. The following illustration shows the concept:

{Illustration: a point in the RGB axis relocated after a series of operations}

Cryptography in 3D

In this section, the point operations introduced in the previous section are explored for cryptographic purposes. A crypto-system defines at least two operations, which are given as under:

1. Encryption: In this operation a given symbol is transformed into another symbol by a set of operation(s).
2. Decryption: In this operation a given symbol is transformed back to the original symbol by a set of operation(s).

The notions of symbols and operations are already defined in previous sections. Extensions to the previously defined descriptions, from the perspective of cryptography, are given as under:

1. Message: A message is a juxtaposition of symbols from set S. A symbol is a vector or matrix as defined in the previous section.
2. Key: A key is a secret datum which the sender uses to operate upon the message using the encryption operation. The receiver gets the encrypted message and applies the decryption operation to decrypt it using his key to get the original message. A key is a vector or matrix.
3. Sender: A sender sends a message to the receiver.
4. Receiver: A receiver accepts the message from the sender.
5. Channel: A link over which the sender and receiver communicate.

With these preliminaries defined, the encryption and decryption operations can now be defined. They are defined as under:

Encryption: Let the message be defined as M, where M is a juxtaposition of symbols from set S. It can be represented as under:

    M = x(1) · x(2) · ... · x(n)

The point operations defined in previous sections can now be applied to the above message to encrypt it. Let the operation be defined as O[x, k], where [x] is a symbol from the above message and [k] is a key. The effect of this operation is the movement of a point in the RGB axis to a new location. Cumulatively, the encryption operation can be seen as under:

    E(M) = O[x(1), k(1)] · O[x(2), k(2)] · ... · O[x(n), k(n)]

Where, the symbols have the following meaning:
E = Encryption operation
M = Message
O[x, k] = Operation
x(1)...x(n) = Symbols from set S
k(1)...k(n) = Keys

The effect of this operation is that [x(1), x(2), ...x(n)] are individually relocated to new positions in the RGB axis with the help of the keys [k(1), k(2), ...k(n)]. Alternatively, one can move a set of symbols by a fixed or variable amount in the RGB axis, hence making the scheme amenable to both streams and blocks of symbols. Operations that are suitable for relocating points in the RGB axis are vector- or matrix-oriented (addition, subtraction, multiplication).
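A minimal Python sketch of the encryption operation follows, under the assumption that O[x, k] is realised as a homogeneous vector-matrix multiplication and that the key k(i) is a 4 x 4 translation matrix; the helper names are illustrative.

    def linear_to_3d(ch):
        """As sketched earlier: 8-bit code plus padding bit -> [r, g, b, 1]."""
        bits = format(ord(ch), '09b')
        return [int(bits[0:3], 2), int(bits[3:6], 2), int(bits[6:9], 2), 1]

    def mat_vec(v, m):
        """Multiply a 1x4 homogeneous row vector by a 4x4 matrix."""
        return [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)]

    def translation_key(tx, ty, tz):
        """A 4x4 key matrix that relocates a point by (tx, ty, tz)."""
        return [[1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, 0],
                [tx, ty, tz, 1]]

    def encrypt(message, keys):
        """E(M) = O[x(1), k(1)] . O[x(2), k(2)] ... O[x(n), k(n)]."""
        return [mat_vec(linear_to_3d(ch), k) for ch, k in zip(message, keys)]

    keys = [translation_key(3, 4, 5)] * 3
    print(encrypt("A0/", keys))   # [[4, 4, 6, 1], [3, 10, 5, 1], [3, 9, 12, 1]]

The printed points match the worked example given later in the document.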
It is computationally intensive to find the starting locations of each of these symbols after they have been relocated, provided the keys are kept secret.

Decryption: Let the encrypted message be defined as M', where M' is a juxtaposition of encrypted symbols from set S. It can also be represented as under:

    M' = y(1) · y(2) · ... · y(n)

The point operations defined in previous sections can now be applied to the above message to decrypt it. Let the operation be defined as O[y, l], where [y] is an encrypted symbol from the above message and [l] is an inverse key. The effect of this operation is the movement of a point in the RGB axis to a previously known location. Cumulatively, the decryption operation can be seen as under:

    D(M') = O[y(1), l(1)] · O[y(2), l(2)] · ... · O[y(n), l(n)]

Where, the symbols have the following meaning:
D = Decryption operation
M' = Encrypted message
O[y, l] = Operation
y(1)...y(n) = Encrypted symbols, where y(i) = O[x(i), k(i)]
l(1)...l(n) = Inverse keys, where l(i) = k(i)^-1

The effect of this operation is that [y(1), y(2), ...y(n)] are individually relocated to their original positions in the RGB axis with the help of the inverse keys [l(1), l(2), ...l(n)]. Alternatively, one can move a set of symbols by a fixed or variable amount in the RGB axis, hence making the scheme amenable to both streams and blocks of symbols. Operations that are suitable for relocating points in the RGB axis are vector- or matrix-oriented (addition, subtraction and multiplication).

Hybrid Cryptography: The algorithm given here lies between symmetric and asymmetric cryptography in terms of the keys used. The keys need to be transported to the receiver, as in symmetric cryptography, but the keys used to decrypt the message are not the same as the ones used to encrypt it, so the scheme behaves more like asymmetric cryptography. This scheme can be seen in the following illustration:

{Illustration: the sender encrypts with keys k(i); the receiver decrypts with the inverse keys l(i)}
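A minimal Python sketch of the matching decryption step follows, again assuming translation-matrix keys; inverse_translation_key and to_symbol are illustrative helpers, and mat_vec repeats the helper from the encryption sketch.

    def mat_vec(v, m):
        return [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)]

    def inverse_translation_key(tx, ty, tz):
        """l(i) = k(i)^-1 for a translation key: move back by (-tx, -ty, -tz)."""
        return [[1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 1, 0],
                [-tx, -ty, -tz, 1]]

    def to_symbol(point):
        """Map the recovered point [r, g, b, 1] back to its linear 8-bit code."""
        r, g, b, _ = point
        return chr((r << 6) | (g << 3) | b)

    def decrypt(points, inverse_keys):
        """D(M') = O[y(1), l(1)] . O[y(2), l(2)] ... O[y(n), l(n)]."""
        return ''.join(to_symbol(mat_vec(y, l)) for y, l in zip(points, inverse_keys))

    inv = [inverse_translation_key(3, 4, 5)] * 3
    print(decrypt([[4, 4, 6, 1], [3, 10, 5, 1], [3, 9, 12, 1]], inv))   # A0/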
Transmission

In this section, the transmission aspects of the information are discussed. The symbol is represented as a vector or matrix on both the sender and the receiver side. The elements in a given vector or matrix provide not only location information in 3D space, but also color information in the RGB axis. Individually, [x], [y], [z] can be input to a light encoder, which encodes the color information into the carrier, to be decoded by the receiver. Each "packet" consists of interleaved color information and other meta-data information. There are two approaches to organizing the data. The first approach requires that the sender and receiver also share a precise clock or counter and hence "synchronize" per unit of color information. The second approach requires an end-of-packet marker to indicate the boundaries between units of color information. Besides this classification of color information, data can also be grouped into sets of symbols to increase the information content per packet. A generic model of transmission is presented as follows.

Generic Model of Transmission: In this model, the basic building blocks of transmission which take advantage of the color information are presented. The components required are given as under:

1. Sender: A sender generates encrypted symbols.
2. Receiver: A receiver consumes decrypted symbols.
3. Light-Encoder: A light-encoder takes encrypted symbols and encodes them into light.
4. Light-Decoder: A light-decoder takes the encoded light signal and generates encrypted symbols.
5. Transmission Channel: A medium in which the symbols are transmitted.

The illustration of the generic model of transmission, with all the above components and the components described in the previous sections, is given as under:

{Illustration: Sender -> Light-Encoder -> Transmission Channel carrying λ(1) ... λ(n) -> Light-Decoder -> Receiver}

Where, the terms have the following meaning:
x(i) = A symbol from set S
y(i) = Encrypted symbol
λ(1)...λ(n) = Channel symbols
k(i) = Key
l(i) = Inverse key

Encoding or modulation of an encrypted symbol results in a channel symbol.

Synchronous Transmission: In this mode of transmission, the sender and receiver share a clock or counter, which keeps both of them in sync with respect to the symbols. The illustration showing this concept is given as under:

{Illustration: sender and receiver driven by a shared Clock or Counter}

The format of a packet depends upon the color information and the meta-data information. There are two possible in-order arrangements of the above-said information: in one arrangement the color information is interleaved with the meta-data information in a fine-grain manner, whereas in the other arrangement the interleaving is more coarse-grain. The formats of these arrangements are shown as under:

1. Fine-grain interleaving format: In this format, the color information is closely interleaved with the meta-data information. The format is shown as under:

{Packet layout: Packet 1 ... Packet N, each carrying MDI, C1, C2, C3 and OI}

Where, the symbols have the following meaning:
MDI = Meta-Data Information {Clock or Counter, Packet-Number ...}
C1...C3 = Color Information {Encrypted symbol as vector or matrix}
OI = Other Information {Wrap-around factor, Bits-per-axis ...}

2. Coarse-grain interleaving format: In this format, the color information is coarsely interleaved with the meta-data information. The format is shown as under:

{Packet layout: a meta-data block MDI1 ... MDIn together with the color block C1, C2, C3 and OI1}

Where, the symbols have the following meaning:
MDI1...MDIn = Meta-Data Information {Clock or Counter, Packet-Number ...}
C1...C3 = Color Information {Encrypted symbol as vector or matrix}
OI1 = Other Packet-specific Information {Wrap-around factor, Bits-per-axis ...}
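As a concrete illustration of the synchronous fine-grain layout above, here is a minimal Python sketch that packs one packet. The document does not fix field widths or byte order, so the one-byte-per-field encoding and the field ordering below are assumptions, and the helper name is illustrative.

    import struct

    def pack_fine_grain(counter, packet_no, color, waf, bpa):
        """One fine-grain synchronous packet carrying MDI, C1..C3 and OI.
        MDI = (counter, packet number), C1..C3 = encrypted colour vector,
        OI = (wrap-around factor, bits-per-axis). One byte per field assumed."""
        c1, c2, c3 = color
        return struct.pack('7B', counter, packet_no, c1, c2, c3, waf, bpa)

    # encrypted symbol [4, 4, 6] with wrap-around factor 1 and 3 bits per axis
    print(pack_fine_grain(0, 1, (4, 4, 6), 1, 3).hex())   # 00010404060103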
Asynchronous Transmission: In this mode of transmission, the sender and receiver do not share a clock, but rely on end-of-packet markers, which keep both of them in sync with respect to the symbols. The illustration showing this concept is given as under:

{Illustration: packets delimited by end-of-packet markers, with no shared clock}

The format of a packet depends upon the color information and the end-of-packet marker. There are two possible in-order arrangements of the above-said information: in one arrangement the color information is interleaved with the end-of-packet marker in a fine-grain manner, whereas in the other arrangement the interleaving is more coarse-grain. The formats of these arrangements are shown as under:

3. Fine-grain interleaving format: In this format, the color information is closely interleaved with the end-of-packet marker. The format is shown as under:

{Packet layout: Packet 1 ... Packet N, each carrying C1, C2, C3, OI and an EPM}

Where, the symbols have the following meaning:
C1...C3 = Color Information {Encrypted symbol as vector or matrix}
OI = Other Information {Wrap-around factor, Bits-per-axis ...}
EPM = End-of-Packet Marker

4. Coarse-grain interleaving format: In this format, the color information is coarsely interleaved with the end-of-packet marker(s). The format is shown as under:

{Packet layout: the color block C1, C2, C3 and OI1 together with end-of-packet markers EPM1 ... EPMn}

Where, the symbols have the following meaning:
C1...C3 = Color Information {Encrypted symbol as vector or matrix}
OI1 = Other Packet-specific Information {Wrap-around factor, Bits-per-axis ...}
EPM1...EPMn = End-of-Packet Marker(s)

The useful information per packet can be determined using the Information-to-Packet Ratio.

Appendix

Wrap-around: This phenomenon happens when any of the chosen binary operations (addition, subtraction, multiplication ...) results in a value greater than what the allocated number of bits can represent. In this case, a method is introduced which ensures re-construction of the value. The preliminaries are defined as under:

    N = WAF x 2^L + R

Where, the symbols have the following meaning:
N = Number to be re-constructed
L = Total bits allocated per number
WAF = Wrap-Around-Factor, the quotient of the division of N by 2^L
R = Remainder from the division of N by 2^L

Consider the concept when two numbers are manipulated using a binary operation. The details are given as under:

    T = N1 ∘ N2
    T > 2^L
    R = T mod 2^L
    WAF = T div 2^L

Where, the symbols have the following meaning:
T = Result of the binary operation (∘) on N1 and N2
N1, N2 = L-bit numbers

The result of this operation is such that it exceeds the maximum possible value given L bits. The remainder is stored as a vector or matrix in 3D space. The wrap-around-factor is stored with the vector information on a per-symbol or per-set-of-symbols basis.

Bits-per-axis: This metric provides information about the total number of bits allocated per axis when re-constructing the encrypted symbol. During the binary transformation, there is a possibility that the total number of bits exceeds the initial grouping of 3 bits per axis; to accommodate this behavior, a special organization of bits can be allowed. Bits-per-axis, or BPA, is a 3-tuple and is given as BPA = (TR, TG, TB). The following illustration shows the concept:

{Illustration: an encrypted symbol occupying TR, TG and TB bits on the R-, G- and B-axes}

Where, the symbols have the following meaning:
TR, TG, TB = Total number of bits of the encrypted symbol in the RGB axis

This information can be transmitted alongside the encrypted symbol over the channel and poses no threat; in fact it acts as a "lure" to the attacker when re-constructing the encrypted symbol.

Information-to-Packet Ratio: A given packet contains color information and other relevant information. This other relevant information can be collectively called a header. This ratio helps to determine the amount of color information in a packet vis-à-vis its header. The ratio is given as under:

    IPR = BC / BP

Where, the symbols have the following meaning:
IPR = Information-to-Packet Ratio
BC = Bytes allocated for color information
BP = Bytes allocated for the total packet, including the header

The ratio gives the amount of information present per packet, or the actual packet utilization.

Configurability: The concept can be configured in a multitude of ways. Some of the ways are given as under:

1. Bits-per-axis during the linear-to-3D transformation
2. Bits-per-axis during re-construction
3. Total number of binary transformations applied per symbol
4. Fixed or variable binary transformations per axis
5. Packet format and utilization
6. Encoding and decoding schemes for light
7. Other information added as header and trailer in the packet

There may be other means to configure various aspects of the concept which are not mentioned here.
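A minimal Python sketch of the wrap-around and bits-per-axis book-keeping defined in this appendix follows; wrap_around and bits_per_axis are illustrative helper names, and the rule of keeping at least the original 3 bits per axis is an assumption.

    L = 8

    def wrap_around(t):
        """Split an oversized value T into (WAF, R) such that T = WAF * 2^L + R."""
        return t >> L, t & ((1 << L) - 1)     # WAF = T div 2^L, R = T mod 2^L

    def bits_per_axis(r, g, b):
        """BPA = (TR, TG, TB), assuming each axis keeps at least its 3 group bits."""
        return tuple(max(3, v.bit_length()) for v in (r, g, b))

    # encrypted symbol [3, 10, 5]: combined bits 0111010101 = 469, which exceeds 2^8
    waf, rem = wrap_around(0b0111010101)
    print(waf, format(rem, '08b'))   # 1 11010101
    print(bits_per_axis(3, 10, 5))   # (3, 4, 3)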
The concept is amenable to both a stream of symbols and a block of symbols. In the latter case, a set of points in 3D space represents the block of symbols, and hence the points can be manipulated individually or in a combined manner.

Algorithm: The algorithm for the whole concept can be divided into two sections. The first section is dedicated to the encryption operation and the second section is dedicated to the decryption operation. The algorithms pertaining to these operations are given as under:

1. Encryption operation: This section defines the algorithm pertaining to the encryption operation. It is given as under:

Step 1: Take a symbol [x(i)] or a set of symbols
Step 2: Transform the symbols into 3D space {linear-to-3D transformation}
Step 3: Treat the 3D axis as the RGB axis
Step 4: Let [V(i)] be the homogeneous vector associated with each symbol [x(i)]
Step 5: Treat [V(i)] as a point in the RGB axis
Step 6: Apply any number of binary transformations to the point, with the help of key vectors [K(i)] {translation, rotation ...}
Step 7: Keep track of overflows and maintain the wrap-around-factor per symbol
Step 8: Output the symbol [y(i)] or set of symbols
Step 9: Keep track of bits-per-axis for re-construction
Step 10: Stop

2. Decryption operation: This section defines the algorithm pertaining to the decryption operation. It is given as under:

Step 1: Take a symbol [y(i)] or a set of symbols
Step 2: Transform the symbols into 3D space {linear-to-3D transformation}
Step 3: Treat the 3D axis as the RGB axis
Step 4: Let [V'(i)] be the homogeneous vector associated with each symbol [y(i)]
Step 5: Treat [V'(i)] as a point in the RGB axis
Step 6: Apply the same number of binary transformations to the point, with the help of the inverse key vectors [L(i)] {translation, rotation ...}
Step 7: Take into account the wrap-around-factor and bits-per-axis during re-construction of the symbol
Step 8: Output the symbol [x(i)] or set of symbols
Step 9: Stop

Strength: The strength of the algorithm lies in the fact that, given a point in 3D space, it is difficult to find its earlier position if the transformation information is not given. The transformations form the basis of the keys. The algorithm may reveal the nature of the transformations without revealing the values of those transformations, hence making it difficult to find out the exact previous location of a point in 3D space, which is actually a symbol in linear space. This concept can be seen as under:

{Illustration: the source location of a symbol in 3D space is relocated by O[x(i), k(i)] to a target location}

As already discussed, the symbols are transformed from linear space to points in 3D space; if the transformation vector or key is not known and only [y(i)] is given at an instant, the problem is to find [x(i)] with only the given information. Alternatively, the strength also comes from the fact that the symbol present on the channel may not directly correspond to the encrypted symbol, so the attacker first has to re-construct the encrypted symbol and then transform it to the original symbol.
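The encryption steps above can be sketched end-to-end in Python; the helpers repeat those from the earlier sketches, the translation key is only one possible choice of binary transformation, and all names are illustrative.

    def linear_to_3d(ch):
        """Steps 1-2: take a symbol and transform it into 3D space."""
        bits = format(ord(ch), '09b')
        return [int(bits[0:3], 2), int(bits[3:6], 2), int(bits[6:9], 2), 1]

    def mat_vec(v, m):
        """Multiply a 1x4 homogeneous row vector by a 4x4 matrix."""
        return [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)]

    def encrypt_symbol(ch, key):
        r, g, b, _ = mat_vec(linear_to_3d(ch), key)                # steps 3-6
        bpa = tuple(max(3, n.bit_length()) for n in (r, g, b))     # step 9
        combined = ''.join(format(n, '0{}b'.format(w)) for n, w in zip((r, g, b), bpa))
        y = int(combined, 2)
        waf, z = y >> 8, y & 0xFF                                  # step 7
        return z, waf, bpa                                         # step 8

    key = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [3, 4, 5, 1]]
    for ch in "A0/":
        print(encrypt_symbol(ch, key))
    # (38, 1, (3, 3, 3))  (213, 1, (3, 4, 3))  (156, 3, (3, 4, 4))

The printed triples match the worked example that follows.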
Example: The example shown here considers a set of three symbols and applies a single binary transformation to all of them. The details are given as under:

Let the set of symbols be denoted as S = {'A', '0', '/'}, where [x(0)] = 'A', [x(1)] = '0' and [x(2)] = '/'. The equivalent binary values are [x(0)] = [01000001], [x(1)] = [00110000] and [x(2)] = [00101111].

The linear-to-3D transformation for the above symbols is given as [x(0)] = {001, 000, 001}, [x(1)] = {000, 110, 000} and [x(2)] = {000, 101, 111} respectively. Each of these symbols now corresponds to a unique point in 3D space. There are 8 bits per axis, of which 3 bits per axis are symbol information.

The above points in 3D space can be represented in the form of homogeneous vectors, given as under:

    [X(0)] = [1 0 1 1], [X(1)] = [0 6 0 1], [X(2)] = [0 5 7 1]

Let the binary transformation be multiplication, and let the value of the binary transformation (the key matrix) be given as under:

          | 1 0 0 0 |
    [V] = | 0 1 0 0 |
          | 0 0 1 0 |
          | 3 4 5 1 |

The result of the binary transformation is a translation of the points to new locations. This can be seen as under:

    [Y(0)] = [4 4 6 1], [Y(1)] = [3 10 5 1], [Y(2)] = [3 9 12 1]

Once the points are relocated, they can be transformed back from 3D space to linear space. This can be seen as [y(0)] = {100, 100, 110}, [y(1)] = {011, 1010, 101} and [y(2)] = {011, 1001, 1100} respectively. When all the bits are combined, this can be represented as [y(0)] = [100100110], [y(1)] = [0111010101] and [y(2)] = [01110011100] respectively. As can be inferred from these values, they exceed 2^L, where L = 8; hence the modulo-2^L approach is taken to wrap around. The wrapped-around values are given as [z(0)] = [00100110], [z(1)] = [11010101] and [z(2)] = [10011100] respectively. The wrap-around-factor for each of the symbols is given as [w(0)] = [00000001], [w(1)] = [00000001] and [w(2)] = [00000011] respectively. The bits-per-axis for each of the symbols is given as [b(0)] = [333], [b(1)] = [343] and [b(2)] = [344] respectively.

The encrypted symbols are [y(0)], [y(1)] and [y(2)] respectively, and further "mangling" with the wrap-around-factors and bits-per-axis results in [z(0)], [z(1)] and [z(2)] respectively, which are transmitted over the channel. The wrap-around-factors and bits-per-axis are transmitted alongside the "mangled" symbols over the channel. On the receiver end, the symbols [z(0)], [z(1)] and [z(2)] are "re-constructed" with the help of the wrap-around-factors and bits-per-axis, plotted in 3D space, and the inverse binary transformation or inverse key is applied to get the original set of symbols.
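The receiver side of this example can be sketched in Python as follows; reconstruct and recover are illustrative helper names, and the code simply replays the re-construction described above using the inverse of the key matrix [V] (a translation back by 3, 4 and 5).

    def reconstruct(z, waf, bpa, L=8):
        """Rebuild the encrypted point from (z, WAF, BPA): y = WAF * 2^L + z,
        then split y into TR, TG and TB bits per axis."""
        bits = format((waf << L) | z, '0{}b'.format(sum(bpa)))
        r = int(bits[:bpa[0]], 2)
        g = int(bits[bpa[0]:bpa[0] + bpa[1]], 2)
        b = int(bits[bpa[0] + bpa[1]:], 2)
        return [r, g, b, 1]

    def recover(z, waf, bpa, inv=(3, 4, 5)):
        """Apply the inverse translation and map back to the 8-bit symbol."""
        r, g, b, _ = reconstruct(z, waf, bpa)
        r, g, b = r - inv[0], g - inv[1], b - inv[2]
        return chr((r << 6) | (g << 3) | b)

    received = [(0b00100110, 1, (3, 3, 3)),
                (0b11010101, 1, (3, 4, 3)),
                (0b10011100, 3, (3, 4, 4))]
    print(''.join(recover(z, w, b) for z, w, b in received))   # A0/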