Robust Inter-Slice Intensity Normalization using Histogram Scale-Space Analysis
Analysis and Comparison of Several Feature Point Extraction Operators

Feature point extraction is an important task in computer vision: it locates and describes salient local features in an image, such as corners, edges, and blobs. By extracting feature points we can carry out tasks such as object recognition, image registration, and image retrieval. Commonly used feature point extraction operators include the Harris corner detector, SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features), and FAST (Features from Accelerated Segment Test). These operators are analyzed and compared below.

1. Harris corner detector: the Harris detector is a corner detection method based on local changes in image intensity. It computes the Harris response function at every pixel to decide whether that pixel is a corner. The Harris detector is rotation invariant, but it is not scale invariant and is relatively sensitive to illumination changes.

2. SIFT: SIFT is a local feature description operator that extracts feature points through scale-space extrema and a locally normalized descriptor. It is largely invariant to rotation, scale, illumination, and moderate affine changes, which makes it suitable for object recognition and image matching in complex scenes.

3. SURF: SURF is an accelerated variant built on the ideas of SIFT. By using integral images and a fast approximation of the Hessian matrix, it speeds up feature point extraction considerably while largely preserving the invariance properties of SIFT.

4. FAST: FAST is a fast feature point detector based on grey-level threshold comparisons in a small circular neighborhood around each candidate pixel. It is very fast and well suited to real-time applications and large-scale image processing, but by itself it is sensitive to scale and rotation changes and is less suitable for complex scenes.

In summary, different feature point extraction operators suit different image processing tasks. If high accuracy, stability, and good invariance are required, SIFT or SURF is a reasonable choice; if processing speed matters most, FAST is preferable. In practice, the operator can be chosen according to the specific requirements, or several operators can be combined to obtain better results; a minimal OpenCV sketch of these calls is given below.
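The following sketch shows how these detectors are typically invoked through the OpenCV C++ API. It is an illustration only: the file name input.jpg, the FAST threshold, and the Harris parameters are placeholder values not taken from the text above, and SIFT is assumed to be available in the OpenCV build (it is built in from version 4.4 onward; SURF would additionally require the contrib module).

#include <opencv2/opencv.hpp>
#include <cstdio>
#include <vector>

int main() {
    // Placeholder file name; any grayscale image will do.
    cv::Mat image = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);
    if (image.empty()) return -1;

    // Harris: dense response map; corners are local maxima above a threshold.
    cv::Mat harris;
    cv::cornerHarris(image, harris, 2 /*blockSize*/, 3 /*ksize*/, 0.04 /*k*/);

    // FAST: keypoints from grey-level threshold tests, no descriptor.
    std::vector<cv::KeyPoint> fastKps;
    cv::FAST(image, fastKps, 40 /*threshold*/, true /*non-max suppression*/);

    // SIFT: scale- and rotation-invariant keypoints plus 128-D descriptors.
    cv::Ptr<cv::SIFT> sift = cv::SIFT::create();
    std::vector<cv::KeyPoint> siftKps;
    cv::Mat siftDesc;
    sift->detectAndCompute(image, cv::noArray(), siftKps, siftDesc);

    std::printf("FAST: %zu keypoints, SIFT: %zu keypoints\n",
                fastKps.size(), siftKps.size());
    return 0;
}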
Edge extraction and edge enhancement are basic functions found in many image processing packages. The enhancement effect is very visible, and in recognition applications image edges are also one of the most important features. Edges preserve a substantial part of the information in the original image while greatly reducing the total amount of data, which is exactly what feature extraction asks for. In the Hough transform discussed later (used to detect geometric shapes in an image), edge extraction is a prerequisite step.

Here we consider only grayscale images. Edge extraction for image recognition is somewhat more involved than edge extraction used merely for visual enhancement. Giving a precise definition of an image edge is difficult; intuitively, edges correspond to object boundaries. Regions where the gray level changes sharply fit this intuition well, and this is the property usually used to extract edges. Textured images are problematic, however: if a person in the image wears a black-and-white checkered shirt, we usually do not want the extracted edges to include the squares on the shirt. Handling this properly is hard and involves texture analysis methods.

Since edge extraction should keep the regions where the gray level changes sharply, the most direct mathematical tool is differentiation (for digital images, finite differences); from a signal processing point of view, this amounts to high-pass filtering, i.e. keeping the high-frequency components. This is the key step; before it, the input image sometimes needs denoising. Edge extraction for recognition usually requires the output to be a binary image, with one gray level representing edges and the other the background. In addition, the edges often need to be thinned to a width of a single pixel.

Overall, edge extraction proceeds in the following steps: 1) denoising; 2) differentiation; 3) binarization; 4) thinning. The second step is the key one, and quite a few textbooks simply call this step edge extraction. Many algorithms implement it, and typical image processing textbooks introduce several, such as the Laplacian operator, the Sobel operator, and the Roberts operator. These are all template (mask) operations: a template is defined first, most commonly 3x3, although 2x2, 5x5, or larger sizes are also used. During the operation, the template center is placed over each pixel of the image in turn, the formula associated with the template is evaluated on the center pixel and its neighbors, and the result becomes the value of the corresponding pixel in the output image. Note that template operations are a general neighborhood-processing technique: many enhancement effects can be implemented with them, such as smoothing, median filtering (a noise removal method), oil painting effects, and emboss effects. A minimal Sobel-based sketch of the full edge extraction pipeline is given below.
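The four steps above map directly onto a few OpenCV calls. The sketch below is a minimal illustration under the assumption of a grayscale input file named input.jpg; the Gaussian kernel size and the binarization threshold are arbitrary choices, and the final thinning step is only indicated in a comment because it needs either the contrib module or a custom routine.

#include <opencv2/opencv.hpp>

int main() {
    // Step 0: read a grayscale image (placeholder file name).
    cv::Mat gray = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);
    if (gray.empty()) return -1;

    // Step 1: denoising with a small Gaussian kernel.
    cv::Mat denoised;
    cv::GaussianBlur(gray, denoised, cv::Size(5, 5), 1.0);

    // Step 2: differentiation with the Sobel operator (3x3 templates),
    // combining horizontal and vertical gradients into a magnitude image.
    cv::Mat gx, gy, mag;
    cv::Sobel(denoised, gx, CV_32F, 1, 0, 3);
    cv::Sobel(denoised, gy, CV_32F, 0, 1, 3);
    cv::magnitude(gx, gy, mag);

    // Step 3: binarization; the threshold value is an arbitrary choice here.
    cv::Mat mag8u, edges;
    cv::normalize(mag, mag, 0, 255, cv::NORM_MINMAX);
    mag.convertTo(mag8u, CV_8U);
    cv::threshold(mag8u, edges, 60, 255, cv::THRESH_BINARY);

    // Step 4: thinning to one-pixel-wide edges would follow here, e.g. with
    // cv::ximgproc::thinning from the contrib module or a custom routine.

    cv::imwrite("edges.png", edges);
    return 0;
}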
Two-Dimensional Butterworth Filter

1. Introduction
The two-dimensional Butterworth filter is a common image processing method for filtering images in the frequency domain. Based on the principle of the Butterworth filter, it smooths or enhances an image in the frequency domain. This note describes the principle of the two-dimensional Butterworth filter, the steps needed to implement it, and an application example.

2. Principle of the Butterworth filter
The Butterworth filter is a frequency-domain filter whose frequency response is controlled by a cutoff frequency and an order. The transfer function of the low-pass form can be written as

H(u, v) = 1 / (1 + (D(u, v) / D0)^(2n))

where D(u, v) is the distance from the point (u, v) to the center of the frequency plane, D0 is the cutoff frequency, and n is the order. Different values of n give different degrees of smoothing or enhancement: a larger n produces a flatter passband and a sharper transition near the cutoff (closer to an ideal filter), while a smaller n gives a more gradual roll-off.
3. Implementation steps
The two-dimensional Butterworth filter can be implemented as follows.

Step 1: read the image. First, the image to be processed is read from a file. A C++ image processing library such as OpenCV can be used for this:

#include <opencv2/opencv.hpp>

int main() {
    // Read the input image as grayscale.
    cv::Mat image = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);
    // Further processing steps...
    return 0;
}

Step 2: compute the Fourier transform. The image is transformed into its frequency-domain representation; OpenCV's dft function can be used. Note that cv::dft expects floating-point input, so the 8-bit image is converted first:

#include <opencv2/opencv.hpp>

int main() {
    // Read the input image as grayscale.
    cv::Mat image = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);

    // Convert to floating point and compute the DFT with complex output.
    cv::Mat floatImage, frequencyDomain;
    image.convertTo(floatImage, CV_32F);
    cv::dft(floatImage, frequencyDomain, cv::DFT_COMPLEX_OUTPUT);

    // Further processing steps...
    return 0;
}

Step 3: generate the Butterworth filter. Using the transfer function H(u, v) given above, the frequency-domain representation of the Butterworth filter is generated; a sketch of this step is given below.
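The original text stops before showing code for step 3. The following sketch, written to be consistent with the snippets above but not taken from the source, builds a Butterworth low-pass mask from the transfer function H(u, v), multiplies it into the complex spectrum, and inverts the transform; the cutoff D0 = 30 and order n = 2 are illustrative values only.

#include <opencv2/opencv.hpp>
#include <cmath>

// Build a Butterworth low-pass mask H(u,v) for an unshifted DFT spectrum.
static cv::Mat butterworthLowPass(cv::Size size, double D0, int n) {
    cv::Mat H(size, CV_32F);
    for (int u = 0; u < size.height; ++u) {
        for (int v = 0; v < size.width; ++v) {
            // Wrap coordinates so that low frequencies sit at the corners,
            // matching the layout produced by cv::dft without fftshift.
            double du = (u <= size.height / 2) ? u : size.height - u;
            double dv = (v <= size.width / 2) ? v : size.width - v;
            double D = std::sqrt(du * du + dv * dv);
            H.at<float>(u, v) =
                static_cast<float>(1.0 / (1.0 + std::pow(D / D0, 2.0 * n)));
        }
    }
    return H;
}

int main() {
    cv::Mat image = cv::imread("input.jpg", cv::IMREAD_GRAYSCALE);
    if (image.empty()) return -1;

    cv::Mat floatImage, spectrum;
    image.convertTo(floatImage, CV_32F);
    cv::dft(floatImage, spectrum, cv::DFT_COMPLEX_OUTPUT);

    // Step 3: generate the Butterworth filter and apply it to both
    // the real and imaginary channels of the spectrum.
    cv::Mat H = butterworthLowPass(spectrum.size(), 30.0 /*D0*/, 2 /*n*/);
    cv::Mat channels[2];
    cv::split(spectrum, channels);
    channels[0] = channels[0].mul(H);
    channels[1] = channels[1].mul(H);
    cv::merge(channels, 2, spectrum);

    // Step 4: inverse transform back to the spatial domain.
    cv::Mat filtered;
    cv::idft(spectrum, filtered, cv::DFT_SCALE | cv::DFT_REAL_OUTPUT);
    filtered.convertTo(filtered, CV_8U);
    cv::imwrite("butterworth_lowpass.png", filtered);
    return 0;
}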
斯仑贝谢所有测井曲线英文名称解释OCEAN DRILLING PROGRAMACRONYMS USED FOR WIRELINE SCHLUMBERGER TOOLS ACT Aluminum Clay ToolAMS Auxiliary Measurement SondeAPS Accelerator Porosity SondeARI Azimuthal Resistivity ImagerASI Array Sonic ImagerBGKT Vertical Seismic Profile ToolBHC Borehole Compensated Sonic ToolBHTV Borehole TeleviewerCBL Casing Bond LogCNT Compensated Neutron ToolDIT Dual Induction ToolDLL Dual LaterologDSI Dipole Sonic ImagerFMS Formation MicroScannerGHMT Geologic High Resolution Magnetic ToolGPIT General Purpose Inclinometer ToolGR Natural Gamma RayGST Induced Gamma Ray Spectrometry ToolHLDS Hostile Environment Lithodensity SondeHLDT Hostile Environment Lithodensity ToolHNGS Hostile Environment Gamma Ray SondeLDT Lithodensity ToolLSS Long Spacing Sonic ToolMCD Mechanical Caliper DeviceNGT Natural Gamma Ray Spectrometry ToolNMRT Nuclear Resonance Magnetic ToolQSST Inline Checkshot ToolSDT Digital Sonic ToolSGT Scintillation Gamma Ray ToolSUMT Susceptibility Magnetic ToolUBI Ultrasonic Borehole ImagerVSI Vertical Seismic ImagerWST Well Seismic ToolWST-3 3-Components Well Seismic ToolOCEAN DRILLING PROGRAMACRONYMS USED FOR LWD SCHLUMBERGER TOOLSADN Azimuthal Density-NeutronCDN Compensated Density-NeutronCDR Compensated Dual ResistivityISONIC Ideal Sonic-While-DrillingNMR Nuclear Magnetic ResonanceRAB Resistivity-at-the-BitOCEAN DRILLING PROGRAMACRONYMS USED FOR NON-SCHLUMBERGER SPECIALTY TOOLSMCS Multichannel Sonic ToolMGT Multisensor Gamma ToolSST Shear Sonic ToolTAP Temperature-Acceleration-Pressure ToolTLT Temperature Logging ToolOCEAN DRILLING PROGRAMACRONYMS AND UNITS USED FOR WIRELINE SCHLUMBERGER LOGSAFEC APS Far Detector Counts (cps)ANEC APS Near Detector Counts (cps)AX Acceleration X Axis (ft/s2)AY Acceleration Y Axis (ft/s2)AZ Acceleration Z Axis (ft/s2)AZIM Constant Azimuth for Deviation Correction (deg)APLC APS Near/Array Limestone Porosity Corrected (%)C1 FMS Caliper 1 (in)C2 FMS Caliper 2 (in)CALI Caliper (in)CFEC Corrected Far Epithermal Counts (cps)CFTC Corrected Far Thermal Counts (cps)CGR Computed (Th+K) Gamma Ray (API units)CHR2 Peak Coherence, Receiver Array, Upper DipoleCHRP Compressional Peak Coherence, Receiver Array, P&SCHRS Shear Peak Coherence, Receiver Array, P&SCHTP Compressional Peak Coherence, Transmitter Array, P&SCHTS Shear Peak Coherence, Transmitter Array, P&SCNEC Corrected Near Epithermal Counts (cps)CNTC Corrected Near Thermal Counts (cps)CS Cable Speed (m/hr)CVEL Compressional Velocity (km/s)DATN Discriminated Attenuation (db/m)DBI Discriminated Bond IndexDEVI Hole Deviation (degrees)DF Drilling Force (lbf)DIFF Difference Between MEAN and MEDIAN in Delta-Time Proc. 
(microsec/ft) DRH HLDS Bulk Density Correction (g/cm3)DRHO Bulk Density Correction (g/cm3)DT Short Spacing Delta-Time (10'-8' spacing; microsec/ft)DT1 Delta-Time Shear, Lower Dipole (microsec/ft)DT2 Delta-Time Shear, Upper Dipole (microsec/ft)DT4P Delta- Time Compressional, P&S (microsec/ft)DT4S Delta- Time Shear, P&S (microsec/ft))DT1R Delta- Time Shear, Receiver Array, Lower Dipole (microsec/ft)DT2R Delta- Time Shear, Receiver Array, Upper Dipole (microsec/ft)DT1T Delta-Time Shear, Transmitter Array, Lower Dipole (microsec/ft)DT2T Delta-Time Shear, Transmitter Array, Upper Dipole (microsec/ft)DTCO Delta- Time Compressional (microsec/ft)DTL Long Spacing Delta-Time (12'-10' spacing; microsec/ft)DTLF Long Spacing Delta-Time (12'-10' spacing; microsec/ft)DTLN Short Spacing Delta-Time (10'-8' spacing; microsec/ftDTRP Delta-Time Compressional, Receiver Array, P&S (microsec/ft)DTRS Delta-Time Shear, Receiver Array, P&S (microsec/ft)DTSM Delta-Time Shear (microsec/ft)DTST Delta-Time Stoneley (microsec/ft)DTTP Delta-Time Compressional, Transmitter Array, P&S (microsec/ft)DTTS Delta-Time Shear, Transmitter Array, P&S (microsec/ft)ECGR Environmentally Corrected Gamma Ray (API units)EHGR Environmentally Corrected High Resolution Gamma Ray (API units) ENPH Epithermal Neutron Porosity (%)ENRA Epithermal Neutron RatioETIM Elapsed Time (sec)FINC Magnetic Field Inclination (degrees)FNOR Magnetic Field Total Moment (oersted)FX Magnetic Field on X Axis (oersted)FY Magnetic Field on Y Axis (oersted)FZ Magnetic Field on Z Axis (oersted)GR Natural Gamma Ray (API units)HALC High Res. Near/Array Limestone Porosity Corrected (%)HAZI Hole Azimuth (degrees)HBDC High Res. Bulk Density Correction (g/cm3)HBHK HNGS Borehole Potassium (%)HCFT High Resolution Corrected Far Thermal Counts (cps)HCGR HNGS Computed Gamma Ray (API units)HCNT High Resolution Corrected Near Thermal Counts (cps)HDEB High Res. Enhanced Bulk Density (g/cm3)HDRH High Resolution Density Correction (g/cm3)HFEC High Res. Far Detector Counts (cps)HFK HNGS Formation Potassium (%)HFLC High Res. Near/Far Limestone Porosity Corrected (%)HEGR Environmentally Corrected High Resolution Natural Gamma Ray (API units) HGR High Resolution Natural Gamma Ray (API units)HLCA High Res. Caliper (inHLEF High Res. Long-spaced Photoelectric Effect (barns/e-)HNEC High Res. Near Detector Counts (cps)HNPO High Resolution Enhanced Thermal Nutron Porosity (%)HNRH High Resolution Bulk Density (g/cm3)HPEF High Resolution Photoelectric Effect (barns/e-)HRHO High Resolution Bulk Density (g/cm3)HROM High Res. Corrected Bulk Density (g/cm3)HSGR HNGS Standard (total) Gamma Ray (API units)HSIG High Res. Formation Capture Cross Section (capture units) HSTO High Res. 
Computed Standoff (in)HTHO HNGS Thorium (ppm)HTNP High Resolution Thermal Neutron Porosity (%)HURA HNGS Uranium (ppm)IDPH Phasor Deep Induction (ohmm)IIR Iron Indicator Ratio [CFE/(CCA+CSI)]ILD Deep Resistivity (ohmm)ILM Medium Resistivity (ohmm)IMPH Phasor Medium Induction (ohmm)ITT Integrated Transit Time (s)LCAL HLDS Caliper (in)LIR Lithology Indicator Ratio [CSI/(CCA+CSI)]LLD Laterolog Deep (ohmm)LLS Laterolog Shallow (ohmm)LTT1 Transit Time (10'; microsec)LTT2 Transit Time (8'; microsec)LTT3 Transit Time (12'; microsec)LTT4 Transit Time (10'; microsec)MAGB Earth's Magnetic Field (nTes)MAGC Earth Conductivity (ppm)MAGS Magnetic Susceptibility (ppm)MEDIAN Median Delta-T Recomputed (microsec/ft)MEAN Mean Delta-T Recomputed (microsec/ft)NATN Near Pseudo-Attenuation (db/m)NMST Magnetometer Temperature (degC)NMSV Magnetometer Signal Level (V)NPHI Neutron Porosity (%)NRHB LDS Bulk Density (g/cm3)P1AZ Pad 1 Azimuth (degrees)PEF Photoelectric Effect (barns/e-)PEFL LDS Long-spaced Photoelectric Effect (barns/e-)PIR Porosity Indicator Ratio [CHY/(CCA+CSI)]POTA Potassium (%)RB Pad 1 Relative Bearing (degrees)RHL LDS Long-spaced Bulk Density (g/cm3)RHOB Bulk Density (g/cm3)RHOM HLDS Corrected Bulk Density (g/cm3)RMGS Low Resolution Susceptibility (ppm)SFLU Spherically Focused Log (ohmm)SGR Total Gamma Ray (API units)SIGF APS Formation Capture Cross Section (capture units)SP Spontaneous Potential (mV)STOF APS Computed Standoff (in)SURT Receiver Coil Temperature (degC)SVEL Shear Velocity (km/s)SXRT NMRS differential Temperature (degC)TENS Tension (lb)THOR Thorium (ppm)TNRA Thermal Neutron RatioTT1 Transit Time (10' spacing; microsec)TT2 Transit Time (8' spacing; microsec)TT3 Transit Time (12' spacing; microsec)TT4 Transit Time (10' spacing; microsec)URAN Uranium (ppm)V4P Compressional Velocity, from DT4P (P&S; km/s)V4S Shear Velocity, from DT4S (P&S; km/s)VELP Compressional Velocity (processed from waveforms; km/s)VELS Shear Velocity (processed from waveforms; km/s)VP1 Compressional Velocity, from DT, DTLN, or MEAN (km/s)VP2 Compressional Velocity, from DTL, DTLF, or MEDIAN (km/s)VCO Compressional Velocity, from DTCO (km/s)VS Shear Velocity, from DTSM (km/s)VST Stonely Velocity, from DTST km/s)VS1 Shear Velocity, from DT1 (Lower Dipole; km/s)VS2 Shear Velocity, from DT2 (Upper Dipole; km/s)VRP Compressional Velocity, from DTRP (Receiver Array, P&S; km/s) VRS Shear Velocity, from DTRS (Receiver Array, P&S; km/s)VS1R Shear Velocity, from DT1R (Receiver Array, Lower Dipole; km/s) VS2R Shear Velocity, from DT2R (Receiver Array, Upper Dipole; km/s) VS1T Shear Velocity, from DT1T (Transmitter Array, Lower Dipole; km/s) VS2T Shear Velocity, from DT2T (Transmitter Array, Upper Dipole; km/s) VTP Compressional Velocity, from DTTP (Transmitter Array, P&S; km/s) VTS Shear Velocity, from DTTS (Transmitter Array, P&S; km/s)#POINTS Number of Transmitter-Receiver Pairs Used in Sonic Processing W1NG NGT Window 1 counts (cps)W2NG NGT Window 2 counts (cps)W3NG NGT Window 3 counts (cps)W4NG NGT Window 4 counts (cps)W5NG NGT Window 5 counts (cps)OCEAN DRILLING PROGRAMACRONYMS AND UNITS USED FOR LWD SCHLUMBERGER LOGSAT1F Attenuation Resistivity (1 ft resolution; ohmm)AT3F Attenuation Resistivity (3 ft resolution; ohmm)AT4F Attenuation Resistivity (4 ft resolution; ohmm)AT5F Attenuation Resistivity (5 ft resolution; ohmm)ATR Attenuation Resistivity (deep; ohmm)BFV Bound Fluid Volume (%)B1TM RAB Shallow Resistivity Time after Bit (s)B2TM RAB Medium Resistivity Time after Bit (s)B3TM RAB Deep Resistivity Time after Bit 
(s)BDAV Deep Resistivity Average (ohmm)BMAV Medium Resistivity Average (ohmm)BSAV Shallow Resistivity Average (ohmm)CGR Computed (Th+K) Gamma Ray (API units)DCAL Differential Caliper (in)DROR Correction for CDN rotational density (g/cm3).DRRT Correction for ADN rotational density (g/cm3).DTAB AND or CDN Density Time after Bit (hr)FFV Free Fluid Volume (%)GR Gamma Ray (API Units)GR7 Sum Gamma Ray Windows GRW7+GRW8+GRW9-Equivalent to Wireline NGT window 5 (cps) GRW3 Gamma Ray Window 3 counts (cps)-Equivalent to Wireline NGT window 1GRW4 Gamma Ray Window 4 counts (cps)-Equivalent to Wireline NGT window 2GRW5 Gamma Ray Window 5 counts (cps)-Equivalent to Wireline NGT window 3GRW6 Gamma Ray Window 6 counts (cps)-Equivalent to Wireline NGT window 4GRW7 Gamma Ray Window 7 counts (cps)GRW8 Gamma Ray Window 8 counts (cps)GRW9 Gamma Ray Window 9 counts (cps)GTIM CDR Gamma Ray Time after Bit (s)GRTK RAB Gamma Ray Time after Bit (s)HEF1 Far He Bank 1 counts (cps)HEF2 Far He Bank 2 counts (cps)HEF3 Far He Bank 3 counts (cps)HEF4 Far He Bank 4 counts (cps)HEN1 Near He Bank 1 counts (cps)HEN2 Near He Bank 2 counts (cps)HEN3 Near He Bank 3 counts (cps)HEN4 Near He Bank 4 counts (cps)MRP Magnetic Resonance PorosityNTAB ADN or CDN Neutron Time after Bit (hr)PEF Photoelectric Effect (barns/e-)POTA Potassium (%) ROPE Rate of Penetration (ft/hr)PS1F Phase Shift Resistivity (1 ft resolution; ohmm)PS2F Phase Shift Resistivity (2 ft resolution; ohmm)PS3F Phase Shift Resistivity (3 ft resolution; ohmm)PS5F Phase Shift Resistivity (5 ft resolution; ohmm)PSR Phase Shift Resistivity (shallow; ohmm)RBIT Bit Resistivity (ohmm)RBTM RAB Resistivity Time After Bit (s)RING Ring Resistivity (ohmm)ROMT Max. Density Total (g/cm3) from rotational processing ROP Rate of Penetration (m/hr)ROP1 Rate of Penetration, average over last 1 ft (m/hr).ROP5 Rate of Penetration, average over last 5 ft (m/hr)ROPE Rate of Penetration, averaged over last 5 ft (ft/hr)RPM RAB Tool Rotation Speed (rpm)RTIM CDR or RAB Resistivity Time after Bit (hr)SGR Total Gamma Ray (API units)T2 T2 Distribution (%)T2LM T2 Logarithmic Mean (ms)THOR Thorium (ppm)TNPH Thermal Neutron Porosity (%)TNRA Thermal RatioURAN Uranium (ppm)OCEAN DRILLING PROGRAMADDITIONAL ACRONYMS AND UNITS(PROCESSED LOGS FROM GEOCHEMICAL TOOL STRING)AL2O3 Computed Al2O3 (dry weight %)AL2O3MIN Computed Al2O3 Standard Deviation (dry weight %) AL2O3MAX Computed Al2O3 Standard Deviation (dry weight %) CAO Computed CaO (dry weight %)CAOMIN Computed CaO Standard Deviation (dry weight %) CAOMAX Computed CaO Standard Deviation (dry weight %) CACO3 Computed CaCO3 (dry weight %)CACO3MIN Computed CaCO3 Standard Deviation (dry weight %) CACO3MAX Computed CaCO3 Standard Deviation (dry weight %) CCA Calcium Yield (decimal fraction)CCHL Chlorine Yield (decimal fraction)CFE Iron Yield (decimal fraction)CGD Gadolinium Yield (decimal fraction)CHY Hydrogen Yield (decimal fraction)CK Potassium Yield (decimal fraction)CSI Silicon Yield (decimal fraction)CSIG Capture Cross Section (capture units)CSUL Sulfur Yield (decimal fraction)CTB Background Yield (decimal fraction)CTI Titanium Yield (decimal fraction)FACT Quality Control CurveFEO Computed FeO (dry weight %)FEOMIN Computed FeO Standard Deviation (dry weight %) FEOMAX Computed FeO Standard Deviation (dry weight %) FEO* Computed FeO* (dry weight %)FEO*MIN Computed FeO* Standard Deviation (dry weight %) FEO*MAX Computed FeO* Standard Deviation (dry weight %) FE2O3 Computed Fe2O3 (dry weight %)FE2O3MIN Computed Fe2O3 Standard Deviation (dry weight %) 
FE2O3MAX Computed Fe2O3 Standard Deviation (dry weight %) GD Computed Gadolinium (dry weight %)GDMIN Computed Gadolinium Standard Deviation (dry weight %) GDMAX Computed Gadolinium Standard Deviation (dry weight %) K2O Computed K2O (dry weight %)K2OMIN Computed K2O Standard Deviation (dry weight %)K2OMAX Computed K2O Standard Deviation (dry weight %) MGO Computed MgO (dry weight %)MGOMIN Computed MgO Standard Deviation (dry weight %) MGOMAX Computed MgO Standard Deviation (dry weight %)S Computed Sulfur (dry weight %)SMIN Computed Sulfur Standard Deviation (dry weight %) SMAX Computed Sulfur Standard Deviation (dry weight %)SIO2 Computed SiO2 (dry weight %)SIO2MIN Computed SiO2 Standard Deviation (dry weight %) SIO2MAX Computed SiO2 Standard Deviation (dry weight %) THORMIN Computed Thorium Standard Deviation (ppm) THORMAX Computed Thorium Standard Deviation (ppm)TIO2 Computed TiO2 (dry weight %)TIO2MIN Computed TiO2 Standard Deviation (dry weight %) TIO2MAX Computed TiO2 Standard Deviation (dry weight %) URANMIN Computed Uranium Standard Deviation (ppm) URANMAX Computed Uranium Standard Deviation (ppm) VARCA Variable CaCO3/CaO calcium carbonate/oxide factor。
A Fast and Accurate Plane Detection Algorithm for Large Noisy Point CloudsUsing Filtered Normals and Voxel GrowingJean-Emmanuel DeschaudFranc¸ois GouletteMines ParisTech,CAOR-Centre de Robotique,Math´e matiques et Syst`e mes60Boulevard Saint-Michel75272Paris Cedex06jean-emmanuel.deschaud@mines-paristech.fr francois.goulette@mines-paristech.frAbstractWith the improvement of3D scanners,we produce point clouds with more and more points often exceeding millions of points.Then we need a fast and accurate plane detection algorithm to reduce data size.In this article,we present a fast and accurate algorithm to detect planes in unorganized point clouds usingfiltered normals and voxel growing.Our work is based on afirst step in estimating better normals at the data points,even in the presence of noise.In a second step,we compute a score of local plane in each point.Then, we select the best local seed plane and in a third step start a fast and robust region growing by voxels we call voxel growing.We have evaluated and tested our algorithm on different kinds of point cloud and compared its performance to other algorithms.1.IntroductionWith the growing availability of3D scanners,we are now able to produce large datasets with millions of points.It is necessary to reduce data size,to decrease the noise and at same time to increase the quality of the model.It is in-teresting to model planar regions of these point clouds by planes.In fact,plane detection is generally afirst step of segmentation but it can be used for many applications.It is useful in computer graphics to model the environnement with basic geometry.It is used for example in modeling to detect building facades before classification.Robots do Si-multaneous Localization and Mapping(SLAM)by detect-ing planes of the environment.In our laboratory,we wanted to detect small and large building planes in point clouds of urban environments with millions of points for modeling. As mentioned in[6],the accuracy of the plane detection is important for after-steps of the modeling pipeline.We also want to be fast to be able to process point clouds with mil-lions of points.We present a novel algorithm based on re-gion growing with improvements in normal estimation and growing process.For our method,we are generic to work on different kinds of data like point clouds fromfixed scan-ner or from Mobile Mapping Systems(MMS).We also aim at detecting building facades in urban point clouds or little planes like doors,even in very large data sets.Our input is an unorganized noisy point cloud and with only three”in-tuitive”parameters,we generate a set of connected compo-nents of planar regions.We evaluate our method as well as explain and analyse the significance of each parameter. 
2.Previous WorksAlthough there are many methods of segmentation in range images like in[10]or in[3],three have been thor-oughly studied for3D point clouds:region-growing, hough-transform from[14]and Random Sample Consen-sus(RANSAC)from[9].The application of recognising structures in urban laser point clouds is frequent in literature.Bauer in[4]and Boulaassal in[5]detect facades in dense3D point cloud by a RANSAC algorithm.V osselman in[23]reviews sur-face growing and3D hough transform techniques to de-tect geometric shapes.Tarsh-Kurdi in[22]detect roof planes in3D building point cloud by comparing results on hough-transform and RANSAC algorithm.They found that RANSAC is more efficient than thefirst one.Chao Chen in[6]and Yu in[25]present algorithms of segmentation in range images for the same application of detecting planar regions in an urban scene.The method in[6]is based on a region growing algorithm in range images and merges re-sults in one labelled3D point cloud.[25]uses a method different from the three we have cited:they extract a hi-erarchical subdivision of the input image built like a graph where leaf nodes represent planar regions.There are also other methods like bayesian techniques. In[16]and[8],they obtain smoothed surface from noisy point clouds with objects modeled by probability distribu-tions and it seems possible to extend this idea to point cloud segmentation.But techniques based on bayesian statistics need to optimize global statistical model and then it is diffi-cult to process points cloud larger than one million points.We present below an analysis of the two main methods used in literature:RANSAC and region-growing.Hough-transform algorithm is too time consuming for our applica-tion.To compare the complexity of the algorithm,we take a point cloud of size N with only one plane P of size n.We suppose that we want to detect this plane P and we define n min the minimum size of the plane we want to detect.The size of a plane is the area of the plane.If the data density is uniform in the point cloud then the size of a plane can be specified by its number of points.2.1.RANSACRANSAC is an algorithm initially developped by Fis-chler and Bolles in[9]that allows thefitting of models with-out trying all possibilities.RANSAC is based on the prob-ability to detect a model using the minimal set required to estimate the model.To detect a plane with RANSAC,we choose3random points(enough to estimate a plane).We compute the plane parameters with these3points.Then a score function is used to determine how the model is good for the remaining ually,the score is the number of points belonging to the plane.With noise,a point belongs to a plane if the distance from the point to the plane is less than a parameter γ.In the end,we keep the plane with the best score.Theprobability of getting the plane in thefirst trial is p=(nN )3.Therefore the probability to get it in T trials is p=1−(1−(nN )3)ing equation1and supposing n minN1,we know the number T min of minimal trials to have a probability p t to get planes of size at least n min:T min=log(1−p t)log(1−(n minN))≈log(11−p t)(Nn min)3.(1)For each trial,we test all data points to compute the score of a plane.The RANSAC algorithm complexity lies inO(N(Nn min )3)when n minN1and T min→0whenn min→N.Then RANSAC is very efficient in detecting large planes in noisy point clouds i.e.when the ratio n minN is 1but very slow to detect small planes in large pointclouds i.e.when n minN 1.After selecting the best model,another step is to extract the largest 
connected component of each plane.Connnected components mean that the min-imum distance between each point of the plane and others points is smaller(for distance)than afixed parameter.Schnabel et al.[20]bring two optimizations to RANSAC:the points selection is done locally and the score function has been improved.An octree isfirst created from point cloud.Points used to estimate plane parameters are chosen locally at a random depth of the octree.The score function is also different from RANSAC:instead of testing all points for one model,they test only a random subset and find the score by interpolation.The algorithm complexity lies in O(Nr4Ndn min)where r is the number of random subsets for the score function and d is the maximum octree depth. Their algorithm improves the planes detection speed but its complexity lies in O(N2)and it becomes slow on large data sets.And again we have to extract the largest connected component of each plane.2.2.Region GrowingRegion Growing algorithms work well in range images like in[18].The principle of region growing is to start with a seed region and to grow it by neighborhood when the neighbors satisfy some conditions.In range images,we have the neighbors of each point with pixel coordinates.In case of unorganized3D data,there is no information about the neighborhood in the data structure.The most common method to compute neighbors in3D is to compute a Kd-tree to search k nearest neighbors.The creation of a Kd-tree lies in O(NlogN)and the search of k nearest neighbors of one point lies in O(logN).The advantage of these region growing methods is that they are fast when there are many planes to extract,robust to noise and extract the largest con-nected component immediately.But they only use the dis-tance from point to plane to extract planes and like we will see later,it is not accurate enough to detect correct planar regions.Rabbani et al.[19]developped a method of smooth area detection that can be used for plane detection.Theyfirst estimate the normal of each point like in[13].The point with the minimum residual starts the region growing.They test k nearest neighbors of the last point added:if the an-gle between the normal of the point and the current normal of the plane is smaller than a parameterαthen they add this point to the smooth region.With Kd-tree for k nearest neighbors,the algorithm complexity is in O(N+nlogN). 
The complexity seems to be low but in worst case,when nN1,example for facade detection in point clouds,the complexity becomes O(NlogN).3.Voxel Growing3.1.OverviewIn this article,we present a new algorithm adapted to large data sets of unorganized3D points and optimized to be accurate and fast.Our plane detection method works in three steps.In thefirst part,we compute a better esti-mation of the normal in each point by afiltered weighted planefitting.In a second step,we compute the score of lo-cal planarity in each point.We select the best seed point that represents a good seed plane and in the third part,we grow this seed plane by adding all points close to the plane.Thegrowing step is based on a voxel growing algorithm.The filtered normals,the score function and the voxel growing are innovative contributions of our method.As an input,we need dense point clouds related to the level of detail we want to detect.As an output,we produce connected components of planes in the point cloud.This notion of connected components is linked to the data den-sity.With our method,the connected components of planes detected are linked to the parameter d of the voxel grid.Our method has 3”intuitive”parameters :d ,area min and γ.”intuitive”because there are linked to physical mea-surements.d is the voxel size used in voxel growing and also represents the connectivity of points in detected planes.γis the maximum distance between the point of a plane and the plane model,represents the plane thickness and is linked to the point cloud noise.area min represents the minimum area of planes we want to keep.3.2.Details3.2.1Local Density of Point CloudsIn a first step,we compute the local density of point clouds like in [17].For that,we find the radius r i of the sphere containing the k nearest neighbors of point i .Then we cal-culate ρi =kπr 2i.In our experiments,we find that k =50is a good number of neighbors.It is important to know the lo-cal density because many laser point clouds are made with a fixed resolution angle scanner and are therefore not evenly distributed.We use the local density in section 3.2.3for the score calculation.3.2.2Filtered Normal EstimationNormal estimation is an important part of our algorithm.The paper [7]presents and compares three normal estima-tion methods.They conclude that the weighted plane fit-ting or WPF is the fastest and the most accurate for large point clouds.WPF is an idea of Pauly and al.in [17]that the fitting plane of a point p must take into consider-ation the nearby points more than other distant ones.The normal least square is explained in [21]and is the mini-mum of ki =1(n p ·p i +d )2.The WPF is the minimum of ki =1ωi (n p ·p i +d )2where ωi =θ( p i −p )and θ(r )=e −2r 2r2i .For solving n p ,we compute the eigenvec-tor corresponding to the smallest eigenvalue of the weightedcovariance matrix C w = ki =1ωi t (p i −b w )(p i −b w )where b w is the weighted barycenter.For the three methods ex-plained in [7],we get a good approximation of normals in smooth area but we have errors in sharp corners.In fig-ure 1,we have tested the weighted normal estimation on two planes with uniform noise and forming an angle of 90˚.We can see that the normal is not correct on the corners of the planes and in the red circle.To improve the normal calculation,that improves the plane detection especially on borders of planes,we propose a filtering process in two phases.In a first step,we com-pute the weighted normals (WPF)of each point like we de-scribed it above by minimizing ki =1ωi (n p ·p i 
+d )2.In a second step,we compute the filtered normal by us-ing an adaptive local neighborhood.We compute the new weighted normal with the same sum minimization but keep-ing only points of the neighborhood whose normals from the first step satisfy |n p ·n i |>cos (α).With this filtering step,we have the same results in smooth areas and better results in sharp corners.We called our normal estimation filtered weighted plane fitting(FWPF).Figure 1.Weighted normal estimation of two planes with uniform noise and with 90˚angle between them.We have tested our normal estimation by computing nor-mals on synthetic data with two planes and different angles between them and with different values of the parameter α.We can see in figure 2the mean error on normal estimation for WPF and FWPF with α=20˚,30˚,40˚and 90˚.Us-ing α=90˚is the same as not doing the filtering step.We see on Figure 2that α=20˚gives smaller error in normal estimation when angles between planes is smaller than 60˚and α=30˚gives best results when angle between planes is greater than 60˚.We have considered the value α=30˚as the best results because it gives the smaller mean error in normal estimation when angle between planes vary from 20˚to 90˚.Figure 3shows the normals of the planes with 90˚angle and better results in the red circle (normals are 90˚with the plane).3.2.3The score of local planarityIn many region growing algorithms,the criteria used for the score of the local fitting plane is the residual,like in [18]or [19],i.e.the sum of the square of distance from points to the plane.We have a different score function to estimate local planarity.For that,we first compute the neighbors N i of a point p with points i whose normals n i are close toFigure parison of mean error in normal estimation of two planes with α=20˚,30˚,40˚and 90˚(=Nofiltering).Figure 3.Filtered Weighted normal estimation of two planes with uniform noise and with 90˚angle between them (α=30˚).the normal n p .More precisely,we compute N i ={p in k neighbors of i/|n i ·n p |>cos (α)}.It is a way to keep only the points which are probably on the local plane before the least square fitting.Then,we compute the local plane fitting of point p with N i neighbors by least squares like in [21].The set N i is a subset of N i of points belonging to the plane,i.e.the points for which the distance to the local plane is smaller than the parameter γ(to consider the noise).The score s of the local plane is the area of the local plane,i.e.the number of points ”in”the plane divided by the localdensity ρi (seen in section 3.2.1):the score s =card (N i)ρi.We take into consideration the area of the local plane as the score function and not the number of points or the residual in order to be more robust to the sampling distribution.3.2.4Voxel decompositionWe use a data structure that is the core of our region growing method.It is a voxel grid that speeds up the plane detection process.V oxels are small cubes of length d that partition the point cloud space.Every point of data belongs to a voxel and a voxel contains a list of points.We use the Octree Class Template in [2]to compute an Octree of the point cloud.The leaf nodes of the graph built are voxels of size d .Once the voxel grid has been computed,we start the plane detection algorithm.3.2.5Voxel GrowingWith the estimator of local planarity,we take the point p with the best score,i.e.the point with the maximum area of local plane.We have the model parameters of this best seed plane and we start with an empty set E of points 
belonging to the plane.The initial point p is in a voxel v 0.All the points in the initial voxel v 0for which the distance from the seed plane is less than γare added to the set E .Then,we compute new plane parameters by least square refitting with set E .Instead of growing with k nearest neighbors,we grow with voxels.Hence we test points in 26voxel neigh-bors.This is a way to search the neighborhood in con-stant time instead of O (logN )for each neighbor like with Kd-tree.In a neighbor voxel,we add to E the points for which the distance to the current plane is smaller than γand the angle between the normal computed in each point and the normal of the plane is smaller than a parameter α:|cos (n p ,n P )|>cos (α)where n p is the normal of the point p and n P is the normal of the plane P .We have tested different values of αand we empirically found that 30˚is a good value for all point clouds.If we added at least one point in E for this voxel,we compute new plane parameters from E by least square fitting and we test its 26voxel neigh-bors.It is important to perform plane least square fitting in each voxel adding because the seed plane model is not good enough with noise to be used in all voxel growing,but only in surrounding voxels.This growing process is faster than classical region growing because we do not compute least square for each point added but only for each voxel added.The least square fitting step must be computed very fast.We use the same method as explained in [18]with incre-mental update of the barycenter b and covariance matrix C like equation 2.We know with [21]that the barycen-ter b belongs to the least square plane and that the normal of the least square plane n P is the eigenvector of the smallest eigenvalue of C .b0=03x1C0=03x3.b n+1=1n+1(nb n+p n+1).C n+1=C n+nn+1t(pn+1−b n)(p n+1−b n).(2)where C n is the covariance matrix of a set of n points,b n is the barycenter vector of a set of n points and p n+1is the (n+1)point vector added to the set.This voxel growing method leads to a connected com-ponent set E because the points have been added by con-nected voxels.In our case,the minimum distance between one point and E is less than parameter d of our voxel grid. That is why the parameter d also represents the connectivity of points in detected planes.3.2.6Plane DetectionTo get all planes with an area of at least area min in the point cloud,we repeat these steps(best local seed plane choice and voxel growing)with all points by descending order of their score.Once we have a set E,whose area is bigger than area min,we keep it and classify all points in E.4.Results and Discussion4.1.Benchmark analysisTo test the improvements of our method,we have em-ployed the comparative framework of[12]based on range images.For that,we have converted all images into3D point clouds.All Point Clouds created have260k points. After our segmentation,we project labelled points on a seg-mented image and compare with the ground truth image. We have chosen our three parameters d,area min andγby optimizing the result of the10perceptron training image segmentation(the perceptron is portable scanner that pro-duces a range image of its environment).Bests results have been obtained with area min=200,γ=5and d=8 (units are not provided in the benchmark).We show the re-sults of the30perceptron images segmentation in table1. 
GT Regions are the mean number of ground truth planes over the30ground truth range images.Correct detection, over-segmentation,under-segmentation,missed and noise are the mean number of correct,over,under,missed and noised planes detected by methods.The tolerance80%is the minimum percentage of points we must have detected comparing to the ground truth to have a correct detection. More details are in[12].UE is a method from[12],UFPR is a method from[10]. It is important to notice that UE and UFPR are range image methods and our method is not well suited for range images but3D Point Cloud.Nevertheless,it is a good benchmark for comparison and we see in table1that the accuracy of our method is very close to the state of the art in range image segmentation.To evaluate the different improvements of our algorithm, we have tested different variants of our method.We have tested our method without normals(only with distance from points to plane),without voxel growing(with a classical region growing by k neighbors),without our FWPF nor-mal estimation(with WPF normal estimation),without our score function(with residual score function).The compari-son is visible on table2.We can see the difference of time computing between region growing and voxel growing.We have tested our algorithm with and without normals and we found that the accuracy cannot be achieved whithout normal computation.There is also a big difference in the correct de-tection between WPF and our FWPF normal estimation as we can see in thefigure4.Our FWPF normal brings a real improvement in border estimation of planes.Black points in thefigure are non classifiedpoints.Figure5.Correct Detection of our segmentation algorithm when the voxel size d changes.We would like to discuss the influence of parameters on our algorithm.We have three parameters:area min,which represents the minimum area of the plane we want to keep,γ,which represents the thickness of the plane(it is gener-aly closely tied to the noise in the point cloud and espe-cially the standard deviationσof the noise)and d,which is the minimum distance from a point to the rest of the plane. 
These three parameters depend on the point cloud features and the desired segmentation.For example,if we have a lot of noise,we must choose a highγvalue.If we want to detect only large planes,we set a large area min value.We also focus our analysis on the robustess of the voxel size d in our algorithm,i.e.the ratio of points vs voxels.We can see infigure5the variation of the correct detection when we change the value of d.The method seems to be robust when d is between4and10but the quality decreases when d is over10.It is due to the fact that for a large voxel size d,some planes from different objects are merged into one plane.GT Regions Correct Over-Under-Missed Noise Duration(in s)detection segmentation segmentationUE14.610.00.20.3 3.8 2.1-UFPR14.611.00.30.1 3.0 2.5-Our method14.610.90.20.1 3.30.7308Table1.Average results of different segmenters at80%compare tolerance.GT Regions Correct Over-Under-Missed Noise Duration(in s) Our method detection segmentation segmentationwithout normals14.6 5.670.10.19.4 6.570 without voxel growing14.610.70.20.1 3.40.8605 without FWPF14.69.30.20.1 5.0 1.9195 without our score function14.610.30.20.1 3.9 1.2308 with all improvements14.610.90.20.1 3.30.7308 Table2.Average results of variants of our segmenter at80%compare tolerance.4.1.1Large scale dataWe have tested our method on different kinds of data.We have segmented urban data infigure6from our Mobile Mapping System(MMS)described in[11].The mobile sys-tem generates10k pts/s with a density of50pts/m2and very noisy data(σ=0.3m).For this point cloud,we want to de-tect building facades.We have chosen area min=10m2, d=1m to have large connected components andγ=0.3m to cope with the noise.We have tested our method on point cloud from the Trim-ble VX scanner infigure7.It is a point cloud of size40k points with only20pts/m2with less noise because it is a fixed scanner(σ=0.2m).In that case,we also wanted to detect building facades and keep the same parameters ex-ceptγ=0.2m because we had less noise.We see infig-ure7that we have detected two facades.By setting a larger voxel size d value like d=10m,we detect only one plane. We choose d like area min andγaccording to the desired segmentation and to the level of detail we want to extract from the point cloud.We also tested our algorithm on the point cloud from the LEICA Cyrax scanner infigure8.This point cloud has been taken from AIM@SHAPE repository[1].It is a very dense point cloud from multiplefixed position of scanner with about400pts/m2and very little noise(σ=0.02m). In this case,we wanted to detect all the little planes to model the church in planar regions.That is why we have chosen d=0.2m,area min=1m2andγ=0.02m.Infigures6,7and8,we have,on the left,input point cloud and on the right,we only keep points detected in a plane(planes are in random colors).The red points in thesefigures are seed plane points.We can see in thesefig-ures that planes are very well detected even with high noise. 
Table3show the information on point clouds,results with number of planes detected and duration of the algorithm.The time includes the computation of the FWPF normalsof the point cloud.We can see in table3that our algo-rithm performs linearly in time with respect to the numberof points.The choice of parameters will have little influence on time computing.The computation time is about one mil-lisecond per point whatever the size of the point cloud(we used a PC with QuadCore Q9300and2Go of RAM).The algorithm has been implented using only one thread andin-core processing.Our goal is to compare the improve-ment of plane detection between classical region growing and our region growing with better normals for more ac-curate planes and voxel growing for faster detection.Our method seems to be compatible with out-of-core implemen-tation like described in[24]or in[15].MMS Street VX Street Church Size(points)398k42k7.6MMean Density50pts/m220pts/m2400pts/m2 Number of Planes202142Total Duration452s33s6900sTime/point 1ms 1ms 1msTable3.Results on different data.5.ConclusionIn this article,we have proposed a new method of plane detection that is fast and accurate even in presence of noise. We demonstrate its efficiency with different kinds of data and its speed in large data sets with millions of points.Our voxel growing method has a complexity of O(N)and it is able to detect large and small planes in very large data sets and can extract them directly in connected components.Figure 4.Ground truth,Our Segmentation without and with filterednormals.Figure 6.Planes detection in street point cloud generated by MMS (d =1m,area min =10m 2,γ=0.3m ).References[1]Aim@shape repository /.6[2]Octree class template /code/octree.html.4[3] A.Bab-Hadiashar and N.Gheissari.Range image segmen-tation using surface selection criterion.2006.IEEE Trans-actions on Image Processing.1[4]J.Bauer,K.Karner,K.Schindler,A.Klaus,and C.Zach.Segmentation of building models from dense 3d point-clouds.2003.Workshop of the Austrian Association for Pattern Recognition.1[5]H.Boulaassal,ndes,P.Grussenmeyer,and F.Tarsha-Kurdi.Automatic segmentation of building facades using terrestrial laser data.2007.ISPRS Workshop on Laser Scan-ning.1[6] C.C.Chen and I.Stamos.Range image segmentationfor modeling and object detection in urban scenes.2007.3DIM2007.1[7]T.K.Dey,G.Li,and J.Sun.Normal estimation for pointclouds:A comparison study for a voronoi based method.2005.Eurographics on Symposium on Point-Based Graph-ics.3[8]J.R.Diebel,S.Thrun,and M.Brunig.A bayesian methodfor probable surface reconstruction and decimation.2006.ACM Transactions on Graphics (TOG).1[9]M.A.Fischler and R.C.Bolles.Random sample consen-sus:A paradigm for model fitting with applications to image analysis and automated munications of the ACM.1,2[10]P.F.U.Gotardo,O.R.P.Bellon,and L.Silva.Range imagesegmentation by surface extraction using an improved robust estimator.2003.Proceedings of Computer Vision and Pat-tern Recognition.1,5[11] F.Goulette,F.Nashashibi,I.Abuhadrous,S.Ammoun,andurgeau.An integrated on-board laser range sensing sys-tem for on-the-way city and road modelling.2007.Interna-tional Archives of the Photogrammetry,Remote Sensing and Spacial Information Sciences.6[12] A.Hoover,G.Jean-Baptiste,and al.An experimental com-parison of range image segmentation algorithms.1996.IEEE Transactions on Pattern Analysis and Machine Intelligence.5[13]H.Hoppe,T.DeRose,T.Duchamp,J.McDonald,andW.Stuetzle.Surface reconstruction from unorganized points.1992.International Conference on 
Computer Graphics and Interactive Techniques.2[14]P.Hough.Method and means for recognizing complex pat-terns.1962.In US Patent.1[15]M.Isenburg,P.Lindstrom,S.Gumhold,and J.Snoeyink.Large mesh simplification using processing sequences.2003.。
First-Principles Study of the Physical Properties of the Wide-Bandgap Semiconductor ZnS

Abstract
Zinc sulfide (ZnS) is a II-VI wide-bandgap intrinsic semiconductor with excess electrons. Its band gap is 3.67 eV, and it shows good photoluminescence and electroluminescence performance. At room temperature the band gap is about 3.7 eV, and the material offers good optical transmission and low dispersion in the visible and infrared range. ZnS and ZnS-based alloys have attracted increasing attention in semiconductor research. Because of their wide direct band gap and large exciton binding energy, they are promising for optoelectronic devices.

This thesis reviews the current research status of the wide-bandgap semiconductor ZnS at home and abroad, its structural properties, and its technical applications. The basic principles of density functional theory are described, and the theoretical foundations of first-principles calculation are summarized in detail. Using the plane-wave pseudopotential method within the generalized gradient approximation (GGA) of density functional theory, the electronic structure and optical properties of zinc-blende ZnS crystals are calculated with the CASTEP code. The electronic structure covers the band structure and density of states of zinc-blende ZnS; the optical properties cover the reflectivity, absorption spectrum, complex refractive index, dielectric function, optical conductivity spectrum, and loss function spectrum. The study of the band structure shows that zinc-blende ZnS is a direct-bandgap semiconductor, and the analysis of the optical spectra provides a good basis for predicting the behavior of zinc-blende ZnS in further studies.
Keywords: ZnS; wide-bandgap semiconductor; first principles; zinc-blende structure

First-principles Research on Physical Properties of Wide Bandgap Semiconductor ZnS

Abstract
Zinc sulfide (ZnS) is a new II-VI wide-bandgap, electron-excess intrinsic semiconductor material with good photoluminescence and electroluminescence properties. At room temperature its band gap is 3.7 eV, and it shows good optical transmission and low dispersion in the visible and infrared range. ZnS and ZnS-based alloys have received more and more attention in the field of semiconductor research. Because of their wide direct band gap and large exciton binding energy, they have good prospects in optoelectronic devices. This thesis describes the current research status, structural properties, and technical applications of the wide-bandgap semiconductor ZnS. It presents the basic principles of density functional theory, gives a detailed summary of the theoretical basis of first-principles calculations, and uses the plane-wave pseudopotential method within the generalized gradient approximation (GGA) of density functional theory, as implemented in the CASTEP software, to calculate the electronic structure and optical properties of zinc-blende (sphalerite) ZnS crystals. The electronic structure covers the band structure and the density of states; the optical properties cover the reflectance, absorption spectra, complex refractive index, dielectric function, optical conductivity spectrum, and loss function spectrum. From the band structure it is known that zinc-blende ZnS is a direct band gap semiconductor, and the analysis of the optical spectra provides a good basis for predicting further studies of zinc-blende ZnS.
Keywords: ZnS; wide-bandgap semiconductor; first principles; zinc-blende structure

Contents
Abstract
Chapter 1 Introduction
1.1 Research background of ZnS semiconductor materials
1.2 Basic properties and applications of ZnS
1.3 Research directions and progress of ZnS materials
1.4 ZnS crystals
1.4.1 ZnS crystal structure
1.4.2 ZnS band structure
1.5 Luminescence mechanism of ZnS
1.6 Purpose and main content of this work
2.1 Related theory
2.1.1 Density functional theory
2.1.2 Approximations to the exchange-correlation functional
2.2 Total energy calculation
2.2.1 Pseudopotential plane-wave method
2.2.2 Structure optimization
2.3 Features of the CASTEP package
Chapter 3 Electronic structure and optical properties of ZnS crystals
3.1 Electronic structure of zinc-blende ZnS
3.1.1 Lattice structure
3.1.2 Band structure
3.1.3 Density of states
3.2 Optical properties of zinc-blende ZnS crystals
Conclusion
Acknowledgements
References
Appendix A
Appendix B

Chapter 1 Introduction
1.1 Research background of ZnS semiconductor materials
Si is the most widely used semiconductor material; the key to the successful, widespread application of modern large-scale integrated circuits lies in the breakthroughs achieved with Si semiconductors in electronic devices.
PCL Normal-Angle Feature Point Extraction

1. What is a normal-angle feature point?
A normal-angle feature point is a local geometric feature that describes the angle between surface normal vectors. Normal-angle feature points can be used to detect abrupt changes, folds, and edges on a surface.

2. Normal-angle feature point extraction in PCL
PCL provides several algorithms for extracting normal-angle feature points; the most commonly used are curvature estimation and principal curvature analysis.

Curvature estimation. Curvature estimation detects normal-angle feature points by computing the surface curvature. Curvature measures how quickly the surface normal changes along the surface: the larger the curvature, the faster the surface varies. PCL provides several curvature estimation methods; the most common is the normal-vector approach, which estimates curvature from how the surface normal changes within a local neighborhood.

Principal curvature analysis. Principal curvature analysis detects normal-angle feature points by computing the two principal curvatures of the surface, i.e. the curvatures along the directions in which the surface normal changes fastest and slowest. PCL also provides methods based on the Gaussian curvature, which is derived from the two principal curvatures. A minimal PCL sketch of this pipeline is given below.
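As an illustration of the PCL workflow described above, the following sketch estimates per-point normals and then the two principal curvatures (whose product gives the Gaussian curvature). It is a minimal example, not code from the source: the file name scene.pcd, the neighborhood size k = 50, and the curvature threshold are placeholder choices.

#include <pcl/io/pcd_io.h>
#include <pcl/point_types.h>
#include <pcl/features/normal_3d.h>
#include <pcl/features/principal_curvatures.h>
#include <pcl/search/kdtree.h>
#include <cstdio>

int main() {
    // Load an unorganized point cloud (placeholder file name).
    pcl::PointCloud<pcl::PointXYZ>::Ptr cloud(new pcl::PointCloud<pcl::PointXYZ>);
    if (pcl::io::loadPCDFile("scene.pcd", *cloud) < 0) return -1;

    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);

    // Step 1: estimate surface normals from a k-nearest-neighbor plane fit.
    pcl::NormalEstimation<pcl::PointXYZ, pcl::Normal> ne;
    ne.setInputCloud(cloud);
    ne.setSearchMethod(tree);
    ne.setKSearch(50);
    pcl::PointCloud<pcl::Normal>::Ptr normals(new pcl::PointCloud<pcl::Normal>);
    ne.compute(*normals);

    // Step 2: estimate the principal curvatures k1 and k2 at each point.
    pcl::PrincipalCurvaturesEstimation<pcl::PointXYZ, pcl::Normal, pcl::PrincipalCurvatures> pce;
    pce.setInputCloud(cloud);
    pce.setInputNormals(normals);
    pce.setSearchMethod(tree);
    pce.setKSearch(50);
    pcl::PointCloud<pcl::PrincipalCurvatures>::Ptr curvatures(new pcl::PointCloud<pcl::PrincipalCurvatures>);
    pce.compute(*curvatures);

    // Step 3: flag points with large curvature as candidate feature points.
    const float threshold = 0.1f;  // arbitrary illustrative value
    std::size_t featureCount = 0;
    for (const auto& c : curvatures->points) {
        if (c.pc1 > threshold) ++featureCount;  // pc1 = largest principal curvature
    }
    std::printf("%zu candidate feature points out of %zu\n",
                featureCount, curvatures->points.size());
    return 0;
}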
3. Applications of normal-angle feature point extraction
Normal-angle feature point extraction is widely used in computer vision and robotics; the most common applications include the following.

Surface reconstruction. Normal-angle feature points can be used to reconstruct surfaces. Surface reconstruction is the process of recovering a surface from a set of irregular point cloud data. Normal-angle feature points help determine the boundaries and edges of the surface, improving the accuracy of the reconstruction.

Object recognition. Normal-angle feature points can be used to recognize objects. Object recognition is the process of identifying objects from a set of images or point cloud data. Normal-angle feature points help determine the shape and contour of objects, improving recognition accuracy.

Robot navigation. Normal-angle feature points can help robots navigate. Robot navigation is the process of a robot moving autonomously through its environment. Normal-angle feature points help the robot detect obstacles and dangerous regions, improving the safety of navigation.

4. Summary
Normal-angle feature point extraction is a local geometric feature extraction technique that detects abrupt changes, folds, and edges on a surface. PCL provides several algorithms for it, the most common being curvature estimation and principal curvature analysis. The technique is widely used in computer vision and robotics, with the most common applications being surface reconstruction, object recognition, and robot navigation.
AbstractCompressive sensing and sparse inversion methods have gained a significant amount of attention in recent years due to their capability to accurately reconstruct signals from measurements with significantly less data than previously possible. In this paper, a modified Gaussian frequency domain compressive sensing and sparse inversion method is proposed, which leverages the proven strengths of the traditional method to enhance its accuracy and performance. Simulation results demonstrate that the proposed method can achieve a higher signal-to- noise ratio and a better reconstruction quality than its traditional counterpart, while also reducing the computational complexity of the inversion procedure.IntroductionCompressive sensing (CS) is an emerging field that has garnered significant interest in recent years because it leverages the sparsity of signals to reduce the number of measurements required to accurately reconstruct the signal. This has many advantages over traditional signal processing methods, including faster data acquisition times, reduced power consumption, and lower data storage requirements. CS has been successfully applied to a wide range of fields, including medical imaging, wireless communications, and surveillance.One of the most commonly used methods in compressive sensing is the Gaussian frequency domain compressive sensing and sparse inversion (GFD-CS) method. In this method, compressive measurements are acquired by multiplying the original signal with a randomly generated sensing matrix. The measurements are then transformed into the frequency domain using the Fourier transform, and the sparse signal is reconstructed using a sparsity promoting algorithm.In recent years, researchers have made numerous improvementsto the GFD-CS method, with the goal of improving its reconstruction accuracy, reducing its computational complexity, and enhancing its robustness to noise. In this paper, we propose a modified GFD-CS method that combines several techniques to achieve these objectives.Proposed MethodThe proposed method builds upon the well-established GFD-CS method, with several key modifications. The first modification is the use of a hierarchical sparsity-promoting algorithm, which promotes sparsity at both the signal level and the transform level. This is achieved by applying the hierarchical thresholding technique to the coefficients corresponding to the higher frequency components of the transformed signal.The second modification is the use of a novel error feedback mechanism, which reduces the impact of measurement noise on the reconstructed signal. Specifically, the proposed method utilizes an iterative algorithm that updates the measurement error based on the difference between the reconstructed signal and the measured signal. This feedback mechanism effectively increases the signal-to-noise ratio of the reconstructed signal, improving its accuracy and robustness to noise.The third modification is the use of a low-rank approximation method, which reduces the computational complexity of the inversion algorithm while maintaining reconstruction accuracy. This is achieved by decomposing the sensing matrix into a product of two lower dimensional matrices, which can be subsequently inverted using a more efficient algorithm.Simulation ResultsTo evaluate the effectiveness of the proposed method, we conducted simulations using synthetic data sets. Three different signal types were considered: a sinusoidal signal, a pulse signal, and an image signal. 
The results of the simulations were compared to those obtained using the traditional GFD-CS method.The simulation results demonstrate that the proposed method outperforms the traditional GFD-CS method in terms of signal-to-noise ratio and reconstruction quality. Specifically, the proposed method achieves a higher signal-to-noise ratio and lower mean squared error for all three types of signals considered. Furthermore, the proposed method achieves these results with a reduced computational complexity compared to the traditional method.ConclusionThe results of our simulations demonstrate the effectiveness of the proposed method in enhancing the accuracy and performance of the GFD-CS method. The combination of sparsity promotion, error feedback, and low-rank approximation techniques significantly improves the signal-to-noise ratio and reconstruction quality, while reducing thecomputational complexity of the inversion procedure. Our proposed method has potential applications in a wide range of fields, including medical imaging, wireless communications, and surveillance.。
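The abstract above describes the method only at a high level, and its specific contributions (hierarchical thresholding, error feedback, low-rank approximation) are not reproduced here. As background, the sketch below shows the basic sparsity-promoting recovery step that such compressive sensing pipelines build on: a random Gaussian sensing matrix, measurements y = Ax, and reconstruction by the iterative soft-thresholding algorithm (ISTA). All sizes and parameter values are illustrative assumptions.

#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// Soft-thresholding operator used by ISTA to promote sparsity.
static double softThreshold(double v, double t) {
    if (v > t) return v - t;
    if (v < -t) return v + t;
    return 0.0;
}

int main() {
    const int n = 128;   // signal length
    const int m = 48;    // number of compressive measurements
    std::mt19937 rng(42);
    std::normal_distribution<double> gauss(0.0, 1.0);

    // Sparse ground-truth signal with a few non-zero entries.
    std::vector<double> x(n, 0.0);
    x[10] = 1.5; x[40] = -2.0; x[97] = 0.8;

    // Random Gaussian sensing matrix A (m x n) and measurements y = A x.
    std::vector<double> A(m * n);
    for (double& a : A) a = gauss(rng) / std::sqrt(static_cast<double>(m));
    std::vector<double> y(m, 0.0);
    for (int i = 0; i < m; ++i)
        for (int j = 0; j < n; ++j) y[i] += A[i * n + j] * x[j];

    // ISTA iteration: xk <- soft(xk + step * A^T (y - A xk), step * lambda).
    std::vector<double> xk(n, 0.0), r(m), grad(n);
    const double step = 0.05, lambda = 0.01;
    for (int it = 0; it < 1000; ++it) {
        for (int i = 0; i < m; ++i) {
            r[i] = y[i];
            for (int j = 0; j < n; ++j) r[i] -= A[i * n + j] * xk[j];
        }
        for (int j = 0; j < n; ++j) {
            grad[j] = 0.0;
            for (int i = 0; i < m; ++i) grad[j] += A[i * n + j] * r[i];
            xk[j] = softThreshold(xk[j] + step * grad[j], step * lambda);
        }
    }

    // Report the reconstruction error against the ground truth.
    double err = 0.0;
    for (int j = 0; j < n; ++j) err += (xk[j] - x[j]) * (xk[j] - x[j]);
    std::printf("squared reconstruction error: %g\n", err);
    return 0;
}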
P R O F E S S I O N A LTECHNICAL DATAControlSpace EX-1280 digital signal processorApplicationsPerformance venues Stadiums and arenas Places of worshipResorts and Hospitality Venues Multipurpose spacesKey FeaturesOpen-architecture , 1RU design built for general-purpose and PA applications USB connection facilitates easy integration with PC audio sourcesDante audio networking supports 64 x 64 audio channels for connection to other Dante-enabled products, including native Dante-integrated microphones, amplifiers, mixers, and end pointsBose AmpLink port provides 8 channels of uncompressed, low-latency digital audio to AmpLink-equipped Bose amplifiersFront-panel interface features a large OLED display and rotary encoder for setting network parameters and monitoring channel activityGPIO (5 in/5 out) and Serial for interfacing with external devices and control systemsHigh-quality analog circuitry offers both mic and line-level I/O, operates with ultra-low noise and 118 dB dynamic range.Bose ControlSpace Designer software enables a large set of signal processing modules, such as automatic micmixing, multiband graphic and parametric EQs, Bose loudspeaker libraries, signal generators, routers, mixers, AGCs, duckers, gates, compressors, source selectors, delays and logicA variety of control options— compatible with the programmable Bose CC-64 and CC-16 controllers, ControlCenter digital zone controllers, and ControlSpace Remote clientsSupports industry-standard control systems using a comprehensive serial protocol through onboard RS-232 and Ethernet ports, with available drivers for AMX and Crestron-based systemsProduct OverviewWith an open-architecture, single-rack-unit design, the ControlSpace EX-1280 is a robust digital signal processor equipped for general-purpose audio processing applications. Twelve mic/line analog inputs, eight analog outputs, a Bose AmpLink output, and 64 x 64 Dante® connectivity allow for flexible configuration and high-quality sound system control. ControlSpace Designer software simplifies the setup process with drag-and-drop programming, making configuration quick and easy.Technical Specificationsq Front-panel OLED Display and Encoder – 256 x 64 display for metering and network info. Rotary/press knob for IP setupq Balanced Analog I/O – 12 inputs, 8 outputswG PIO – 5 x 5 general-purpose control eC ontrolSpace Network Port – ControlSpace/Dante secondary when configured for redundant mode r Dante Network Port – ControlSpace/Dante Primary by defaulttU SB Port – Micro-B USB for PC with stereo input and output y USB Port – Future useuB ose Amplink – 8-channel uncompressed, low-latency digital audio output iS erial Port – 3-wire RS-232C (DTE) serial interface connection oC C-16 – Supports Bose CC-16 user controls qq w r t y u i oeDante is a registered trademark of Audinate Pty Ltd. For additional specifications and application information, please visit . Specificationssubject to change without notice. 06/2019Product CodesControlSpace EX-1280 digital signal processor US-120V 834317-1110EU-230V 834317-2110JP-100V 834317-3110UK-230V834317-4110AU-240V 834317-5110AccessoriesControlSpace EX-UH USB/Headset Dante endpoint 771784-0110ControlSpace EX-4ML 4-ch mic/GPIO Dante endpoint 771783-0110ControlSpace EX-8ML 8-ch mic/GPIO Dante endpoint772045-0110。
Software Guide, Vol. 22 No. 4, Apr. 2023

UAV Image Mosaic Algorithm Based on Improved ORB
ZHANG Ping, SUN Lin, HE Xian-hui
(College of Geodesy and Geomatics, Shandong University of Science and Technology, Qingdao 266590, China)

Abstract: Aiming at the problems that traditional image stitching algorithms are slow and inefficient when mosaicking UAV remote sensing images and cannot meet the requirements of real-time, accurate stitching, an improved ORB-based image stitching algorithm is proposed. First, a scale pyramid is constructed and feature points are extracted with the ORB algorithm; the feature points are then described with the BEBLID descriptor, and coarse matching is performed with the nearest-neighbor distance ratio (NNDR) test. Next, an optimal geometric constraint built from feature-point voting further refines the matched points, and the random sample consensus (RANSAC) algorithm is used to compute a high-precision transformation matrix. Finally, an improved gradual-in/gradual-out weighted fusion algorithm blends the images into a mosaic. Experimental results show that the registration accuracy of the proposed algorithm reaches up to 100%, registration takes less than 0.91 s, and the information entropy of the mosaic image reaches 6.8079. Compared with traditional algorithms, the proposed algorithm stitches more efficiently, obtaining higher-quality mosaics while reducing stitching time; its performance is significantly improved.
Key words: image mosaic; multi-scale FAST detection; BEBLID feature; optimal geometric constraint
DOI: 10.11907/rjdk.222267
CLC number: TP391.41    Document code: A    Article ID: 1672-7800(2023)004-0156-06

0 Introduction
In recent years, UAV aerial photography has become increasingly mature and is widely used in remote sensing monitoring [1], power-line inspection [2], disaster surveying [3], military reconnaissance [4] and other fields.
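The pipeline described in the abstract can be prototyped with off-the-shelf OpenCV components. The sketch below is only an illustration under assumptions of our own — it requires opencv-contrib-python (which ships the BEBLID descriptor), uses a plain ratio test for NNDR, omits the paper's feature-point-voting geometric constraint, and blends with a simple linear ramp; it is not the authors' implementation and the parameter values are arbitrary.

    import cv2
    import numpy as np

    def stitch_pair(img_left, img_right):
        # ORB keypoints on an internal image pyramid (multi-scale FAST detection).
        orb = cv2.ORB_create(nfeatures=5000, scaleFactor=1.2, nlevels=8)
        kp1 = orb.detect(img_left, None)
        kp2 = orb.detect(img_right, None)

        # Re-describe the ORB keypoints with BEBLID (opencv-contrib >= 4.5.1).
        beblid = cv2.xfeatures2d.BEBLID_create(1.0)
        kp1, des1 = beblid.compute(img_left, kp1)
        kp2, des2 = beblid.compute(img_right, kp2)

        # Coarse matching with the nearest-neighbour distance ratio (NNDR) test.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        good = []
        for pair in matcher.knnMatch(des1, des2, k=2):
            if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance:
                good.append(pair[0])

        # Homography estimated with RANSAC (the paper adds a geometric-constraint
        # voting step on the matches before this stage).
        src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)

        # Warp the left image into the right image's frame and blend with a simple
        # gradual-in/gradual-out (linear alpha ramp) fusion; assumes 3-channel images.
        h, w = img_right.shape[:2]
        warped = cv2.warpPerspective(img_left, H, (w, h))
        alpha = np.linspace(1.0, 0.0, w, dtype=np.float32).reshape(1, w, 1)
        mosaic = alpha * warped.astype(np.float32) + (1.0 - alpha) * img_right.astype(np.float32)
        return mosaic.astype(np.uint8)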
Navigator 600 Silica — silica analyzer
Cost-effective automated monitoring of silica for a wide range of applications

Lowest cost-of-ownership
– up to 90 % lower reagent consumption than competitors' analyzers
– labour-saving 5 minute annual maintenance and up to 3 months unattended operation
– field upgradeable from 2 to 4; 2 to 6 or 4 to 6 streams
Easy to use
– familiar Windows™ menu system
– built-in context-sensitive help
Full communications
– web- and ftp-enabled for easy data file access, remote viewing and configuration
– optional Profibus® DP V1.0
Fast, accurate and reliable
– automatic cleaning, calibration and zero deliver high accuracy measurements
– extensive electronics, measurement and maintenance diagnostics ensure high availability
– true auto-zero compensates for sample color, turbidity and background silica in reagents
– temperature-controlled reaction and measurement section for optimum response

Introduction
Many years of experience and innovation in the design and successful application of continuous chemical analyzers has been combined with the latest electronics and production technologies to produce the Navigator 600 Series of analyzers from ABB.
Developed as fully continuous analyzers offering wide dynamic ranging, the Navigator 600 Series incorporates greater simplicity and functionality than ever before. Based on colorimetric techniques, they feature a liquid handling section carefully designed to reduce routine maintenance. Utilizing powerful electronics, advanced features such as automatic calibration, continuous sample analysis and programmable multi-stream switching ensure accurate and simple measurement of silica.
Process data, as well as the content of alarm and audit logs, can be saved to a removable SD card in binary and comma-delimited formats for record keeping and analysis using ABB's DataManager data analysis software package.
A very low cost of ownership has been achieved by reducing the reagent consumption and simplifying the maintenance requirements. The size of the instrument has been reduced to a compact, ergonomically-designed, wall-mounted case, thus providing a very small footprint.

Applications
Typical applications for the Navigator 600 Silica are:
– Demineralization plants for power and process industries: monitoring the outlet of the anion and mixed beds for silica breakthrough, providing indication of bed exhaustion and final water quality.
– Boiler systems: monitoring boiler drum water, providing information on the contamination levels in the boiler; monitoring silica carryover in saturated steam, thus protecting turbine blades from potentially excessive scale build up; monitoring the exhaustion of ion exchangers in a condensate polishing plant.

Operation
General
The Navigator 600 Silica is an on-line analyzer, designed to provide continuous monitoring of silica concentration utilizing a standard colorimetric analysis principle.
Liquid Handling
The chemistry employed for silica measurement is the industry standard Molybdenum Blue reaction. Sample and reagents are drawn into the instrument by two multichannel peristaltic pumps. These are designed and constructed to ensure only simple yearly maintenance is required.
The reagents are added to the sample in a temperature-controlled reaction block and the fully reacted sample is then passed through an in-line measuring cuvette. The optical measuring system enables accurate detection of silica concentrations from 0 to 5000 ppb. The instrument includes a manual sampling facility that enables the analysis of grab samples.

Solution Replacement
Liquid Handling Section: continuous
Reagents: 3 months
Calibration Standard: 3 months
Cleaning Solution: 3 months

Electronics
The main electronic transmitter consists of a display and key pad accessible from the front of the unit. Indication of all parameters is provided by a large backlit LCD display that is easy to read in all light conditions. Under normal operating conditions, measured values are displayed; programming data is displayed during set-up and also on demand. Units and range of measurement, alarm values and standard solution values are examples of the many programmable functions.
Keeping simplicity of operation at the forefront of design, six fingertip-operated tactile membrane switches control local operation of the analyzer and provide easy access to all parameters.
The Navigator 600 Silica is provided with 4 dedicated relays, 6 user-programmable relays and 6 current outputs as standard. Profibus DP V1.0 is available as an option.

Ethernet Communications
The Navigator 600 Silica can provide 10BaseT Ethernet communications via a standard RJ45 connector and uses industry-standard protocols TCP/IP, FTP and HTTP. The use of standard protocols enables easy connection into existing PC networks.

Data File Access via FTP (File Transfer Protocol)
The Navigator 600 Silica features FTP server functionality. The FTP server in the analyzer is used to access its file system from a remote station on a network. This requires an FTP client on the host PC. Both MS-DOS® and Microsoft® Internet Explorer version 5.5 or later can be used as an FTP client.
– Using a standard web-browser or other FTP client, data files contained within the analyzer's memory or memory card can be accessed remotely and transferred to a PC or network drive.
– Four individual FTP users' names and passwords can be programmed into the Navigator 600 Silica. An access level can be configured for each user.
– All FTP log-on activity is recorded in the audit log of the instrument.
– Using ABB's data file transfer scheduler program, data files from multiple instruments can be backed up automatically to a PC or network drive for long-term storage, ensuring the security of valuable process data and minimizing the operator intervention required.

(Figures: display and keypad; chart view display; FTP access — the Navigator 600 Silica acting as FTP server, reached over Ethernet by an FTP client.)
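Because the analyzer exposes its file system over standard FTP, the backup workflow described above can be automated with any stock FTP client. The snippet below is a generic illustration only — the address, credentials and file layout are placeholders of our own, not values documented for the instrument:

    from ftplib import FTP

    ANALYZER_IP = "192.168.1.50"   # placeholder: use the analyzer's configured address

    ftp = FTP(ANALYZER_IP)
    ftp.login(user="operator", passwd="secret")   # one of the four configurable FTP users

    # Download every file visible in the current directory of the analyzer's file system.
    for name in ftp.nlst():
        with open(name, "wb") as fh:
            ftp.retrbinary("RETR " + name, fh.write)

    ftp.quit()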
Embedded Web Server
The Navigator 600 Silica has an embedded web-server that provides access to web pages created within the instrument. The use of HTTP (Hyper Text Transfer Protocol) enables standard web browsers to view these pages.
– Accessible through the web pages are the current display of the analyzer, detailed information on stream values, reagent and solution levels, measurement status and other key information.
– The audit and alarm logs stored in the Navigator 600 Silica's internal buffer memory and memory card can be viewed on the web pages.
– Operator messages can be entered via the web server, enabling comments to be logged to the instrument.
– The web pages and the information they contain are refreshed regularly, enabling them to be used as a supervision tool.
– The analyzer's configuration can be selected from an existing configuration in the internal memory or a new configuration file transferred to the instrument via FTP.
– The analyzer's real-time clock can be set via the web server. Alternatively, the clocks of multiple analyzers can be synchronized using ABB's File Transfer Scheduler software.

Email Notification
Via the Navigator 600 Silica's built-in SMTP client, the analyzer is able to email notification of important events. Emails triggered from alarms or other critical events can be sent to multiple recipients. The analyzer can also be programmed to email reports of the current measurement status or other parameters at specific times during the day.

Profibus
The Navigator 600 Silica can be equipped (option) with Profibus DP V1.0 to enable full communications and control integration with distributed control systems.

Maintenance
The analyzer has been designed to maximize on-line availability by reducing routine maintenance to a minimum. Yearly maintenance consists of simply replacing pump capstans and pump tube assemblies, an operation that can take as little as five minutes.
Fully automatic calibration, zeroing and cleaning functions enable the analyzer to keep operational with minimal manual intervention. A predictive alarm alerts the user to reagent solution replacement being required. The cleaning and calibration solutions have a sensor to detect when replacement is necessary.

Options
Multi-stream Facility
A fully programmable multi-stream option is available on the Navigator 600 Silica on-line analyzer, providing up to six-stream capability including individual current output and visual indication as well as user-programmable stream sequencing. The analyzers are designed to be easily upgradeable in the field to two, four or six streams.

(Figures: simple-to-replace pump tube assemblies; six-streams display.)

Specification
Silica Measurement
Range: fully user programmable 0 to 5000 ppb SiO2, minimum range 0 to 50 ppb
Measurement Modes
Sample stream options: available as single stream or multi-stream in 2, 4 or 6 stream configurations
Single-stream Performance
Measurement method: continuous chemistry and measurement operation
Response time: <15 min
(90% step change)
Typical accuracy: <±2% of reading or ±0.5 ppb (whichever is the greater) over the range 0 to 500 ppb; <±5% of reading over the range 500 to 5000 ppb
Repeatability: <±2% of reading or ±0.5 ppb (whichever is the greater) over the range 0 to 500 ppb; <±3% of reading over the range 500 to 5000 ppb

Multi-stream Performance
Measurement method: continuous chemistry with a minimum 12 minutes per stream measurement update; sample rate programmable between 12 minutes minimum and 60 minutes maximum
Response time: minimum update time 12 minutes
Typical accuracy*: <±2% of reading or ±0.5 ppb (whichever is the greater) over the range 0 to 500 ppb; <±5% of reading over the range 500 to 5000 ppb (* dependent on sample rate – refer to table on page 2)
Repeatability: <±2% of reading or ±0.5 ppb (whichever is the greater) over the range 0 to 500 ppb; <±3% of reading over the range 500 to 5000 ppb

Solution Requirements
Number: 4 reagents (2.5 l bottles), 1 standard solution (0.5 l bottle), 1 cleaning solution (0.5 l bottle)
Reagent consumption: continuous operation mode, 2.5 l max. per 90 days

Display
Color, passive matrix, liquid crystal display (LCD) with built-in backlight and brightness adjustment; 76800 pixel display (a small percentage of the display pixels may be either constantly active or inactive; max. percentage of inoperative pixels <0.01%)
Dedicated operator keys: Group Select/Left cursor; View Select/Right cursor; Menu key; Up/Increment key; Down/Decrement key; Enter key

Mechanical Data
Ingress protection: IP31** – wet section (critical components IP66); IP66 – transmitter (** not evaluated for UL or CB)
Dimensions: diagonal display area 144 mm (5.7 in.); height 638 mm (25.1 in.) plus constant head bracket 186 mm (7.3 in.); width 271 mm (10.7 in.); depth 182 mm (7.2 in.); weight 15 kg (33 lbs)
Materials of construction: electronics enclosure 20% glass loaded polypropylene; main enclosure Noryl; lower tray 10% glass loaded polypropylene; door acrylic
Sample connections: inlet 6 mm (1/4 in.) flexible hose connection; outlet 9 mm (1/4 in.) flexible hose connection

Environmental Data
Ambient operating temperature: 5 to 45 ºC (41 to 113 ºF)
Sample temperature: 5 to 55 ºC (41 to 131 ºF)
Sample particulate: <60 microns, <10 mg/l
Sample flow rate: >5 ml/min / <500 ml/min
Sample pressure: atmospheric
Storage temperature: –20 to 75 ºC (–4 to 167 ºF)
Ambient operating humidity: up to 95% RH non-condensing

Electrical
Supply ranges: 100 to 240 V max. AC 50/60 Hz ±10 % (90 to 264 V AC, 45/65 Hz); 18 to 36 V DC, 10 A power supply typical (optional)
Power consumption: 60 W max. – AC; 100 W max. – DC
Analog Outputs
Single and multi-stream analyzers: 6 isolated current outputs
– galvanically isolated (to 500 V DC) from each other and all other circuitry
– fully assignable and programmable over a 0 to 20 mA range (up to 22 mA if required)
– drives maximum 750 Ω load

Wetted Materials
PMMA (acrylic), PP (polypropylene), PTFE, PP (20% glass filled), PEEK, NBR (nitrile), EPDM, Santoprene, PTFE (15% polysulphane), NORYL, borosilicate glass, acrylic adhesive

Alarms/Relay Outputs
Single and multi-stream instruments.
One per unit:
– Out of service alarm relay
– Calibration in progress alarm relay
– Calibration failed alarm relay
– Maintenance/Hold alarm relay
Six per unit:
– fully user-assignable alarm relays
Rating: voltage 250 V AC / 30 V DC; current 5 A AC / 5 A DC; loading (non-inductive) 1250 VA / 150 W

Connectivity/Communications
Ethernet connection; web server with ftp for real-time monitoring, configuration, data file access and email capability
Bus communications: Profibus DP V1 (optional)

Data Handling, Storage and Display
Security: multi-level security – user, configuration, calibration and maintenance pages
Storage: removable Secure Digital (SD) card – maximum size 2 GB
Trend analysis: local and remote
Data transfer: SD card or FTP

Approvals, Certification and Safety
Safety approval: cULus – pending
CE Mark: covers EMC & LV Directives (including latest version EN 61010)
General safety: EN 61010-1; Overvoltage Class II on inputs and outputs; Pollution category 2
EMC emissions & immunity: meets requirements of IEC 61326 for an industrial environment

(Figures: overall dimensions of the Navigator 600 Silica; reagent bottles mounted on optional brackets, two bottles per bracket.)

Ordering Information
Supplied with analyzer: reagent and calibration containers.
Silica Analyzer AW641/XXXXXXXX — option codes:
– Range: 0 ... 5000 ppb = 5
– Number of streams: measuring 1 stream = 1; measuring 2 streams = 2; measuring 3 or 4 streams = 4; measuring 5 or 6 streams = 6
– Communications: none = 0; Profibus DP V1.0 = 1
– Enclosure: standard = 0; standard + reagent shelves = 1; standard + reagent shelves + reagent sensors = 2
– Power supply: 100 ... 240 V AC 50/60 Hz = 0; 18 ... 36 V DC = 1
– Reserved: build = 9
– Manual: English = 1; French = 2; Italian = 3; German = 4; Spanish = 5
– Certification: none = 0; certificate of calibration = 1; cULus (pending) = 2

Benefits summary
– Lowest cost-of-ownership: up to 90 % lower reagent consumption than competitors' analyzers; labour-saving 5 minute annual maintenance and up to 6 months unattended operation
– Easy to use: familiar Windows™ menu system; built-in context-sensitive help
– Full communications: web- and ftp-enabled for easy data file access, remote viewing and configuration; optional Profibus DP V1.0
– Fast, accurate and reliable: temperature-controlled reaction and measurement section for optimum response; automatic cleaning, calibration and zero deliver high accuracy measurements; extensive electronics, measurement and maintenance diagnostics ensure high availability
– Field upgradeable: from 2 to 4; 2 to 6 or 4 to 6 streams, each user-programmable from 0 to 5000 ppb
– Compact size: 638 mm (25.1 in.) H x 271 mm (10.7 in.) W x 182 mm (7.2 in.) D
– Email facility: automatically email up to 6 recipients when user-selected events occur
– Grab sample facility: for manual sampling
– Multiple outputs and relays: 6 current outputs, 4 device state and 6 user-programmable relays as standard
– Archiving facility: SD data card for easy backup and programming
– Auto-zero facility: true auto-zero compensates for sample color, turbidity and background silica in reagents
– Instrument logs: alarm and audit logs for complete, secure records

Contact us
DS/NAV6S-EN Rev. J 10.2011
ABB Limited, Process Automation, Oldends Lane, Stonehouse, Gloucestershire GL10 3TA, UK — Tel: +44 1453 826 661, Fax: +44 1453 829 671
ABB Inc., Process Automation, 125 E. County Line Road, Warminster PA 18974, USA — Tel: +1 215 674 6000, Fax: +1 215 674

Note
We reserve the right to make technical changes or modify the contents of this document without prior notice. With regard to purchase orders, the agreed particulars shall prevail. ABB does not accept any responsibility whatsoever for potential errors or possible lack of information in this document.
We reserve all rights in this document and in the subject matter and illustrations contained therein. Any reproduction, disclosure to third parties or utilization of its contents – in whole or in parts – is forbidden without prior written consent of ABB.
Copyright © 2011 ABB. All rights reserved.
3KXA841601R1001
Windows™, Microsoft™, MS-DOS™ and Internet Explorer™ are registered trademarks of Microsoft Corporation in the United States and/or other countries. PROFIBUS™ is a registered trademark of PROFIBUS corporation.
Proof of the Central Slice Theorem

The central slice theorem is one of the fundamental theories behind computed tomography (CT) image reconstruction. It was first put forward in 1948 by Thomas S. Furry and is also referred to as Fubini's slice theorem. The central slice theorem is one of the core principles of CT imaging and has important applications in medical imaging, computer-aided design and manufacturing, and other fields.

The basic idea of the central slice theorem is the following: scanning an object in a two-dimensional plane yields a set of slice images parallel to the scanning plane. These slices contain information about the object at different positions, but because the spacing between slices is relatively large, detailed information about some regions of the object cannot be obtained. The central slice theorem provides a way to increase the number and density of slice images by restricting the range of scanning angles, and thereby to reconstruct the object more accurately.

To simplify the problem, we first consider the two-dimensional case. Suppose a two-dimensional object lies in the plane; by scanning it from different angles we obtain a set of projection data. For convenience, assume the object is completely transparent, i.e. there is no absorption or attenuation at all. By the equivalence principle of optics, if we place a transparent flat plate into the projection data, it produces the same projection data as the original object.

Now consider a scan performed at one fixed angle, using a detector to measure the intensity of the parallel beam passing through the object. Assume the scan is performed from directly above, i.e. the detector lies above the object and the light source below it. Connecting the intensity measurements at the individual detector positions gives an intensity profile of the detector.

Next, rotate the detector by a certain angle and record the corresponding detector intensity profile. Scanning at different angles yields a series of detector intensity profiles, which correspond to different slice positions. According to the central slice theorem, the density distribution of the object can be reconstructed from these detector intensity profiles.

The mathematical derivation of the central slice theorem requires some more advanced mathematical tools and methods, such as the Fourier transform and the Radon transform. This is beyond the scope of this text; here we only sketch the basic principle of the theorem.
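For reference, the standard mathematical statement of the theorem (in notation that does not appear in the text above) is that the 1-D Fourier transform of a parallel projection of f(x, y) taken at angle θ equals the slice through the origin, at the same angle, of the 2-D Fourier transform of f:

    P_\theta(t) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}
        f(x,y)\,\delta(x\cos\theta + y\sin\theta - t)\,\mathrm{d}x\,\mathrm{d}y ,
    \qquad
    \hat{P}_\theta(\omega) = F(\omega\cos\theta,\ \omega\sin\theta),

where \hat{P}_\theta denotes the 1-D Fourier transform of the projection P_\theta and F(u, v) the 2-D Fourier transform of f(x, y). Reconstruction methods such as filtered back-projection follow from inverting F using these radial slices.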
Thesis Proposal: Thickness Chirp and Genetic-Algorithm Study of Sidelobe Compression in One-Dimensional Fibonacci-Class Photonic Quasicrystals

Introduction:
Photonic quasicrystals are structures built from the microstructure of periodic dielectric media; they exhibit special electromagnetic-wave transmission and optical properties and are widely applied in photonics. Among them, one-dimensional Fibonacci-class photonic quasicrystals feature a high quality factor and low mode crossing, making them an important class of optical microcavity structures. Rotating-gradient phase compression is one way of achieving a high photon density in one-dimensional Fibonacci-class photonic quasicrystals and is applicable to metrology, quantum communication and related fields. This proposal addresses the thickness chirp and the genetic-algorithm optimization of rotating-gradient phase compression in one-dimensional Fibonacci-class photonic quasicrystals.

Research objectives:
Through theoretical calculation and numerical simulation, this study investigates the thickness chirp and the genetic-algorithm optimization of rotating-gradient phase compression in one-dimensional Fibonacci-class photonic quasicrystals, laying a foundation for theoretical exploration and practical application of photonic quasicrystals.

Research methods:
1. Theoretical analysis: through theoretical analysis and calculation, investigate the mechanism of rotating-gradient phase compression in one-dimensional Fibonacci-class photonic quasicrystals and the factors that influence it.
2. Numerical simulation: use numerical methods such as the finite-element method and space–time finite-difference methods to simulate and analyse the electromagnetic-field distribution and light-transmission characteristics during rotating-gradient phase compression in one-dimensional Fibonacci-class photonic quasicrystals.
3. Genetic-algorithm optimization: based on the ideas of genetic-algorithm optimization, design an optimized thickness chirp for rotating-gradient phase compression in one-dimensional Fibonacci-class photonic quasicrystals so as to improve the compression effect and its stability (a generic illustration of such an optimization loop is sketched after this list).
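As a purely illustrative sketch of the genetic-algorithm step in item 3 (not part of the proposal itself), a generic optimization loop over the layer-thickness chirp could be organized as below. The fitness function is a placeholder — evaluating the real sidelobe level would require a transfer-matrix (or similar) computation of the structure's spectrum — and all parameter values are arbitrary assumptions:

    import numpy as np

    rng = np.random.default_rng(0)

    def sidelobe_level(thicknesses):
        # Placeholder fitness: a real study would compute the reflection spectrum of the
        # layer stack and return the height of its largest sidelobe, to be minimized.
        return float(np.sum((thicknesses - thicknesses.mean()) ** 2))

    def genetic_optimize(n_layers=34, pop_size=40, generations=200,
                         mutation_scale=5e-9, bounds=(50e-9, 300e-9)):
        pop = rng.uniform(bounds[0], bounds[1], size=(pop_size, n_layers))
        for _ in range(generations):
            fitness = np.array([sidelobe_level(ind) for ind in pop])
            order = np.argsort(fitness)             # lower sidelobe level is better
            parents = pop[order[: pop_size // 2]]   # truncation selection
            # uniform crossover between randomly paired parents
            idx = rng.integers(0, len(parents), size=(pop_size, 2))
            mask = rng.random((pop_size, n_layers)) < 0.5
            children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
            # Gaussian mutation, clipped to the allowed thickness range
            children += rng.normal(0.0, mutation_scale, children.shape)
            pop = np.clip(children, bounds[0], bounds[1])
        best = pop[np.argmin([sidelobe_level(ind) for ind in pop])]
        return best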
Expected results:
This study is expected to yield theoretical analyses and numerical-simulation results for rotating-gradient phase compression in one-dimensional Fibonacci-class photonic quasicrystals, and, through genetic-algorithm optimization, thickness-chirp parameters giving better compression and higher stability. The results should have theoretical and practical value in photonics and provide reference and guidance for related research and technology development.
A Bisection-Search Algorithm for Line Clipping against a Convex Polygon Window
Li Weiqing

Journal: Journal of Computer-Aided Design & Computer Graphics
Year (volume), issue: 2005, 17(5)
Abstract: Building on Skala's algorithm, a faster line-clipping algorithm is proposed. The algorithm splits the clipping window into four polylines and, from the positions of a polyline's two endpoints relative to the line being clipped, decides whether the polyline intersects the line; a bisection (binary) search is then used to quickly determine which window edges the line crosses and to compute the intersection points. Compared with the Cyrus-Beck algorithm, the new algorithm has a very clear advantage in the number of multiplications and divisions and in computation speed, and it is also more efficient than Skala's algorithm.
Pages: 4 (962-965)
Author: Li Weiqing
Affiliation: State Key Laboratory of CAD&CG, Zhejiang University, Hangzhou 310027, China
Language: Chinese
CLC number: TP391
Related literature:
1. Research on line-clipping algorithms for convex polygon windows in computer graphics [J], Zhao Xiaofeng
2. A new line-clipping algorithm for convex polygon windows [J], Sun Xiehua
3. A line-clipping algorithm for convex polygon windows based on the intersection-sign method [J], Zhang Jianda, Zhang Quanhuo
4. Clipping convex polygons against a circular window [J], Du Yuyue
5. A convex-polygon window clipping algorithm based on the cross product [J], Tang Jinglin, Zhang Qing, et al.
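The record above gives only the abstract. For context, a minimal parametric clip of a segment against a convex window in the Cyrus–Beck style — the baseline the paper compares against — might look like the sketch below; the paper's contribution is to locate the crossed window edges by bisection instead of testing every edge as this sketch does:

    def clip_segment_convex(p0, p1, polygon):
        """Cyrus-Beck style clipping of segment p0->p1 against a convex polygon whose
        vertices are given in counter-clockwise order. Returns the clipped endpoints,
        or None if the segment lies entirely outside the window."""
        dx, dy = p1[0] - p0[0], p1[1] - p0[1]
        t_in, t_out = 0.0, 1.0
        n = len(polygon)
        for i in range(n):
            ax, ay = polygon[i]
            bx, by = polygon[(i + 1) % n]
            # inward normal of edge (a, b) for a counter-clockwise polygon
            nx, ny = -(by - ay), bx - ax
            num = nx * (ax - p0[0]) + ny * (ay - p0[1])
            den = nx * dx + ny * dy
            if den == 0.0:
                if num > 0.0:          # parallel to the edge and outside its half-plane
                    return None
                continue
            t = num / den
            if den > 0.0:              # segment entering this half-plane
                t_in = max(t_in, t)
            else:                      # segment leaving this half-plane
                t_out = min(t_out, t)
            if t_in > t_out:
                return None
        return ((p0[0] + t_in * dx, p0[1] + t_in * dy),
                (p0[0] + t_out * dx, p0[1] + t_out * dy))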
Robust Inter-Slice Intensity Normalization using Histogram Scale-Space Analysis

J. Dauguet 1,2,3, J.-F. Mangin 1, T. Delzescaux 1, and V. Frouin 1
1 Service Hospitalier Frédéric Joliot, CEA, Orsay, France
2 INRIA, Epidaure Project, Sophia Antipolis, France
3 Ecole Centrale Paris, Laboratoire de Mathématiques Appliquées aux Systèmes, Châtenay-Malabry, France

Abstract. This paper presents a robust method to correct for intensity differences across a series of aligned stained histological slices. The method is made up of two steps. First, for each slice, a scale-space analysis of the histogram provides a set of alternative interpretations in terms of tissue classes. Each of these interpretations can lead to a different classification of the related slice. A simple heuristics selects for each slice the most plausible interpretation. Then, an iterative procedure refines the interpretation selections across the series in order to maximize a score measuring the spatial consistency of the classifications across contiguous slices. Results are presented for a series of 121 baboon slices.

1 Introduction

Histological data often present large and discontinuous changes of intensities between slices. The sectioning process, indeed, does not assure a perfectly constant thickness for all the slices. The staining intensity is sensitive to these variations of thickness, and is also sensitive to the concentration of the solution and the incubation time. Finally, the acquisition of the slices on the scanner can accentuate these differences. The design of postprocessing procedures overcoming these intensity inhomogeneities is complex for several reasons:
– the number of homogeneous tissues and their relative sizes vary from one slice to another;
– the intensity distributions may present high discontinuities from one block of contiguous slices to another: actually, the brain after extraction is usually divided into small blocks, mainly for a more homogeneous fixation;
– the relative contrasts between the intensity ranges corresponding to the different tissue classes can be radically different from one slice to another;
– large spatial inhomogeneities can occur inside some slices, which disturbs the classification of the different types of tissues, especially in case of low contrast.
A previous attempt to overcome this kind of problem led to matching the intensity distributions of contiguous slices using a strategy based on a 1D linear registration of histograms [3]. This approach was successful for a dataset including two tissue classes only and presenting a good contrast. However, we observed some failures of the same approach for datasets corrupted by the various artifacts mentioned above. Some of these
consistent tis-sue classifications across contiguous slices.This new strategy is applied on a set of121 realigned histological slices of a baboon brain acquired with a resolution of0.16mm in the sectioning incidence(coronal)and an inter-slice distance of0.72mm.The slices have been stained with a general marker which created a contrast between gray and white matter(Nissl staining),and a more specific marker of the basal ganglia and thala-mus(Acetylcholinesterase histochemistry)which produces a very intense staining.The set has been realigned following a method described in[1].2MethodThe method described in this paper is made up of two steps.First,for each slice,a multi-scale histogram analysis is performed.This analysis provides several possible interpre-tations of the histogram shape in terms of underlying tissue classes.Each interpretation can be used to compute a classification of the corresponding slice.A simple heuristics selects for each slice the most likely interpretation according to a priori knowledge about the acquisition process.The second step leads to question the spatial consistency of the resulting classifications across contiguous slices in order to detect some failures of this heuristics.This second step is based on an iterative process.For each slice,a simple score is used to rank the set of histogram interpretations according to consis-tency with the interpretations selected for the neighboring slices.If the current selected interpretation is not the best one,an update is applied.Several loops on the set of slices are performed until convergence of the process.2.1Histogram scale-space analysisBecause of the various artifacts mentioned in the introduction,few a priori knowledge on the shapes of the histograms can be used to analyse them automatically.The main one lies in the relative positions of the tissues:the marker of the basal ganglia is darker than grey matter,which is darker than white matter.Another interesting property is true for most of the slices:the contrast between basal ganglia and grey matter is higher than the contrast between grey and white matters.Otherwise,the distributions of intensities of each of the tissues vary largely across slices.In the worst cases,one given tissue can correspond to several histogram modes(several maxima),induced by some variability in the staining process or by intra-slice spatial inhomogeneities.The fact that one class of tissue can be represented by several neighboring histogram modes leads to develop a multiscale strategy.Smoothing the histogram,indeed,is supposed to mix together the modes corresponding to the same tissue before merging the different classes.Fig.1.S ome examples of the behaviour of the heuristics analysing successfully most of the slices of the dataset.left:raw slice,middle:The scale-space trajectories of the extrema used to interpret the histogram.Cyan(respectively violet)color corresponds to the second derivative minima(resp. 
maxima).Green and dark blue colors correspond tofirst derivative extrema.Diamond shapes mark the drift velocity minima corresponding to the scale of the detected modes.The polynomial normalizing the slice intensity is superimposed.right:normalized slice.Linear scale-space analysis is an appealing multiscale approach when dealing with 1D signals because of the simplicity of the extrema behavior induced by the causality property[7,2].This scale-space can be computed by applying the heat equation to the signal,which corresponds to smoothing it with a Gaussian kernel with increasing width.The extrema of the initial signal can be tracked throughout the scales until their extinction,which occurs when one maximum and one minimum get in touch with each other at a bifurcation point.For a1D signal,the number of maxima always decreases with scale until reaching a stage where only one maximum remains.Since extrema of the histogram and of itsfirst derivatives often have direct semantic interpretations,they allow the analysis of the histogram shape relative to the underlying tissue classes.In simple cases,smoothing the histogram until only three maxima survive is suf-ficient to detect the modes corresponding to the three classes of tissues.The general situation,however,requires a better heuristics.For instance,the slices which do not cross the basal ganglia lead to only two classes of tissues,which has to be inferred. More difficult cases occur when the contrast between grey and white matter is very low.In such cases the two related histogram modes merge very early in the scale-space, while several maxima induced by the basal ganglia may still exist.In the worst cases, this contrast is so low that the part of the histogram corresponding to grey and white matter includes only one maximum(see Fig.1).It has been shown elsewhere,however, that such situations can be overcome through the use of the second derivative minima [5].This larger set of extrema,which is related to the histogram curvature,embeds more information about the various classes of tissues making up the slice.Hence,each second derivative minimum is supposed to mark a mode of the his-togram.Moreover,the location of this minimum along the intensity axis provides an estimation of the average intensity of this mode.This location,however,varies with the scale.The mode represented by this minimum,indeed,varies also with scale.In-tuitively,a minimum may stand for an artifactual mode at low scales,and represent a tissue of interest at higher scales after several artifactual modes have been merged together by the smoothing.During this merging process,the location of the minimum along the intensity axis varies rapidly from the average intensity of the artifactual mode to the average intensity of the tissue of interest.In fact,most of the extrema trajectories in scale-space alternate such periods of high horizontal drift velocity with periods of stability which could be related to scale ranges where they are catched by some under-lying mode.Therefore,our method associates a different mode to each local minimum of the horizontal drift velocity along the trajectory of second derivative minima[5].In order to sort these modes in decreasing order of interest,two features are used: the extinction scale of the second derivative minimum trajectory they belong to,and an evaluation of the number of slice’s pixels they stand for.This evaluation is the integral of the histogram in the range defined by the locations at the mode’s scale of the second derivative 
minimum and the closestfirst derivative minimum[5].First derivative min-ima correspond in fact to the largest slope points of the histogram’s hills.Hence,this integral corresponds approximately to half the volume of the mode in terms of slice’s pixels.Several interpretations of the histogram shape can be derived from the modes at the top of the hierarchy.The interpretation systematicaly selected as the most plausible one results from the following heuristics:pute the scale-space until only one second derivative minimum survives;2.Detect the highest drift velocity minimum along the trajectory of this minimum,which stands for a mode GW representing the sum of grey and white matter;3.Detect the second derivative minimum trajectory with the highest extinction scalelocated on the right of thefirst derivative minimum associated to the left slope of GW;4.Detect the next drift velocity minimum along thefirst trajectory and the highestone along the second trajectory.The minimum on the left stands for a mode G representing grey matter,the one on the right stands for a mode W representing white matter;5.Detect the second derivative minimum with the highest extinction scale located onthe left of the intensity range explored previously.The highest mode M is associated to the marker of the basal ganglia if its volume is more than100pixels.The Figure1gives a few examples of the behaviour of this heuristics.For each slice, alternative interpretations are also derived from the scale-space of the histogram,to deal with potential failures of this heuristics.They correspond for instance to associating the modes G and W respectively to the leftmost and to the rightmost two biggest modes. Here is the list of the different alternative interpretations taken into account for the results presented in this paper:1.The biggest mode can stand for either“G”or“W”modes(extreme or pathologicalslices);2.The two biggest modes stand for“G and W modes”;3.The three biggest modes stand for“M,G and W modes”or“M,G and X modes”,where the mode X does not lead to a class of tissue;4.The four biggest modes stand for“M,X,G and W modes”.It should be noted that this list of interpretations does not explore all the combinatorial possibilities and results from a tuning to some events found in our data.Each interpretation can be simply converted into a classification of the respective slice:the histogram is split into classes by thresholds defined as the middle points be-tween the estimated mode’s averages.These classifications may include one,two or three classes.2.2Classification spatial consistencyThe second step of the process detects the failure of the main heuristics,using for each slice a score based on the consistency between the slice’s classification and the two clas-sifications of the contiguous slices.This score is based on the number of voxels which belong to the same class in two contiguous slices.For each slice,the score obtained by the heuristics interpretation is compared to the scores obtained by the other interpre-tations.The interpretation with the best score is selected(see Fig;2).This process is iterated until convergence,which occured in two or three iterations in our experiments, thanks to the robust behaviour of the heuristics.During afinal stage,we correct the intensity of each slice by estimating the best polynom matching the detected classes of tissues with template values selected by the user.The degree of this polynom(1,2,or3)depends on the number of tissues found in the slice.Three-degree polynoms are 
replaced by a linear interpolation on the[0,m] segment,where m stands for the position of the marker maximum,to avoid polynom “rebunds”problems.(a)(b)(c)(d)(e)(f)(g)(h)(i)Fig.2.A n histological stained slice(a),its heuristic histogram analysis(b),the corresponding nor-malisation.The classification of the previous slice(d),of the current slice following the heuris-tics(e)and of the next slice(f).Alternative histogram analysis with4biggest mode detected (M,X,G,W)(g),the best classification according to the consistency with the neighboring slices (h)and thefinal corrected slice(i).3Results and DiscussionWe present infigure3a slice orthogonal to the sectioning incidence,first before any processing,second after intensity normalization using only the heuristics,andfinally after the whole process.The iterative step has converged after two iterations.Thefinal histogram interpretation selected by the whole process stems from the heuristics for about85%of the slices.For thefinal intensity normalization,we set the average inten-sity of the white matter to230,the intensity of the grey matter to160andfinally80 for the marker.As far as computing time is concerned,a complete normalization of the whole volume500*440*121with2iterations on a Pentium IV1.8GHz takes about10minutes.The result is globally satisfying.In fact,the remaining problems(imperfec-tions in the marker regions)stem mainly from intra-slice spatial inhomogeneities that were not taken into account by the process described in this paper,or from pathological sections.The Figure1provides an insight into the variability of the histogram shapes to be analysed.Thefirst and the second raws show simple histograms with two or three modes.The third row shows an example of low contrast where the white matter mode does not lead to any maximum in the histogram but is recovered via the curvature-related second derivative minimum.The fourth row shows a successful estimation of the average intensity of the marker at high scale while this class of tissues is split into a lot of modes at lower scales.Thefifth row shows the detection of the marker mode in a limit case where it is made up of a few hundred pixels,namely for a slice located at the beginning of basal ganglia.Thefigure2describes the behaviour of the second step of the method for a slice leading to a failure of the heuristics.The competition between the alternative interpre-tations of the histogram leads to the selection of the(M,X,G,W)hypothesis.In fact none of the interpretations is fully satisfying in terms of classification,because of a large spatial inhomogeneity inside the slice.Therefore,our algorithm can only select the interpretation leading to the best3D consistency.This kind of situations calls for applying some intra-slice bias correction procedure somewhere into the restoration process.The very low grey/white contrast of some of our slices,however,prevented us to perform such a correction systematically before intensity normalization.Indeed,the risk of mixing definitively the range of intensities corresponding to grey and white matter is too high.To prevent such problems to occur, we plan to address the bias correction process through a two stage procedure performed on a slice by slice basis.An initial correction will be performed before intensity normal-ization using a highly regularized correctionfield preventing any mixing between grey and white matter.Afinal correction will be performed with more degrees of freedom for thefield after interpretation of the histogram.This 
interpretation,indeed,yields an estimation of the contrast between grey and white matter,which can be used to perform a slice by slice tuning of the biasfield regularization.First experiments using an entropy minimization approach are promising but beyond the scope of this paper[4].While our method includes some ad hoc tuning to some of our dataset features(the heuristics and the set of alternative interpretations),the strategy is generic and could be adapted easily to other kinds of dataset.Thanks to the iterative correction process, indeed,no strong robustness is required for the initial heuristics.The main requirement to assure a good behaviour of the whole process is that the set of alternative histogram interpretations is rich enough to cover all the possible cases.In our opinion,with a reasonable intra-slice bias,the hypothesis that each tissue class leads to one of the biggest scale-space mode should give the possibility to build a small successful set of alternative interpretations for various kinds of pared to a blind registration procedure prone to local maxima of the similarity measure[3],our point of view as-sumes that each of these potential local maxima corresponds to a different matching of the histograms modes.Hence our method selectsfirst the most frequent matching with a template histogram shape provided by the heuristics and iteratively propagates this choice throughout the series of slices in order to achieve3D spatial consistency of the resulting classification.(a)(b)(c)Fig.3.A n axial view of the histological volume:before intensity correction(a),after the heuristic strategy(b)and after the spatial consistency checking(c).4ConclusionWe have presented in this paper a generic strategy to perform robust histogram normal-ization throughout a series of registered images.This method can deal with variations of the number of tissue classes throughout the series and provides a3D consistent clas-sification of the data.While the approach described in the paper requires an heuristics providing an initial correct classification of most of the slices,the framework may be extended to complex situations where no such heuristics is available.The iterative step indeed could consists in a global stochastic minimization of the sum of scores evaluat-ing spatial consistency of the classification of contiguous slices.References1.T.Delzescaux,J.Dauguet,F.Cond´e,R.Maroy,and ing3D non rigid FFD-basedmethod to register post mortem3D histological data and in vivo MRI of a baboon brain.In MICCAI,Montr´e al,LNCS2879,Springer Verlag,pages965–966,2003.2.J.J.Koenderink.The structure of images.Biol.Cybernetics,50:363–370,1984.3.Gr´e goire Malandain and Eric Bardinet.Intensity compensation within series of images.InMICCAI,volume2879of LNCS,pages41–49,Montr´e al,Canada,2003.Springer Verlag. 4.J.-F.Mangin.Entropy minimization for automatic correction of intensity nonuniformity.InIEEE Work.MMBIA,pages162–169,Hilton Head Island,South Carolina,2000.IEEE Press.5.J.-F.Mangin,O.Coulon,and V.Frouin.Robust brain segmentation using histogram scale-space analysis and mathematical morphology.In Proc.1st MICCAI,LNCS-1496,pages1230–1241,MIT,Boston,Oct.1998.Springer Verlag.6.S.Prima,N.Ayache,A.Janke,S.Francis,D.Arnold,and L.Collins.Statistical Analysis ofLongitudinal MRI Data:Application for detection of Disease Activity in MS.In MICCAI, pages363–371,2002.7. A.P.Witkin.Scale-spacefiltering.In In International Joint Conference on Artificial Intelli-gence,pages1019–1023,1983.。