Accuracy test and verification of unstructured second-order finite volume discretization schemes based on the method of manufactured solutions

Wang Nianhua; Zhang Laiping; Zhao Zhong; He Xin

Abstract: With the great improvement in computer technology, computational fluid dynamics (CFD) has progressed significantly. Although numerical simulation quickly yields discretized results, the validity and accuracy of those results must be carefully verified and validated. As important tools of verification and validation, the method of manufactured solutions (MMS) and grid convergence studies have been widely applied to code verification, accuracy analysis and verification of boundary conditions. This paper first establishes MMS procedures with scalar manufactured solutions and vector manufactured solutions. The two procedures are verified by comparing their accuracy-test results against those obtained with a classical exact solution (the 2D inviscid isentropic vortex), further confirming the validity of MMS-based accuracy testing. The MMS procedures are then applied to the accuracy testing and verification of second-order finite volume discretization schemes on unstructured grids, covering commonly used gradient reconstruction methods, convective flux discretizations and diffusive flux discretizations via grid convergence tests. The results show that gradient reconstruction based on the Green-Gauss formula degrades to first order on irregular meshes, with discretization error increasing significantly, whereas least-squares based gradient reconstruction is insensitive to mesh irregularity. All tested convective flux discretization schemes are second-order accurate and exhibit similar performance in terms of accuracy. 
However, the method used to compute the interface gradient is an essential factor affecting the accuracy of the diffusive flux discretization.

Journal: Chinese Journal of Theoretical and Applied Mechanics (力学学报)
Year (Volume), Issue: 2017, 49(3)
Pages: 11 (627-637)
Keywords: verification and validation; method of manufactured solutions; method of exact solutions; grid convergence study; finite volume discretization; numerical simulation accuracy
Authors: Wang Nianhua; Zhang Laiping; Zhao Zhong; He Xin
Affiliations: Computational Aerodynamics Institute, China Aerodynamics Research and Development Center, Mianyang, Sichuan 621000; State Key Laboratory of Aerodynamics, China Aerodynamics Research and Development Center, Mianyang, Sichuan 621000
Language: Chinese
CLC number: V211.3

Since the birth of computational fluid dynamics (CFD) in the middle of the 20th century, and with the rapid development of computer technology and CFD methods, CFD numerical simulation has been widely applied in many fields, most notably aerospace, revolutionizing traditional research and design methods in those fields [1]. An inherent weakness of CFD, however, is that neither the solvability of the governing partial differential equations nor the uniqueness of their solutions has been proven theoretically, so the credibility of CFD results deserves close attention. Verification and validation (V&V) is an important means of assessing that credibility: verification establishes that the discretization schemes, numerical methods and program code discretize and solve the governing equations correctly, while validation establishes that the governing equations and boundary conditions being solved faithfully represent the real physical flow. Abroad, V&V research received great attention from the 1980s and 1990s onward; in 1998, building on earlier work, the AIAA published the first systematic guide to CFD verification and validation [2]. Domestic work started later: in 2007, Chinese researchers recommended that V&V studies be carried out widely in China to promote CFD credibility research [3].

In CFD credibility studies, verification can use either the method of exact solutions or the method of manufactured solutions, combined with grid convergence tests, to study the accuracy and order of accuracy with which the differential equations are solved. Traditional V&V usually employs the method of exact solutions (MES) [4-5], which compares exact solutions of the flow governing equations with numerical solutions. Unfortunately, complex nonlinear equations rarely admit exact solutions; what can usually be obtained are analytical solutions of simplified equations, which do not retain all terms of the original governing equations, so the numerical behavior of the missing terms cannot be studied and the corresponding code cannot be verified. By contrast, the method of manufactured solutions (MMS) [6-7] does not seek an exact solution of the governing equations; instead, a solution is manufactured and made to satisfy modified governing equations obtained by adding a source term. A manufactured solution need not be physically realistic; it is designed purely to study and verify the numerical behavior and accuracy of the individual terms of the governing equations, which makes MMS more practical than MES. Reference [5] compiles many exact-solution test cases for CFD code verification, such as expansion waves, oblique shocks, incompressible laminar boundary layers, Couette flow, and the Burgers equation; after weighing the advantages and drawbacks of manufactured versus exact solutions, it concludes that MMS offers high accuracy for code verification and numerical algorithm testing.

Code verification using manufactured solutions together with grid convergence testing was first proposed by Roache and Steinberg [6]. In his surveys of code and calculation verification, Roache [7-8] emphasized that verification via MMS and grid convergence testing is rigorous and convincing. At the 3rd CFD Uncertainty Analysis Workshop held in Lisbon in 2008 [9], first-time participants were required to run MMS tests on a manufactured solution mimicking two-dimensional turbulent near-wall flow, showing that MMS had become an accepted tool for code verification and accuracy testing. Abroad, MMS has already seen extensive use in V&V work. First, several authors applied MMS to code verification of numerical methods: in CFD, MMS has been applied to verifying solvers for the Euler equations [10], the RANS (Reynolds-averaged Navier-Stokes) equations [11], direct numerical 
simulation (DNS) [12-13] and high-order methods [14]. In addition, MMS has been successfully applied to accuracy verification of immersed boundary methods [15], code verification for plasma flow simulation [16-17], verification of multiphase flow solvers [18-19], and verification of chemical non-equilibrium flow simulations [20]. Several in-house codes, such as Wind-US [21] and Loci-CHEM [22], have passed MMS order-of-accuracy verification; Marshall [23] even developed a C++ library dedicated to MMS-based code verification. Second, MMS has been applied to the verification of boundary conditions [24-26]: Bond et al. [24] used MMS to verify and expose defects in boundary conditions and gradient reconstruction formulas, and Folkner et al. [25] used MMS and exact solutions to verify various boundary conditions for node-centered and cell-centered finite volume methods. Third, MMS-based study and testing of numerical algorithms has attracted wide attention: Katz and Sankaran [27-28] used MMS to study the effect of unstructured mesh quality on the accuracy of node-centered and cell-centered finite volume schemes; similarly, Diskin and Thomas [29-30] solved the linear advection and Poisson equations via MMS to analyze how mesh type and mesh quality affect the accuracy of inviscid and viscous flux computations; and Vedovoto et al. [31] used MMS to study the numerical accuracy of pressure-based finite volume schemes. Newly proposed algorithms are also commonly verified and analyzed for code correctness and accuracy using MMS [32-33]. Furthermore, to address the high cost of obtaining converged MMS solutions on a sequence of meshes, Burg and Murali [34] proposed a residual-based MMS accuracy analysis similar to a Taylor expansion, and Brglez [35] improved the original MMS to make it easier and simpler to use. Beyond smooth flow fields, MMS has also been extended to the verification of solvers for flows with discontinuities [36-38].

The above survey shows that MMS, as an important V&V tool, has been widely applied to code verification, accuracy analysis and boundary-condition verification, and continues to develop and mature. Compared with the vigorous development abroad, domestic research on and application of MMS still lags considerably. Wang Ruili et al. [39-40] carried out early work in this area, but much remains to be done before MMS is truly adopted in China. In this paper, on the basis of implementing a scalar and a component (vector) manufactured solution procedure, we compare MMS accuracy-test results with those of a classical exact solution (the 2D inviscid isentropic vortex) to further confirm the validity of MMS accuracy testing, and then focus on applying the two MMS procedures to accuracy verification of second-order finite volume discretizations on unstructured grids, performing grid-convergence accuracy tests of commonly used gradient reconstruction methods, convective flux discretizations and diffusive flux discretizations, and obtaining some instructive conclusions.

An exact solution is normally needed to test the accuracy of a discretization scheme, but most exact solutions of compressible viscous flow are too simple to retain all terms of the governing equations. To overcome this, Roache and Steinberg [6] proposed the method of manufactured solutions.

1.1 Theory of the method of manufactured solutions

The basic idea of MMS is to substitute an arbitrarily chosen "manufactured solution" into the original governing equations (e.g. the Navier-Stokes (NS) or Euler equations). In general the manufactured solution does not satisfy the original equations: substituting it leaves a nonzero residual on the right-hand side, which is taken as a source term. The manufactured solution can therefore be understood as an exact solution of the modified, source-augmented equations:

    ∂Q/∂t + ∇·(F - Fv) = S    (1)

where Q is the vector of conservative variables, F = Fi + Gj + Hk is the convective flux, and Fv = Fvi + Gvj + Hvk is the diffusive flux. Solving the modified equations numerically on a discrete mesh and subtracting the manufactured solution from the numerical solution gives the discretization error (round-off error neglected). Combined with grid convergence tests, this allows the accuracy (error), numerical order of accuracy and code correctness of different discretization schemes and methods to be analyzed and verified. Reference [27] notes that MMS is an effective way to assess the properties of a numerical scheme as long as the manufactured source term is treated in a manner numerically consistent with the discretization of the governing equations.

Equation (1) can serve MMS studies of either a scalar model equation or the flow governing equations. For the scalar model equation, the definitions Q = φ, F = Aφ and Fv = ν∇φ are commonly adopted [28], so that the scalar model equation takes the final form

    ∂φ/∂t + ∇·(Aφ) - ∇·(ν∇φ) = S    (3)

where φ is an arbitrary scalar field and A = (a, b, c). The scalar model equation is thus a linear advection-diffusion equation with a source term, with A and ν the constant coefficients of the linear advection and diffusion terms. For the NS equations, the terms of Eq. (1) take their usual form [28], with ρ, u, v, w, e, h0 and p denoting the density, the three velocity components, the total energy, the total enthalpy and the pressure, and with τ and q the viscous stress tensor and heat conduction vector. Accuracy verification by MMS can be summarized in the following 
six steps [41]: (1) choose the form of the governing equations; (2) choose the form of the manufactured solution; (3) derive the modified governing equations; (4) solve the discretized modified equations on a sequence of successively refined meshes to obtain numerical solutions; (5) compute the discretization error of each numerical solution; (6) compute the numerical order of accuracy and carry out accuracy analysis and verification.

For the scalar equation and NS equations above, the commonly used manufactured solutions are scalar manufactured solutions and component (vector) manufactured solutions. Several manufactured solutions from references [27-28, 41] are adopted here, with flow-field contours shown in Fig. 1. This study uses only the Euler manufactured solutions of Eqs. (6) and (7) for the test and verification work; the governing equations are the Euler equations (viscous terms neglected) and the scalar advection-diffusion equation, and the equations and manufactured solution used in each test case are stated with that case. The manufactured source terms are derived exactly with the Mathematica computer algebra system, eliminating source-term discretization error.

1.2 Verification of the MMS procedures

To verify the validity of the MMS accuracy-testing procedures of the previous section, this section compares accuracy results for the same discretization method on the same meshes under identical conditions, using the classical exact solution, the 2D inviscid isentropic vortex, and the Euler manufactured solutions. The initial field of the 2D inviscid isentropic vortex is given by Eq. (8) [42]; note that the isentropic vortex is in fact a special case of a manufactured solution whose source term is zero. The vortex strength is ε = 5.0 and the vortex center lies at the origin (x0, y0) = (0, 0); the initial field is shown in Fig. 2. Grid convergence tests of the Euler equations and the advection equation are run on the four meshes of Fig. 3 using the isentropic vortex, the component Euler manufactured solution and the scalar Euler manufactured solution, and the resulting numerical orders of accuracy are compared in Table 1. The procedure for measuring the numerical order of accuracy is detailed in reference [43]; the computations use HyperFLOW [44-45], a solver developed in-house by our group. Table 1 shows that, on all four mesh types, the isentropic vortex test, the component Euler MMS test and the scalar Euler MMS test yield exactly the same order of accuracy, confirming the claim that the scalar and component MMS procedures are valid tools for verifying CFD methods. Note that a cell-centered unstructured finite volume discretization is used here, with GG-Cell gradient reconstruction [42, 46] for the flow variables and the Roe scheme for the convective flux; other methods likewise yield identical orders of accuracy for the manufactured and exact solutions.

In unstructured finite volume discretizations, upwind schemes based on gradient reconstruction are the most popular convective flux discretizations, e.g. the flux vector splitting schemes (AUSM, Van Leer, Steger-Warming) and flux difference splitting schemes such as Roe. These flux discretizations generally require at least first-order-accurate gradient reconstruction to ensure a second-order finite volume discretization. On unstructured meshes, gradient reconstruction is generally based either on the Green-Gauss (GG) formula or on least squares (LSQ). Depending on how face-center values are computed, the GG methods subdivide into four variants, listed in Table 2 [42]. The LSQ methods subdivide, according to whether weighting is applied and whether an extended stencil is used, into weighted (WLSQ) and unweighted (LSQ) reconstruction on either an extended or a basic stencil [42, 46], as listed in Table 3.

The accuracy of the diffusive flux computation depends mainly on how the gradient at control-volume interfaces is computed. Interface-gradient methods differ in how the cell gradients are weighted, whether the interface value is continuous, and whether a difference correction term is introduced [47]. This paper considers only two unstructured interface-gradient methods, to illustrate the application of MMS to verifying diffusive flux computations; their formulas are given in Eqs. (9)-(11): (1) the "aver" method takes the average of the left and right cell gradients; (2) the edge correction method [48] adds, on top of the average, a difference correction along the line connecting adjacent cell centers. Reference [48] points out that although the aver method is simple to implement and needs no extra storage, it causes odd-even decoupling on quadrilateral or hexahedral meshes, whereas the edge correction method produces a strongly coupled stencil on tetrahedral, prismatic and hexahedral meshes; here the two methods are compared and verified in terms of accuracy.

The grid convergence tests use five successively refined meshes to simulate the manufactured flow, with Dirichlet boundary conditions to eliminate boundary-condition discretization error, and examine how the discretization error converges as the mesh size decreases. The simulation accuracy and numerical order of each method are studied through the convergence of the L1 norm of the density discretization error with mesh size.

2.1 Verification of cell gradient reconstruction methods

This section uses the component Euler manufactured solution to examine the accuracy of the four GG methods of Table 2 and the WLSQ-basic and WLSQ-extended methods of Table 3 in solving the Euler equations on the four meshes of Fig. 3, with the Roe scheme for the convective flux. In Fig. 4 the curve "1st" denotes results with constant reconstruction, i.e. the flow variables are taken as constant within each cell and the left and right interface states are taken directly from the left and right cells without gradient reconstruction; the "1st order ref." and "2nd order 
ref." curves are the first-order and second-order reference lines, giving the reference slopes, i.e. the rate at which the discretization error should decrease. Figures 4(a) and 4(c) show that on the regular meshes Grid 1 and Grid 3 all gradient reconstruction methods achieve second order when solving the Euler equations, meeting the expectation for a second-order finite volume method. Figures 4(b) and 4(d) show that on the irregular meshes Grid 2 and Grid 4, the GG-Cell and GG-Node gradient reconstruction methods degrade the flow simulation to first order, with markedly lower accuracy at the same mesh size. The choice of gradient reconstruction method therefore has a decisive influence on simulation accuracy, even determining the order of the discretization, and the order directly reflects the rate at which the discretization error decreases under mesh refinement. The degradation of GG-Cell and GG-Node stems from the face-center interpolation failing to reach second order on irregular meshes, which makes the reconstructed gradient zeroth-order accurate and the flow simulation only first-order accurate; methods that preserve second-order face-center interpolation, such as GG-LSQ and GG-WTLI, as well as the LSQ methods, keep the gradient first-order accurate and hence the simulation second-order accurate. A detailed analysis is given in reference [49].

2.2 Verification of convective flux discretizations

This section uses the component Euler manufactured solution to examine the effect of four convective flux schemes (AUSM+, Roe, Steger-Warming, Van Leer) on the accuracy of Euler simulations, on the regular quadrilateral mesh Grid 1 and the irregular quadrilateral mesh Grid 2, with cell gradients reconstructed by GG-Cell and by WLSQ-basic, again on five successively refined meshes. Figure 5 shows that the flux scheme has little effect on simulation accuracy: the discretization errors of the various schemes are very close. Indeed, Fig. 5 confirms once more that the gradient reconstruction method is the main factor in Euler simulation accuracy: Figs. 5(a) and 5(b) show that GG-Cell reconstruction produces different orders of accuracy on regular versus irregular meshes, while Figs. 5(c) and 5(d) show that WLSQ-basic is insensitive to mesh irregularity, with all convective flux schemes retaining second order and very similar absolute errors. This is consistent with the conclusions of the previous section.

2.3 Verification of diffusive flux discretizations

This section uses the scalar Euler manufactured solution to run accuracy tests on the scalar diffusion equation (Eq. (3) with advection coefficient A = 0), verifying the effect of different interface-gradient methods on the accuracy of the diffusive flux discretization. The grid convergence tests use the regular meshes Grid 1 and Grid 3, with GG-Cell and WLSQ cell gradient reconstruction. The perturbed meshes Grid 2 and Grid 4 are excluded because mesh perturbation degrades GG-Cell cell-gradient reconstruction to zeroth order [49], which would introduce cell-gradient accuracy as a confounding factor. Figure 6 shows that on both the regular quadrilateral mesh (Grid 1) and the regular triangular mesh (Grid 3), computing interface gradients with the aver method degrades the accuracy of the scalar diffusion solve. As Figs. 6(a) and 6(c) show, on Grid 1 the simulation gradually degrades to first order under refinement when the aver method is used; Figs. 6(b) and 6(d) show that on Grid 3 the aver method degrades to first order with GG-Cell cell-gradient reconstruction and to zeroth order with WLSQ-basic, with a marked increase in discretization error. By contrast, the edge correction method maintains second-order accuracy of the diffusion solve on both meshes for both cell-gradient reconstruction methods (GG-Cell and WLSQ-basic), with absolute discretization errors clearly smaller than those of the aver 
method. The interface-gradient method in the diffusive flux therefore has a substantial effect on simulation accuracy; the cause of the order degradation of the diffusion solve remains to be analyzed further.

On the basis of implementing a component and a scalar manufactured solution procedure, this paper compared MMS accuracy-test results with those of a typical exact solution, the 2D inviscid isentropic vortex, further confirming the feasibility and validity of MMS accuracy testing for verifying CFD methods and codes. The scalar and component manufactured solutions were successfully applied to verifying gradient reconstruction methods, convective flux discretizations and diffusive flux discretizations on unstructured grids. The results show that the choice of gradient reconstruction method clearly affects Euler simulation accuracy: on irregular meshes, GG-Cell and GG-Node cell-gradient reconstruction degrade the order of the flow simulation and markedly increase the discretization error, while the LSQ methods are insensitive to mesh irregularity. The inviscid flux schemes tested all retain second order with similar discretization errors, indicating that in this setting the choice of approximate Riemann solver has a limited effect on solution accuracy. Verification of the interface-gradient methods in the diffusive flux shows that the interface-gradient method significantly affects simulation accuracy. Future work will address MMS verification and validation for the NS equations; verification of boundary conditions, simulation of "viscous" manufactured flows on anisotropic meshes, and the cause of the order degradation of the diffusion solve are also directions for further study.

References

1 Yan Chao, Yu Jian, Xu Jinglei, et al. On the achievements and prospects for the methods of computational fluid dynamics. Advances in Mechanics, 2011, 41(5): 562-589 (in Chinese)
2 Porter JL, Agarwal R, Azad RS, et al. Guide for the verification and validation of computational fluid dynamics simulations. AIAA G-077-1998, 1998
3 Deng Xiaogang, Zong Wengang, Zhang Laiping, et al. Verification and validation in computational fluid dynamics. Advances in Mechanics, 2007, 37(2): 279-288 (in Chinese)
4 Iannelli J. An exact non-linear Navier-Stokes compressible-flow solution for CFD code verification. International Journal for Numerical Methods in Fluids, 2013, 72(2): 157-176
5 Ghia U, Bayyuk S, Roy C, et al. The AIAA code verification project - Test cases for CFD code verification. AIAA Paper 2010-125, 2010
6 Roache PJ, Steinberg S. Symbolic manipulation and computational fluid dynamics. AIAA Journal, 1984, 22(10): 1390-1394
7 Roache PJ. Code verification by the method of manufactured solutions. Transactions of the ASME, 2002, 124(1): 4-10
8 Roache PJ. Verification of codes and calculations. AIAA Journal, 1998, 36(5): 696-702
9 Eca L, Hoekstra M, Roache PJ, et al. Code verification, solution verification and validation: an overview of the 3rd Lisbon Workshop. AIAA Paper 2009-3647, 2009
10 Murali VK, Burg COE. Verification of 2D Navier-Stokes codes by the method of manufactured solutions. AIAA Paper 2002-3109, 2002
11 Eca L, Klaij CM, Vaz G, et al. On code 
verification of RANS solvers. Journal of Computational Physics, 2016, 310: 418-439
12 Silva HG, Souza LF, Medeiros MAF. Verification of a mixed high-order accurate DNS code for laminar turbulent transition by the method of manufactured solutions. International Journal for Numerical Methods in Fluids, 2010, 64(3): 336-354
13 Petri L, Sartori P, Rogenski J, et al. Verification and validation of a direct numerical simulation code. Computer Methods in Applied Mechanics and Engineering, 2015, 291: 266-279
14 Navah F, Nadarajah S. On the verification of high-order CFD solvers//VII European Congress on Computational Methods in Applied Sciences and Engineering, Crete, Greece, June, 2016
15 Brehm C, Hader C, Fasel HF. A locally stabilized immersed boundary method for the compressible Navier-Stokes equations. Journal of Computational Physics, 2015, 295: 475-504
16 Ricci P, Riva F, Theiler C, et al. Approaching the investigation of plasma turbulence through a rigorous verification and validation procedure: A practical example. Physics of Plasmas, 2015, 22(5): 055704
17 Dudson BD, Madsen J, Omotani J, et al. Verification of BOUT++ by the method of manufactured solutions. Physics of Plasmas, 2016, 23(6): 062303
18 Choudhary A, Roy CJ, Dietiker J, et al. Code verification for multiphase flows using the method of manufactured solutions. International Journal of Multiphase Flow, 2014, 80: 150-163
19 Brady PT, Herrmann M, Lopez JM. Code verification for finite volume multiphase scalar equations using the method of manufactured solutions. Journal of Computational Physics, 2012, 231(7): 2924-2944
20 Wang L, Zhou W, Ji C. Verification of a chemical non-equilibrium flow solver using the method of manufactured solutions. Procedia Engineering, 2015, 99: 713-722
21 Nelson CC, Roy CJ. Verification of the Wind-US CFD code using the method of manufactured solutions. AIAA Paper 2004-1104, 2004
22 Veluri SP, Roy CJ, Hebert S, et al. Verification of the Loci-CHEM CFD code using the method of manufactured solutions. AIAA Paper 2008-661, 2008
23 Marshall DD. A scientific software verification library based on the 
method of manufactured solutions. AIAA Paper 2011-615, 2011
24 Bond RB, Ober CC, Knupp PM, et al. Manufactured solution for computational fluid dynamics boundary condition verification. AIAA Journal, 2007, 45(9): 2224-2236
25 Folkner D, Katz A, Sankaran V. Design and verification methodology of boundary conditions for finite volume methods. Computers and Fluids, 2014, 96: 264-275
26 Choudhary A, Roy CJ, Luke EA, et al. Code verification of boundary conditions for compressible and incompressible fluid dynamics. Computers and Fluids, 2016, 126: 153-169
27 Katz A, Sankaran V. Mesh quality effects on the accuracy of CFD solutions on unstructured meshes. Journal of Computational Physics, 2011, 230: 7670-7686
28 Katz A, Sankaran V. High aspect ratio grid effects on the accuracy of Navier-Stokes solutions on unstructured meshes. Computers & Fluids, 2012, 65: 66-79
29 Diskin B, Thomas JL, Nielsen EJ, et al. Comparison of node-centered and cell-centered unstructured finite-volume discretizations: viscous fluxes. AIAA Paper 2009-0597, 2009
30 Diskin B, Thomas JL. Comparison of node-centered and cell-centered unstructured finite-volume discretizations: inviscid fluxes. AIAA Paper 2010-1079, 2010
31 Vedovoto JM, Neto AS, Mura A, et al. Application of the method of manufactured solutions to the verification of a pressure-based finite volume numerical scheme. Computers and Fluids, 2011, 51: 85-99
32 Blais B, Bertrand F. On the use of the method of manufactured solutions for the verification of CFD codes for the volume-averaged Navier-Stokes equations. Computers and Fluids, 2015, 114: 121-129
33 Thorne J, Katz A. Source term discretization effects on the accuracy of finite volume schemes. AIAA Paper 2015-0571, 2015
34 Burg COE, Murali VK. Efficient code verification using the residual formulation of the method of manufactured solutions. AIAA Paper 2004-2628, 2004
35 Brglez S. Code verification for governing equations with arbitrary functions using adjusted method of manufactured solutions. Engineering with Computers, 2014, 30(4): 669-678
36 Grier B, Alyanak E, White M, et al. Numerical integration techniques for discontinuous manufactured 
solutions. Journal of Computational Physics, 2014, 278(C): 193-203
37 Grier B, Figliola R, Alyanak E, et al. Discontinuous solutions using the method of manufactured solutions on finite volume solvers. AIAA Journal, 2015, 53(8): 2369-2378
38 Woods CN, Starkey R. Verification of fluid-dynamic codes in the presence of shocks and other discontinuities. Journal of Computational Physics, 2015, 294(C): 312-328
39 Wang Ruili, Lin Zhong, Yuan Guoxing. Verification and validation in scientific computing code. Transactions of Beijing Institute of Technology, 2010, 30(3): 353-360 (in Chinese)
40 Yu Yunlong, Lin Zhong, Wang Ruili, et al. A method of manufacturing solutions for verification of Lagrangian radiation hydrodynamics codes. Applied Mathematics and Mechanics, 2015, 36(1): 110-118 (in Chinese)
41 Roy CJ. Review of code and solution verification procedures for computational simulation. Journal of Computational Physics, 2005, 205: 131-156
42 Sozer E, Brehm C, Kiris CC. Gradient calculation methods on arbitrary polyhedral unstructured meshes for cell-centered CFD solvers. AIAA Paper 2014-1440, 2014
43 Hebert S, Luke EA, Honey I. A new approach to CFD verification studies. AIAA Paper 2005-685, 2005
44 He Xin, Zhao Zhong, Ma Rong, et al. Validation of HyperFLOW in subsonic and transonic flow. Acta Aerodynamica Sinica, 2016, 34(2): 267-275
45 He X, He XY, He L, et al. HyperFLOW: A structured/unstructured hybrid integrated computational environment for multi-purpose fluid simulation. Procedia Engineering, 2015, 126: 645-649
46 Mavriplis DJ. Revisiting the least-squares procedure for gradient reconstruction on unstructured meshes. AIAA Paper 2003-3986, 2003
47 Jalali A, Sharbatdar M, Ollivier-Gooch C. Accuracy analysis of unstructured finite volume discretization schemes for diffusive fluxes. Computers and Fluids, 2014, 101: 220-232
48 Haselbacher A, Blazek J. On the accurate and efficient discretization of the Navier-Stokes equations on mixed grids. AIAA 
Journal, 2000, 38(11): 2094-2102
49 Wang Nianhua, Zhang Laiping, Ma Rong, et al. Mesh quality effects on the accuracy of gradient reconstruction and inviscid flow simulation on isotropic unstructured grids. Chinese Journal of Computational Mechanics, 2017, accepted (in Chinese)
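The MMS workflow described above can be sketched end to end in a few lines: derive the source term symbolically (steps (2)-(3); the paper uses Mathematica, sympy is used here instead) and estimate the observed order of accuracy from errors on successively refined meshes (step (6)). The manufactured solution, the constant coefficients and the error values in the demo are illustrative assumptions, not data from the paper.

```python
import math
import sympy as sp

# Steps (2)-(3): choose a manufactured solution and derive the source term
# for a steady 2D scalar advection-diffusion equation (cf. Eq. (3)):
#   a*dphi/dx + b*dphi/dy - nu*(d2phi/dx2 + d2phi/dy2) = S
x, y = sp.symbols('x y', real=True)
a, b, nu = sp.Rational(1), sp.Rational(7, 10), sp.Rational(1, 100)  # assumed coefficients

phi = sp.sin(sp.pi * x) * sp.cos(sp.pi * y)          # manufactured solution
S = sp.simplify(a * sp.diff(phi, x) + b * sp.diff(phi, y)
                - nu * (sp.diff(phi, x, 2) + sp.diff(phi, y, 2)))
S_num = sp.lambdify((x, y), S, 'math')               # exact source term for the solver

# Step (6): observed order of accuracy from two successively refined meshes
def observed_order(err_coarse, err_fine, h_coarse, h_fine):
    return math.log(err_coarse / err_fine) / math.log(h_coarse / h_fine)

# errors behaving like E ~ C*h^2 give an observed order near 2
orders = [observed_order(1.6e-2, 4.0e-3, 0.1, 0.05),
          observed_order(4.0e-3, 1.0e-3, 0.05, 0.025)]
```

A solver would discretize the modified equation with S_num as its right-hand side; comparing the numerical solution against phi on each mesh yields the error sequence fed to observed_order.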
Package 'prototest'

October 14, 2022

Type: Package
Title: Inference on Prototypes from Clusters of Features
Version: 1.2
Date: 2019-02-02
Author: Stephen Reid
Maintainer: Stephen Reid <*******************>
Depends: intervals, MASS, glmnet
Description: Procedures for testing for group-wide signal in clusters of variables. Tests can be performed for single groups in isolation (univariate) or multiple groups together (multivariate). Specific tests include the exact and approximate (un)selective likelihood ratio tests described in Reid et al (2015), the selective F test and marginal screening prototype test of Reid and Tibshirani (2015). The user may pre-specify columns to be included in prototype formation, or allow the function to select them itself. A mixture of these two is also possible. Any variable selection is accounted for using the selective inference framework. Options for non-sampling and hit-and-run null reference distributions.
License: GPL (>= 2)
Imports: Rcpp (>= 0.12.1)
LinkingTo: Rcpp, RcppArmadillo
URL: https://arxiv.org/abs/1511.07839
NeedsCompilation: yes
Repository: CRAN
Date/Publication: 2019-02-03 11:00:03 UTC

R topics documented:
prototest-package
print.prototest
prototest.multivariate
prototest.univariate

prototest-package    Inference on Prototypes from Clusters of Features

Description

Procedures for testing for group-wide signal in clusters of variables. Tests can be performed for single groups in isolation (univariate) or multiple groups together (multivariate). Specific tests include the exact and approximate (un)selective likelihood ratio (ELR, ALR) tests described in Reid et al (2015), the selective F test and marginal screening prototype test of Reid and Tibshirani (2015). The user may prespecify columns to be included in prototype formation, or allow the function to select them itself. A mixture of these two is also possible. Any variable selection is accounted for using the selective inference framework introduced in Lee et al (2013) and further developed in Lee and Taylor (2014). Options for 
non-sampling and hit-and-run null reference distributions. Tests are examples of selected model tests, a notion introduced in Fithian et al (2015).

Details

Package: prototest
Type: Package
Version: 1.0
Date: 2015-11-12
License: GPL (>= 2)

Only two functions are provided: prototest.univariate (for tests with a single group in isolation) and prototest.multivariate (for tests with multiple groups simultaneously). Each function provides options to perform one of the ELR, ALR, F or marginal screening prototype tests. The user may specify which columns are to be used in prototype construction, or leave it for the function to select. Valid tests are performed in the event of variable selection. The user has the option to use non-sampling null reference distributions (where available) or hit-and-run references.

Author(s)

Stephen Reid
Maintainer: Stephen Reid <******************>

References

Reid, S. and Tibshirani, R. (2015) Sparse regression and marginal testing using cluster prototypes. arxiv.org/pdf/1503.00334v2.pdf. Biostatistics doi:10.1093/biostatistics/kxv049
Reid, S., Taylor, J. and Tibshirani, R. (2015) A general framework for estimation and inference from clusters of features. Available online: arxiv.org/abs/1511.07839
Lee, J.D., Sun, D.L., Sun, Y. and Taylor, J.E. (2013) Exact post-selection inference, with application to the lasso. arxiv.org/pdf/1311.6238v6.pdf. Annals of Statistics (to appear)
Lee, J.D. and Taylor, J.E. (2014) Exact Post Model Selection Inference for Marginal Screening. arxiv.org/pdf/1402.5596v2.pdf
Fithian, W., Sun, D.L. and Taylor, J.E. (2015) Optimal Inference After Model Selection. arxiv.org/pdf/1410.2597v2.pdf

Examples

require(prototest)

### generate data
set.seed(12345)
n = 100
p = 80
X = matrix(rnorm(n*p, 0, 1), ncol=p)
beta = rep(0, p)
beta[1:3] = 0.1 # three signal variables: numbers 1, 2, 3
signal = apply(X, 1, function(col){sum(beta*col)})
intercept = 3
y = intercept + signal + rnorm(n, 0, 1)

### treat all columns as if in same group and test for signal
# non-selective ELR test with nuisance intercept
elr = prototest.univariate(X, y, "ELR", selected.col=1:5)
# selective F test with nuisance 
intercept; non-sampling
f.test = prototest.univariate(X, y, "F", lambda=0.01, hr.iter=0)
print(elr)
print(f.test)

### assume variables occur in 4 equally sized groups
num.groups = 4
groups = rep(1:num.groups, each=p/num.groups)
# selective ALR test -- select columns 21-25 in 2nd group; test for signal in 1st; hit-and-run
alr = prototest.multivariate(X, y, groups, 1, "ALR", 21:25, lambda=0.005, hr.iter=20000)
# non-selective MS test -- specify first column in each group; test for signal in 1st
ms = prototest.multivariate(X, y, groups, 1, "MS", c(1, 21, 41, 61))
print(alr)
print(ms)

print.prototest    Print prototest object

Description

Generic print method for prototest objects

Usage

## S3 method for class 'prototest'
print(x, ...)

Arguments

x    object of type prototest.
...    other parameters passed to print function.

Details

Prints the test statistic and p-value associated with the prototest object x.

Author(s)

Stephen Reid

See Also

prototest.univariate, prototest.multivariate

Examples

require(prototest)

### generate data
set.seed(12345)
n = 100
p = 80
X = matrix(rnorm(n*p, 0, 1), ncol=p)
beta = rep(0, p)
beta[1:3] = 2 # three signal variables: numbers 1, 2, 3
signal = apply(X, 1, function(col){sum(beta*col)})
intercept = 3
y = intercept + signal + rnorm(n, 0, 1)

### treat all columns as if in same group and test for signal
# non-selective ELR test with nuisance intercept
elr = prototest.univariate(X, y, "ELR", selected.col=1:5)
print(elr)

prototest.multivariate    Perform Prototype or F Tests for Significance of Groups of Predictors in the Multivariate Model

Description

Perform prototype or F tests for significance of groups of predictors in the multivariate model. Choose either the exact or approximate likelihood ratio prototype test (ELR or ALR), the F test, or the marginal screening prototype test. Options for selective or non-selective tests. Further options for non-sampling or hit-and-run reference distributions for selective tests.

Usage

prototest.multivariate(x, y, groups, test.group, type=c("ELR", "ALR", "F", "MS"),
  selected.col=NULL, lambda, mu=NULL, sigma=1, hr.iter=50000, hr.burn.in=5000,
  verbose=FALSE, tol=10^-8)

Arguments

x    input 
matrix of dimension n-by-p, where p is the number of predictors over all predictor groups of interest. Will be mean centered and standardised before tests are performed.
y    response variable. Vector of length n, assumed to be quantitative.
groups    group membership of the columns of x. Vector of length p, with each element containing the group label of the corresponding column in x.
test.group    group label for which we test nullity. Should be one of the values seen in groups. See Details for further explanation.
type    type of test to be performed. Can select one at a time. Options include the exact and approximate likelihood ratio prototype tests of Reid et al (2015) (ELR, ALR), the F test and the marginal screening prototype test of Reid and Tibshirani (2015) (MS). Default is ELR.
selected.col    preselected columns selected by the user. Vector of indices in the set {1, 2, ..., p}. Used in conjunction with groups to ascertain for which groups the user has specified selected columns. Should it find any selected columns within a group, no further action is taken to select columns. Should no columns within a group be specified, columns are selected using either the lasso or the marginal screening procedure, depending on the test. If all groups have prespecified columns, a non-selective test is performed, using the classical distributional assumptions (exact and/or asymptotic) for the test in question. If any selection is performed, selective tests are performed. Default is NULL, requiring the selection of columns in all the groups.
lambda    regularisation parameter for the lasso fit. Same for each group. Must be supplied when at least one group has unspecified columns in selected.col. Will be supplied to glmnet. This is the unstandardised version, equivalent to lambda/n supplied to glmnet.
mu    mean parameter for the response. See Details below. If supplied, it is first subtracted from the response to yield a zero-mean (at the population level) vector for which we proceed with testing. If NULL (the default), this parameter is treated as a nuisance 
parameter and accounted for as such in testing.
sigma    error standard deviation for the response. See Details below. Must be supplied. If not, it is assumed to be 1. Required for computation of some of the test statistics.
hr.iter    number of hit-and-run samples required in the reference distribution of a selective test. Applies only if selected.col is NULL. Default is 50000. Since dependent samples are generated, large values are required to generate good reference distributions. If set to 0, the function tries to apply a non-sampling selective test (provided selected.col is NULL), if possible. If a non-sampling test is not possible, the function exits with a message.
hr.burn.in    number of burn-in hit-and-run samples. These are generated first so as to make subsequent hit-and-run realisations less dependent on the observed response. Samples are then discarded and do not inform the null reference distribution.
verbose    should progress be printed?
tol    convergence threshold for iterative optimisation procedures.

Details

The model underpinning each of the tests is

    y = mu + sum_{k=1}^{K} theta_k * yhat_k + epsilon

where epsilon ~ N(0, sigma^2 I) and K is the number of predictor groups. yhat_k depends on the particular test considered. In particular, for the ELR, ALR and F tests, we have yhat_k = P_{M_k} (y - mu), where

    P_{M_k} = X_{M_k} (X_{M_k}' X_{M_k})^{-1} X_{M_k}'

X_M is the input matrix reduced to the columns with indices in the set M. M_k is the set of indices selected from considering group k of predictors in isolation. This set is either provided by the user (via selected.col) or is selected automatically (if selected.col is NULL). If the former, a non-selective test is performed; if the latter, a selective test is performed, with the restrictions Ay <= b, as set out in Lee et al (2015) and stacked as in Reid and Tibshirani (2015). For the marginal screening prototype (MS) test, yhat_k = x_{j*}, where x_j is the jth column of x and j* = argmax_{j in C_k} |x_j' y|, where C_k is the set of indices in the overall predictor set corresponding to predictors in the kth group. All tests test the null hypothesis H0: theta_{k*} = 0, where k* is 
supplied by the user via test.group. Details of each are described in Reid et al (2015).

Value

A list with the following four components:

ts    The value of the test statistic on the observed data.
p.val    Valid p-value of the test.
selected.col    Vector with columns selected for prototype formation in the test. If initially NULL, this will now contain indices of columns selected by the automatic column selection procedures of the test.
y.hr    Matrix with hit-and-run replications of the response. If a sampled selective test was not performed, this will be NULL.

Author(s)

Stephen Reid

References

Reid, S. and Tibshirani, R. (2015) Sparse regression and marginal testing using cluster prototypes. arxiv.org/pdf/1503.00334v2.pdf. Biostatistics doi:10.1093/biostatistics/kxv049
Reid, S., Taylor, J. and Tibshirani, R. (2015) A general framework for estimation and inference from clusters of features. Available online: arxiv.org/abs/1511.07839

See Also

prototest.univariate

Examples

require(prototest)

### generate data
set.seed(12345)
n = 100
p = 80
X = matrix(rnorm(n*p, 0, 1), ncol=p)
beta = rep(0, p)
beta[1:3] = 0.1 # three signal variables: numbers 1, 2, 3
signal = apply(X, 1, function(col){sum(beta*col)})
intercept = 3
y = intercept + signal + rnorm(n, 0, 1)

### treat all columns as if in same group and test for signal
# non-selective ELR test with nuisance intercept
elr = prototest.univariate(X, y, "ELR", selected.col=1:5)
# selective F test with nuisance intercept; non-sampling
f.test = prototest.univariate(X, y, "F", lambda=0.01, hr.iter=0)
print(elr)
print(f.test)

### assume variables occur in 4 equally sized groups
num.groups = 4
groups = rep(1:num.groups, each=p/num.groups)
# selective ALR test -- select columns 21-25 in 2nd group; test for signal in 1st; hit-and-run
alr = prototest.multivariate(X, y, groups, 1, "ALR", 21:25, lambda=0.005, hr.iter=20000)
# non-selective MS test -- specify first column in each group; test for signal in 1st
ms = prototest.multivariate(X, y, groups, 1, "MS", c(1, 21, 41, 61))
print(alr)
print(ms)

prototest.univariate    Perform Prototype or F Tests for Significance of Groups of Predictors in the Univariate 
Model

Description

Perform prototype or F tests for significance of groups of predictors in the univariate model. Choose either the exact or approximate likelihood ratio prototype test (ELR or ALR), the F test, or the marginal screening prototype test. Options for selective or non-selective tests. Further options for non-sampling or hit-and-run null reference distributions for selective tests.

Usage

prototest.univariate(x, y, type=c("ALR", "ELR", "MS", "F"), selected.col=NULL,
  lambda, mu=NULL, sigma=1, hr.iter=50000, hr.burn.in=5000, verbose=FALSE,
  tol=10^-8)

Arguments

x    input matrix of dimension n-by-p, where p is the number of predictors in a single predetermined group of predictors. Will be mean centered and standardised before tests are performed.
y    response variable. Vector of length n, assumed to be quantitative.
type    type of test to be performed. Can only select one at a time. Options include the exact and approximate likelihood ratio prototype tests of Reid et al (2015) (ELR, ALR), the F test and the marginal screening prototype test of Reid and Tibshirani (2015) (MS). Default is ELR.
selected.col    preselected columns specified by the user. Vector of indices in the set {1, 2, ..., p}. If specified, a non-selective (classical) version of the chosen test is performed. In particular, this means the classical chi-squared (1 df) reference distribution for the likelihood ratio tests and the F reference for the F test. Default is NULL, which directs the function to estimate the selected set with the lasso or the marginal screening procedure, depending on the test.
lambda    regularisation parameter for the lasso fit. Must be supplied when selected.col is NULL. Will be supplied to glmnet. This is the unstandardised version, equivalent to lambda/n supplied to glmnet.
mu    mean parameter for the response. See Details below. If supplied, it is first subtracted from the response to yield a mean-zero (at the population level) vector for which we proceed with testing. If NULL (the default), this parameter is treated as a nuisance parameter and accounted for as such in 
testing.
sigma    error standard deviation for the response. See Details below. Must be supplied. If not, it is assumed to be 1. Required for the computation of some of the test statistics.
hr.iter    number of hit-and-run samples required in the reference distribution of a selective test. Applies only if selected.col is NULL. Default is 50000. Since dependent samples are generated, large values are required to generate good reference distributions. If set to 0, the function tries to apply a non-sampling selective test (provided selected.col is NULL), if possible. If a non-sampling test is not possible, the function exits with a message.
hr.burn.in    number of burn-in hit-and-run samples. These are generated first so as to make subsequent hit-and-run realisations less dependent on the observed response. Samples are then discarded and do not inform the null reference distribution.
verbose    should progress be printed?
tol    convergence threshold for iterative optimisation procedures.

Details

The model underpinning each of the tests is

    y = mu + theta * yhat + epsilon

where epsilon ~ N(0, sigma^2 I) and yhat depends on the particular test considered. In particular, for the ELR, ALR and F tests, we have yhat = P_M (y - mu), where

    P_M = X_M (X_M' X_M)^{-1} X_M'

X_M is the input matrix reduced to the columns in the set M, which, in turn, is either provided by the user (via selected.col) or selected by the lasso (if selected.col is NULL). If the former, a non-selective test is performed; if the latter, a selective test is performed, with the restrictions Ay <= b, as set out in Lee et al (2015). For the marginal screening prototype (MS) test, yhat = x_{j*}, where x_j is the jth column of x and j* = argmax_j |x_j' y|. All tests test the null hypothesis H0: theta = 0. Details of each are described in Reid et al (2015).

Value

A list with the following four components:

ts    The value of the test statistic on the observed data.
p.val    Valid p-value of the test.
selected.col    Vector with columns selected. If initially NULL, this will now contain indices of columns selected by the automatic column selection procedures of the test.
y.hr    Matrix 
with hit-and-run replications of the response.If sampled selective test was not performed,this will be NULL.Author(s)Stephen ReidReferencesReid,S.and Tibshirani,R.(2015)Sparse regression and marginal testing using cluster prototypes./pdf/1503.00334v2.pdf.Biostatistics doi:10.1093/biostatistics/kxv049Reid,S.,Taylor,J.and Tibshirani,R.(2015)A general framework for estimation and inference from clusters of features.Available online:/abs/1511.07839.See Alsoprototest.multivariateExamplesrequire(prototest)###generate dataset.seed(12345)n=100p=80X=matrix(rnorm(n*p,0,1),ncol=p)beta=rep(0,p)beta[1:3]=0.1#three signal variables:number1,2,3signal=apply(X,1,function(col){sum(beta*col)})intercept=3y=intercept+signal+rnorm(n,0,1)###treat all columns as if in same group and test for signal#non-selective ELR test with nuisance interceptelr=prototest.univariate(X,y,"ELR",selected.col=1:5)#selective F test with nuisance intercept;non-samplingf.test=prototest.univariate(X,y,"F",lambda=0.01,hr.iter=0)print(elr)print(f.test)###assume variables occur in4equally sized groupsnum.groups=4groups=rep(1:num.groups,each=p/num.groups)#selective ALR test--select columns21-25in2nd group;test for signal in1st;hit-and-run alr=prototest.multivariate(X,y,groups,1,"ALR",21:25,lambda=0.005,hr.iter=20000) #non-selective MS test--specify first column in each group;test for signal in1st ms=prototest.multivariate(X,y,groups,1,"MS",c(1,21,41,61))print(alr)print(ms)Indexprint.prototest,3prototest(prototest-package),2prototest-package,2prototest.multivariate,4,5,10 prototest.univariate,4,7,811。
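For intuition, the marginal screening selection rule j* = argmax_j |x_j' y| and the least-squares fit of theta described under Details can be sketched in a few lines of plain Python. This is a toy illustration with made-up data, not the package's implementation; the centering/standardisation that prototest.univariate performs, and the selective conditioning on Ay <= b, are omitted.

```python
# Toy data (hypothetical values): 5 observations, 3 predictors
X = [
    [1.0,  0.2, -0.5],
    [0.8, -0.1,  0.3],
    [1.2,  0.0, -0.2],
    [0.9,  0.4,  0.1],
    [1.1, -0.3, -0.4],
]
y = [2.1, 1.7, 2.4, 1.9, 2.2]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

cols = list(zip(*X))  # columns x_1, ..., x_p of X

# Marginal screening selection: j* = argmax_j |x_j . y|
j_star = max(range(len(cols)), key=lambda j: abs(dot(cols[j], y)))

# Prototype y_hat = x_{j*}; least-squares estimate of theta in y = theta*y_hat + eps
y_hat = cols[j_star]
theta = dot(y_hat, y) / dot(y_hat, y_hat)
```

The selective versions of the tests then judge this theta against a null reference distribution that accounts for the data-driven choice of j*, which is what the hit-and-run sampling above is for.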
Package 'ConcordanceTest'  (October 12, 2022)

Title: An Alternative to the Kruskal-Wallis Based on the Kendall Tau Distance
Version: 1.0.2
Description: The Concordance Test is a non-parametric method for testing whether two or more samples originate from the same distribution. It extends the Kendall Tau correlation coefficient when there are only two groups. For details, see Monge (2020) <arXiv:1912.12880v2>.
Depends: R (>= 3.3.2)
Imports: Rglpk, stats, graphics
License: GPL-3
Encoding: UTF-8
RoxygenNote: 7.1.2
NeedsCompilation: no
Author: Javier Alcaraz [aut], Laura Anton-Sanchez [aut, cre], Juan Francisco Monge [aut]
Maintainer: Laura Anton-Sanchez <**************>
Repository: CRAN
Date/Publication: 2022-04-20 17:02:29 UTC

R topics documented: CT_Coefficient, CT_Critical_Values, CT_Density_Plot, CT_Distribution, CT_Hypothesis_Test, CT_Probability_Plot, LOP, Permutations_With_Repetition

CT_Coefficient — Concordance Coefficient and Kruskal-Wallis Statistic

Description
This function computes the Concordance coefficient and the Kruskal-Wallis statistic.

Usage
CT_Coefficient(Sample_List, H = 0)

Arguments
Sample_List  List of numeric data vectors with the elements of each sample.
H            0 by default. If set to 1, the Kruskal-Wallis statistic is also calculated and returned.

Value
The function returns a list with the following elements:
1. Sample_Sizes: Numeric vector of sample sizes.
2. order_elements: Numeric vector containing the order of the elements.
3. disorder: Disorder of the permutation given by order_elements.
4. Concordance_Coefficient: 1 - relative disorder of the permutation given by order_elements.
5. H_Statistic: Kruskal-Wallis statistic (only if H = 1).

Examples
## Example
A <- c(12, 13, 15, 20, 23, 28, 30, 32, 40, 48)
B <- c(29, 31, 49, 52, 54)
C <- c(24, 26, 44)
Sample_List <- list(A, B, C)
CT_Coefficient(Sample_List)
CT_Coefficient(Sample_List, H = 1)

## Example with ties
A <- c(12, 13, 15, 20, 24, 29, 30, 32, 40, 49)
B <- c(29, 31, 49, 52, 54)
C <- c(24, 26, 44)
Sample_List <- list(A, B, C)
CT_Coefficient(Sample_List, H = 1)

CT_Critical_Values — Critical Values of the Concordance and Kruskal-Wallis Tests

Description
This function computes the critical values and the p-values of the Concordance and Kruskal-Wallis tests for significance levels of .10, .05 and .01. Critical values and p-values can be obtained exactly or by simulation (the default option).

Usage
CT_Critical_Values(Sample_Sizes, Num_Sim = 10000, H = 0, verbose = TRUE)

Arguments
Sample_Sizes  Numeric vector (n1, ..., nk) containing the number of repetitions of each element, i.e., the size of each sample in the experiment.
Num_Sim       Number of simulations used to obtain the probability distribution of the statistics. The default is 10000. If set to 0, the critical values and the p-values are obtained exactly; otherwise they are obtained by simulation.
H             0 by default. If set to 1, the critical values and the p-values of the Kruskal-Wallis test are also calculated and returned.
verbose       A logical indicating whether a "progress report" of the simulations should be given. The default is TRUE.

Value
The function returns a list with the following elements:
1. C_results: Concordance coefficient results. Critical values and p-values for significance levels of .10, .05 and .01.
2. H_results: Kruskal-Wallis results. Critical values and p-values for significance levels of .10, .05 and .01 (only if H = 1).

Warning
The computational time of the exact calculations increases exponentially with the number of elements and with the number of sets.

Examples
Sample_Sizes <- c(3, 3, 3)
CT_Critical_Values(Sample_Sizes, Num_Sim = 0, H = 1)
CT_Critical_Values(Sample_Sizes, Num_Sim = 1000, H = 1)

CT_Density_Plot — Density Plot of the Concordance Coefficient and the Kruskal-Wallis Normalized Statistic

Description
This function produces a graphical visualization of the density distribution of the Concordance coefficient and the Kruskal-Wallis statistic.

Usage
CT_Density_Plot(C_freq = NULL, H_freq = NULL)

Arguments
C_freq  Probability distribution of the Concordance coefficient obtained with the function CT_Distribution.
H_freq  Probability distribution of the Kruskal-Wallis statistic obtained with the function CT_Distribution.

Examples
Sample_Sizes <- c(5, 5, 5)
Distributions <- CT_Distribution(Sample_Sizes, Num_Sim = 1000, H = 1)
C_freq <- Distributions$C_freq
H_freq <- Distributions$H_freq
CT_Density_Plot(C_freq, H_freq)

CT_Distribution — Probability Distribution of the Concordance Coefficient and the Kruskal-Wallis Statistic

Description
This function computes the probability distribution tables of the Concordance coefficient and the Kruskal-Wallis statistic. Probability distribution tables can be obtained exactly or by simulation (the default option).

Usage
CT_Distribution(Sample_Sizes, Num_Sim = 10000, H = 0, verbose = TRUE)

Arguments
Sample_Sizes  Numeric vector (n1, ..., nk) containing the number of repetitions of each element, i.e., the size of each sample in the experiment.
Num_Sim       Number of simulations used to obtain the probability distribution of the statistics. The default is 10000. If set to 0, the probability distribution tables are obtained exactly; otherwise they are obtained by simulation.
H             0 by default. If set to 1, the probability distribution table of the Kruskal-Wallis statistic is also calculated and returned.
verbose       A logical indicating whether a "progress report" of the simulations should be given. The default is TRUE.

Value
The function returns a list with the following elements:
1. C_freq: Matrix with the probability distribution of the Concordance coefficient. Each row of the matrix contains the disorder, the value of the coefficient, the frequency and its probability.
2. H_freq: Matrix with the probability distribution of the Kruskal-Wallis statistic. Each row of the matrix contains the value of the statistic, the frequency and its probability (only if H = 1).

Warning
The computational time of the exact calculations increases exponentially with the number of elements and with the number of sets.

Examples
Sample_Sizes <- c(5, 4)
CT_Distribution(Sample_Sizes, Num_Sim = 0)
CT_Distribution(Sample_Sizes, Num_Sim = 0, H = 1)
CT_Distribution(Sample_Sizes, Num_Sim = 1000)
CT_Distribution(Sample_Sizes, Num_Sim = 1000, H = 1)

CT_Hypothesis_Test — Hypothesis Test for Testing whether Samples Originate from the Same Distribution

Description
This function performs the hypothesis test for testing whether samples originate from the same distribution.

Usage
CT_Hypothesis_Test(Sample_List, Num_Sim = 10000, H = 0, verbose = TRUE)

Arguments
Sample_List  List of numeric data vectors with the elements of each sample.
Num_Sim      The number of simulations used. The default is 10000.
H            0 by default. If set to 1, the Kruskal-Wallis test is also performed and returned.
verbose      A logical indicating whether a "progress report" of the simulations should be given. The default is TRUE.

Value
The function returns a list with the following elements:
1. results: Table with the statistics and the significance levels.
2. C_p-value: Significance level of the Concordance test.
3. H_p-value: Significance level of the Kruskal-Wallis test (only if H = 1).

References
Myles Hollander and Douglas A. Wolfe (1973), Nonparametric Statistical Methods. New York: John Wiley & Sons. Pages 115-120.

Examples
## Hollander & Wolfe (1973), p. 116.
## Mucociliary efficiency from the rate of removal of dust in normal
## subjects, subjects with obstructive airway disease, and subjects
## with asbestosis.
x <- c(2.9, 3.0, 2.5, 2.6, 3.2) # normal subjects
y <- c(3.8, 2.7, 4.0, 2.4)      # with obstructive airway disease
z <- c(2.8, 3.4, 3.7, 2.2, 2.0) # with asbestosis
Sample_List <- list(x, y, z)
CT_Hypothesis_Test(Sample_List, Num_Sim = 1000, H = 1)

## Example
A <- c(12, 13, 15, 20, 23, 28, 30, 32, 40, 48)
B <- c(29, 31, 49, 52, 54)
C <- c(24, 26, 44)
Sample_List <- list(A, B, C)
CT_Hypothesis_Test(Sample_List, Num_Sim = 1000, H = 1)

## Example with ties
A <- c(12, 13, 15, 20, 24, 29, 30, 32, 40, 49)
B <- c(29, 31, 49, 52, 54)
C <- c(24, 26, 44)
Sample_List <- list(A, B, C)
CT_Hypothesis_Test(Sample_List, Num_Sim = 1000, H = 1)

CT_Probability_Plot — Probability Plot for the Concordance Coefficient and the Kruskal-Wallis Statistic

Description
This function produces a graphical visualization of the probability distribution of the Concordance coefficient and the Kruskal-Wallis statistic.

Usage
CT_Probability_Plot(C_freq = NULL, H_freq = NULL)

Arguments
C_freq  Probability distribution of the Concordance coefficient obtained with the function CT_Distribution.
H_freq  Probability distribution of the Kruskal-Wallis statistic obtained with the function CT_Distribution.

Examples
Sample_Sizes <- c(5, 5, 5)
Distributions <- CT_Distribution(Sample_Sizes, Num_Sim = 1000, H = 1)
C_freq <- Distributions$C_freq
H_freq <- Distributions$H_freq
CT_Probability_Plot(C_freq)
CT_Probability_Plot(C_freq, H_freq)

LOP — Linear Ordering Problem (LOP)

Description
This function computes the solution of the Linear Ordering Problem.

Usage
LOP(mat_LOP)

Arguments
mat_LOP  Preference matrix defining the Linear Ordering Problem: a numeric square matrix for which we want to obtain the permutation of rows/columns that maximizes the sum of the elements above the main diagonal.

Value
The function returns a list with the following elements:
1. obj_val: Optimal value of the solution of the Linear Ordering Problem, i.e., the sum of the elements above the main diagonal under the rows/columns permutation of the solution.
2. permutation: Solution of the Linear Ordering Problem, i.e., the rows/columns permutation.
3. permutation_matrix: Optimal permutation matrix of the Linear Ordering Problem.

References
Martí, R. and Reinelt, G. The Linear Ordering Problem: Exact and Heuristic Methods in Combinatorial Optimization. Springer, first edition, 2011.

Examples
## Square matrix
##
## | 1 2 2 |
## | 2 3 3 |
## | 3 2 2 |
##
## The optimal permutation of rows/cols is (2, 3, 1),
## and the solution of the Linear Ordering Problem is 8.
## The permutation matrix of the solution is
## | 0 0 0 |
## | 1 0 1 |
## | 1 0 0 |
mat_LOP <- matrix(c(1, 2, 3, 2, 3, 2, 2, 3, 2), nrow = 3)
LOP(mat_LOP)

Permutations_With_Repetition — Enumerate the Permutations of the Elements of a Vector When Some of those Elements are Identical

Description
This function enumerates the possible permutations of n elements where the first element is repeated n1 times, the second element n2 times, the third n3 times, and so on.

Usage
Permutations_With_Repetition(Sample_Sizes)

Arguments
Sample_Sizes  Numeric vector (n1, ..., nk) indicating the number of times each element is repeated.

Value
Returns a matrix in which each row contains a permutation.

Warning
The number of permutations and the computational time increase exponentially with the number of elements and with the number of sets.

Examples
Sample_Sizes <- c(2, 2, 2)
Permutations_With_Repetition(Sample_Sizes)
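The 3x3 example from the LOP help page above can be checked with a brute-force search over all row/column permutations. This is a stdlib-only Python sketch for intuition; the package itself (which imports Rglpk) presumably solves the LOP as an optimisation problem rather than by enumeration, which is only feasible for small matrices.

```python
from itertools import permutations

def lop_brute_force(mat):
    """Find the row/column permutation maximising the sum of the
    entries above the main diagonal (feasible only for small matrices)."""
    n = len(mat)
    best_val, best_perm = None, None
    for perm in permutations(range(n)):
        # Sum of mat[perm[i]][perm[j]] over all pairs i < j
        val = sum(mat[perm[i]][perm[j]] for i in range(n) for j in range(i + 1, n))
        if best_val is None or val > best_val:
            best_val, best_perm = val, perm
    return best_val, best_perm

# The 3x3 matrix from the LOP example: rows |1 2 2|, |2 3 3|, |3 2 2|
mat = [[1, 2, 2], [2, 3, 3], [3, 2, 2]]
val, perm = lop_brute_force(mat)
# val == 8 and perm == (1, 2, 0), i.e. (2, 3, 1) in 1-based indexing
```

This reproduces the documented optimum: objective value 8 under the permutation (2, 3, 1).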
Design guidelines for in-circuit testability

TECHNICAL PAPER

ICT checks each component on a PCB individually, delivering highly reliable results. It detects defects such as wrong or missing components, solder bridges and short circuits. While investing in this solution requires a larger initial upfront cost when compared to flying probe, the cost per unit is negligible - often less than £1 - due to the swift test time (less than one minute).

The ICT method comprises a test fixture and program dedicated to the target test platform and unit under test (UUT). In order to optimise the effectiveness of the test, the circuit design and layout of the PCBA itself must be suitable for ICT. Therefore, design for test (DFT) is an integral stage of PCB realisation; without taking this phase into account, you will not be able to get the most from your test strategy.

This technical paper describes what should be taken into consideration, to help maximise the achievable test coverage - and deliver the highest quality PCBAs to your customers.

TEST METHODS AND PRACTICE

Firstly, let's look at the basics of ICT, as these provide a background to the design requirements detailed later.

Computer-aided design (CAD) data, usually the ASCII file for the PCBA, is processed through an appropriate software package to produce the test fixture design files. It is also used, along with the bill of materials (BOM) and circuit diagrams, to create an automatic test equipment (ATE) input, which is used to generate and optimise the ICT program.

The test program is first set to detect short circuits wherever test access is available; it then features routines to measure the value of all testable discrete components, ensuring their correct measurement and isolated performance.

Digital "vector" tests are used for all ICs where templates exist - for example, standard parts such as logic gates. Wherever possible, presence and orientation tests will be applied to all ICs.

"Vectorless testing", such as Framescan(TM) or TestJet(TM), can be used to detect dry joints on integrated circuits and connectors.

Function tests can be set for powered or unpowered analogue devices. Crystal frequencies can also be measured where test access and type make this possible.

The fixture will comprise a "bed of nails" to probe the UUT, and usually feature a "hold down" gate on the top, to ensure that PCBs with open vias or irregular profiles are not susceptible to vacuum fixture sealing problems.

When ICT development is completed, it is standard to produce a documentation package detailing:
• Coverage report listing tested, partially tested and untested components;
• Details of fixture wiring;
• Details of probe types used;
• Software.

CIRCUIT DESIGN CONSIDERATIONS

The aim of ICT is to individually test components in isolation from the rest of the circuitry. A "bed-of-nails" fixture is created, ideally having access to each electrical node/net on the UUT via a test pad, providing access to all signals to an individual component. If this is not possible then "cluster" testing of groups of components can be performed. However, this increases programming and test time, as well as cost, as faults will be traced to a cluster of parts instead of the individual faulty component.

To maximise test coverage, the following points should be considered:
• Test pads should appear as components in the CAD file, with unique identifiers and XY co-ordinates.
• All digital devices that are controlled by a chip enable/select should have this pin connected to the power supply rail through a pull-up/down resistor, not directly tied. Any resistor value from 100R to 100K is acceptable.
• Batteries should not be fitted at in-circuit test. If they are fitted, a removable link must be provided to isolate the battery from the rest of the circuit.
• All chips that have a RESET pin should also have this pin available for individual test control via a pull-up/down resistor. Try to avoid directly connecting the reset line of the microprocessor/microcontroller to another bussed integrated circuit. Use a low value resistor (approximately 100 ohms). This will allow the micro to be kept in reset while testing the other device.
• Where a device is driven by an external clock circuit, this should be driven via a tri-state buffer (i.e. not directly) or an AND gate with its other signal controlled by a pull-up resistor. The oscillator can then be stopped from affecting the stability of other tests.
• Any unused inputs or outputs of devices should have their signals tied individually, preferably via a resistor, rather than directly to supply rails. This enables the use of standard library tests where available, making programming quicker and therefore less expensive.
• Show the spare gates and unused pins of ICs on circuit diagrams. On surface mount (SMT) designs a test pad should ideally be provided for unused pins of ICs, as otherwise short circuits to pins with no test points will not be detected.
• To keep test times as short as possible, try to use low capacitance values on control lines - e.g. power-on reset. This enables all digital tests to be carried out at higher speeds.
• To help avoid unstable test results due to "back-driving", where programmable devices are to be used:
  • Do not tie chip enable pins to supply rails; use pull-ups;
  • Attempt to include a test vector that will either cause the output pins to become tri-state or active high;
  • Program a combination of input stimuli that will drive all output signals high.

FIXTURING AND PCB LAYOUT CONSIDERATIONS

The minimum centre-to-centre spacing between test points is governed by the pitch limitations of ICT probes. The following chart shows the pitch between test point centres that is achievable using the three standard sizes of test probes:

Where possible, try to keep the distance between test points/pads to a minimum of 0.100", so standard 100 mil probes can be used. When this cannot be achieved there are smaller probes that can be used down to 0.050", and specialist probes that will afford contact on even closer pitch - but experience has shown that the smaller the probe, the larger the probability of contact problems and reduced long-term reliability. However, it is possible to mix the various sizes on a single fixture. Another point to remember is that smaller probes tend to cost considerably more.

PCB test pads must be at least 0.125" away from the edge of the PCB. If the PCB is mounted in a frame, the targets need to be within the inside edge of the frame by this amount.

Test pads should preferably be 0.05" in diameter, but can be reduced if absolutely necessary by using alternative methods of tooling and fixture design. Square test pads are preferred as they have 27 per cent more target area than the round shape.

Test pads can be placed over the top of vias, though the hole must not exceed 0.02" diameter and the wall thickness must be sufficient to withstand the probe pressure.

Multiple test pads should be included for ground and power supply rails, and should be distributed across the PCB.

Test targets should be at least 0.100" away from any components on the probing side of the PCB.

Test pads should be distributed evenly over the surface of the PCB. Areas of high probing density should be avoided as this may cause flexing of the PCB. This in turn may cause flexing damage to components. If high-density areas do exist then the board should be stress analysed and/or redesigned.

Space must be left on the component side of the board to allow "pusher rods" to press the board down. This is particularly important in areas of high probe density. Pusher rods are typically 2 mm diameter, and one is required for approximately every 2 in² of PCB. Note that fixtures for ICT are normally actuated by vacuum, but options for mechanical and pneumatically actuated fixtures are also available.

Test pads must be provided on each net where tracks interconnect SMT devices. However, on mixed technology boards connection can often be made on conventional through-hole components. Although probing the legs of through-hole components is acceptable, more reliable results can be obtained with the use of dedicated test pads. Through-hole components should have controlled lead length, ideally no greater than 0.062". Keeping the lead lengths the same will assist with an even compression of all test probes, resulting in consistent compression forces.

Ensure that the solder resist does not cover any test pads or vias. Also ensure soldered test pads, and particularly vias, are domed with solder above the height of the solder resist mask. (Figure: solder resist above the test pad - not OK; solder resist below the test pad - OK.)

To ensure accurate positioning of the board in the fixture, the tooling holes in the PCB should be at least 3 or 4 mm diameter (0.125") - though 2 mm is possible if essential. Tolerances should be +0.05 mm, -0.0 mm. The tooling holes must not be plated. Fit two tooling holes per PCB, ideally located in diagonally opposed corners (pitch tolerance ±0.125 mm), keeping the distance between them as great as possible but located a minimum distance of 3 mm from the edge of the PCB. Tooling holes should appear in the CAD data as unique components with XY coordinates.

There should be an area of at least 5 mm diameter around the tooling hole, on the underside of the board, which is free of components and tracks. This allows good board support without the possibility of track shorts to the tooling pins.

The minimum PCB thickness should be 0.062" (1.58 mm).

It is preferable not to perform "cut and strap" wiring modifications on the underside of the board, as the wiring could obscure the test pads.

Any components with a standoff height greater than 0.100" on the underside of the board will require milling of the fixture, which will add to the cost. Standard SMT devices are normally less than this and present no problem.

It is mandatory to keep component bodies at least 0.050" away from adjacent test pad centres, but try to achieve 0.100" wherever possible.

Probing on the component legs of SMT devices is highly undesirable, as this could turn a potential dry joint into a good contact during test: the probe tip in contact with the leg can compress the contact between the component leg and the solder pad, and thus a defective joint will pass undetected.

The best layout configuration is to have all the components on one side of the board and all test pads on the other. Fixtures are available that will allow probing of both sides of the board, but this will increase the fixture cost and program debug time. If the only way to incorporate test pads is to use both sides of the board, then the minimum size of test pad target for topside probing is 0.040".

Where functional testing is required as well as ICT on an in-circuit tester, it is essential to withdraw the probes that are used for ICT during the functional test cycle of the board. There are three options for dual stage contact within fixturing:

a) Standard vacuum fixture technology featuring a "shuttle plate" that controls the length of travel of the top plate during the ICT and functional test cycle. By using a mixture of standard and "long throw" probes, access can easily be modified to restrict in-circuit probe contact during functional testing.

b) Standard and long throw probes are used, but fixture actuation is performed by pneumatic cylinders with two travel lengths. This proves to be a more robust, but more expensive, solution.

c) Use discrete pneumatic probes for functional test point access.

Whichever solution is adopted, it is advisable to take special care over the design of the test pads and their spacing. Long throw probes are readily available and reliable in 0.100" format. They are also available in 0.075" format but are perhaps not as reliable.

Another point to consider is that while the board is being functionally tested, it is generally running at higher speed than at in-circuit test and is thus more susceptible to noise caused by the wiring contained in the fixture. In order to minimise this effect, it is desirable to design a board with two test pads per functional net, so only the wiring associated with functional testing is connected to the board during this part of the test cycle.

To summarise:
1. Minimum 0.035" diameter test pads on the bottom side of the PCB.
2. Minimum 0.040" diameter test pads on the top side of the PCB.
3. Test point pitch preferred at 0.100", as this is the most reliable and lowest cost.
4. At least 2x 0.125", diagonally opposite, non-plated-through tooling holes in the PCB.
5. Test pads on one side of the board only.
6. All test pads/points a minimum of 0.125" away from the edge of the PCB.
7. No component more than 0.100" high on the solder side of the PCB.
8. No wire links on the test access side of the PCB.
9. Two test points per functional test net.
10. Minimum 0.075" probes for functional testing.

MANUFACTURING TOLERANCES OF PCBS AND FIXTURES

In order to achieve consistent test results and contact accuracy, both fixture and PCB manufacturing tolerances need to be controlled. The chart below outlines typical tolerances that can be expected in fixturing, and the desired tolerances of PCBs:

The total tolerance stack-up is double the sum of the individual tolerances, as the tolerances are ± (effectively doubling the size of the tolerance error). It is most critical to control and monitor the PCB artwork tolerance, as in the above scenario there is only a margin of ±0.0015" before the tolerance stack starts to exceed the theoretical limit of contact accuracy in a standard ICT fixture.

INFORMATION REQUIRED TO PRODUCE ICT FIXTURES AND PROGRAMS

• CAD data, usually in ASCII format, with identification of the version and format of the files.
• A bill of materials, preferably in electronic format (e.g. Word/Excel).
• Circuit diagrams / schematics, ideally in electronic format (JPG, TIFF or PDF).
• Populated and bare PCBs of the revision level required for the fixture and test program. If these are not available, it may be acceptable to use older issue boards provided acetates of the current revision are supplied with the changes clearly identified.
• Details of the target test system type and configuration.
• Datasheets on any non-standard components (e.g. ASICs).
• Test programming protocol specifications.
• Fixture default specifications.
• Details of any health and safety/compliance issues, especially relating to fixture handling or electrical test program operation.

CONCLUSION

In-circuit test is one of the most popular types of automated test equipment (ATE) used in medium to high volume electronic PCB assembly.
And while it delivers highly reliable results, it is vital to ensure that your PCB is designed correctly in order to achieve the optimum results.

You should ensure that you have the correct CAD data and schematics, and test pads should be designed into the PCB up front. Manufacturing tolerances need to be controlled for both the fixture and the PCB.

ICT tests every component individually, identifying issues such as wrong or missing components, solder bridges and short circuits. And with a test time typically less than one minute and a cost per unit typically less than £1, it is a cost-effective method - one which more than recompenses for the initial upfront cost.

Investing in an ICT solution can help your business to consistently deliver fully functional PCBs, on time and to specification. The time and money saved can be redirected back into other core aspects of your operation.

Call us on +44 (0)1455 55 55 00 to discuss any points raised in this paper or to talk through your own outsourcing requirements.
Progress in Veterinary Medicine, 2021(2): 117-121

Research and Application of Immunochromatographic Assay Technology

LIU Chang, YANG Linyan, WANG Yixia, ZHANG Wei, LI Daowen, LI Liu'an, LI Cun*
(College of Animal Science and Veterinary Medicine, Tianjin Agricultural University / Tianjin Key Laboratory of Agricultural Animal Breeding and Healthy Husbandry, Tianjin 300392, China)

Abstract: Immunochromatographic assay (ICA) is a rapid detection technique that combines chromatographic principles with antigen-antibody reactions, and it is applied in many areas, including veterinary drug residue detection, environmental monitoring and disease diagnosis. To improve the sensitivity of the traditional colloidal gold method, novel labelling materials such as fluorescent materials and magnetic nanoparticles have gradually been introduced into ICA; the biotin-avidin system and noble metals also provide signal amplification, helping to lower detection limits. The adoption of quantitative readout instruments has improved the way ICA results are interpreted, further meeting the need for quantitative detection. Together, the evolution of labelling materials and the application of analytical instruments are driving ICA towards higher sensitivity and quantitative detection. This paper reviews the evolution and development trends of ICA labelling materials, as well as the applications of the technique in veterinary drug residue detection, food safety and animal disease diagnosis.

Key words: immunochromatographic assay; labelling materials; development trends; applications
CLC number: S854.43; S852.43    Document code: A    Article ID: 1007-5038(2021)02-0117-05

Immunochromatographic assay (immunochromatographic analysis, ICA) originated in the late 20th century and is a rapid detection technique that combines chromatography with immunological methods [1]. Because of its short detection cycle, low cost and lack of special requirements for instruments or personnel, ICA plays an irreplaceable role in point-of-care testing (POCT) and has broad development prospects; the most classical labelled detection format is colloidal gold ICA [2]. With the development of nanotechnology and the application of new materials, ICA continues to innovate and will gain an even wider range of applications and broader development prospects.

1 Overview of ICA
1.1 Principle of ICA
Taking colloidal gold as an example, ICA labels an antigen or antibody with colloidal gold. During chromatography, the colloidal gold conjugate migrates along the test strip with the sample solution by capillary action and binds to the corresponding receptor at the test line (T line); the accumulated particles produce a reddish-brown colour.
Problem A1: Test Pattern Generation by Using Reseedable LFSRsAbstractBuilt-In-Selft-Test (BIST) is one of most popular test solutions to test the embedded cores in a chip. The BIST circuit can generate pseudo random test patterns to test these embedded cores without using expensive Automatic Test Equipments (A TEs). On the other way, if you have test vectors which are generated by the test generator, you can design a pseudo random pattern generator suitable for hitting these test vectors in short time. In this subject, try to write a program to create the pseudo random pattern generator, performed by Linear Feedback Shift Registers (LFSRs), for hitting the generated test vectors. The output of LFSRs can be rearranged before feeding to the input of Core Under Test (CUT). The lengths of LFSRs and the numbers of LFSRs are no limit. In the vectors generation procedure, the seeds of LFSRs can be changed for the purpose of saving BIST time.1.Introduction:Linear Feedback Shift Registers (LFSRs) are widely adopted as the pseudo-random test pattern generators for built-in self test (BIST) of logic circuits shown in Figure1, due to its low hardware overhead. However, there are many faults in the circuits under test (CUTs) that are hardly detected by the test patterns generated by using the traditional LFSRs, certain deterministic patterns are needed for achieved higher fault coverage. These extra patterns can be stored in the memory of an external tester or the on-chip memory.In order to reduce the time of LFSR to generate patterns for testing such hard to detect fault, storage requirements, one of the solutions is to change seeds of the LFSRs during test pattern generation. A seed refers to the initial state of an LFSR. The procedure for changing the seed of the LFSR is called reseeding. 
Since the seeds still need to be stored in memory, a good reseeding strategy is needed to reduce a large test data volume into a smaller set of seeds.

Figure 1: BIST Architecture

The structure of an LFSR is composed of D flip-flops (D-FFs) and exclusive-OR gates. It can be expressed by a polynomial in X. Figure 2 shows the division-type LFSR with polynomial X^4 + X^3 + 1.

Figure 2: The division-type LFSR with polynomial X^4 + X^3 + 1

It is easy to see that a degree-N polynomial requires N D-FFs. The coefficient of each exponent denotes an insertion point of an exclusive-OR gate in the shifting path. When the LFSR starts or restarts with a new seed, the LFSR is first reset to zero. Then the seed value is applied sequentially from I0 (see Figure 2). While the LFSR operates in test-pattern-generation mode, I0 is set to 0. Therefore, there are two penalties for reseeding: one is the memory occupied by the seeds, and the other is the clock cycles needed to reset the LFSR and shift the seeds in.

2. LFSR Design Issue and Partition

It is not practical to implement one big LFSR to test a circuit with a large number of inputs, because such an implementation often results in lower fault coverage (F.C.) and longer BIST time. A good partition of the LFSR will improve both F.C. and BIST time. Figure 3 shows a CUT with k-bit inputs (Ik, Ik-1, ..., I1). The LFSR is partitioned into m smaller LFSRs (LFSRm, LFSRm-1, ..., LFSR1), with sizes Lm, Lm-1, ..., L1, respectively, and hence the total size of the LFSRs is Lm + Lm-1 + ... + L1 = k. Since the LFSRs need s seeds to generate the test vectors for the CUT, an (s * k)-bit memory is needed to store the seeds.

Figure 3: LFSR partition

Assume that there are n test vectors (V1, V2, ..., Vn) in the test set. If you cannot find any correlation (equivalence or inversion, i.e., Ii = Ij or Ii = Ij') between inputs of the test patterns, the optimal size of the LFSR is k.
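As a concrete illustration of the pattern cycle such a polynomial produces, the sketch below steps a 4-bit LFSR for X^4 + X^3 + 1. It is modeled Fibonacci-style for brevity rather than as the division type the contest uses; the function name and tap convention are illustrative assumptions, but the maximal-length behavior (2^4 - 1 = 15 distinct non-zero states) is a property of a primitive polynomial of degree 4 in either structure.

```python
def lfsr_sequence(taps, width, seed):
    """Enumerate the state cycle of a Fibonacci-style LFSR (illustrative).

    taps  -- exponents of the feedback polynomial with coefficient 1
             (the constant term is implicit), e.g. [4, 3] for X^4 + X^3 + 1
    width -- number of D-FFs (degree of the polynomial)
    seed  -- initial non-zero state, as an integer
    """
    state = seed
    states = []
    while True:
        states.append(state)
        # Feedback bit: XOR of the tapped stages (right-shift convention).
        fb = 0
        for t in taps:
            fb ^= (state >> (width - t)) & 1
        # Shift right; the feedback bit enters at the most significant end.
        state = (state >> 1) | (fb << (width - 1))
        if state == seed:
            return states

cycle = lfsr_sequence([4, 3], 4, 0b0001)
print(len(cycle))  # 15: a primitive degree-4 polynomial visits all non-zero states
```

Because every non-zero 4-bit value appears exactly once per cycle, any deterministic 4-bit pattern is eventually hit; reseeding simply lets the generator jump to the part of the cycle containing the patterns of interest.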
However, correlations between inputs are possible in the test set. Hence, there are two cost considerations in test application.

2.1 The hardware cost

Figure 4: Example of a test set

You can use both the bit-broadcast and bit-inversion techniques to reduce the required length of the LFSRs if correlations, Ii = Ij or Ii = Ij', exist between inputs of the test patterns. The total length of the LFSRs, L1 + L2 + ... + Lm = Ltotal, will then be smaller than k. In Figure 4, the input length k is 10 and the test set is composed of 13 test vectors (V1, V2, ..., V13). The minus sign denotes "don't care". The patterns are listed in descending order from I10 to I1. In this test set, I8 and I5 are equivalent or compatible in every test vector, and I7 and I3 are inverted or compatible in every test vector. The conditions under which two bits are called equivalent, inverted, or compatible are listed in Table 1. You can combine I8 and I5 into a single bit, and combine I7 and I3 by adding an inverter, to save 2 D-FFs of the LFSRs.

2.2 The test time cost

In some cases, if we use all bits of the LFSRs as test inputs, it will cost more test time to reach a high pattern coverage rate (= Hit Patterns / Total Test Patterns). One solution is to appropriately increase the size of the LFSRs and use only some of their outputs as test inputs. In such a condition, Ltotal will be larger than k.
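The equivalence/inversion check of Section 2.1 can be sketched as follows. The test set below is a small hypothetical example (not the one in Figure 4), and the function name and column indexing are illustrative assumptions.

```python
def compatible(vectors, i, j):
    """Classify columns i and j (0-indexed from the left) of a test set.

    Returns "equivalent" if the bits agree wherever both are specified,
    "inverted" if they disagree wherever both are specified, and None
    otherwise.  '-' marks a don't-care bit.
    """
    equal = inverse = True
    for v in vectors:
        a, b = v[i], v[j]
        if a == '-' or b == '-':
            continue  # a don't-care bit is compatible with anything
        if a == b:
            inverse = False
        else:
            equal = False
    if equal:
        return "equivalent"
    if inverse:
        return "inverted"
    return None

# Hypothetical 4-vector, 4-input test set (leftmost column is I4).
vecs = ["1-01", "0110", "-001", "1-0-"]
print(compatible(vecs, 0, 2))  # -> inverted
print(compatible(vecs, 1, 2))  # -> equivalent
```

Each "equivalent" pair found this way saves one D-FF by bit broadcast, and each "inverted" pair saves one D-FF at the cost of an inverter.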
The required formats are listed in the Output Format section.

The command-line format for this program is

    <exe_filename> <vector_file>.vec

where <exe_filename> is the execution file of this program after compiling, and <vector_file>.vec is the input filename with the (.vec) file extension. The program must write an output file after execution. The output filename format is

    <vector_file>.out

where (.out) is the file extension of the output file.

4. Input Format

The input file contains three keywords: INPUT_NO, VECTOR_NO, and VECTORS, which represent the number of inputs, the number of test vectors, and the test vectors, respectively. In Figure 4, the number of inputs is 10 and the number of test vectors is 13. The input file is written as follows:

    INPUT_NO 10
    VECTOR_NO 13
    VECTORS    // I10 I9 I8 I7 I6 I5 I4 I3 I2 I1
    1-01101000 // V1
    000-101111 // V2
    ....
    1001100011 // V13

5. Output Format

The output structure file contains 8 keywords: TOTAL_SIZE, LFSR_NO, POLY, INPUT_SEQ, SEED, CYCLE, END, and COVERAGE, which represent the total size of the LFSRs, the number of LFSRs, the polynomials of the LFSRs, the input order, the seeds of the LFSRs, the number of clock cycles for reseeding, the end of the pattern-generation procedure, and the pattern coverage in percentage, respectively. Note that POLY and SEED list every LFSR's polynomial and seed from LFSRm down to LFSR1. A polynomial is expressed as the list of exponents whose coefficient is 1. For example, suppose three LFSRs (LFSR3, LFSR2, and LFSR1) are used in the design, with polynomials X^4 + X^3 + 1, X^3 + X^2 + 1, and X^3 + X + 1, respectively. The corresponding arguments of POLY are 4 3 0, 3 2 0, and 3 1 0. All numbers in POLY are separated by blanks or tabs, and the exponents of each LFSR's polynomial are listed in descending order. The seed is represented in binary format. For example, if the initial (or reseeding) state of X^4 + X^3 + 1 is {0 0 1 0} in your design, its corresponding representation is 0010.
You can insert spaces, tabs, or underscores between binary digits for readability.

The LFSRs' output IDs, which are not listed in the output file but are used for INPUT_SEQ, are numbered from TOTAL_SIZE down to 1 by default. INPUT_SEQ specifies the connections from the LFSRs' outputs to the CUT's inputs. For example, a 3-LFSR structure with X^4 + X^3 + 1, X^3 + X^2 + 1, and X^3 + X + 1 forms a 10-bit test pattern generator, eight bits of which are used as inputs of the CUT. The output structure file is shown below, and the connection between the LFSRs and the CUT is shown in Figure 5.

At the 0th clock cycle, the LFSR is reset. Starting at the 1st clock cycle, the 1st seed is shifted into the LFSRs through I0, which initializes the pattern-generation sequence. This initialization step takes k clock cycles to shift in a seed if the total LFSR size is k. When the LFSRs need a new seed, one extra clock cycle is needed to reset the LFSRs before another k clock cycles shift the new seed in. So the total time penalty for s seeds is

    s * (k + 1) clock cycles

Take the output structure file in Figure 5 as an example. At the 0th cycle, all D-FFs in the LFSRs are reset to 0. Then the seeds (0011), (001), and (010) are shifted into the 3 LFSRs in 10 cycles. After this initialization step the pattern sequence starts; in fact, the first pattern is the seed itself. When reseeding, the LFSRs are reset, and then a new seed is shifted into the LFSRs again. After the zz-th cycle, if no further generated pattern can match any of the undetected input vectors, the pattern-generation procedure ends, and (CYCLE zz, END) is written into the output file.
Last, the program must report the pattern coverage in percentage on the last line of the output file.

    TOTAL_SIZE 10        // the structure of LFSRs
    LFSR_NO 3            // partitions of LFSRs
    POLY 4 3 0 3 2 0 3 1 0
    INPUT_SEQ 8 5 4 3 2 1 - 6 - 7
    SEED 0011 001 010    // initial seed
    CYCLE xx             // 1st reseeding
    SEED xxxx xxx xxx
    CYCLE yy             // 2nd reseeding
    SEED yyyy yyy yyy
    CYCLE zz             // end of pattern generation
    END
    COVERAGE PC %        // pattern coverage

Figure 5: Example of output structure file

Example

We could use exhaustive search to find the best solution for the example in Figure 5, but that is time-consuming when the test data are enormous. For the example in Figure 4, if we choose the two LFSRs X^3 + X + 1 and X^5 + X^4 + X^3 + X + 1, the output structure is:

    TOTAL_SIZE 8
    LFSR_NO 2
    POLY 3 1 0 5 4 3 1 0
    INPUT_SEQ 2 4 9 7,-3 6 1 8,5 10
    SEED 010 11001
    CYCLE 51
    END
    COVERAGE 100%

If we use a single seed to generate the test patterns, as in Case 1, it takes 51 cycles to cover all the test patterns.

Case 1:
    cycle 1: reset
    cycle 2~9: seeding 010 11001
    cycle 10: LFSR=010 11001, LFSR out=10011100
        Applied pattern=1001101000, hit pattern V1
    cycle 15: LFSR=101 01010, LFSR out=01101010
        Applied pattern=0110110110, hit pattern V9
    cycle 18: LFSR=100 11101, LFSR out=10011011
        Applied pattern=1001100011, hit patterns V7 V13
    cycle 30: LFSR=001 01110, LFSR out=01101001
        Applied pattern=0110110101, hit patterns V4 V10
    cycle 33: LFSR=011 00110, LFSR out=01100101
        Applied pattern=0110011101, hit pattern V6
    cycle 34: LFSR=110 01100, LFSR out=00001111
        Applied pattern=0000101111, hit pattern V2
    cycle 36: LFSR=101 01011, LFSR out=11101010
        Applied pattern=1110110110, hit pattern V8
    cycle 37: LFSR=001 10110, LFSR out=01110001
        Applied pattern=0111010001, hit patterns V11 V12
    cycle 51: LFSR=001 00010, LFSR out=01100000
        Applied pattern=0110010100, hit pattern V3

If we apply two seeds during the pattern-generation procedure, as in Case 2, it takes only 45 cycles to cover all the test patterns.
The comparison results are shown in Table 2.

Case 2:
    TOTAL_SIZE 8
    LFSR_NO 2
    POLY 3 1 0 5 4 3 1 0
    INPUT_SEQ 2 4 9 7,-3 6 1 8,5 10
    SEED 010 11001
    CYCLE 19
    SEED 001 01110
    CYCLE 36
    SEED 001 00010
    CYCLE 45
    END
    COVERAGE 100%

    cycle 1: reset
    cycle 2~9: seeding 010 11001
    cycle 10: LFSR=010 11001, LFSR out=10011100
        Applied pattern=1001101000, hit pattern V1
    cycle 15: LFSR=101 01010, LFSR out=01101010
        Applied pattern=0110110110, hit pattern V9
    cycle 18: LFSR=100 11101, LFSR out=10011011
        Applied pattern=1001100011, hit patterns V7 V13
    cycle 19: reset
    cycle 20~27: seeding 001 01110
    cycle 28: LFSR=001 01110, LFSR out=01101001
        Applied pattern=0110110101, hit patterns V4 V10
    cycle 31: LFSR=011 00110, LFSR out=01100101
        Applied pattern=0110011101, hit pattern V6
    cycle 32: LFSR=110 01100, LFSR out=00001111
        Applied pattern=0000101111, hit pattern V2
    cycle 34: LFSR=101 01011, LFSR out=11101010
        Applied pattern=1110110110, hit pattern V8
    cycle 35: LFSR=001 10110, LFSR out=01110001
        Applied pattern=0111010001, hit patterns V11 V12
    cycle 36: reset
    cycle 37~44: seeding 001 00010
    cycle 45: LFSR=001 00010, LFSR out=01100000
        Applied pattern=0110010100, hit pattern V3

Table 2: The solutions for the test set in Figure 4

Grade criteria:
1. Pattern coverage = (Detected Patterns) / (Total Test Patterns)
2. BIST time (the number of required clock cycles, including reset and reseeding) = Applied test cycles + Seed number * (1 + Total size of LFSRs) = last CYCLE
3. Total LFSR size = Lm + Lm-1 + ... + L1
4. Total data volume of the seeds (in bits) = Seed number * Total size of LFSRs
5. Upper bound on the number of seeds in use = 0.2 * Total Test Patterns
6. Performance (run time, memory usage)
7. The program must terminate within six hours for all cases.

References:
1. B. Koenemann, "LFSR-coded test patterns for scan designs," Proc. of European Test Conf., 1991, pp. 237-242.
2. S. Hellebrand, S. Tarnick, J. Rajski, and B. Courtois, "Generation of Vector Patterns Through Reseeding of Multiple-Polynomial Linear Feedback Shift Registers," Proc.
of IEEE Int'l Test Conf., 1992, pp. 120-129.
3. P. H. Bardell, W. H. McAnney, and J. Savir, "Built-In Test for VLSI: Pseudorandom Techniques," John Wiley & Sons, 1987.
4. M. Abramovici, M. A. Breuer, and A. D. Friedman, "Digital Systems Testing and Testable Design," Computer Science Press, 1990.
5. M. L. Bushnell and V. D. Agrawal, "Essentials of Electronic Testing for Digital, Memory, and Mixed-Signal VLSI Circuits," Kluwer Academic Publishers, 2000.

Appendix

Division-type LFSR with polynomial X^3 + X^2 + 1

(State table and cyclic pattern sequence: because the polynomial is primitive, the 3-bit LFSR steps through all 7 non-zero states before the sequence repeats.)
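The appendix's state cycle can be reproduced with a short simulation. The sketch below steps a division-type (Galois-style) 3-bit LFSR; the tap mask 0b101 is one common right-shift encoding of the X^3 + X^2 + 1 feedback, so the exact bit ordering is an assumption, but the maximal 7-state cycle is a property of the primitive polynomial itself.

```python
def galois_cycle(mask, width, seed):
    """Enumerate states of a right-shifting Galois (division-type) LFSR.

    mask  -- XOR taps applied when a 1 is shifted out (assumed encoding)
    width -- number of D-FFs
    seed  -- initial non-zero state
    """
    state = seed
    states = []
    while True:
        states.append(state)
        out = state & 1          # bit shifted out of the register
        state >>= 1
        if out:
            state ^= mask        # XOR gates inserted along the shift path
        if state == seed:
            return states

cycle = galois_cycle(0b101, 3, 0b001)
print(len(cycle))  # 7: every non-zero 3-bit state appears exactly once
```

The division type needs only one XOR per tap inside the shift path, which is why the contest statement favors it for low hardware overhead.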
BOARD TEST

Vector vs. Vectorless ICT Test Techniques
by Alan Albee and Michael J. Smith, Teradyne

Over the last decade, there has been a move away from powered-up digital in-circuit vector testing to unpowered, analog-based (vectorless) device-pin opens testing for large and sometimes small digital devices. Two of the driving factors behind this move are the increasingly complex and custom design of the digital devices being used and the limited digital in-circuit test (ICT) capabilities of most ICT systems. Most ICT systems on the market today simply do not have the timing and voltage accuracies required to reliably and safely test today's generation of low-voltage, high-speed digital components.

At the same time, analog vectorless measurement techniques that can detect open pins have become more acceptable and widespread. The introduction of active capacitive probes and advanced software algorithms has made the open-pin defect detection provided by these techniques acceptable to most manufacturers.

ICT program developers also lack the digital device models required to perform digital vector testing, and they do not have the time required to write digital vector models for complex devices. As a result, many advanced ICT systems also are using analog opens techniques.

Figure 1. Fixture With Probes Fitted, Amplifier, and Multiplier Card

This has almost leveled the defect coverage of low-cost ICT and manufacturing defect analyzers with that of high-performance ICT systems, although the advanced analog opens techniques found on high-performance ICT systems do give superior defect detection on small-geometry devices.

Some manufacturers no longer feel that they need to verify that the correct device has been placed and that it is functioning correctly.
They are willing to settle for analog opens vectorless testing because it can detect structural defects.

All of these factors have combined to make analog opens techniques the preferred solution as manufacturers settle for less test and fast implementation, although there may be hidden costs. Costs can arise because analog opens vectorless measurement techniques have a number of limitations, compared with traditional digital vector testing, that have been ignored or forgotten.

Additional Fixture Costs

Both digital vector and analog opens techniques require direct, or in some cases indirect, access to each pin of the DUT. As a result, limited access is not really a differentiator between the two techniques. Analog opens testing requires that additional hardware be built into the test fixture. The hardware consists of a probe/plate that must be accurately placed over each device that will be tested, a signal amplifier, and a central multiplexer/amplifier that interfaces directly with the ICT measurement subsystem (Figure 1).

Some test systems also need additional hardware to deal with the signals generated from the fixture hardware, which adds cost to the test system. The additional hardware typically increases fixture costs by $100 to $150 per device, and that doesn't include the fixture top gate needed to position the probes on the board.

Fixture complexity and costs increase further if capacitive probes are required for both the top and bottom sides of the board. In this case, the probes must be designed into the probe plates with CAD systems to get the position and height settings correct. This also can be a problem for bottom-side analog opens probes in a single-sided fixture. These costs quickly increase: with 20 opens tests, they add on average about $2,500 to the cost of a fixture. With vectorless tests, these costs are incurred with each test fixture that is built.

In comparison, powered-up vector tests do not require any fixture hardware.
For that reason, manufacturers have the benefit of lower fixture costs and the possible elimination of expensive hold-down gates.

Additional Fixture Reliability Problems and Maintenance

Including additional hardware in the test fixture decreases its reliability. Analog opens probes can easily be moved and damaged during the rigors of production testing (Figure 1). They are not expensive to replace at $30, but any alignment error in a probe can produce misleading results, false failures, or false passes, as well as potential damage to the UUT. This adds a higher level of fixture maintenance and test uncertainty. And dealing with probes mounted into the probe plates prompts a much higher level of support complexity.

With powered-up vector testing, no special fixture hardware is required to execute the tests. Reliable test results can be more easily achieved because there are fewer variables that manufacturers must control and less variance across different test systems and test fixtures.

Throughput

One issue not normally considered is the difference in test time between analog opens techniques and digital vector tests. Analog opens typically take about 2 ms per pin to test. The amount of time required to execute a digital test depends on the number of vectors and the vector rate. An ICT system applying digital test vectors at a 5-MHz data rate (200 ns per test vector) can execute 10,000 test vectors in the same amount of time it takes to test one pin using the analog opens technique. Digital test vectors also can accommodate multiple pins in parallel rather than one pin at a time. Even accounting for a typical 10-ms vector load time, it is obvious that digital vectors have a significant test-throughput advantage over analog opens measurements.

A digital test usually takes less than 50 ms to execute, and the test time is almost independent of the number of pins being tested. A device with 1,000 pins takes about 2 s to test with the analog opens techniques vs.
50 ms for a digital vector test. As a result, digital vectors offer a 40x to 50x throughput advantage.

High-performance ICT systems also can test similar digital devices, such as memory devices, in parallel, further speeding up vector-based test. As manufacturing beat rates increase, the amount of time allowed for ICT decreases. Moving tests from analog opens techniques to digital tests will decrease the test time significantly and either match the line's beat-rate requirement or allow additional tests and device programming on the test system.

Defect Coverage

Typical defect coverage for an opens test is around 85% and can be as high as 99% or as low as 20%, depending on the package geometry and construction. Power pins cannot be tested with analog opens techniques and are not always included in defect coverage reports.

Digital test defect coverage normally is very high, especially when boundary scan is used; higher than 95% defect coverage can be expected. Unlike analog vectorless test techniques, the fault coverage of digital vector testing is not limited by the device package geometry or construction.

Analog vectorless tests use a learned technique, and the fault coverage reports are just estimates of what defects the software thinks the test will detect. Until a defect is actually present on the board and gets detected, the manufacturer cannot be confident that defects are really being detected. There could be false passes on open pins because of board coupling defects or unguarded nets that do not have physical test access.

With digital vectors, an advanced ICT system can use automated fault-injection techniques to determine whether or not open pins are detected. This can give manufacturers confidence that open pins will be detected and diagnosed accurately even before the program and fixture are sent to the manufacturer.

False Fails and Missed Faults

As with any learned technique, the analog opens technique can lead to marginal measurements that can be close to the limits.
Advanced software algorithms in high-performance ICT systems can help eliminate this problem, but many systems use older software that can cause false failures and false passes.

Digital tests, on the other hand, use simple logic ones and zeros, making them less susceptible to false failures. Digital tests also provide additional confidence that the device is working and has power connected to it.

Digital Pin Electronics

ICT always has been an ideal place to carry out board customization and functional test. Simple programming such as board serial numbers, complete in-system programming, and Flash can be completed at board test but requires the appropriate digital test capability.

Figure 2. Low-Voltage Signals vs. Traditional 5-V and 3.3-V Logic

Boundary scan tests are ideally suited to be run on an ICT system, but again the correct digital resources are required. Even complex functional test can be performed if the test system has a synchronized analog and digital subsystem. For that reason, if you are taking advantage of programming or using boundary scan, you don't need to use analog opens techniques on those devices.

Why Not Use Digital Testing?

The two main problems that have stopped the continued use of digital in-circuit techniques when access is available have been the lack of digital vector models and the inadequate capabilities of some digital ICT systems.

Digital Models

Many companies and test program suppliers still provide test vector device model programming services. Some build on model development for component test, but most are related to the use of JTAG within the device design.

Most large devices and programmable devices have boundary scan to either test or program the device. With tools such as Teradyne's BasicScan, it is easy to generate a digital model automatically and quickly from the BSDL, and it can even deal with configurations and tied pins.
Other simple devices such as buffers, latches, and memory devices can be modeled, but manufacturers sometimes still use analog opens techniques because it is an easy development option.

What Is Required to Provide Digital Vectors?

To test the latest generation of digital components, the tester pin electronics must have certain characteristics. Today's lower voltage devices are based around 1-V logic, which is very different from the 5-V and 3.3-V logic of the past.

Generally, measurement and test systems need to be 10 times more accurate than the UUT; in this case, better than 24-mV drive/sense accuracy. Dual-level logic thresholds for input and output also are required to guarantee accurate high and low logic levels (Figure 3). And to perform whatever function is required for test, the digital pins must be able to drive, sense, tristate, or act as pull-up/pull-down resources for the net in case these are not on the UUT but on another board or backplane. All these features need to be programmable on a pin-by-pin basis; otherwise it becomes very difficult to build fixtures that accommodate the different requirements of each logic family.

The complexity of modern devices means that a number of vectors need to be applied. Timing errors between signals must be <5 ns across the pin fields, which could span thousands of channels. When the signals are applied, they need consistent and repeatable timing that only a dedicated digital controller can ensure.

The reason for lower voltages is the reduction in power needed in modern electronics and the small die sizes used in device manufacturing. This means that during digital testing the amount of energy must be controlled to avoid damaging any of the devices. Not only must the current be limited; the time spent forcing logic values must also be controlled. Some devices require 600 mA to force a low logic level to a high, and others would be damaged if 100 mA were used.
Some devices can take several milliseconds of backdriving while others should be limited to microseconds. Consequently, the test system needs to be able to measure and control backdrive current and time characteristics on the fly, in real time (Figure 3). If the test system does not have these features, then it may be easier and more prudent to use analog opens techniques than to risk damaging the UUT.

Conclusion

Analog opens has a place within the ICT system suite of test techniques. It is ideal for testing connectors, sockets, and devices that cannot be tested with digital vector models. However, manufacturers can use high-performance ICT systems to perform digital vector testing and gain the benefits of faster test throughput, lower-cost fixtures, more reliable test measurements, and higher fault coverage.

About the Authors

Alan Albee, the in-circuit test product manager working in Teradyne's System Test Group, has worked at GenRad and Teradyne for 28 years in various engineering, applications, product support, and marketing positions. He has authored numerous technical articles on topics related to board test and has been awarded two patents. Mr. Albee has a B.S. in industrial science from Fitchburg State College. Teradyne, 700 Riverpark Dr., MS-NR-7001-1, North Reading, MA 01864, 978-370-6238, alan.albee@

Michael J. Smith has more than 30 years of experience in the automatic test and inspection equipment industry with Marconi, GenRad, and Teradyne. The author of numerous papers and articles, he also has chaired iNEMI's Test and Inspection Roadmapping Group for many years. Mr. Smith has a BSc (Hons) in control engineering from Leicester University and is a member of the Institution of Engineering and Technology (MIET). smithmj@