A Low-Latency Neighbor Discovery Algorithm in Asymmetric Asynchronous Mobile Sensor Networks
黄庭培; 张亚; 李世宝; 刘建航
[Journal] Computer and Modernization (《计算机与现代化》)
[Year (volume), issue] 2022, No. 10
[Abstract] Neighbor discovery, i.e., the problem of quickly and effectively sensing the one-hop neighbors with which a node can communicate directly, is an important part of mobile sensor networks (MSNs).
In asymmetric asynchronous MSNs, existing algorithms require a large amount of time and energy to complete mutual discovery.
To address this problem, based on a neighbor discovery model that separates beacons from active slots, a BMCS-A algorithm suitable for asynchronous symmetric scenarios is proposed, in which beacons are broadcast in different slots of the duty cycle to guarantee deterministic neighbor discovery.
Second, BMCS-A is extended into a continuously broadcasting BMCS-B algorithm: a node broadcasts beacons continuously during its first sub-cycle, and a node receiving such a beacon adaptively adjusts its own beacon transmission time to accelerate the neighbor discovery process.
Finally, a cooperative BMCS-B algorithm is implemented: based on the sleep-wake schedule information of already discovered neighbors, nodes actively send beacons to discover potential neighbors.
Simulation results show that, compared with Searchlight, G-Nihao and Disco, cooperative BMCS-B reduces the worst-case discovery latency by 84.62%, 85.71% and 81.82%, respectively.
[Pages] 8 pages (P88-94)
[Authors] 黄庭培; 张亚; 李世宝; 刘建航
[Affiliations] College of Computer Science and Technology, China University of Petroleum (East China); College of Oceanography and Space Informatics, China University of Petroleum (East China)
[Language] Chinese
[CLC number] TN929.5
[Related literature]
1. A low-energy proactive neighbor discovery algorithm in mobile low-duty-cycle wireless sensor networks
2. A neighbor discovery algorithm for mobile low-duty-cycle sensor networks
3. A speed-based fast neighbor discovery algorithm for mobile sensor networks
4. A delay-aware neighbor discovery algorithm in mobile low-duty-cycle sensor networks
5. A low-latency neighbor discovery algorithm based on multi-beacon messages in mobile low-duty-cycle sensor networks
Leopold-Franzens University, Chair of Engineering Mechanics
o.Univ.-Prof. Dr.-Ing. habil. G.I. Schuëller, Ph.D.
G.I.Schueller@uibk.ac.at, Technikerstrasse 13, A-6020 Innsbruck, Austria, EU
Tel.: +43 512 507 6841, Fax: +43 512 507 2905, mechanik@uibk.ac.at, http://mechanik.uibk.ac.at
IfM-Publication 2-407
G.I. Schuëller. Developments in stochastic structural mechanics. Archive of Applied Mechanics, published online, 2006.

Developments in Stochastic Structural Mechanics
G.I. Schuëller
Institute of Engineering Mechanics, Leopold-Franzens University, Innsbruck, Austria, EU
Received: date / Revised version: date

Abstract. Uncertainties are a central element in structural analysis and design. But even today they are frequently dealt with in an intuitive or qualitative way only. However, as already suggested 80 years ago, these uncertainties may be quantified by statistical and stochastic procedures. In this contribution it is attempted to shed light on some of the recent advances in the now established field of stochastic structural mechanics and also to solicit ideas on possible future developments.

1 Introduction

The realistic modeling of structures and the expected loading conditions, as well as the mechanisms of their possible deterioration with time, are undoubtedly among the major goals of structural and engineering mechanics respectively. It has been recognized that this should also include the quantitative consideration of the statistical uncertainties of the models and the parameters involved [56]. There is also a general agreement that probabilistic methods should be strongly rooted in the basic theories of structural engineering and engineering mechanics and hence represent the natural next step in the development of these fields. It is well known that modern methods leading to a quantification of uncertainties of stochastic systems require computational procedures. The development of these procedures goes in line with the
computational methods in current traditional (deterministic) analysis for the solution of problems required by engineering practice, where computational procedures certainly dominate. Hence, their further development within computational stochastic structural analysis is a most important requirement for the dissemination of stochastic concepts into engineering practice. Quite naturally, procedures dealing with stochastic systems are computationally considerably more involved than their deterministic counterparts, because the parameter set assumes a (finite or infinite) number of values, in contrast to a single point in the parameter space. Hence, in order to be competitive and tractable in practical applications, the computational efficiency of the procedures utilized is a crucial issue, and its significance should not be underestimated. Improvements in efficiency can be attributed to two main factors, i.e. improved hardware in terms of ever faster computers, and improved software, which means improving the efficiency of computational algorithms, including the utilization of parallel processing and computer farming respectively. For a continuous increase of their efficiency through software developments, the computational procedures of stochastic analysis should follow a path similar to the one taken in the seventies and eighties when developing the deterministic FE approach.
One important aspect in this fast development was the focus on numerical methods adjusted to the strengths and weaknesses of numerical computational algorithms. In other words, traditional ways of structural analysis developed before the computer age have been dropped, redesigned and adjusted respectively to meet the new requirements posed by the computational facilities.

Two main streams of computational procedures in stochastic structural analysis can be observed. The first of these main classes is the generation of sample functions by Monte Carlo simulation (MCS). These procedures might be categorized further according to their purpose:
– Realizations of prescribed statistical information: samples must be compatible with prescribed stochastic information such as spectral density, correlation, distribution, etc. Applications are: (1) unconditional simulation of stochastic processes, fields and waves; (2) conditional simulation compatible with observations and a priori statistical information.
– Assessment of the stochastic response for a mathematical model with prescribed statistics (random loading/system parameters) of the parameters. Applications are: (1) representative samples for the estimation of the overall distribution — indiscriminate (blind) generation of samples, numerical integration of SDEs; (2) representative samples for the reliability assessment by generating adverse rare events with positive probability, i.e. by (a) variance reduction techniques controlling the realizations of RVs, (b) controlling the evolution in time of sampling functions.

The other main class provides numerical solutions to analytical procedures. Grouping again according to the respective purpose, the following classification can be made: numerical solutions of the Kolmogorov equations (Galerkin's method, Finite Element method, Path Integral method), moment closure schemes, computation of the evolution of moments, maximum entropy procedures, and asymptotic stability of diffusion processes. In the following, some
of the outlined topics will be addressed, stressing new developments. These topics are described within the next six subject areas, each focusing on a different issue, i.e. representation of stochastic processes and fields, structural response, stochastic FE methods and parallel processing, structural reliability and optimization, and stochastic dynamics. In this context it should be mentioned that, aside from the MIT conference series, the USNCCM, ECCM and WCCM conferences do have a larger part of sessions addressing computational stochastic issues.

2 Representation of Stochastic Processes

Many quantities involving random fluctuations in time and space might be adequately described by stochastic processes, fields and waves. Typical examples of engineering interest are earthquake ground motion, sea waves, wind turbulence, road roughness, imperfections of shells, fluctuating properties in random media, etc. For this setup, probabilistic characteristics of the process are known from various measurements and investigations in the past. In structural engineering, the available probabilistic characteristics of random quantities affecting the loading or the mechanical system can often not be utilized directly to account for the randomness of the structural response, due to its complexity. For example, in the common case of strong earthquake motion, the structural response will in general be non-linear, and it might be too difficult to compute the probabilistic characteristics of the response by other means than Monte Carlo simulation. For the purpose of Monte Carlo simulation, sample functions of the involved stochastic process must be generated. These sample functions should represent accurately the characteristics of the underlying stochastic process or fields and might be stationary or non-stationary, homogeneous or non-homogeneous, one-dimensional or multi-dimensional, uni-variate or multi-variate, Gaussian or non-Gaussian, depending very much on the requirements of
accuracy of realistic representation of the physical behavior and on the available statistical data.

The main requirement on the sample function is its accurate representation of the available stochastic information of the process. The associated mathematical model can be selected in any convenient manner as long as it reproduces the required stochastic properties. Therefore, quite different representations have been developed and might be utilized for this purpose. The most common representations are, e.g.: ARMA and AR models; filtered white noise (SDE); shot noise and filtered Poisson white noise; covariance decomposition; Karhunen-Loève and Polynomial Chaos expansion; spectral representation; wavelet representation.

Among the various methods listed above, the spectral representation methods appear to be most widely used (see e.g. [71,86]). According to this procedure, samples with specified power spectral density information are generated. For the stationary or homogeneous case, Fast Fourier Transform (FFT) techniques are utilized for a dramatic improvement of the computational efficiency (see e.g. [104,105]). Advances in this field provide efficient procedures for the generation of 2D and 3D homogeneous Gaussian stochastic fields using the FFT technique (see e.g. [87]). The spectral representation method generates ergodic sample functions, each of which fulfills exactly the requirements of a target power spectrum. These procedures can be extended to the non-stationary case, to the generation of stochastic waves, and to incorporate non-Gaussian stochastic fields by a memoryless nonlinear transformation together with an iterative procedure to meet the target spectral density.

The above spectral representation procedures for an unconditional simulation of stochastic processes and fields can also be extended to conditional simulation techniques for Gaussian fields (see e.g. [43,44]) employing the conditional probability density
method. The aim of this procedure is the generation of Gaussian random variates U_n under the condition that (n-1) realizations u_i of U_i, i = 1, 2, ..., (n-1), are specified and the a priori known covariances are satisfied. An alternative procedure is based on the so-called Kriging method used in geostatistical applications and applied also to conditional simulation problems in earthquake engineering (see e.g. [98]). The Kriging method has been improved significantly (see e.g. [36]), which has made it theoretically clearer and computationally more efficient. The differences and similarities of the conditional probability density methods and (modified) Kriging methods are discussed in [37], showing the equivalence of both procedures if the process is Gaussian with zero mean.

A quite general spectral representation utilized for Gaussian random processes and fields is the Karhunen-Loève expansion of the covariance function (see e.g. [54,33]). This representation is applicable for stationary (homogeneous) as well as for non-stationary (inhomogeneous) stochastic processes (fields). The expansion of a stochastic process (field) u(x,θ) takes the form

u(x,θ) = ū(x) + Σ_{i=1}^∞ ξ_i(θ) √λ_i φ_i(x)    (1)

where the symbol θ indicates the random nature of the corresponding quantity, ū(x) denotes the mean, φ_i(x) are the eigenfunctions and λ_i the eigenvalues of the covariance function. The set {ξ_i(θ)} forms a set of orthogonal (uncorrelated) zero mean random variables with unit variance. The Karhunen-Loève expansion is mean square convergent irrespective of its probabilistic nature, provided the process possesses a finite variance. For the important special case of a Gaussian process or field, the random variables {ξ_i(θ)} are independent standard normal random variables. In many practical applications where the random quantities vary smoothly with respect to time or space, only a few terms are necessary to capture the major part of the random fluctuation of the process. Its major advantage is the reduction from a large
number of correlated random variables to a few most important uncorrelated ones. Hence this representation is especially suitable for band-limited colored excitation and for the stochastic FE representation of random media, where the random variables are usually strongly correlated. It might also be utilized to represent the correlated stochastic response of MDOF systems by a few most important variables, hence achieving a space reduction. A generalization of the above Karhunen-Loève expansion has been proposed for applications where the covariance function is not known a priori (see [16,33,32]). The stochastic process (field) u(x,θ) takes the form

u(x,θ) = a_0(x) Γ_0 + Σ_{i1=1}^∞ a_{i1}(x) Γ_1(ξ_{i1}(θ)) + Σ_{i1=1}^∞ Σ_{i2=1}^{i1} a_{i1,i2}(x) Γ_2(ξ_{i1}(θ), ξ_{i2}(θ)) + ...    (2)

which is denoted as the Polynomial Chaos expansion. Introducing a one-to-one mapping to a set with ordered indices {Ψ_i(θ)} and truncating eqn. (2) after the p-th term, the above representation reads

u(x,θ) = Σ_{j=0}^p u_j(x) Ψ_j(θ)    (3)

where the symbol Γ_n(ξ_{i1}, ..., ξ_{in}) denotes the Polynomial Chaos of order n in the independent standard normal random variables. These polynomials are orthogonal, so that the expectation (or inner product) <Ψ_i Ψ_j> = δ_ij, with δ_ij the Kronecker symbol. For the special case of a Gaussian random process the above representation coincides with the Karhunen-Loève expansion. The Polynomial Chaos expansion is adjustable in two ways: increasing the number of random variables {ξ_i} results in a refinement of the random fluctuations, while an increase of the maximum order of the polynomial captures the non-linear (non-Gaussian) behavior of the process. However, the relation between accuracy and numerical effort still remains to be shown.
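As a concrete illustration of such a truncated expansion (not taken from the paper; a standard textbook example), the sketch below expands the lognormal variable Y = exp(G), G ~ N(0,1), in probabilists' Hermite polynomials He_k, for which the chaos coefficients are known in closed form, y_k = e^{1/2}/k!:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Hermite chaos coefficients of Y = exp(G), G ~ N(0,1): y_k = e^{1/2} / k!
p = 8  # truncation order
y = np.array([math.exp(0.5) / math.factorial(k) for k in range(p + 1)])

def pce_eval(g):
    """Evaluate the truncated chaos sum_k y_k He_k(g) at a realization g
    (hermeval sums c_k He_k(x) for probabilists' Hermite polynomials)."""
    return He.hermeval(g, y)

# moments follow from orthogonality: E[He_k] = 0 (k>=1), E[He_k^2] = k!
mean_pce = y[0]
var_pce = sum(y[k] ** 2 * math.factorial(k) for k in range(1, p + 1))
```

With p = 8 the truncated mean is exact and the truncated variance reproduces Var[Y] = e(e-1) to better than three decimals, illustrating the trade-off between expansion order and accuracy noted above.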
The spectral representation by Fourier analysis is not well suited to describe local features in the time or space domain. This disadvantage is overcome in wavelet analysis, which provides an alternative way of breaking a signal down into its constituent parts. For more details on this approach, the reader is referred to [24,60].

In some applications the physics or the data might be inconsistent with the Gaussian distribution. For such cases, non-Gaussian models have been developed employing various concepts to meet the desired target distribution as well as the target correlation structure (spectral density). Certainly the most straightforward procedure is the above-mentioned memoryless non-linear transformation of Gaussian processes utilizing the spectral representation. An alternative approach utilizes linear and non-linear filters to represent normal and non-Gaussian processes and fields excited by Gaussian white noise. Linear filters excited by polynomial forms of Poisson white noise have been developed in [59] and [34]. These procedures allow the evaluation of moments of arbitrary order without having to resort to closure techniques.
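Sample paths of such white-noise-driven filters are obtained by integrating the corresponding stochastic differential equation. A minimal sketch (illustrative only, not code from the cited references) applies the Euler-Maruyama scheme to the simplest linear filter, the Ornstein-Uhlenbeck equation dX = -aX dt + s dW, whose stationary variance s^2/(2a) provides an easy correctness check; note that the cost grows only linearly with the number of sample points:

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, dt, n_steps, rng):
    """One sample path of dX = drift(X) dt + diffusion(X) dW by the
    Euler-Maruyama scheme; cost is linear in n_steps."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    dw = rng.standard_normal(n_steps) * np.sqrt(dt)  # Wiener increments
    for k in range(n_steps):
        x[k + 1] = x[k] + drift(x[k]) * dt + diffusion(x[k]) * dw[k]
    return x

# linear filter (Ornstein-Uhlenbeck): stationary variance s^2/(2a) = 1 here
rng = np.random.default_rng(1)
a, s = 1.0, np.sqrt(2.0)
path = euler_maruyama(lambda x: -a * x, lambda x: s, 0.0, 0.01, 200_000, rng)
```

Discarding a burn-in transient, the time-averaged variance of the path approaches the stationary value; non-linear drift or diffusion functions can be passed in unchanged, which is what makes the filter approach attractive for non-Gaussian targets.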
Non-linear filters are utilized to generate a stationary non-Gaussian stochastic process in agreement with a given first-order probability density function and the spectral density [48,15]. In the Kontorovich-Lyandres procedure as used in [48], the drift and diffusion coefficients are selected such that the solution fits the target probability density, and the parameters in the solution form are then adjusted to approximate the target spectral density. The approach by Cai and Lin [15] simplifies this procedure by matching the spectral density through adjusting only the drift coefficients, which is then followed by adjusting the diffusion coefficients to approximate the distribution of the process. The latter approach is especially suitable and computationally highly efficient for the long-term simulation of stationary stochastic processes, since the computational expense increases only linearly with the number n of discrete sample points, while the spectral approach has a growth rate of n ln n when applying the efficient FFT technique. For generating samples of the non-linear filter represented by stochastic differential equations (SDEs), well developed numerical procedures are available (see e.g. [47]).

3 Response of Stochastic Systems

The assessment of the stochastic response is the main theme in stochastic mechanics. Contrary to the representation of stochastic processes and fields, which is designed to fit available statistical data and information, the output of the mathematical model is not prescribed and needs to be determined in some stochastic sense. Hence the mathematical model cannot be selected freely but is specified a priori. For stochastic systems, the model involves either random system parameters and/or random loading. Please note that, due to space limitations, the question of model validation cannot be treated here.
For the characterization of the available numerical procedures, some classification with regard to the structural model, the loading, and the description of the stochastic response is most instrumental. Concerning the structural model, a distinction between the properties, i.e. whether it is deterministic or stochastic, linear or non-linear, as well as the number of degrees of freedom (DOF) involved, is essential. As a criterion for the feasibility of a particular numerical procedure, the number of DOFs of the structural system is one of the most crucial parameters. Therefore, a distinction between dynamical-system models and general FE discretizations is suggested, where dynamical systems are associated with a low state-space dimension of the structural model, while FE discretizations have no essential restriction regarding the number of DOFs. The stochastic loading can be grouped into static and dynamic loading. Stochastic dynamic loading might be characterized further by its distribution and correlation and by its independence of or dependence on the response, resulting in categorizations such as Gaussian and non-Gaussian, stationary and non-stationary, white noise or colored, additive and multiplicative (parametric) excitation. Apart from the mathematical model, the required terms in which the stochastic response should be evaluated play an essential role, ranging from assessing the first two moments of the response to reliability assessments and stability analysis. The large number of possibilities for evaluating the stochastic response as outlined above does not allow for a discussion of the entire subject; therefore only some selected advances and new directions will be addressed. As already mentioned above, one could distinguish between two main categories of computational procedures treating the response of stochastic systems: the first is based on Monte Carlo simulation, and the second provides numerical solutions of analytical procedures for obtaining quantitative results. Regarding the
numerical solutions of analytical procedures, a clear distinction between dynamical-system models and FE models should be made. Current research efforts in stochastic dynamics focus to a large extent on dynamical-system models, while there are few new numerical approaches concerning the evaluation of the stochastic dynamic response of, e.g., FE models. Numerical solutions of the Kolmogorov equations are typical examples belonging to dynamical-system models, where the available approaches are computationally feasible only for state-space dimensions one to three and, in exceptional cases, for dimension four. Galerkin, Finite Element (FE) and Path Integral methods respectively are generally used to solve numerically the forward (Fokker-Planck) and backward Kolmogorov equations. For example, in [8,92] the FE approach is employed for stationary and transient solutions respectively of the mentioned forward and backward equations for second order systems. First passage probabilities have been obtained employing a Petrov-Galerkin FE method to solve the backward and the related Pontryagin-Vitt equations. An instructive comparison of the computational efforts of Monte Carlo simulation and the FE method is given e.g. in an earlier IASSAR report [85].

The Path Integral method follows the evolution of the (transition) probability function over short time intervals, exploiting the fact that short-time transition probabilities for normal white noise excitations are locally Gaussian distributed. All existing path integration procedures utilize certain interpolation schemes where the probability density function (PDF) is represented by values at discrete grid points. In a wider sense, cell mapping methods (see e.g. [38,39]) can be regarded as special setups of the path integral procedure. As documented in [9], the cumulant neglect closure described in section 7.3 has been automated, and computational procedures for the automated generation and solution of the closed set of moment equations have been developed. The
method can be employed for an arbitrary number of states and closed at arbitrary levels. The approach, however, is limited by the available computational resources, since the computational cost grows exponentially with respect to the number of states and the selected closure level.

The above discussed developments of numerical procedures deal with low dimensional dynamical systems, which are employed for investigating strong non-linear behavior subjected to (Gaussian) white noise excitation. Although dynamical system formulations are quite general and extendible to treat non-Gaussian and colored (filtered) excitation of larger systems, the computational expense grows exponentially, rendering most numerical approaches unfeasible for larger systems. This so-called "curse of dimensionality" has not been overcome yet, and it is questionable whether it ever will be, despite the fast developing computational possibilities.

For this reason, the alternative approach based on Monte Carlo simulation (MCS) gains importance. Several aspects favor procedures based on MCS in engineering applications: (1) a considerably smaller growth rate of the computational effort with dimensionality than analytical procedures; (2) general applicability, well suited for parallel processing (see section 5.1) and computationally straightforward; (3) non-linear complex behavior does not complicate the basic procedure; (4) manageability for complex systems.

Contrary to numerical solutions of analytical procedures, the employed structural model and the type of stochastic loading do not play a decisive role for MCS. For this reason, MCS procedures might be structured according to their purpose, i.e. whether sample functions are generated for the estimation of the overall distribution or for generating rare adverse events for an efficient reliability assessment. In the former case, the probability space is covered uniformly by an indiscriminate (blind) generation of sample functions representing the random quantities. Basically, a set of random
variables will be generated by a pseudo-random number generator, followed by a deterministic structural analysis. Based on the generated random numbers, realizations of the random processes, fields and waves addressed in section 2 are constructed and utilized without any further modification in the subsequent structural analysis.

The situation may not be considered straightforward, however, in the case of a discriminate MCS for the reliability estimation of structures, where rare events contributing considerably to the failure probability should be generated. Since the effectiveness of direct indiscriminate MCS is not satisfactory for producing a statistically relevant number of low probability realizations in the failure domain, the generation of samples is restricted or guided in some way. The most important class are the variance reduction techniques, which operate on the probability of realizations of random variables. The most widely used representative of this class in structural reliability assessment is Importance Sampling, where a suitable sampling distribution controls the generation of realizations in the probability space. The challenge in Importance Sampling is the construction of a suitable sampling distribution, which depends in general on the specific structural system and on the failure domain (see e.g. [84]). Hence, the generation of sample functions is no longer independent of the structural system and failure criterion, as it is for indiscriminate direct MCS. Due to these dependencies, computational procedures for an automated establishment of sampling distributions are urgently needed. Adaptive numerical strategies utilizing Importance Directional Sampling (e.g. [11]) are steps in this direction. The effectiveness of the Importance Sampling approach depends crucially on the complexity of the system response as well as on the number of random variables (see also section 5.2). Static problems (linear and nonlinear) with few random variables might be treated effectively by this
approach. Linear systems where the randomness is represented by a large number of RVs can also be treated efficiently employing first order reliability methods (see e.g. [27]). This approach, however, is questionable for the case of non-linear stochastic dynamics involving a large set of random variables, where the computational effort required for establishing a suitable sampling distribution might exceed the effort needed for indiscriminate direct MCS.

Instead of controlling the realization of random variables, alternatively the evolution of the generated sampling can be controlled [68]. This approach is limited to stochastic processes and fields with Markovian properties and utilizes an evolutionary programming technique for the generation of more "important" realizations in the low probability domain. It is especially suitable for white noise excitation and non-linear systems, where Importance Sampling is rather difficult to apply. Although the approach cannot deal with spectral representations of the stochastic processes, it is capable of making use of linearly and non-linearly filtered excitation. This is just contrary to Importance Sampling, which can be applied to spectral representations but not to white-noise filtered excitation.

4 Stochastic Finite Elements

As the name suggests, Stochastic Finite Elements are structural models represented by Finite Elements whose properties involve randomness. In static analysis, the stiffness matrix might be random due to unpredictable variations of some material properties, random coupling strength between structural components, uncertain boundary conditions, etc. For buckling analysis, shape imperfections of the structures have an additional important effect on the buckling load [76]. Considering structural dynamics, in addition to the stiffness matrix, the damping properties and sometimes also the mass matrix might not be predictable with certainty.

Discussing numerical Stochastic Finite Element procedures, two categories
should be distinguished clearly. The first is the representation of Stochastic Finite Elements and their global assemblage as random structural matrices. The second category addresses the evaluation of the stochastic response of the FE model due to its randomness.

Focusing first on the Stochastic FE representation, several representations, such as the midpoint method [35], the interpolation method [53], the local average method [97], as well as the Weighted Integral Method [94,25,26], have been developed to describe spatial random fluctuations within the element. As a tendency, the midpoint method leads to an overestimation of the variance of the response, the local average method to an underestimation, and the Weighted Integral Method leads to the most accurate results. Moreover, the so-called mesh-size problem can be resolved utilizing this representation. After assembling all Finite Elements, the random structural stiffness matrix K, taken as a representative example, assumes the form

K(α) = K̄ + Σ_{i=1}^n K_i^I α_i + Σ_{i=1}^n Σ_{j=1}^n K_ij^II α_i α_j + ...    (4)

where K̄ is the mean of the matrix, K_i^I and K_ij^II denote the deterministic first and second rates of change with respect to the zero mean random variables α_i and α_j, and n is the total number of random variables. For normally distributed sets of random variables {α}, the correlated set can be represented advantageously by the Karhunen-Loève expansion [33], and for non-Gaussian distributed random variables by its Polynomial Chaos expansion [32],

K(θ) = K̄ + Σ_{i=0}^M K̂_i Ψ_i(θ)    (5)

where M denotes the total number of chaos polynomials, K̂_i the associated deterministic fluctuation of the matrix, and Ψ_i(θ) a polynomial of standard normal random variables ξ_j(θ), where θ indicates the random nature of the associated variable.

In a second step, the random response of the stochastic structural system is determined. The most widely used procedure for evaluating the stochastic response is the well established perturbation approach (see e.g. [53]). It is well adapted to the FE formulation and capable to
evaluate first and second moment properties of the response in an efficient manner. The approach, however, is justified only for small deviations from the center value. Since this assumption is satisfied in most practical applications, the obtained first two moment properties are evaluated satisfactorily. However, the tails of the
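The first-order step of the perturbation approach can be sketched on a toy problem (a hypothetical two-spring chain, not an example from the paper): with K(α) = K̄ + α K_I and a single zero-mean random variable α, the response u(α) = K(α)^{-1} f is expanded as u(α) ≈ ū + α u₁ with u₁ = -K̄^{-1} K_I ū, giving Var[u] ≈ σ_α² u₁²:

```python
import numpy as np

# 2-DOF spring chain: K(alpha) = Kbar + alpha * K_I, alpha ~ N(0, sig^2)
Kbar = np.array([[2.0, -1.0], [-1.0, 1.0]])   # mean stiffness matrix
K_I = np.array([[1.0, -1.0], [-1.0, 1.0]])    # fluctuation of one spring
f = np.array([0.0, 1.0])                       # deterministic load
sig = 0.05                                     # small standard deviation

# first-order perturbation: u(alpha) ~ ubar + alpha * u1
ubar = np.linalg.solve(Kbar, f)
u1 = -np.linalg.solve(Kbar, K_I @ ubar)
var_pert = sig ** 2 * u1 ** 2                  # first-order response variance

# Monte Carlo reference for comparison
rng = np.random.default_rng(2)
alphas = sig * rng.standard_normal(20_000)
umc = np.array([np.linalg.solve(Kbar + a * K_I, f) for a in alphas])
var_mc = umc.var(axis=0)
```

For the small σ_α chosen, the perturbation variance agrees with the Monte Carlo reference to within sampling error; for large deviations from the center value the expansion would no longer be justified, as noted above.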
Journal of Central South University (Science and Technology), Vol. 51, No. 5, May 2020

Pseudo-random coded magnetic source transient electromagnetic emission technology and electromagnetic response analysis
SHI Qi(1,2,3), LIU Lihua(1,2), NI Zhikang(1,2,3), LIU Xiaojun(1,2), FANG Guangyou(1,2)
(1. Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100000, China; 2. Key Laboratory of Electromagnetic Radiation and Sensing Technology, Beijing 100000, China; 3. School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100000, China)

Abstract: In order to design a pseudo-random coded magnetic-source transient electromagnetic system, the design of the system and the forward and inversion methods used by the system under a uniform half-space earth model are discussed from the perspectives of key technologies and electromagnetic theory.
First, the characteristics of four kinds of pseudo-random codes are compared and analyzed, and the m-sequence is selected as the excitation signal of the system; then, the system transmitter circuit is designed based on active constant-voltage clamping technology, and the electromagnetic response of the measured data is obtained by forward modeling.
According to the system identification theory of m-sequence pseudo-random coding, the response waveform is preprocessed to obtain an estimate of the earth impulse response, which is compared with the standard value.
The results show that this magnetic-source transient electromagnetic system can overcome the obstruction of the inductive load and emit a pseudo-random coded current of high waveform quality; if the current waveform has good autocorrelation, an estimate of the earth impulse response very close to the theoretical value can still be obtained even when the current waveform is distorted to a certain degree, i.e. the preprocessing result does not depend entirely on the pseudo-random coded current waveform.
Key words: pseudo-random coding; magnetic source; transient electromagnetics; identification method; Wiener filtering
CLC number: P631; Document code: A; Article number: 1672-7207(2020)05-1268-11
DOI: 10.11817/j.issn.1672-7207.2020.05.011
Received: 2019-07-11; revised: 2019-09-23
Foundation items: Project (2018YFF01013300) supported by the National Basic Research Development Program (973 Program) of China; Project (61827803) supported by the National Natural Science Foundation of China
Corresponding author: LIU Lihua, PhD, associate researcher, working on geophysical electromagnetic detection technology and system design; E-mail: *************

Pseudo-random coded magnetic source transient electromagnetic emission technology and electromagnetic response analysis
SHI Qi(1,2,3), LIU Lihua(1,2), NI Zhikang(1,2,3), LIU Xiaojun(1,2), FANG Guangyou(1,2)
(1. Aerospace Information Research Institute, Chinese Academy of Sciences, Beijing 100000, China; 2. Key Laboratory of Electromagnetic Radiation and Sensing Technology, Beijing 100000, China; 3. School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100000, China)

Abstract: In order to design a pseudo-random coded magnetic source transient electromagnetic system, the design of the system and the forward and inverse methods used by the system in a uniform half-space earth model were discussed from the perspective of key technologies and electromagnetic theory. First, the characteristics of the four kinds of pseudo-random codes were compared and analyzed, and the m-sequence was selected as the excitation signal of the system. Then the system transmission circuit was designed based on the active constant-voltage clamping technology to obtain the electromagnetic response of the measured data. According to the identification theory of the m-sequence pseudo-random coding system, the response waveform was preprocessed to obtain the estimated value of the earth impulse response and compared with the standard value. The results show that the magnetic source transient electromagnetic system can overcome the obstruction of inductive loads and emit a
pseudo-random coded current with high waveform quality; if the current waveform has good autocorrelation, then even if the current waveform is distorted to a certain degree, an estimated value of the earth impulse response that is very close to the theoretical value can be obtained, and the preprocessing result does not completely depend on the pseudo-random coding current waveform.
Key words: pseudo-random coding; inductive load; transient electromagnetic; identification method; Wiener filtering

The transient electromagnetic method is one of the most important and widely applied electromagnetic prospecting methods. According to the form of the load, systems can be divided into grounded long-wire electric-source transient electromagnetic systems and multi-turn loop magnetic-source transient electromagnetic systems.
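The role the m-sequence's autocorrelation plays in the identification step can be illustrated with a small sketch (illustrative only, not the authors' code; the LFSR taps (7,6) are a standard maximal-length choice for 7 bits): the periodic autocorrelation of a ±1 m-sequence of length N = 2^7 - 1 is two-valued (N at zero lag, -1 at every other lag), so circularly cross-correlating the system output with the excitation recovers the impulse response up to a small bias of order 1/N:

```python
import numpy as np

def m_sequence(m=7, taps=(7, 6)):
    """Maximal-length (m-)sequence from a Fibonacci LFSR, mapped to +/-1."""
    state = [1] * m
    bits = []
    for _ in range(2 ** m - 1):
        bits.append(state[-1])          # output the last register bit
        fb = 0
        for t in taps:                  # feedback = XOR of tapped bits
            fb ^= state[t - 1]
        state = [fb] + state[:-1]       # shift the new bit in at the front
    return np.where(np.array(bits) == 1, 1.0, -1.0)

c = m_sequence()
N = len(c)                              # 127 chips

# two-valued periodic autocorrelation: N at lag 0, -1 at all other lags
R = np.array([np.dot(c, np.roll(c, k)) for k in range(N)])

# identification: excite a toy impulse response h, then cross-correlate
h = np.array([0.5, 0.3, 0.2])           # hypothetical earth impulse response
y = np.array([sum(h[l] * c[(j - l) % N] for l in range(len(h)))
              for j in range(N)])       # circular convolution (system output)
h_hat = np.array([np.dot(y, np.roll(c, k)) for k in range(len(h))]) / N
```

Because the off-peak autocorrelation is -1 rather than 0, the estimate carries a bias of (1 - h_k)/N per tap, which vanishes as the code length grows; this is the property that makes the preprocessing robust to moderate waveform distortion, as the abstract notes.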
Learning Based Super-Resolution Imaging: Use of Zoom as a Cue
B.Tech. Project Report
Submitted in partial fulfillment of the requirements for the B.Tech. Degree in Electrical Engineering
by Rajkiran Panuganti (99007034)
under the guidance of Prof. Subhasis Chaudhuri
Department of Electrical Engineering, Indian Institute of Technology, Bombay
April 2003

Acceptance Certificate
Department of Electrical Engineering, Indian Institute of Technology, Bombay
The Bachelor of Technology project titled Learning Based Super-Resolution Imaging: Use of Zoom as a Cue and the corresponding report was done by Rajkiran Panuganti (99007034) under my guidance and may be accepted.
Date: April 16, 2003 (Prof. Subhasis Chaudhuri)

Acknowledgment
I would like to express my sincere gratitude towards Prof. Subhasis Chaudhuri for his invaluable guidance and constant encouragement, and Mr. Manjunath Joshi for his help during the course of the project.
16th April, 2003 Rajkiran Panuganti

Abstract
We propose a novel technique for super-resolution imaging of a scene from observations at different camera zooms. Given a sequence of images of a static scene captured with different zoom factors, the problem is to obtain a picture of the entire scene at a resolution corresponding to the most zoomed image in the scene. We obtain the super-resolved image not only for known integer zoom factors, but also for unknown arbitrary zoom factors. In order to achieve that we model the high-resolution image as a Markov random field (MRF), the parameters of which are learnt from the most zoomed observation. The parameters are estimated using the maximum pseudo-likelihood (MPL) criterion. Assuming that the entire scene can be described by a homogeneous MRF, the learnt model parameters are then used to obtain a maximum a posteriori (MAP) estimate of the high-resolution field. Since there is no relative motion between the scene and the camera, as is the case with most of the super-resolution techniques, we do away with the correspondence problem. Experimental results on synthetic as well as on
real data sets are presented.

Contents
Acceptance Certificate i
Acknowledgment ii
Abstract iii
Table of Contents iv
1 Introduction 1
2 Related Work 4
3 Low Resolution Image Model 9
4 Super-Resolution Restoration 12
4.1 Image Field Modeling 12
4.2 Parameter Learning 14
4.3 MAP Restoration 15
4.4 Zoom Estimation 16
5 Experimental Results 19
5.1 Experimentations with Known, Integer Zoom Factors 19
5.2 Experiments with Unknown Zoom Factors 25
5.3 Experimental Results When Parameters Are Estimated 28
6 Conclusion 33
References 34

Chapter 1: Introduction
In most electronic imaging applications, images with high spatial resolution are desired and often required. A high spatial resolution means that the pixel density in an image is high, and hence there are more details and subtle gray level transitions, which may be critical in various applications. Be it remote sensing, medical imaging, robot vision, industrial inspection or video enhancement (to name a few), operating on high-resolution images leads to a better analysis in the form of less misclassification, better fault detection, more true positives, etc. However, acquisition of high-resolution images is severely constrained by the drawbacks of limited-density sensors. The images acquired through such sensors suffer from aliasing and blurring. The most direct solution to increase the spatial resolution is to reduce the pixel size (i.e., to increase the number of pixels per unit area) by sensor manufacturing techniques.
But due to the decrease in pixel size, the light available also decreases, causing more shot noise [1, 2], which degrades the image quality. Thus, there exist limitations on the pixel size, and the optimal size is estimated to be about 40 µm². The current image sensor technology has almost reached this level. Another approach to increase the resolution is to increase the wafer size, which leads to an increase in the capacitance [3]. This approach is not effective, since an increase in the capacitance causes a decrease in the charge transfer rate. Hence, a promising approach is to use image processing methods to construct a high-resolution image from one or more available low-resolution observations.
Resolution enhancement from a single observation using image interpolation techniques is of limited application because of the aliasing present in the low-resolution image. Super-resolution refers to the process of producing a high spatial resolution image from several low-resolution observations. It includes upsampling the image, thereby increasing the maximum spatial frequency, and removing degradations that arise during image capture, viz., aliasing and blurring. The amount of aliasing differs with zooming. This is because, when one captures images with different zoom settings, the entire area of the scene in the least zoomed view is represented by a very limited number of pixels, i.e., it is sampled with a very low sampling rate, while the most zoomed scene is sampled with a higher sampling frequency. Therefore, the larger the scene (the less zoomed the captured area), the lower will be the resolution, with more aliasing effect. By varying the zoom level, one observes the scene at different levels of aliasing and blurring.
Thus one can use zoom as a cue for generating high-resolution images at the lesser zoomed area of a scene.As discussed in the next chapter,researchers traditionally use the motion cue to super-resolve the image.However this method being a2-D dense feature matching technique,re-quires an accurate registration or preprocessing.This is disadvantageous as the problem of finding the same set of feature points in successive images to establish the correspondence between them is a very difficult task.Errors in registration are reflected on the quality of the super-resolved image.Further,the methods based on the motion cue cannot handle observa-tions at varying levels of spatial resolution.It assumes that all the frames are captured at the same spatial resolution.Previous research work with zoom as a cue to solve computer vision problems include determination of depth[4,5,6],minimization of view degeneracies[7],and zoom tracking[8].We show in this paper that even the super-resolution problem can be solved using zoom as an effective cue by using a simple MAP-MRF formulation.The basic problem can be defined as follows:One continuously zooms in to a scene while capturing its images. 
The most zoomed-in observation has the highest spatial resolution.We are interested in gen-erating an image of the entire scene(as observed by the most wide angle or the least zoomed view)at the same resolution as the most zoomed-in observation.The details of the method are presented in this thesis.We also discuss various issues and limitations of the proposed technique.The remainder of this thesis is organized as follows.In chapter2we review some of the prior work in super-resolution imaging.We discuss how one can model the formation of low-resolution images using the zoom as a cue in chapter3.The zoom estimation,parameter learning and the MAP-MRF approach to derive a cost function for the super-resolution esti-mation is the subject matter for chapter4.We present typical experimental results in chapter5 and chapter6provides a brief summary,along with the future research issues to be explored.Chapter2Related WorkMany researchers have tackled the super-resolution problem for both still and video images, e.g.,[9,10,11,12](see[13,14]for details).The super-resolution idea wasfirst proposed by Tsai and Huang[12].They used the frequency domain approach to demonstrate the ability to reconstruct a single improved resolution image from several down-sampled noise free versions of it.A frequency domain observation model was defined for this problem which considered only globally shifted versions of the same scene.Kim et al.discuss a recursive algorithm, also in the frequency domain,for the restoration of super-resolution images from noisy and blurred observations[15].They considered the same blur and noise characteristics for all the low-resolution observations.Kim and Su[16]considered different blurs for each low-resolution image and used Tikhonov regularization.A minimum mean squared error approach for multiple image restoration,followed by interpolation of the restored images into a single high-resolution image is presented in[17].Ur and Gross use the Papoulis and 
Brown [18], [19] generalized sampling theorem to obtain an improved resolution picture from an ensemble of spatially shifted observations [20]. These shifts are assumed to be known by the authors. All the above super-resolution restoration methods are restricted either to a global uniform translational displacement between the measured images, or a linear space invariant (LSI) blur, and a homogeneous additive noise.
A different approach to the super-resolution restoration problem was suggested by Peleg et al. [10, 21, 22], based on the iterative back projection (IBP) method adapted from computer-aided tomography. This method starts with an initial guess of the output image, projects the temporary result onto the measurements (simulating them), and updates the temporary guess according to this simulation error. A set theoretic approach to the super-resolution restoration problem was suggested in [23]. The main result there is the ability to define convex sets which represent tight constraints on the image to be restored. Having defined such constraints, it is straightforward to apply the projections onto convex sets (POCS) method. These methods are not restricted to specific motion characteristics. They use arbitrary smooth motion, linear space variant blur, and non-homogeneous additive noise. The authors in [24] describe a complete model of video acquisition with an arbitrary input sampling lattice and a nonzero exposure time. They use the theory of POCS to reconstruct super-resolution still images or video frames from a low-resolution time sequence of images. They restrict both the sensor blur and the focus blur to be constant during the exposure. Ng et al. develop a regularized constrained total least squares (RCTLS) solution to obtain a high-resolution image in [25]. They consider the presence of ubiquitous perturbation errors of displacements around the ideal sub-pixel locations in addition to noisy observations. In [26] the authors use a maximum a posteriori (MAP) framework for jointly estimating the
registration parameters and the high-resolution image for severely aliased observations. They use iterative,cyclic coordinate-descent optimization to update the registration parame-ters.A MAP estimator with Huber-MRF prior is described by Schultz and Stevenson in[27]. Other approaches include an MAP-MRF based super-resolution technique proposed by Rajan et al[28].Here authors consider an availability of decimated,blurred and noisy versions of a high-resolution image which are used to generate a super-resolved image.A known blur acts as a cue in generating the super-resolution image.They model the super-resolved image as an MRF.In[29]the authors relax the assumption of the known blur and extend it to deal with an arbitrary space-varying defocus blur.They recover both the scene intensity and the depthfields simultaneously.For super-resolution applications they also propose a general-ized interpolation method[30].Here a space containing the original function is decomposed into appropriate subspaces.These subspaces are chosen so that the rescaling operation pre-serves properties of the original function.On combining these rescaled sub-functions,theyget back the original space containing the scaled or zoomed function.Nguyen et al[31] proposed a technique for parametric blur identification and regularization based on the gener-alized cross-validation(GCV)theory.They solve a multivariate nonlinear minimization prob-lem for these unknown parameters.They have also proposed circulant block preconditioners to accelerate the conjugate gradient descent method while solving the Tikhonov-regularized super-resolution problem[32].Elad and Feuer[33]proposed a unified methodology for super-resolution restoration from several geometrically warped,blurred,noisy and down-sampled measured images by combining maximum likelihood(ML),MAP and POCS approaches.An adaptivefiltering approach to super-resolution restoration is described by the same authors in[34].They exploit the properties of 
the operations involved in their previous work[33] and develop a fast super-resolution algorithm in[35]for pure translational motion and space invariant blur.In[36]authors use a series of short-exposure images taken concurrently with a corresponding set of images of a guidestar and obtain a maximum-likelihood estimate of the undistorted image.The potential of the algorithm is tested for super-resolved astronomic imaging.Chiang and Boult[37]use edge models and a local blur estimate to develop an edge-based super-resolution algorithm.They also applied warping to reconstruct a high-resolution image[38]which is based on a concept called integrating resampler[39]that warps the image subject to some constraints.Altunbasak et al.[40]proposed a motion-compensated,transform domain super-resolution procedure for creating high quality video or still images that directly incorporates the trans-form domain quantization information by working in the compressed bit stream.They apply this new formulation to MPEG-compressed video.In[41]a method for simultaneously es-timating the high-resolution frames and the corresponding motionfield from a compressed low-resolution video sequence is presented.The algorithm incorporates knowledge of the spatio-temporal correlation between low and high-resolution images to estimate the original high-resolution sequence from the degraded low-resolution observation.In[42]authors pro-pose to enhance the resolution using a wavelet domain approach.They assume that the wavelet coefficients scale up proportionately across the resolution pyramid and use this property to go down the pyramid.Shechtman et al.[43]construct a video sequence of high space-time res-olution by combining information from multiple low-resolution video sequences of the same dynamic scene.They used video cameras with complementary properties like low-frame rate but high spatial resolution and high frame rate but low spatial resolution.They show that by increasing the temporal 
resolution using the information from multiple video sequences, spatial artifacts such as motion blur can be handled without the need to separate static and dynamic scene components or to estimate their motion. The authors in [44] propose a high-speed super-resolution algorithm using the generalization of Papoulis' sampling theorem for multi-channel data, with applications to super-resolving video sequences. They estimate the point spread function (PSF) for each frame and use it for super-resolution. Capel and Zisserman [45] have proposed a technique for automated mosaicing with super-resolution zoom, in which a region of the mosaic can be viewed at a resolution higher than any of the original frames by fusing information from several views of a planar surface in order to estimate its texture. They have also proposed a super-resolution technique from multiple views using learnt image models [46]. Their method uses learnt image models either to directly constrain the ML estimate or as a prior for a MAP estimate. The authors in [47] describe image interpolation algorithms which use a database of training images to create plausible high-frequency details in zoomed images. In [48] the authors develop a super-resolution algorithm by modifying the prior term in the cost to include the results of a set of recognition decisions, and call it recognition-based super-resolution, or hallucination. Their prior enforces the condition that the gradient of the super-resolved image should be equal to the gradient of the best matching training image.
We now discuss in brief the previous work on MRF parameter estimation. In [49] the authors use the Metropolis-Hastings algorithm and a gradient method to estimate the MRF parameters.
Lakshmanan and Derin [50] have developed an iterative algorithm for MAP segmentation using the ML estimates of the MRF parameters. Nadabar and Jain [51] estimate the MRF line process parameters using geometric CAD models of the objects in the scene. A multiresolution approach to color image restoration and parameter estimation using the homotopy continuation method was described in [52].
As discussed in [47], the richness of real-world images would be difficult to capture analytically. This motivates us to use a learning based approach, where the MRF parameters of the super-resolved image can be learnt from the most zoomed observation and then used to estimate the super-resolution image for the least zoomed (entire) scene.
In [53], Joshi et al. proposed an approach for super-resolution based on MRF modeling of the intensity field in which the MRF parameters were chosen on an ad hoc basis. However, a more practical situation is one in which these parameters are to be estimated. In this thesis, we simultaneously estimate these unknown parameters and obtain the super-resolution intensity map. The maximum likelihood (ML) estimates of the parameters are obtained by an approximate version, MPL estimation, in order to reduce the computations. Our approach generates a super-resolved image of the entire scene although only a part of the observed scene appears in multiple zoomed observations. In effect, what we do is as follows. If the wide angle view corresponds to a field of view of α° and the most zoomed view corresponds to a field of view of β° (where α > β), we generate a picture of the α° field of view at a spatial resolution comparable to the β° field of view by learning the model from the most zoomed view. The details of the method are now presented.

Chapter 3: Low Resolution Image Model
The zooming based super-resolution problem is cast in a restoration framework. There are p observed images {Y_i}, i = 1, ..., p, each captured with a different zoom setting and of size M1 × M2 pixels. Figure 3.1 illustrates the block schematic of how the
low-resolution observations of the same scene at different zoom settings are related to the high-resolution image. Here we consider that the most zoomed observed image of the scene, Y_p (p = 3 in the figure), has the highest resolution. A zoom lens camera system has complex optical properties and is thus difficult to model. As Lavest et al. [5] point out, the pinhole model is inadequate for a zoom lens, and a thick-lens model has to be used; however, the pinhole model can be used if the object is virtually shifted along the optical axis by a distance equal to the distance between the primary and secondary principal planes of the zoom lens. Since we capture the images with a large distance between the object and the camera, and if the depth variation in the scene is not very significant compared to its distance from the lens, it is reasonable to assume that the paraxial shift about the optical axis as the zoom varies is negligible. Thus, we can make a reasonable assumption of a pinhole model and neglect the depth-related perspective distortion due to the thick-lens behavior. We also assume that there is no rotation about the optical axis between the observed images taken at different zooms. However, we do allow a lateral shift of the optical center, as explained in section 4.4. Since different zoom settings give rise to different resolutions, the least zoomed image, corresponding to the entire scene, needs to be upsampled to a size of (q1 q2 ... q_{p-1}) M1 × (q1 q2 ... q_{p-1}) M2 = N1 × N2 pixels, where q1, q2, ..., q_{p-1} are the zoom factors between the observed images Y1 and Y2, Y2 and Y3, ..., Y_{p-1} and Y_p, respectively.
Figure 3.1: Illustration of observations at different zoom levels; Y1 corresponds to the least zoomed and Y3 to the most zoomed image. Here z is the high-resolution image of the same scene.
Given Y_p, the remaining p − 1 observed images are then modeled as decimated and noisy versions of this single high-resolution image of the appropriate region in the scene. With this, the most zoomed observed image will have no decimation. The observation model is

    y_m = D_m z + n_m,  m = 1, ..., p,   (3.1)

where D_m is the decimation matrix, the size of which depends on the zoom factor. For an integer zoom factor of q, each row of the decimation matrix D consists of q² nonzero entries of value 1/q² over the corresponding q × q block of z:

    D = (1/q²) [ 1 1 ... 1 ]  (q² entries per row).   (3.2)

Figure 3.2: Low-resolution image formation model illustrated for three different zoom levels. The view fixation block just crops a small part of the high-resolution image z.

Here p is the number of observations, and n_m is an i.i.d. zero-mean Gaussian noise vector with variance σ², i.e., P(n_m) = (1/(2πσ²)^{M1 M2 / 2}) exp(−n_mᵀ n_m / (2σ²)).

Chapter 4: Super-Resolution Restoration
In order to obtain a regularized estimate of the high-resolution image z, we model z as a Markov random field, which provides the necessary prior.

4.1 Image Field Modeling
The MRF provides a convenient and consistent way of modeling context dependent entities such as pixel intensities, depth of the object and other spatially correlated features. This is achieved by characterizing the mutual influence among such entities using conditional probabilities for a given neighborhood. The practical use of MRF models is largely ascribed to the equivalence between the MRF and the Gibbs random field (GRF). We assume that the high-resolution image can be represented by an MRF. Let Z be a random field over an arbitrary N × N lattice of sites L = {(i, j) : 1 ≤ i, j ≤ N}. For the GRF we have

    P(Z = z) = (1/Z_p) e^{−U(z, θ)},

where z is a realization of Z, Z_p is the partition function given by Z_p = Σ_z e^{−U(z, θ)}, θ is the parameter that defines the MRF model, and U(z, θ) = Σ_{c∈C} V_c(z, θ) is the energy function, with V_c denoting the potential function associated with a clique c and C the set of all cliques.
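To make the model concrete, here is a small numerical sketch (illustrative names, sizes and parameter values, not the thesis code): the decimation of equation (3.2) implemented as q × q block averaging, a first-order Gibbs energy with a single smoothness parameter β, and one analytic gradient-descent step on a MAP-style cost of the form Σ_m ||y_m − D_m z||²/(2σ²) + U(z), anticipating the restoration of section 4.3:

```python
import numpy as np

def decimate(z, q):
    # D z: average each non-overlapping q x q block (rows of D hold q^2 entries 1/q^2).
    h, w = z.shape
    return z.reshape(h // q, q, w // q, q).mean(axis=(1, 3))

def gibbs_energy(z, beta):
    # First-order clique energy: squared differences of horizontal and vertical neighbors.
    return beta * (np.sum((z[:, 1:] - z[:, :-1]) ** 2)
                   + np.sum((z[1:, :] - z[:-1, :]) ** 2))

def map_cost(z, ys, qs, beta, sigma):
    data = sum(np.sum((y - decimate(z, q)) ** 2) for y, q in zip(ys, qs))
    return data / (2 * sigma ** 2) + gibbs_energy(z, beta)

def map_grad(z, ys, qs, beta, sigma):
    g = np.zeros_like(z)
    for y, q in zip(ys, qs):
        r = y - decimate(z, q)
        g -= np.kron(r, np.ones((q, q))) / (q ** 2 * sigma ** 2)  # -(1/sigma^2) D^T r
    d = z[:, 1:] - z[:, :-1]                                      # Gibbs prior gradient
    g[:, 1:] += 2 * beta * d
    g[:, :-1] -= 2 * beta * d
    d = z[1:, :] - z[:-1, :]
    g[1:, :] += 2 * beta * d
    g[:-1, :] -= 2 * beta * d
    return g

rng = np.random.default_rng(1)
z_true = rng.random((8, 8))
ys, qs = [decimate(z_true, 2), z_true], [2, 1]   # low-res view and most zoomed view
z = np.kron(ys[0], np.ones((2, 2)))              # initial estimate by upsampling
before = map_cost(z, ys, qs, beta=0.1, sigma=1.0)
z = z - 0.1 * map_grad(z, ys, qs, beta=0.1, sigma=1.0)
after = map_cost(z, ys, qs, beta=0.1, sigma=1.0)
print(after < before)  # True: one descent step reduces the cost
```

Since the cost is quadratic in z, plain gradient descent with a small enough step size decreases it monotonically, which is why the thesis can use it once the parameters θ are fixed.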
The clique c consists of either a single pixel or a group of pixels belonging to a particular neighborhood system. In this paper we consider only the symmetric first-order neighborhood consisting of the four nearest neighbors of each pixel and the second-order neighborhood consisting of the eight nearest neighbors of each pixel. In particular, we use the two and four types of cliques shown in Figure 4.1, where β_i is the parameter specified for clique c_i.
Figure 4.1: Cliques used in modeling the image. (a) First-order and (b) second-order neighborhood.
The Gibbs energy prior for z, with the two parameters β1 and β2, is

    U(z, θ) = Σ_{k=1}^{N} Σ_{l=1}^{N} { β1 [ (z(k, l) − z(k, l−1))² + (z(k, l) − z(k, l+1))² ] + β2 [ (z(k, l) − z(k−1, l))² + (z(k, l) − z(k+1, l))² ] }.

4.2 Parameter Learning
We realize that in order to enforce the MRF prior while estimating the high-resolution image z, we must first learn the parameter θ, i.e., maximize P(Z = z | θ) (4.2). The probability in equation (4.2) can be expressed via the pseudo-likelihood

    P(Z = z | θ) ≈ Π_{(k,l)} P(Z(k, l) = z(k, l) | Z(m, n) = z(m, n), (m, n) ∈ η(k, l), θ),   (4.4)

where η(k, l) is the given neighborhood (the first-order or the second-order neighborhood as chosen in this study). Further, it can be shown that equation (4.4) can be written as

    ˆP(Z = z | θ) = Π_{(k,l)} [ exp(−Σ_{C : (k,l)∈C} V_c(z, θ)) / Σ_{z(k,l)∈G} exp(−Σ_{C : (k,l)∈C} V_c(z, θ)) ],   (4.5)

where G is the set of intensity levels used. Considering the fact that the high-resolution field z is not directly observable, the parameters are learnt from the most zoomed observation Y_p (4.6). We maximize the log-likelihood of the above probability by using the Metropolis-Hastings algorithm as discussed in [49] and obtain the parameters.

4.3 MAP Restoration
Having learnt the model parameters, we now try to super-resolve the entire scene. We use the MAP estimator to restore the high-resolution field z:

    ẑ = argmax_z P(z | y_1, ..., y_p).

Using Bayes' rule and the fact that the noise fields n_m are independent, one can show that the high-resolution field z is estimated as

    ẑ = argmin_z [ Σ_{m=1}^{p} ||y_m − D_m z||² / (2σ²) + U(z, θ) ].   (4.9)

Since the model parameter θ has already been estimated, a solution to the above equation is, indeed, possible. The above cost function is convex and is minimized using the gradient descent technique. The initial estimate z is formed by upsampling the observations with increasing zoom factors. Finally, the most zoomed observed image, with the highest resolution, is copied with no interpolation. In order to
preserve discontinuities, we modify the cost for the prior probability term as discussed in section 4.1. The cost function to be minimized then becomes

    ẑ = argmin_z [ Σ_{m=1}^{p} ||y_m − D_m z||² / (2σ²_η) + V(z) + Σ_{i,j} ( μ e^s(z) + γ e^p(z) ) ],   (4.11)

where V(·) is the prior energy and the additional terms penalize the binary line fields introduced at discontinuities. On inclusion of the binary line fields in the cost function, the gradient descent technique cannot be used, since it involves differentiation of the cost function. Hence, we minimize the cost by using simulated annealing, which leads to a global minimum. However, in order to provide a good initial guess and to speed up the computation, the result obtained using the gradient descent method is used as the initial estimate for simulated annealing. The computational time is greatly reduced upon using mean-field annealing, which leads to a near optimal solution.

4.4 Zoom Estimation
We now extend the proposed algorithm to a more realistic situation in which the successive observations vary by an unknown rational-valued zoom factor. Further, considering a real lens system for the imaging process, the numerical image center can no longer be assumed to be fixed. The zoom factor between the successive observations needs to be estimated during the process of forming an initial guess (as discussed in chapter 3) for the proposed super-resolution algorithm. We, however, assume that there is no rotation about the optical axis between the successive observations, though we allow a small amount of lateral shift in the optical axis. The image centers move as lens parameters such as focus or zoom are varied [55, 56]. Naturally, the accuracy of the image center estimation is an important factor in obtaining the initial guess for our super-resolution algorithm. Generally, the rotation of a lens system will cause a rotational drift in the position of the optical axis, while the sliding action of a lens group in the process of zooming will cause a translational motion of the image center [55]. These rotational and translational shifts in the position of the optical axis cause a corresponding shifting of the camera's field of view. In variable
focal length zoom lenses, the focal length is changed by moving groups of lens elements relative to one another. Typically this is done by using a translational type of mechanism on one or more internal groups. These arguments validate our assumption that there is no rotation of the optical axis in the zooming process and, at the same time, stress the necessity of accounting for the lateral shift in the image centers of the input observations obtained at different zoom settings.
We estimate the relative zoom and shift parameters between two observations by minimizing the mean squared distance between an appropriate portion of the digitally zoomed image of the wide angle view and the narrower view observation. The method searches for the zoom factor and the lateral shift that minimize this distance. We do this by hierarchically searching for the global minimum: we first zoom the wide angle observation and then search for the shift that corresponds to a local minimum of the cost function. The lower and upper bounds for the zooming process need to be appropriately defined. Naturally, the efficiency of the algorithm is constrained by the closeness of the bounds to the solution. It can be greatly enhanced by first searching for a rough estimate of the zoom factor and slowly approaching the exact zoom factor by redefining the lower and upper bounds as the factors that correspond to the least cost and the next higher cost. We do this by first searching for a discrete zoom factor (say 1.4 to 2.3 in steps of 0.1). At this point, we need to note that this requires digital zooming of an image by a rational zoom factor q_m. The shift in the optical axis in the zooming process is usually small (2 to 3 pixels). The zoom estimation and alignment procedure discussed above is illustrated in Figure 4.2.
Figure 4.2: Illustration of zoom and alignment estimation. 'A' is the wide angle view and 'B' is the narrower angle view.
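A toy version of this coarse search can be sketched as follows (integer candidate factors only, nearest-neighbor digital zoom, and no lateral-shift refinement; all names are illustrative, not the thesis implementation):

```python
# Illustrative coarse zoom search: digitally zoom the central crop of the
# wide-angle view A by each candidate factor and keep the factor whose
# result best matches the narrow view B in the mean-squared-error sense.
import numpy as np

def digital_zoom(img, q):
    # Nearest-neighbor zoom by an integer factor, for simplicity.
    return np.kron(img, np.ones((q, q)))

def estimate_zoom(A, B, candidates):
    best_q, best_err = None, np.inf
    for q in candidates:
        h, w = A.shape
        ch, cw = h // q, w // q                  # central crop size for factor q
        top, left = (h - ch) // 2, (w - cw) // 2
        Z = digital_zoom(A[top:top + ch, left:left + cw], q)
        err = np.mean((Z[:B.shape[0], :B.shape[1]] - B) ** 2)
        if err < best_err:
            best_q, best_err = q, err
    return best_q

rng = np.random.default_rng(2)
A = rng.random((12, 12))                         # wide-angle view
B = digital_zoom(A[3:9, 3:9], 2)                 # narrow view; true zoom factor is 2
print(estimate_zoom(A, B, candidates=[2, 3]))    # 2
```

The rational-factor, shift-refining search of the thesis follows the same pattern, with interpolation-based zooming and a local search over small lateral shifts around each candidate.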
I.J. Mathematical Sciences and Computing, 2018, 2, 12-21
Published Online April 2018 in MECS
DOI: 10.5815/ijmsc.2018.02.02
Available online at /ijmsc

A Systematic Expository Review of Schmidt-Samoa Cryptosystem
Qasem Abu Al-Haija*, Mohamad M. Asad, Ibrahim Marouf
Department of Electrical Engineering, King Faisal University, Hufof 31982, Saudi Arabia
Received: 21 November 2017; Accepted: 13 February 2018; Published: 08 April 2018

Abstract
Public key cryptographic schemes are vastly used to ensure confidentiality, integrity, authentication and non-repudiation. The Schmidt-Samoa cryptosystem (SSC) is a public key cryptosystem whose security depends on the difficulty of the large-integer factorization problem. The implementation of SSC to secure different recent communication technologies such as cloud and fog computing is in demand due to the assorted security services offered by SSC, such as data encryption/decryption, digital signature and data integrity. In this paper, we provide a systematic review of the SSC public key cryptosystem to help crypto-designers implement SSC efficiently and adopt it in hardware- or software-based applications. According to the literature, effective utilization and design of SSC can place it as a viable alternative to RSA and many other cryptosystems.

Index Terms: Information Security, Public Key Cryptography, Schmidt-Samoa Cryptosystem, Integer Factorization.
© 2018 Published by MECS Publisher. Selection and/or peer review under responsibility of the Research Association of Modern Education and Computer Science

1. Introduction
In the last decades, the communication system over the world has been extremely enlarged, with millions of computers connected to networks and the internet to exchange huge amounts of information. This information is vulnerable to being intercepted, changed, or even seen by unwanted (i.e. unauthorized) people.
Because of that, secure communication channels were introduced to prevent any third party from reading or changing information. Such prevention is accomplished by setting rules for accessing the confidential data, known collectively as cryptography. Cryptography is the science concerned with encrypting and decrypting data to provide secure transactions between communicating parties. Cryptography provides secure communication networks by means of cryptographic primitives [1] (listed in Table 1), which contribute along with crypto-algorithms to provide many services, such as: confidentiality, to help protect a user's identity or data from being read; data integrity, to help protect data from being changed; authentication, to ensure that data originated from a certain user; and non-repudiation, to prevent a certain party from denying having sent messages [1].

* Corresponding author. Tel.: +966-13-589-5400; fax: +966-13-581-7068. E-mail address: Qalhaija@.sa

Table 1. Cryptographic Primitives and Their Use
- Secret-key encryption (symmetric cryptography): Performs a transformation on data to keep it from being read by third parties. This type of encryption uses a single shared, secret key to encrypt and decrypt data.
- Public-key encryption (asymmetric cryptography): Performs a transformation on data to keep it from being read by third parties. This type of encryption uses a public/private key pair to encrypt and decrypt data.
- Cryptographic signing (digital signatures): Helps verify that data originates from a specific party by creating a digital signature that is unique to that party. This process also uses hash functions.
- Cryptographic hashes (fixed-size digesting): Maps data from any length to a fixed-length byte sequence.
Hashes are statistically unique; two different byte sequences will not hash to the same value.

Based on the encryption/decryption process, cryptographic algorithms are categorized as symmetric key algorithms and public key (asymmetric key) algorithms. Symmetric Key Cryptography (SKC) is a field of cryptography where the same key is shared between sender and receiver and used for both encryption and decryption. SKC ciphers can either be stream ciphers, which encrypt and decrypt data bit by bit using bit operations (such as XOR), or block ciphers, which deal with blocks of a fixed length of bits encrypted/decrypted with a key. An example of a stream cipher is LFSR encryption [2], and examples of block ciphers are DES, 3DES, Blowfish, and AES. Modern symmetric algorithms such as AES or 3DES are very secure. However, there are several drawbacks associated with symmetric-key schemes, like the key distribution problem, the number of keys, or the lack of protection against cheating [3]. In symmetric key algorithms, the key must be established over a secure channel, which does not exist in communication channels. Even if this problem is solved, a substantial number of keys will be needed when each pair in a network requires a separate key. Moreover, any party can cheat and accuse the other party. Hence, asymmetric key algorithms are needed to solve these problems.
In Public Key Cryptography (PKC) the two parties (sender and receiver) have two different keys: one public shared key for encryption and one private key for decryption. Public-key algorithms are used mainly for key establishment, identification and encryption. Diffie-Hellman Key Exchange (DHKE) [4] is an example of an asymmetric key algorithm used for key exchange, and RSA is a public-key encryption algorithm [5]. PKC algorithms are a fundamental security component in many cryptosystems and applications, such as the Transport Layer Security (TLS) protocol [6].
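The stream-cipher idea mentioned above can be illustrated with a toy construction (not a standardized or secure cipher; all names are illustrative): both parties expand a shared secret into a keystream and XOR it with the data, so decryption is the same operation as encryption.

```python
# Toy symmetric stream cipher: hash-counter keystream XORed with the data.
# Illustrative only; real systems use vetted ciphers such as AES.
import hashlib

def keystream(secret: bytes, n: int) -> bytes:
    # Expand the shared secret into n pseudo-random bytes (counter + hash).
    out = b""
    ctr = 0
    while len(out) < n:
        out += hashlib.sha256(secret + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor_cipher(secret: bytes, data: bytes) -> bytes:
    # Encryption and decryption are the same XOR operation.
    return bytes(a ^ b for a, b in zip(data, keystream(secret, len(data))))

msg = b"attack at dawn"
ct = xor_cipher(b"shared secret", msg)
print(xor_cipher(b"shared secret", ct) == msg)  # True
```

Note how the construction exposes exactly the drawback the text describes: both parties must already share the secret, which is the key distribution problem that public key schemes solve.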
Public-key algorithms provide data encryption, key exchange, and digital signatures [7]. PKC algorithms can be categorized by the underlying mathematical problem into [4]: integer-factorization-based schemes such as the RSA and McEliece [8] algorithms, and discrete-logarithm-based schemes such as Diffie-Hellman key exchange and the ElGamal encryption scheme [4]. Integer factorization is the process of decomposing an integer into a product of smaller numbers; if these numbers are prime, it is called prime factorization. The difficulty arises when factoring a very large number, because no efficient algorithm for this is known. However, not all numbers of the same length are equally hard to factor: when the number is the product of two large primes, it is infeasible to factor using current technology [9]. Most non-RSA public-key algorithms of practical relevance are based on another one-way function, the discrete logarithm problem [3]. The security of many cryptographic schemes relies on the computational intractability of solving the Discrete Logarithm Problem (DLP), which is defined in what are called cyclic groups. In addition, there are four families of alternative public-key schemes [10] that are potentially interesting for practical use: hash-based, code-based, lattice-based, and multivariate quadratic (MQ) public-key algorithms.

In practice, public-key schemes are preferred for many reasons, such as the non-existence of secure communication channels. Therefore, efficient implementation of public-key cryptosystems is in demand, especially when implemented with appropriate technology and a high-precision design.
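To see why the DLP is easy in tiny cyclic groups but intractable in large ones, consider a brute-force sketch (the function name is ours); the search cost grows exponentially with the bit length of the modulus.

```python
# Brute-force discrete logarithm in Z_p^*: given generator g and target
# h = g^x mod p, try exponents 0, 1, 2, ... until one matches.
# Feasible only for tiny p; real DLP parameters are thousands of bits.

def dlog_bruteforce(g: int, h: int, p: int) -> int:
    acc = 1                      # g^0
    for x in range(p - 1):
        if acc == h:
            return x
        acc = (acc * g) % p
    raise ValueError("no solution in this group")

# 3 generates Z_17^*; solve 3^x = 12 (mod 17)
x = dlog_bruteforce(3, 12, 17)
assert pow(3, x, 17) == 12       # x == 13
```

Each extra bit of p doubles the worst-case number of exponents to try, which is the asymmetry these schemes rely on.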
In this paper, the Schmidt-Samoa Cryptosystem (SSC) [11] is analyzed as an efficient and comparable alternative to RSA, a well-known, secure, and practical public-key scheme that can be used to protect information transmitted over insecure channels. The SSC cryptosystem is heavily based on modular arithmetic involving large prime numbers.

The remainder of this paper is organized as follows: Section 2 discusses the Schmidt-Samoa Cryptosystem (SSC) in detail, including the SSC crypto-algorithm, SSC factoring, a numerical example of how SSC works, some possible attacks on SSC, and the underlying design issues and requirements, followed by conclusions.

2. Schmidt-Samoa Cryptosystem (SSC)

The Schmidt-Samoa Cryptosystem (SSC) is an asymmetric cryptographic technique (public-key algorithm) whose security depends on the difficulty of the integer factorization problem, used for data encryption and decryption. Just like RSA, SSC uses very large prime numbers and modular arithmetic to provide different security services such as confidentiality, integrity, and non-repudiation.

2.1. SSC Algorithm

To start a secure communication session, the receiver, Alice in this case, begins by choosing two large prime numbers (p, q) and then computes her public key N = p^2 * q. Alice then shares the public key (N) with Bob (and any other senders), who will use it to encrypt the plaintext messages communicated to Alice. Alice also computes her private key d = N^(-1) mod LCM(p - 1, q - 1), to be used for decryption. Using the private key, Alice decrypts the ciphertext.

Fig.1. Complete Diagram of Schmidt-Samoa Algorithm.

Fig.1 shows the complete SSC algorithm diagram, which is divided into three stages: the key generation stage, the encryption stage, and the decryption stage. The challenge in SSC is the difficulty of factoring the public key, which is the product of two very large primes. As the size of the key increases, the factorization problem becomes even more complicated [9].
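The key-generation, encryption, and decryption stages described above can be sketched in a few lines. This is a minimal illustration with toy primes (p = 11, q = 17), not a production implementation; the function names are ours, and it assumes Python 3.9+ for math.lcm and the modular-inverse form of pow.

```python
# Minimal sketch of the SSC flow with small textbook primes.
# Real deployments use primes of 1024+ bits.
from math import lcm

def ssc_keygen(p: int, q: int):
    N = p * p * q                          # public key: N = p^2 * q
    d = pow(N, -1, lcm(p - 1, q - 1))      # private key: d = N^-1 mod LCM(p-1, q-1)
    return N, d

def ssc_encrypt(m: int, N: int) -> int:
    return pow(m, N, N)                    # c = m^N mod N

def ssc_decrypt(c: int, d: int, p: int, q: int) -> int:
    return pow(c, d, p * q)                # m = c^d mod (p*q)

N, d = ssc_keygen(11, 17)                  # N = 2057, d = 73
c = ssc_encrypt(2, N)                      # c = 1855
m = ssc_decrypt(c, d, 11, 17)              # m = 2
```

With these toy parameters the sketch yields N = 2057, d = 73, c = 1855, and recovers the original message m = 2.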
Factoring a number means expressing it as a product of prime numbers. In SSC, factoring the public key (N) means breaking the cryptosystem: if an attacker can factor the public key, he can easily calculate the private key (d) and decrypt any data. Since the public key N = p^2 * q is known to everyone, factoring N yields p and q. The private key can then be computed using congruence (1), where LCM is the least common multiple of two numbers:

d ≡ N^(-1) mod LCM(p - 1, q - 1)    (1)

For better understanding, we provide the following simplified numerical example. Assume the plaintext message m = 2 and the domain parameters (p = 11, q = 17, m = 2); then SSC(11, 17, 2) runs as follows:

N = p^2 * q = 11^2 * 17 = 2057
d ≡ N^(-1) mod LCM(10, 16) = 2057^(-1) mod 80 = 73
c = m^N mod N = 2^2057 mod 2057 = 1855
m = c^d mod (p * q) = 1855^73 mod 187 = 2

2.2. Possible Attacks on SSC

There is no perfect system, but there are systems that are hard to attack. SSC is proved to be very secure [11]; however, it is vulnerable to some known attacks such as the brute-force attack, man-in-the-middle attack, and side-channel attack. Generally, all public-key cryptography algorithms suffer from these attacks [3].

∙ Exhaustive search of SSC: In computer science, brute-force search or exhaustive search, also known as generate and test, is a very general problem-solving technique that consists of systematically generating all possible candidates for the solution and checking whether each candidate satisfies the problem's statement; for instance, finding the factorization of a very large number by trying all the numbers smaller than it. In cryptography, an exhaustive search attack involves checking all possible keys until the correct key is found [12]. This strategy can in theory be used against any cryptosystem by an attacker who is unable to exploit any weakness in the system that would make breaking it easier.
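The exhaustive strategy can be made concrete by trial-division factoring of the SSC modulus. The helper below is an illustrative sketch (name ours) that assumes the toy setting p < q; its running time grows exponentially in the bit length of N.

```python
# Exhaustive search against SSC: recover (p, q) from N = p^2 * q by
# trying every candidate p. Instant for the toy N = 2057, hopeless for
# 2048-bit moduli, since candidates grow exponentially with bit length.

def factor_ssc_modulus(N: int):
    """Recover (p, q) from N = p^2 * q, assuming p < q (toy setting)."""
    p = 2
    while p * p * p <= N:        # p < q implies p^3 < p^2 * q = N
        if N % (p * p) == 0:
            return p, N // (p * p)
        p += 1
    raise ValueError("no factor found")

p, q = factor_ssc_modulus(2057)
assert (p, q) == (11, 17)        # once p and q are known, d follows
```

Once p and q are recovered, the attacker evaluates congruence (1) and owns the private key, which is why key length is the sole defense against this attack.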
The length of the key used in the encryption process determines the practical feasibility of performing a brute-force attack, with larger keys exponentially more difficult to break than smaller ones. One measure of the strength of an encryption system is how long it would theoretically take an attacker to successfully mount a brute-force attack against it. In the Schmidt-Samoa cryptosystem, as the bit size of the key increases, the time needed to perform an exhaustive search increases exponentially. It is believed that a 1024-bit key can be factored within a period of 10-15 years, and some intelligence agencies may be able to compute the key earlier [12]. However, for 2048 bits or more, it is not feasible to factor the SSC key with current technology (computers). A sample exhaustive search (brute-force) procedure is illustrated in Fig.2, which shows the possible trial values of a simple 4-bit key.

Fig.2. Example of Brute Force Attack of 4-bit Key

∙ Man-in-the-Middle Attack [13]: a type of cyberattack where a malicious actor inserts him/herself into a conversation between two parties, impersonates both parties, and gains access to information that the two parties were trying to send to each other. It allows a malicious actor to intercept, send, and receive data meant for someone else, or not meant to be sent at all, without either outside party knowing until it is too late. Man-in-the-middle attacks can be abbreviated in many ways, including MITM, MitM, MiM, or MIM. An example of MITM against the SSC scheme is shown in Fig.3, where Alice generates her public and private keys and sends the public key over an insecure channel. Trudy intercepts the communication, generates a new public key, and sends it to Bob. Bob then encrypts data and sends it back to Alice over the insecure channel; however, only Trudy can decrypt the message.
Trudy can generate a new false message and send it to Alice, pass the original message along, or simply block it, while Alice and Bob think they are communicating with each other securely.

Fig.3. MITM Attack Scheme for SSC.

∙ Side-Channel Attack: In cryptography, a side-channel attack is an attack based on analyzing information gained from the physical implementation of a cryptosystem, rather than brute force or any theoretical weakness [12]. Such attacks exploit information about the private key that leaks through physical channels such as power consumption or timing behavior. To observe such channels, however, an attacker must have access to the cipher implementation, e.g., in a cell phone or smart card. Fig.4 shows the power trace of an RSA implementation on a microprocessor [12] (more precisely, the electric power drawn by the processor). The attacker's goal is to extract the private key d, which is used during RSA decryption. The high and low activity can be distinguished in the graph; this behavior is explained by the square-and-multiply algorithm: if an exponent bit has the value 0, only a squaring is performed; if an exponent bit has the value 1, a squaring together with a multiplication is computed.

Fig.4. The Power Trace of an RSA Implementation.

2.3. SSC Services

SSC is very flexible and can provide the four main cryptographic services: confidentiality, integrity, authentication, and non-repudiation. Like the RSA algorithm, the SSC algorithm can be used to encrypt and decrypt private messages, providing confidentiality and non-repudiation. SSC can also be implemented as a digital signature scheme (DSA-SSC), as shown in Fig.5, providing integrity. PKI and the alternative schemes (hash-based, code-based, etc.) can likewise be implemented using SSC.

2.4.
SSC Underlying Design Requirements

The implementation of SSC involves several digital-arithmetic and modular-arithmetic algorithms as well as different number-theory schemes. It employs the properties of prime numbers alongside congruences to produce a very secure, hard-to-break cryptosystem. Arithmetic operations like multiplication and squaring, as well as modular exponentiation and modular inverse, are involved in the algorithm to add complexity to the cipher. Thus, implementing an SSC coprocessor requires the contribution of many design components, as seen in the diagram of Fig.6.

Fig.6. SSC Underlying Design Requirements Diagram.

∙ Number Theory Algorithms: Because the modular factors (p, q) must be prime, two components contribute here to generate and test a prime number of the desired length: a random number generator (RNG) [2] and a prime number tester (PNT) [14]. Also, to test coprimality, a greatest common divisor (GCD) unit [15] is required in Schmidt-Samoa. In addition, to generate the private-key modulus, a least common multiple (LCM) unit [15] is needed.

∙ Digital Arithmetic Algorithms: In order to compute the public key (N), which is also used as the encryption-algorithm modulus, an efficient digital multiplier (used for squaring as well), such as the Karatsuba multiplier [16], is required to generate N. The multiplier is built from fast two-operand adder units such as the Kogge-Stone adder (KSA) [17], an efficient parallel-prefix adder [18]; fast three-operand adders such as the carry-save adder [18]; and multi-operand addition trees such as Wallace trees [18].

∙ Modular Arithmetic Algorithms: For the SSC encryption and decryption processes, an efficient modular exponentiation unit such as [19] should be carefully selected, as this operation consumes most of the time in the SSC system.
Similarly, another costly operation is needed for the generation of the decryption key: the modular inverse (division by the modulus) [9], well known to be one of the longest-running operations performed by a cryptoprocessor.

∙ Hardware/Software Design Tools: An SSC cryptoprocessor can be implemented either in a software environment or on a hardware platform. However, it is noted that building a cryptoprocessor in hardware is more secure and efficient than in software [20]. Nowadays, reconfigurable hardware devices are widely used to implement various digital applications such as cryptographic coprocessors and embedded-system designs. It is strongly recommended to implement SSC using field-programmable gate arrays (FPGAs) [21], which provide a wide range of flexibility and dynamic control over several design factors such as delay, area, and power consumption. The reconfigurability of FPGA devices has attracted many cryptographic researchers to implement their designs on FPGAs, benefiting from the extensive libraries and modules offered by Computer-Aided Design (CAD) tools [22] as well as the flexibility of hardware description languages (HDLs) [23].

Eventually, the adequate adoption of efficient, accelerated built-in units and components, along with an affordable high-technology design platform, will result in an undoubtedly robust SSC cryptosystem that is comparable and competitive with RSA and many other well-known secure cryptosystems.
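The two costly operations singled out above, modular exponentiation and the modular inverse, can be sketched with the textbook algorithms (square-and-multiply and the extended Euclidean algorithm). This is an illustrative software model under our own naming, not the hardware units the paper refers to.

```python
# (1) Left-to-right square-and-multiply for modular exponentiation.
# Every exponent bit costs a squaring; 1-bits cost an extra multiply,
# the data-dependent pattern that the power trace in Fig.4 exposes.
def square_and_multiply(base: int, exp: int, mod: int) -> int:
    result = 1
    for bit in bin(exp)[2:]:                  # exponent bits, MSB first
        result = (result * result) % mod      # square every step
        if bit == "1":
            result = (result * base) % mod    # multiply only on 1-bits
    return result

# (2) Extended Euclidean algorithm for the modular inverse used to
# derive the SSC private key d = N^-1 mod LCM(p-1, q-1).
def modinv(a: int, m: int) -> int:
    old_r, r = a % m, m
    old_s, s = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a is not invertible mod m")
    return old_s % m

# Values from the earlier numerical example:
assert square_and_multiply(2, 2057, 2057) == 1855  # SSC encryption
assert modinv(2057, 80) == 73                      # SSC private key
```

The square/multiply asymmetry visible in the loop is precisely why constant-time or blinded exponentiation units are preferred in hardware implementations.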
It can replace the RSA cryptosystem in many applications, such as the design of cryptography systems with multi-level crypto-algorithms [24], the design of an effective parallel digital signature algorithm for GPUs [25], the design of robust image steganography [26], the design of alternative equations for the Guillou-Quisquater signature scheme, which is based originally on RSA [27], and many other valid applications.

3. Conclusions and Remarks

The Schmidt-Samoa public-key cryptosystem (SSC), together with a numerical example, samples of possible attacks, and the cryptosystem's design issues, has been methodically analyzed and investigated in this paper. Even with the best possible random number generators used to create candidates for the primes needed to make SSC secure, the security of SSC encryption/decryption depends critically on the difficulty of factoring large integers, which becomes easier for shorter key sizes given the existence of powerful computers. Therefore, SSC cryptography has had to rely on increasingly larger values for the integer modulus and, hence, increasingly longer encryption keys. As with RSA, these days one is unlikely to use a key shorter than 1024 bits for SSC, and many people recommend 2048-bit or even 4096-bit keys.

References
[1] Denning, D.E.R., 'Cryptography and Data Security', Reading, MA: Addison-Wesley.
[2] Q. A. Al-Haija, N. A. Jebril, and A. AlShua'ibi, (2015), 'Implementing Variable Length Pseudo Random Number Generator (PRNG) with Fixed High Frequency (1.44 GHZ) via Vertix-7 FPGA Family', Network Security and Communication Engineering, CRC Press, pp. 105-108.
[3] C. Paar, J. Pelzl, (2010), 'Understanding Cryptography', Springer-Verlag Berlin Heidelberg. https:///10.1007/978-3-642-04101-3
[4] Menezes, A.J., van Oorschot, P.C. and Vanstone, S.A., (1996), 'Handbook of Applied Cryptography', CRC Press. http://cacr.uwaterloo.ca/hac/
[5] Q. Abu Al-Haija, et
al., (2014), 'Efficient FPGA Implementation of RSA Coprocessor using Scalable Modules', 9th International Conference on Future Networks & Communications (FNC), Elsevier, Canada. https:///10.1016/j.procs.2014.07.092
[6] Dierks and Rescorla, (2008), 'Standards Track: The Transport Layer Security (TLS) Protocol Version 1.2', The IETF Trust, RFC 5246.
[7] Developer Network, (2017), 'Cryptographic Services', Microsoft. https:///en-us/dotnet/standard/security/
[8] H. Sun, 'Enhancing the Security of the McEliece Public-Key Cryptosystem', Journal of Information Science and Engineering, 16, pp. 799-812, 2000.
[9] W. Trappe and L. C. Washington, (2002), 'Introduction to Cryptography with Coding Theory', Prentice Hall, vol. 1, pp. 1-176. /citation.cfm?id=560133
[10] Daniel J. Bernstein, Johannes Buchmann, Erik Dahmen, (2009), 'Post-Quantum Cryptography', Springer-Verlag Berlin Heidelberg. DOI: 10.1007/978-3-540-88702-7
[11] Katja Schmidt-Samoa, (2006), 'A New Rabin-type Trapdoor Permutation Equivalent to Factoring', Electronic Notes in Theoretical Computer Science, Elsevier, vol. 157, issue 3, pp. 79-94. https:///2005/278.pdf
[12] Mark Burnett, (2007), 'Blocking Brute Force Attacks', UVA Computer Science, University of Virginia (UVA). /~csadmin/gen_support/brute_force.php
[13] Desmedt, Y., 'Man in the Middle Attack', in: van Tilborg, H.C.A. (ed.), Encyclopedia of Cryptography and Security, p. 368, Springer, Heidelberg (2005).
[14] M. M. Asad, I. Marouf, Q. Abu Al-Haija, 'Investigation Study of Feasible Prime Number Testing Algorithms', Acta Technica Napocensis Electronics and Telecommunications, 58 (3), pp. 11-15, 2017.
[15] I. Marouf, M. M. Asad, Q. Abu Al-Haija, 'Reviewing and Analyzing Efficient GCD/LCM Algorithms for Cryptographic Design', International Journal of New Computer Architectures and their Applications (IJNCAA), Society of Digital Information and Wireless Communications (SDIWC), 7(1), pp. 1-7, 2017.
[16] M. M. Asad, I. Marouf, Q.
Abu Al-Haija, 'Review of Fast Multiplication Algorithms for Embedded Systems Design', International Journal of Scientific & Technology Research (IJSTR), 6 (8), pp. 238-242, 2017.
[17] Kogge, P. & Stone, H., 'A Parallel Algorithm for the Efficient Solution of a General Class of Recurrence Equations', IEEE Transactions on Computers, 1973, C-22, 783-791.
[18] M. D. Ercegovac and T. Lang, 'Digital Arithmetic', Morgan Kaufmann Publishers, Elsevier, Vol. 1, Ch. 2, pp. 51-136, 2004.
[19] I. Marouf, M. M. Asad, Q. Abu Al-Haija, 'Comparative Study of Efficient Modular Exponentiation Algorithms', COMPUSOFT, An International Journal of Advanced Computer Technology, 6 (8), pp. 2381-2389, 2017.
[20] L. Tawalbeh and Q. Abu Al-Haija, 'Enhanced FPGA Implementations for Doubling Oriented and Jacobi-Quartics Elliptic Curves Cryptography', Journal of Information Assurance and Security (JIAS), Dynamic Publishers Inc., Vol. 6 (3), pp. 167-175, 2010.
[21] C. Maxfield, 'The Design Warrior's Guide to FPGAs: Devices, Tools and Flows', Mentor Graphics Corporation and Xilinx, Elsevier, 2004.
[22] Nicos Bilalis, (2000), 'Computer Aided Design (CAD)', INNOREGIO Project: dissemination of innovation and knowledge management techniques, Technical University of Crete.
[23] David Harris, Sarah Harris, (2012), 'Digital Design and Computer Architecture', Morgan Kaufmann, ISBN: 9780123944245, Elsevier.
[24] Surinder Kaur, Pooja Bharadwaj, Shivani Mankotia, 'Study of Multi-Level Cryptography Algorithm: Multi-Prime RSA and DES', International Journal of Computer Network and Information Security (IJCNIS), Vol. 9, No. 9, pp. 22-29, 2017. DOI: 10.5815/ijcnis.2017.09.03.
[25] Sapna Saxena, Neha Kishore, 'PRDSA: Effective Parallel Digital Signature Algorithm for GPUs', International Journal of Wireless and Microwave Technologies (IJWMT), Vol. 7, No. 5, pp.
14-21, 2017. DOI: 10.5815/ijwmt.2017.05.02.
[26] M. I. Khalil, 'Medical Image Steganography: Study of Medical Image Quality Degradation when Embedding Data in the Frequency Domain', International Journal of Computer Network and Information Security (IJCNIS), Vol. 9, No. 2, pp. 22-28, 2017. DOI: 10.5815/ijcnis.2017.02.03.
[27] J. Ettanfouhi, O. Khadir, 'Alternative Equations for Guillou-Quisquater Signature Scheme', International Journal of Computer Network and Information Security, 2016, 9, 27-33. DOI: 10.5815/ijcnis.2016.09.04.

Authors' Profiles

Qasem Abu Al-Haija is a senior lecturer of Electrical and Computer Engineering at King Faisal University. Eng. Abu Al-Haija received his B.Sc. in ECE from Mu'tah University in Feb-2005 and M.Sc. in Computer Engineering from Jordan University of Science & Technology in Dec-2009. His current research interests: Information Security & Cryptography, Coprocessor & FPGA Design, Computer Arithmetic, Wireless Sensor Networks.

Muhammad M. Asad is a senior student in the Electrical Engineering Department at King Faisal University. He is a Syrian resident born on Jan-01-1994 and is excellent in both Arabic and English. His research interests include (but are not limited to): Public Key Cryptography, FPGA Design, Digital Arithmetic, Microcontroller Design, Electronic Design.

Ibrahim A. Marouf is a senior student in the Electrical Engineering Department at King Faisal University. He is a Syrian resident born on Aug-15-1995 and is excellent in both Arabic and English. His research interests include (but are not limited to): Public Key Cryptography, FPGA Design, Digital Arithmetic, Microcontroller Design, Electronic Design.

How to cite this paper: Qasem Abu Al-Haija, Mohamad M. Asad, Ibrahim Marouf, 'A Systematic Expository Review of Schmidt-Samoa Cryptosystem', International Journal of Mathematical Sciences and Computing (IJMSC), Vol. 4, No. 2, pp. 12-21, 2018. DOI: 10.5815/ijmsc.2018.02.02.
M. Kitamura et al.: Genetic algorithm to optimize structural design. J Mar Sci Technol (2000) 5:131–146

Application of a genetic algorithm to the optimal structural design of a ship's engine room taking dynamic constraints into consideration

Mitsuru Kitamura, Hisashi Nobukawa, and Fengxiang Yang
Department of Naval Architecture, Ocean Engineering, and Engineering Systems, Hiroshima University, 1-4-1 Kagamiyama, Higashi-hiroshima 739-8527, Japan

1 Introduction

The dynamic response as well as the static behavior must be considered in the design of the structure of the engine room of a ship. The optimization of the engine room structure under static and dynamic constraints is complex because of the implicit characteristics of the constraints. The engine room structure is subject to intense vibration when the natural frequencies of the structure are close to the exciting frequency. Effective methods are needed to control this dynamic response in order to optimize the design of the engine room structure.

Genetic algorithms (GAs)1–3 are powerful and broadly applicable stochastic search and optimization techniques based on the principles of evolution theory. Recently, GAs have received considerable attention owing to their potential as novel optimization techniques.4–10 By using a GA, the global optimum can be reached more easily than by some traditional optimization techniques. Another major advantage is that they can be applied to optimization problems with discrete design variables. However, there are some difficulties in optimization processes which include both GA and traditional techniques, because reasonable convergence might not be obtained. It is necessary to investigate robustness and convergency before applying a GA to the optimal design of an engine room structure.

In practice, it is not appropriate to reject a design absolutely just because the stresses of a few members or the accelerations of a few points are slightly larger than their allowable values.
Because many uncertainties exist in a large structural design such as a ship, many constraints are fuzzy in some sense. There are no well-defined boundaries between safe and unsafe designs. It is more reasonable to consider that there should be transitional stages from absolutely safe to unsafe designs.

Abstract The genetic algorithm, known as GA, is used to optimize the engine room structure, not only under static constraints but also under dynamic constraints. A penalty function method is used to handle the complicated constraint conditions based on the numerical results of dynamic and static analyses. There are several ways to take the dynamic effect into account in the optimum design of a ship structure. First, an inequality constraint condition is applied to separate the natural frequency and the exciting frequency. Second, generalized design variables are introduced in order to transfer not only the dynamic but also the static equilibrium equations into equality constraints, resulting in an optimal structural design without the need to solve these equilibrium equations. Third, the magnitudes of the acceleration and displacement are constrained instead of applying the natural frequency constraint condition. In order to achieve better convergency in the optimization with the least resources, several operators and methods are considered and then introduced into the structural design of the engine room. A new operator, called either objective elitism or fitness elitism, is introduced to improve the efficiency of the method. The effect of boundary mutation and nonuniform mutation on the performance of the GA is examined. Not only binary representation but also floating-point representation is used to express the design gene in the GA. Fuzzy theory is applied in the GA to handle the uncertainty of the constraint conditions.
Two ways of solving the fuzzy optimization are investigated in order to obtain a fuzzy solution and a crisp solution.

Key words Genetic algorithm · Optimal structural design · Engine room of ship · Dynamic constraint · Finite element method

Address correspondence to: M. Kitamura (kitamura@naoe.hiroshima-u.ac.jp)
Received: October 2, 2000 / Accepted: November 30, 2000
Updated version of articles that appeared in the Journal of the Society for Naval Architects of Japan, vols. 183, 184 (1998), 185, and 186 (1999). The original articles won the SNAJ prize, which is awarded annually to the best papers selected from the SNAJ Journal, JMST, or other quality journals in the field of naval architecture and ocean engineering.

2 Application of a GA to the optimal design of an engine room structure

2.1 Illustration of the problem

In this study, the objective is to find the design variables X = (x_1, x_2, ..., x_m)^T that minimize the construction cost of an engine room structure, f(X), under the constraint conditions g(X). This kind of problem can be expressed by the formula

f(X) → minimum    (1)

subject to

g_i(X) ≤ 0    for i = 1, 2, 3, ..., n    (2)

where n is the number of constraint conditions.
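As a rough illustration (not the authors' implementation), a problem of the form (1)-(2) can be fed to a minimal GA loop. The toy objective, toy constraint, bounds, and operator choices below are assumptions made only for this sketch:

```python
import random

def toy_objective(x):
    # Hypothetical stand-in for the construction cost f(X).
    return sum(v * v for v in x)

def toy_constraint(x):
    # Hypothetical g(X) <= 0: keep the sum of the variables below 3.
    return sum(x) - 3.0

def run_ga(n_vars=3, pop_size=20, generations=50, p_c=0.7, p_m=0.01, seed=1):
    rng = random.Random(seed)
    lo, hi = -1.0, 1.0

    def fitness(x):
        # Constraint violation is penalized (penalty method, Sect. 2.2).
        return toy_objective(x) + max(0.0, toy_constraint(x))

    pop = [[rng.uniform(lo, hi) for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection of parents.
        parents = [min(rng.sample(pop, 2), key=fitness) for _ in range(pop_size)]
        nxt = []
        for i in range(0, pop_size, 2):
            a, b = parents[i][:], parents[i + 1][:]
            if rng.random() < p_c:                      # arithmetic crossover
                w = rng.random()
                a = [w * u + (1 - w) * v for u, v in zip(a, b)]
            for child in (a, b):
                for k in range(n_vars):
                    if rng.random() < p_m:              # uniform mutation
                        child[k] = rng.uniform(lo, hi)
                nxt.append(child)
        # Elitism: carry the best individual of the old generation forward.
        nxt[0] = min(pop, key=fitness)
        pop = nxt
    return min(pop, key=fitness)

best = run_ga()
```

Note that the loop never needs gradients of f or g, which is one of the GA advantages the paper points out for discrete catalogue-type design variables.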
The constraint conditions considered here for the design of an engine room structure are described below.

Constraint for bending and shear stresses. Static bending and shear stress constraints for an engine room structure, such as web frames and web beams, can be expressed as

σ_i ≤ s_i    (i = 1, 2, ..., n_s)    (3)

where n_s is the number of stress evaluation points, and s_i is the allowable stress for member i.

Constraint for natural frequencies. In order to avoid resonance of the structure with the exciting force, the following constraint condition on the natural frequencies is imposed:

ω_i ≤ ω_1 for i ≤ I,    ω_i ≥ ω_2 for i > I    (4)

Here, ω_i is the i-th natural frequency of the engine room structure, and ω_1 and ω_2 are the frequencies given to define the forbidden frequency zone.

Constraint of design variable ranges. For optimal design problems in engineering, there are physical limitations on the design variables:

x_{i,min} ≤ x_i ≤ x_{i,max}    (i = 1, 2, ..., m)    (5)

where m is the number of design variables.

In Eq. 3, σ_i can be obtained by solving the following structural static equilibrium equation:

KY = P    (6)

Here, K is the stiffness matrix of a finite-element model of the structural design, Y is the nodal displacement solution vector, and P is the given force vector. ω_i (= √ξ_i) in Eq. 4 is calculated by solving the eigenvalue problem

KU_i = ξ_i MU_i    (7)

where M is the mass matrix, ξ_i is the i-th eigenvalue, and U_i is the i-th eigenvector.

2.2 Fitness function and constraint

Most real problems of function optimization involve constraints. A constrained problem can be transformed into an unconstrained problem by associating a penalty with all constraint violations. Minimizing the objective function f(X) is transformed into the optimization of the function

F(X) ≡ f(X) + Σ_{i=1}^{n} δ_i Φ_i(X) → minimum    (8)

where δ_i is a penalty coefficient and Φ_i(X) is a penalty term related to the i-th constraint.
δ_i is a scalar multiplier intended to control the penalty imposed when considering points that violate the constraints, and n is the total number of constraint conditions. A small value of δ_i will allow a wide exploration of the constraint-violation space, while a large value of δ_i places strong restrictions on the constraint.

There is a variety of possible penalty functions which can be applied. In this study, the following form is used as the penalty function:

Φ_i(X) = 0 if g_i(X) ≤ 0;    Φ_i(X) = [g_i(X)]^r if g_i(X) > 0    (9)

In practical applications, r can be selected as 0, 1, or 2. Generally speaking, the objective term and the penalty term should be of the same order of magnitude. If the objective is too large compared with the penalty term, the process of optimization will drive all the chromosomes into the infeasible domain. However, if the penalty terms are much larger than the objective, the selection pressure will become very high, and as a result a few super chromosomes will dominate the selection process, which will lead to premature convergence.

The objective function f, the stress σ, the constraint g, the penalty term δΦ, and the extended objective function F for three cases are listed in Table 1, in which the allowable stress is 18 kgf/mm², the penalty coefficient δ is 1.0, and r is set to 1. Two designs are compared; in all cases f in one design is 10% larger than in the other, and σ is set at 18.1 kgf/mm² in one case and at 18.0 kgf/mm² in the other. Design A1 gives f = 1.0 kgf and σ = 18.1 kgf/mm², while design A2 gives f = 1.1 kgf and σ = 18.0 kgf/mm². Since the difference in the objective function f and the difference in the penalty term δΦ between A1 and A2 are both 0.1, the extended objective functions F for these two designs are both 1.1. Although design A1 does not satisfy the constraint condition, this design should be considered, since its objective is good and the degree of violation of the design constraint is small.
In this sense, the values of the objectives and the penalty terms for case A are well balanced.

The objective functions in case B are 1000 times larger than those in case A, while the stresses and constraint conditions are the same. The extended objective function for design B2 is worse than that for design B1, and this design may die out. The extended objective function for design B2 is almost 110% of that for design B1, which is derived from the value of the objective function directly. Hence, the effect of the objective function is too strong, and the constraint conditions do not affect the selection of a design in case B. However, it should be noted that the ratio of the two objectives in case B is the same as in case A. If "tonf" units are used instead of "kgf" for f in case B, exactly the same F as in case A is obtained. If σ = 30 kgf/mm² in design B1, this design will still be evaluated as the better solution in this case, although the solution is in the infeasible domain.

The objective functions in case C are 1000 times smaller than those in case A. In this case, the effect of the penalty term is too strong, and design C1 may be rejected, since its extended objective is 100 times larger than that of C2. Even if f = 0.1 kgf in design C2, which is 100 times larger than that in design C1, this design will still be a better solution than C1. Satisfying the constraint conditions is most important in this case, and the differences in the values of the objective function do not affect the extended objective function, which leads to a premature solution.

A minimum problem like that in Eq. 8 can be transformed into a maximum problem by adding a negative sign. The fitness function V(X) can be formed by adding a constant C:

V(X) = C − F(X)    (10)

C should be selected to be as small as possible on the condition that V(X) > 0. If C >> F(X), the function V(X) will suffer from much slower convergence than the function F(X).
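The balance argument of Table 1 is easy to check numerically. The sketch below (an illustration, not the paper's code) evaluates the extended objective F = f + δ·max(0, g)^r for the six designs with δ = 1 and r = 1:

```python
def extended_objective(f, sigma, allowable=18.0, delta=1.0, r=1):
    """Penalty-method extended objective F(X) = f(X) + delta * max(0, g)**r (Eqs. 8-9)."""
    g = sigma - allowable          # constraint g(X) = sigma - s <= 0
    penalty = delta * g**r if g > 0 else 0.0
    return f + penalty

# The six designs of Table 1 as (f, sigma) pairs.
designs = {
    "A1": (1.0, 18.1), "A2": (1.1, 18.0),
    "B1": (1000.0, 18.1), "B2": (1100.0, 18.0),
    "C1": (0.0010, 18.1), "C2": (0.0011, 18.0),
}
F = {name: extended_objective(f, s) for name, (f, s) in designs.items()}
# Case A is balanced (identical F); in case B the objective term dominates,
# in case C the penalty term dominates.
```

Scaling f by 1000 in either direction flips which design wins, even though the physical designs are unchanged, which is exactly the unit-sensitivity ("kgf" versus "tonf") the text warns about.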
In this study, C is fixed as the largest value of F(X) in each generation.

2.3 Elitism and mutation

In engine room optimization, the traditional simple GA has two drawbacks in the selection process. First, because the offspring in each generation replace their parents soon after they are born, the best chromosomes of the older generation may be lost. Second, it is difficult to know when the optimal point has been found, because after several generations the values of the individuals oscillate, taking larger and smaller values over subsequent generations. In order to solve these problems, elitist selection is introduced to ensure that the best chromosomes are passed to the new generation. The best chromosomes in a generation are selected according to their fitness and objective values in the infeasible and feasible domains, respectively.

Mutation arbitrarily alters one or more genes of a selected chromosome by a random change at the mutation rate. This operation maintains chromosome diversity and prevents the search from stopping prematurely. The mutation rate is usually held constant throughout the calculations of the GA. However, this is not very efficient, because at the beginning of a GA run a full exploration of the search space is needed.
Therefore, the mutation rate should be higher at the start of a run. However, after the most promising regions of the search space have been found, exploitation becomes more important and the mutation rate should be lowered. Therefore, the mutation rate p_m should follow

p_m^(i+1) = η_i p_m^(i)    (11)

where η_i is the decreasing-rate coefficient, which should be less than 1, and i is the generation number.

2.4 Numerical example 1

The principal dimensions of the ship under consideration in the first numerical example are:

Length between perpendiculars = 70 m
Breadth moulded = 12 m
Draught = 4.14 m
Depth moulded = 7.12 m
Weight of generator = 2 tonf × 2 sets
Weight of main engine = 31 tonf
Blade number = 4
SHP = 1800 h.p.
Engine revolution = 284 r.p.m.
Propeller D_p = 2.40 m

where SHP is shaft horsepower.

The design variables include the cross-sectional sizes of the web frame and web beam members, x_1 and x_2, respectively, the web frame spacings, x_3–x_6, and the hull thickness, x_7, as shown in Fig. 1. Usually, the web frames and the web beams are selected from a limited number of standard shape steel members available commercially, such as the ones listed in Table 2. The four parameters shown in Fig. 1, t_0, t_1, b, and h, cannot be taken as independent design variables representing the cross section. Therefore, the cross section represented by these four parameters should be taken as one design variable. In order to achieve this goal, the serial numbers of standard shape steel members are introduced as the design variables.
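The serial-number encoding can be sketched as a simple lookup. The dictionary below reproduces a few entries of Table 2 (h × b × t_1/t_0 in mm) and is only an illustrative excerpt, not the production database:

```python
# Excerpt of Table 2: serial number -> (h, b, t1, t0) of standard shape steel, in mm.
STANDARD_SECTIONS = {
    1: (200, 120, 8, 8),
    4: (250, 200, 12, 8),     # the optimal web beam found in numerical example 1
    17: (400, 200, 14, 12),
    21: (500, 200, 12, 10),   # the optimal web frame found in numerical example 1
    32: (600, 400, 16, 16),
}

def decode_design_variable(serial):
    """Map a GA design variable (a catalogue serial number) to concrete section sizes."""
    return STANDARD_SECTIONS[serial]

h, b, t1, t0 = decode_design_variable(21)
```

Because the GA manipulates only the integer serial number, the discrete catalogue choice is handled naturally, which is one of the GA advantages noted in the introduction.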
Once a serial number is fixed, the corresponding sizes of the cross section can be obtained from the database.

A simplified three-dimensional finite element model is shown in Fig. 2. In this model, the details of the engine room are modeled as the three-dimensional frame shown in Fig. 3. The rest of the model is simplified to varying cross-sectional beams with virtual added mass. The numbers of elements and nodes in this model are 408 and 208, respectively. The loads imposed on the ship are shown in Fig. 4. The allowable bending stresses are 18 kgf/mm² and 10 kgf/mm² for the web frame and web beam elements, and the longitudinal members, respectively. The main engine has four blades and revolves at 284 r.p.m., and therefore the blade frequency is 4.733 × 4 = 18.93 Hz. To avoid structural resonance, the forbidden frequency band is fixed as [17, 21] Hz.

The unit prices of plate steel and shaped steel are 90000 yen/ton and 105000 yen/ton, respectively. The price of fillet welding per unit welded length in meters, with unit welded leg length in centimeters, is 4200 yen.

Fig. 1. Design variables for the engine room structure

Table 2. Serial number and corresponding cross section of standard steel members (h × b × t1/t0)
 1 | 200 × 120 × 8/8    17 | 400 × 200 × 14/12
 2 | 200 × 125 × 12/8   18 | 400 × 250 × 14/12
 3 | 200 × 150 × 12/8   19 | 400 × 300 × 14/10
 4 | 250 × 200 × 12/8   20 | 400 × 300 × 14/12
 5 | 300 × 125 × 12/10  21 | 500 × 200 × 12/10
 6 | 300 × 150 × 12/10  22 | 500 × 200 × 15/12
 7 | 300 × 200 × 12/10  23 | 500 × 250 × 12/10
 8 | 300 × 200 × 12/12  24 | 500 × 250 × 15/12
 9 | 350 × 150 × 12/10  25 | 500 × 300 × 15/12
10 | 350 × 150 × 12/12  26 | 550 × 200 × 16/11
11 | 350 × 200 × 12/10  27 | 550 × 250 × 16/12
12 | 350 × 200 × 12/12  28 | 550 × 300 × 16/12
13 | 350 × 250 × 12/12  29 | 550 × 400 × 16/12
14 | 400 × 150 × 10/10  30 | 600 × 200 × 16/11
15 | 400 × 150 × 10/12  31 | 600 × 300 × 16/11
16 | 400 × 200 × 12/12  32 | 600 × 400 × 16/16

Fig. 2. Structural model of the whole ship
Fig. 3. Structural model of the engine room
Fig. 4. Loads imposed on web frame 17

Table 3 and curve I in Fig. 5 show the results of the optimization process on this structure. Here, the population size P_size = 20, the probability of crossover p_c = 0.7, and the probability of mutation p_m = 0.01. The optimal values were obtained after about 43 generations, in which the minimal objective value was 6000000 yen. Table 4 shows the history of the design variable modifications in the GA. The optimal cross-sectional sizes for the web beams and web frames are 250 × 200 × 12/8 and 500 × 200 × 12/10, respectively. The web frame spacings x_i (i = 3, 4, 5, 6) were 158 cm, 356 cm, 496 cm, and 695 cm, respectively. The hull thickness, x_7, was 10 mm. The maximum stresses of the transverse elements and longitudinal members were 17.23 kgf/mm² and 5.031 kgf/mm², respectively. The natural frequencies of the engine room structure near the boundary of the dynamic constraints were 15.40 Hz and 21.08 Hz, which are not in the forbidden band. From these results, it can be seen that the values of the structural stresses generally move toward the boundary of the constraints, but do not reach it because of the discrete design variables.

Table 3. History of the optimization process
Generation No. | Cost (yen · 10³) | σT_max (kgf/mm²) | σL_max (kgf/mm²) | f13 (Hz) | f14 (Hz)
 1 | 6789 | 11.55 | 6.360 | 15.408 | 21.323
 5 | 6706 | 15.67 | 5.830 | 15.405 | 21.120
 7 | 6116 | 16.83 | 5.413 | 15.403 | 21.104
20 | 6057 | 17.01 | 5.015 | 15.400 | 21.082
35 | 6015 | 17.20 | 5.027 | 15.400 | 21.084
45 | 6000 | 17.23 | 5.031 | 15.400 | 21.084
σT_max, maximum bending stress of transverse elements; σL_max, maximum bending stress of longitudinal elements

Table 4. Evolution of design values
Generation | x1 (#) | x2 (#) | x3 (cm) | x4 (cm) | x5 (cm) | x6 (cm) | x7 (mm)
 1 | 20 | 31 | 228 | 339 | 599 | 724 | 11
 5 | 13 | 24 | 220 | 312 | 574 | 717 | 10
 7 |  6 | 21 | 223 | 338 | 572 | 729 | 10
20 |  4 | 21 | 177 | 326 | 542 | 701 | 10
35 |  4 | 21 | 158 | 353 | 517 | 695 | 10
45 |  4 | 21 | 158 | 356 | 496 | 695 | 10

Fig. 5. Influence of the penalty coefficient

Figure 5 shows the influence of the penalty term on the optimization process. Curve II converges prematurely because the penalty terms are too large compared with the objective values.
The chromosomes in the infeasible domain die out quickly with successive generations. A few super chromosomes dominate the process. Curve III represents the results of the GA with very small penalty terms. The dotted part means that the solutions are not in the feasible domain. After the second generation, all the chromosomes are driven into the infeasible domain. The GA process cannot avoid violations of the constraints because the penalty terms are too small.

In Fig. 6, P_size = 20, p_c = 0.7, p_m = 0.01, and fitness elitism is introduced for both curve I and curve II. As well as fitness elitism, objective elitism is also introduced for curve I. The figure shows that objective elitism significantly improves the convergence. From curve II, it can be seen that the objective oscillates, especially when the cost value is near the optimal value. From the same curve, it can also be seen that the objective value generally converges to a constant.

The influence of the population size, P_size, can be seen in Fig. 7, in which p_c = 0.7, p_m = 0.01, and both fitness elitism and objective elitism are introduced. Obviously, curve I terminates prematurely. In this study, it seems that at least 20 chromosomes per generation are required for this structural optimization.

The effect of varying the probability of mutation on the performance of the GA is shown in Fig. 8, in which p_c = 0.7, P_size = 20, and both fitness elitism and objective elitism are introduced. Curve I converges quickly at first, but after several generations curve II is better than curve I. The mutation rate for curve II is high at the beginning of the optimization process, and therefore the convergence is not as fast as for curve I. However, because the search range of curve II is more efficient than that of curve I, it later converges faster than curve I.

Eight trials, with p_c = 0.3, 0.4, ..., 1.0, were investigated in this study. The cases of p_c = 0.3, 0.4, 0.5, 0.6 converge prematurely.
The other cases all converge to the optimal value. Among these, p_c = 0.7 gives the best probability of crossover; here the GA reaches the objective minimum at generation 43, while the others do so at about generation 55.

Fig. 6. Influence of objective elitism
Fig. 7. Influence of the population size, P_size
Fig. 8. Influence of the probability of mutation, p_m

3 Application of generalized design variables in the genetic algorithm for the optimum design of a ship's structure

3.1 Illustration of the problem

In calculus-based optimization techniques, the search proceeds from one point to a better one. For structural optimization problems, most of the algorithms require a large number of structural reanalyses. These repeated analyses tend to be too expensive for practical problems. A GA is fundamentally different from traditional optimization techniques, and has advantages in some respects. For example, calculations of the gradient or the Hessian matrix of the objective function and constraints are not required, the solution converges to the global optimal point more easily, and discrete problems can be handled.

The total number of structural reanalyses in a GA is much larger than that in the multiplier method, and consequently takes more computing time. In this section, an attempt is made to eliminate the need to solve the static and dynamic equations by using the concept of generalized design variables and the penalty method.

3.2 A GA without conventional reanalyses of structure

In this study, the generalized design variable vector Z is defined as

Z = {X, Y, ξ}^T    (12)

Here the dependent variables X, Y, and ξ are the vectors containing the ordinary design variables, nodal displacements, and eigenvalues, respectively.
With the generalized design variables Z, the structural optimization problem can be rewritten as

f(Z) → minimum    (13)

subject to the following constraints.

1. Constraint for bending and shear stresses:

σ_i(Z) ≤ s_i    (i = 1, 2, ..., n_s)    (14)

2. Constraint for the static equilibrium equation:

K(Z)Y − P = 0    (15)

3. Constraint for the dynamic equilibrium equation:

det[K(Z) − ξ_i M] = 0    (16)

4. Constraint for design variable ranges:

z_{i,min} ≤ z_i ≤ z_{i,max}    (i = 1, 2, ..., m_g)    (17)

Here, m_g is the number of generalized design variables. Since Z includes the variables ξ, the forbidden frequency zone discussed in the previous section can be covered by the constraint for the design variable ranges. When the equality constraints of Eqs. 15 and 16 with Z are satisfied, the design variables X, Y, and ξ are no longer independent. Hence, conventional analyses of the structure, such as solving Eqs. 6 and 7, are not required in the optimization. This leads to a GA that is less time-consuming than calculus-based optimization techniques.

3.3 Floating-point representation, boundary, and nonuniform mutations

The applicability of a GA with binary representation of strings to the engine room optimization problem under the static stress and dynamic frequency constraints was considered in the previous section. However, for a multidimensional problem with highly precise numerical computation, the search space is too large. Even though, theoretically, the binary alphabet offers the maximum number of schemata per bit of information for any coding, the schema theory is not based on binary alphabets alone. In order to remove this drawback, floating-point representation is introduced.

Since the optimal solution in structural optimization often lies on or near the boundary of the constraints, as shown in the first numerical example, it is essential to introduce the operator of boundary mutation in order to hasten the rate of convergence.
With this mutation, x_k is mutated to either the right-bound value or the left-bound value with equal probability.

Nonuniform mutation is also incorporated in this study, since this mutation is responsible for the fine-tuning capabilities of the solution. The design variable x_k is mutated, and the result of this mutation is

x_k′ = x_k + Δ(t, right(k) − x_k)  if γ_1 = 1
x_k′ = x_k − Δ(t, x_k − left(k))  if γ_1 = 0    (18)

where γ_1 is a random binary digit, and t denotes the number of generations. right(k) and left(k) are the right bound and left bound of x_k. The function Δ(t, y) is taken to be of the following form:

Δ(t, y) = y · γ_2 · (1 − t/T)^b    (19)

where γ_2 is a random number between 0 and 1, T is the maximal generation number, and b is the coefficient for this mutation, which determines the degree of nonuniformity.

3.4 Numerical example 2

The structural model in Fig. 9 is taken as the example to test the performance of the generalized design variables.

Fig. 9. Structural model for the second problem

3.4.1 Optimization under dynamic constraints

The cross-sectional areas of the members determine the volume as well as the stiffness and mass of the structure when the material and geometry are fixed. In this study, b_1, b_2, and b_3 shown in Fig. 9 are fixed as 0.1 m, 0.1 m, and 0.18 m, respectively. x_1, x_2, and x_3 are selected as the design variables, whose ranges are [0.1, 0.7] m. The forbidden eigenvalue band for this example is selected as [2000, 4000]. The goal here is to find the set of generalized design variables, namely x_1, x_2, and x_3, and the corresponding eigenvalue ξ_i, under the dynamic frequency constraints, in order to minimize the volume of the structural members by using the proposed method.

The four approaches shown in Table 5 are calculated with the following GA parameters: P_size = 50, p_m = 0.8, probability of boundary mutation p_bm = 0.06, probability of nonuniform mutation p_nm = 0.05, and coefficient for nonuniform mutation b = 2. The results of the four approaches are summarized in Table 6. The "gen. number" in Table 6 is the final generation number at which the optimization process is stopped according to the termination conditions.

Table 5. The four GA approaches tested
Approach | Representation | Structural reanalysis | Boundary mutation | Nonuniform mutation
A | Binary | yes | no  | no
B | Float  | no  | no  | no
C | Float  | no  | yes | no
D | Float  | no  | yes | yes

Table 6. Results of the four approaches under dynamic constraints
Approach | x1 (m) | x2 (m) | x3 (m) | ξ (rad/s)² | Objective (m³) | Gen. number | CPU time (s)
Exact | 0.1000 | 0.1410 | 0.1000 | 4014.3 | 0.4913 | —   | —
A     | 0.1000 | 0.1385 | 0.1033 | 4023.4 | 0.4966 | 212 | 1852
B     | 0.1090 | 0.1408 | 0.1000 | 4017.6 | 0.5010 | 867 | 396
C     | 0.1036 | 0.1420 | 0.1000 | 4051.9 | 0.4966 | 233 | 108
D     | 0.1000 | 0.1421 | 0.1000 | 4053.4 | 0.4928 | 153 | 78

Table 6 shows that there is no significant difference among the solutions obtained by the four approaches. Comparing the results of A and B shows that the generalized design variable method with floating-point representation can reduce the computational time in spite of the larger generation number. The CPU time used for each generation in B is only one-twentieth of that in A. It is also shown that the boundary and nonuniform mutations can improve the convergency of the GA. The CPU time used in D is only 4.2% of that in A.

Table 7, however, shows that the proposed method may violate several constraints; the sum of the penalties for any method is very small when the GA approaches the optimal point. Table 7 also represents the eigenvalue error caused by the equality constraints. ξ is obtained directly by the proposed method, while ξ̃ is calculated from the x_1, x_2, and x_3 obtained by the GA methods. It is clear that the proposed method does force the equality constraints to be satisfied with acceptable accuracy.

Table 7. Error of eigenvalues
Approach | ξ | ξ̃ | Difference (%) | Penalty (10⁻⁸)
A | 4023.4 | 4023.4 | 0.00 | 0.00
B | 4017.6 | 4016.4 | 0.03 | 1.22
C | 4051.9 | 4051.9 | 0.00 | 0.01
D | 4053.4 | 4052.4 | 0.02 | 5.12

3.4.2 Optimization under static constraints

The structure shown in Fig. 9 is investigated here under static constraints. b_1, b_2, and b_3 are all taken as 0.02 m.
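The equality constraint of Eq. 16 is what lets the GA skip the eigenvalue solve: a candidate pair (x, ξ) is simply penalized by how far det(K − ξM) is from zero. A minimal sketch for a hypothetical 2-DOF spring–mass chain (the stiffness and mass values are illustrative assumptions, not the paper's model):

```python
def det2(a):
    # Determinant of a 2x2 matrix given as [[a11, a12], [a21, a22]].
    return a[0][0] * a[1][1] - a[0][1] * a[1][0]

def dynamic_equality_penalty(k1, k2, m1, m2, xi):
    """Penalty for Eq. 16, |det[K - xi*M]|, for a 2-DOF spring-mass chain.

    K = [[k1 + k2, -k2], [-k2, k2]], M = diag(m1, m2).
    The penalty vanishes only when xi is an exact eigenvalue.
    """
    kxm = [[k1 + k2 - xi * m1, -k2],
           [-k2, k2 - xi * m2]]
    return abs(det2(kxm))

# For k1 = k2 = m1 = m2 = 1, det(K - xi*I) = xi**2 - 3*xi + 1, so the exact
# eigenvalues are (3 +/- sqrt(5)) / 2; at those values the penalty is zero.
xi_exact = (3 - 5 ** 0.5) / 2
```

Treating ξ as just another gene with this penalty costs one determinant per fitness evaluation instead of a full eigenvalue solve, which is the source of the CPU-time savings reported in Table 6.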
RESEARCH ARTICLE

Pseudo-Static and Pseudo-Dynamic Stability Analysis of Tailings Dam Under Seismic Conditions

Debarghya Chakraborty · Deepankar Choudhury

Received: 30 January 2012 / Revised: 8 January 2013 / Accepted: 19 January 2013 / Published online: 12 February 2013
© The National Academy of Sciences, India 2013

Abstract In this paper, seismic slope stability analyses are performed for a typical section of a 44 m high water-retention-type tailings earthen dam located in the eastern part of India, using both the conventional pseudo-static and the recent pseudo-dynamic methods. The tailings earthen dam is analyzed for different upstream conditions of the reservoir, such as being filled with compacted and non-compacted dumped waste materials with different water levels in the pond tailings portion. The phreatic surface is generated using seepage analysis in the geotechnical software SEEP/W, and the same is used in the pseudo-static and pseudo-dynamic analyses to make the approach more realistic. The minimum values of the factor of safety using the pseudo-static and pseudo-dynamic methods are obtained as 1.18 and 1.09, respectively, for the chosen seismic zone in India. These values of the factor of safety clearly show the demerits of the conventional pseudo-static analysis compared to the recent pseudo-dynamic analysis, in which, in addition to the seismic accelerations, the duration and frequency of the earthquake, the body waves traveling during the earthquake, and amplification effects are considered.

Keywords Tailings earthen dam · Slope stability · Seismic load · Amplification · Seepage · Safety factor

Introduction

Seismic slope stability analysis of embankments/dams is one of the most important tasks before construction. Especially for important safety-related structures like tailings dams, it becomes very essential, since the failure of a tailings dam which stores waste material will release the toxic and/or corrosive stored tailings waste into the surrounding locality, causing a disaster for mankind in that vicinity. Available literature indicates that a significant number of tailings earthen dams have failed during earthquakes because of slope failure. To reduce such phenomena, static as well as seismic analyses must be performed before constructing the tailings dam.

Since the 1920s, the seismic stability of earthen dams has been analyzed by the pseudo-static approach, in which the effects of earthquakes are represented by constant horizontal and/or vertical seismic accelerations. Choudhury et al. [1] have provided some guidelines regarding the seismic design of earthen dams. Though the pseudo-static analysis is simple and straightforward, the representation of the complex, dynamic effect of earthquake shaking by constant horizontal and/or vertical accelerations is actually rather crude. These horizontal and vertical seismic forces are expressed as the product of the horizontal and vertical seismic acceleration coefficients (k_h and k_v) and the weight of the potential sliding mass. In the pseudo-static case, the values of k_h and k_v are assumed to be constant. Moreover, this method does not consider the effects of time, frequency, and body waves traveling through the soil during the earthquake.

D. Chakraborty
Department of Civil Engineering, Indian Institute of Science, Bangalore 560012, India
e-mail: debarghya@civil.iisc.ernet.in

D. Chakraborty · D. Choudhury (corresponding author)
Department of Civil Engineering, Indian Institute of Technology Bombay, Powai, Mumbai 400076, India
e-mail: dc@civil.iitb.ac.in; dchoudhury@iitb.ac.in

Proc. Natl. Acad. Sci., India, Sect. A Phys. Sci. (January–March 2013) 83(1):63–71
DOI 10.1007/s40010-013-0069-5

In order to investigate the dynamic response of earthen embankments, Clough and Chopra [2] performed a two-dimensional plane-strain analysis using the finite element method. A few researchers have used software packages which are primarily
based on the finite element or finite difference method. Seid-Karbasi and Byrne [3] carried out a seismic analysis of a tailings earthen dam using FLAC [4]. Using PLAXIS [5] and TELDYN [6], Zhu et al. [7] obtained the seismic stability of a levee embankment. To investigate the seismic behaviour along with the liquefaction susceptibility of a tailings dam, Chakraborty and Choudhury [8–10] used FLAC3D [11] and TALREN4 [12]. Zeng et al. [13] conducted centrifuge model tests to determine the stability of coal-waste tailings dams. To make the seismic analytical approach more realistic, Steedman and Zeng [14] proposed a simple pseudo-dynamic analysis of seismic earth pressure which considers the phase-difference effect within the backfill behind a retaining wall, accounting only for horizontal seismic acceleration. In this method it is assumed that the shear modulus is constant with the depth of soil. Choudhury and Nimbalkar [15,16] further developed this pseudo-dynamic method of analysis and proposed a theory to compute the seismic earth pressure by considering both the shear and the primary waves propagating through the soil, with variation in time, under harmonic horizontal and vertical seismic accelerations. The application of the pseudo-static method to the design of waterfront retaining walls was described by Choudhury and Ahmad [17]. Choudhury and Ahmad [18] and Choudhury and Nimbalkar [19] demonstrated the successful application of the pseudo-dynamic method over the conventional pseudo-static method for retaining wall design.
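The contrast between the two idealizations can be sketched numerically. The following Python snippet (an illustration added here, not part of the original paper) evaluates the shear and primary wave velocities and the fundamental period on which the pseudo-dynamic method relies, using the shell-material values quoted later in this paper (G = 190.25 MPa, ρ = 1830 kg/m³, ν = 0.3, H_t = 44 m, k_h = 0.15); the sample depth z and time t in the last lines are hypothetical illustration points.

```python
import math

# Shell-material parameters quoted in this paper
G = 190.25e6   # maximum shear modulus, Pa
rho = 1830.0   # soil density, kg/m^3
nu = 0.3       # Poisson's ratio
H_t = 44.0     # dam height above the toe, m
k_h = 0.15     # horizontal seismic coefficient (zone II)

# Wave velocities assumed by the pseudo-dynamic method
V_s = math.sqrt(G / rho)                                          # shear wave
V_p = math.sqrt(G * (2.0 - 2.0 * nu) / (rho * (1.0 - 2.0 * nu)))  # primary wave

# Fundamental period of the dam, T = 2.9 * H_t * sqrt(rho / G)
# (the paper quotes this, rounded, as 0.39 s)
T = 2.9 * H_t * math.sqrt(rho / G)

print(f"V_s = {V_s:.0f} m/s, V_p = {V_p:.0f} m/s, T = {T:.2f} s")

# Pseudo-static idealization: horizontal inertia force on a mass W is
# simply F_h = k_h * W, constant in time. Pseudo-dynamic idealization:
# a_h(z, t) = k_h * g * sin(omega * (t - (H_t - z)/V_s)) varies with
# both depth and time. Sample point below is purely illustrative.
g = 9.81
omega = 2.0 * math.pi / T
z, t = 22.0, 0.1   # hypothetical depth (m) and time (s)
a_h = k_h * g * math.sin(omega * (t - (H_t - z) / V_s))
print(f"a_h(z={z} m, t={t} s) = {a_h:.3f} m/s^2 (pseudo-static: {k_h * g:.3f} m/s^2)")
```

The computed V_s ≈ 322 m/s and V_p ≈ 603 m/s match the values used in the paper's analysis; the time- and depth-dependence of a_h is precisely what the pseudo-static method discards.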
Moreover, Choudhury and Ahmad [17,18] highlighted the effect of hydrodynamic water pressure during earthquake events on the design of waterfront retaining walls. Hence, the incorporation of all dynamic parameters in seismic analysis makes the pseudo-dynamic method more practical. In this paper, seismic slope stability analyses are performed for a typical section of a 44 m high water retention type tailings earthen dam, which stores non-radioactive nuclear waste material and is located in the eastern part of India, falling under seismic zone II of the Indian seismic zone map (as per [20]). The seismic slope stability analyses are performed using both the conventional pseudo-static and the recent pseudo-dynamic method.

Method of Analysis

The seismic slope stability analysis is performed for a typical section of a water retention type tailings earthen dam storing non-radioactive nuclear waste material on the upstream side. Figure 1 shows the tailings earthen dam, which is 44 m high, 224 m wide and 655 m long, with a 4 m wide horizontal crest. The slopes of the upstream and downstream sides are 1V:2.5H. Table 1 presents the properties of the various components of the tailings dam. The dam is analyzed for two different reservoir conditions:
(i) C-1: the water table is at 3.5 m below the existing ground level;
(ii) C-2: the water level in the reservoir is up to the top surface of the pond tailings portion.
Selection of an appropriate seismic coefficient is a very important part of the pseudo-static and pseudo-dynamic analyses. Since the location of the dam falls under seismic zone II of the Indian seismic zone map (as per [20]), the horizontal seismic acceleration coefficient is taken as k_h = 0.15 and the vertical seismic acceleration coefficient as k_v = k_h/2 = 0.075 (as per [21]). The limit equilibrium method is used to determine the factor of safety (FS) against slope failure. As per the limit equilibrium method, the FS of a slope is given as

FS = resisting force / driving force   (1)

Water
Table is at 3.5 m Below the Existing Ground Level (C-1)

Pseudo-static analysis for C-1

For this condition, the details of the forces along with the failure surface for the pseudo-static analysis of the tailings dam are shown in Fig. 2. From Fig. 2, for the pseudo-static analysis it can be written that

Resisting force = c(l_AB cos β + l_BC cos α)
  + [{(W_1 + W_2) − (F_v1 + F_v2)} cos α − (F_h1 + F_h2) sin α] tan φ cos α
  + [{(W_3 + W_4) − (F_v3 + F_v4)} cos β − (F_h3 + F_h4) sin β] tan φ cos β

Driving force = [{(W_1 + W_2) − (F_v1 + F_v2)} sin α cos α + (F_h1 + F_h2) cos²α]
  + [{(W_3 + W_4) − (F_v3 + F_v4)} sin β cos β + (F_h3 + F_h4) cos²β]

where c = cohesion of the soil in the shell and φ = soil friction angle in the shell, and

l_AB = (44 − h)/sin β; l_BC = h/sin α; l_BF = (44 − h)/tan β − 2.5(44 − h); l_BE = l_BF/2.5; l_CE = l_BE/tan α

Again, l_BF can be determined as l_BF = 2.5h − h tan(90° − α). Equating the two expressions for l_BF gives

α = 90° − tan⁻¹{(110 tan β − (44 − h))/(h tan β)}

Now, F_hi = k_h W_i and F_vi = k_v W_i (here, i = 1, 2, 3 and 4), where W_1 = 0.5γ l_CE (h − l_BE), W_2 = 0.5γ l_CE l_BE, W_3 = 0.5γ l_BF l_BE, W_4 = 0.5γ l_BF (44 − h), and γ is the unit weight of the soil in the shell portion of the dam. The maximum values of β and α are bounded as follows:

β_max < tan⁻¹(44/110) = 21.8°; α_max < 90°

Using Eq. (1), the FS values are determined for β from 2° to 19° and for different values of h (as shown in Fig. 2) from 15 to 42 m.

Pseudo-dynamic analysis for C-1

In the present study, the pseudo-dynamic method is also used, in which the finite shear and primary wave velocities (V_s and V_p) are considered. The assumptions of the pseudo-dynamic method are as follows:
(a) the shear modulus is constant with depth;
(b) the seismic accelerations acting on the slope are assumed to be harmonic sinusoidal accelerations.
Under earthquake conditions, the shear wave velocity (V_s) and the primary wave velocity (V_p) can be expressed as V_s = (G/ρ)^1/2 and V_p = (G(2 − 2ν)/ρ(1 − 2ν))^1/2 respectively.

Table 1 Properties of various components of the tailings dam (modified after Chakraborty and Choudhury [8])

Parameters | Core | Shell | Compacted tailings | Pond tailings | Foundation soil layer
Density (kg/m³) | 1640 | 1830 | 1900 | 1900 | 1830
Cohesion c (kPa) | 35 | 31.25 | 14.7 | 14.7 | 31.25
Friction angle φ | 28° | 28° | 15.2° | 12° | 28°
Shear modulus (MPa) | 53.56 | 190.25 | 95.39 | 45.64 | 217.35
Poisson's ratio | 0.4 | 0.3 | 0.35 | 0.35 | 0.2
Porosity | 0.25 | 0.3 | 0.25 | 0.25 | 0.3
Permeability (m/s) | 1×10⁻¹⁰ | 1×10⁻⁸ | 1×10⁻⁸ | 1×10⁻⁸ | 1×10⁻⁸

Here, G is the maximum shear modulus of the soil = 190.25 MPa (for the shell portion), ρ is the density of the soil = 1830 kg/m³ (for the shell portion), and ν is the Poisson's ratio of the soil = 0.3 (for the shell portion). So, in the present analysis, V_s = 322 m/s and V_p = 603 m/s. The fundamental period (T) is determined by T = 2.9 H_t √(ρ/G) (where H_t = height of the dam above the toe of the slopes = 44 m) and is computed as 0.39 s [21]. It is assumed that both the horizontal and vertical vibrations, with accelerations a_h (= k_h g) and a_v (= k_v g) respectively, start exactly at the same time, with no phase shift between the two [15,16]. For this condition, the details of the forces along with the failure surface for the pseudo-dynamic analysis of the tailings dam are shown in Fig. 3. From Fig. 3, for the pseudo-dynamic analysis it can be written that

Resisting force = c(l_AB cos β + l_BC cos α)
  + [{(W_1 + W_2) − (Q_v1(t) + Q_v2(t))} cos α − (Q_h1(t) + Q_h2(t)) sin α] tan φ cos α
  + [{(W_3 + W_4) − (Q_v3(t) + Q_v4(t))} cos β − (Q_h3(t) + Q_h4(t)) sin β] tan φ cos β

Driving force = [{(W_1 + W_2) − (Q_v1(t) + Q_v2(t))} sin α cos α + (Q_h1(t) + Q_h2(t)) cos²α]
  + [{(W_3 + W_4) − (Q_v3(t) + Q_v4(t))} sin β cos β + (Q_h3(t) + Q_h4(t)) cos²β]

Now, for segment 1 the horizontal and vertical seismic inertia forces are

Q_h1(t) = ∫₀^h₁ m_1(z) a_h1(z, t) dz and Q_v1(t) = ∫₀^h₁ m_1(z) a_v1(z, t) dz

respectively. The mass of a thin element of the failing mass in segment 1, of thickness dz at a depth z from the crest level, is m_1(z) = {2.5z − z tan(90° − α)}(γ/g) dz, with

a_h1(z, t) = a_h sin ω(t − (h_1 − z)/V_s) and a_v1(z, t) = a_v sin ω(t − (h_1 − z)/V_p)

Carrying out the integrations gives

Q_h1(t) = (γ k_h/4π²)(2.5 − 1/tan α)[λ² sin 2π(t/T) − λ² sin 2πξ_1 − 2π h_1 λ cos 2π(t/T)]
Q_v1(t) = (γ k_v/4π²)(2.5 − 1/tan α)[η² sin 2π(t/T) − η² sin 2πψ_1 − 2π h_1 η cos 2π(t/T)]

where λ = T V_s, ξ_1 = (t/T) − (h_1/V_s T), η = T V_p and ψ_1 = (t/T) − (h_1/V_p T). Similarly, for segments 2, 3 and 4 the horizontal inertia forces (Q_h2(t), Q_h3(t), Q_h4(t)) and vertical inertia forces (Q_v2(t), Q_v3(t), Q_v4(t)) are determined. Using Eq. (1), the FS values are determined for various values of β and h (as shown in Fig. 3). Finally, using an optimization technique, the minimum value of the pseudo-dynamic FS is obtained.

Water Level in the Reservoir is up to the Top Surface of the Pond Tailings Portion (C-2)

Pseudo-static analysis for C-2

After a seepage analysis using the SEEP/W [22] software package, the actual phreatic surface through the dam section is obtained as shown in Fig. 4. In Fig. 4, the co-ordinates of a few points on the phreatic line are shown, taking the middle of the core portion of the dam at the existing ground level as the (0, 0) point of the co-ordinate system. In order to calculate the factor of safety using the pseudo-static and pseudo-dynamic methods, the phreatic surface is modified as shown in Fig. 5 (the firm line). It is considered that the failure surface passes beyond the phreatic surface through the dam section. Therefore, for this case of analysis, when the water level in the reservoir is up to the top of the pond tailings portion, the values of h_1 and h_2 (Figs. 6 and 7) are taken for the analysis as h_1 ≥ (44 − 25) = 19 m, β ≤ tan⁻¹(25/114) = 12.3° and h_2 ≥ 20 m. It should be mentioned that the incorporation of the seepage effect on the seismic stability of the tailings earthen dam complicates the problem. In order to reduce this complexity, the soil properties of the tailings portion of the dam are considered to be the same as those of the shell portion of the dam, though their actual properties are marginally different from each
other. This can be considered a limitation of the present analysis. For this condition, the details of the forces along with the failure surface for the pseudo-static analysis of the tailings dam are shown in Fig. 6, and for the pseudo-static analysis it can be written that

Resisting force = (c′_1 l_ABC + c′_2 l_CD + c′_1 l_DE)
  + [{(W_1 + W_2 + W_5) − (F_v1 + F_v2 + F_v5)} cos β − (F_h1 + F_h2 + F_h5) sin β − U_1] tan φ′_1
  + [{(W_3 + W_6 + W_8) − (F_v3 + F_v6 + F_v8)} cos β − (F_h3 + F_h6 + F_h8) sin β − U_2] tan φ′_1
  + [{(W_4 + W_7 + W_9 + W_10) − (F_v4 + F_v7 + F_v9 + F_v10)} cos β − (F_h4 + F_h7 + F_h9 + F_h10) sin β − U_3] tan φ′_1

Driving force = [{(W_1 + W_2 + W_5) − (F_v1 + F_v2 + F_v5)} sin β + (F_h1 + F_h2 + F_h5) cos β]
  + [{(W_3 + W_6 + W_8) − (F_v3 + F_v6 + F_v8)} sin β + (F_h3 + F_h6 + F_h8) cos β]
  + [{(W_4 + W_7 + W_9 + W_10) − (F_v4 + F_v7 + F_v9 + F_v10)} sin β + (F_h4 + F_h7 + F_h9 + F_h10) cos β]

where c′_1 (= 31.25 kN/m²) and c′_2 (= 35 kN/m²) are the effective cohesion of the soil in the shell portion and core portion of the dam respectively, and φ′_1 = 28° and φ′_2 = 28° are the effective soil friction angles of the soil in the shell portion and core portion of the dam respectively. Also,

h_1 = 44 − 114 tan β and h_2 = 44 − 110 tan β; l_ABC = 110/cos β; l_CD = 4/cos β; l_DE = h_1/sin β and l_EF = h_1/tan β

Now, F_hi = k_h W_i and F_vi = k_v W_i (here, i = 1, 2, 3, 4, …, 10), with

W_DEF = W_1 + W_2 + W_3 = 0.5γ_1,sat l_EF h_1; W_CDFG = W_3 + W_6 + W_8 = 0.5γ_2,sat (h_1 + h_2)(4); W_GHI = W_4 = 0.5γ_1,bulk (19)(2.5 × 19); W_ABCJ = W_10 = 0.5γ_1,sat (2.5 h_2)(44 − h_2) and W_CHIJ = W_7 + W_9 = 0.5γ_1,sat (2.5 h_2 − 2.5 × 19)(h_2 − 19)

where γ_1,sat (= 21.3 kN/m³) and γ_2,sat (= 18.9 kN/m³) are the saturated unit weights of the soil in the shell and core portions of the dam respectively, and γ_1,bulk = bulk unit weight of the soil in the shell portion = 18.3 kN/m³. U is the force due to pore water pressure, where

U_1 = 0.5(h_1 γ_w) l_DE; U_2 = 0.5(h_2 γ_w + h_1 γ_w) l_CD; U_3 = (γ_w/(2 cos β))[{(h_2 − 19) + 47.5 tan β}(47.5) + {47.5 tan β + (h_2 − 19)}(62.5)]

γ_w = unit weight of water = 10 kN/m³. Using Eq. (1), for β from 2° to 12.3°, the FS
values are calculated.

Pseudo-dynamic analysis for C-2

For this condition, the details of the forces along with the failure surface for the pseudo-dynamic analysis of the tailings dam are shown in Fig. 7. From Fig. 7, for the pseudo-dynamic analysis it can be written that

Resisting force = (c′_1 l_ABC + c′_2 l_CD + c′_1 l_DE)
  + [{(W_1 + W_2 + W_5) − (Q_v1(t) + Q_v2(t) + Q_v5(t))} cos β − (Q_h1(t) + Q_h2(t) + Q_h5(t)) sin β − U_1] tan φ′_1
  + [{(W_3 + W_6 + W_8) − (Q_v3(t) + Q_v6(t) + Q_v8(t))} cos β − (Q_h3(t) + Q_h6(t) + Q_h8(t)) sin β − U_2] tan φ′_2
  + [{(W_4 + W_7 + W_9 + W_10) − (Q_v4(t) + Q_v7(t) + Q_v9(t) + Q_v10(t))} cos β − (Q_h4(t) + Q_h7(t) + Q_h9(t) + Q_h10(t)) sin β − U_3] tan φ′_1

Driving force = [{(W_1 + W_2 + W_5) − (Q_v1(t) + Q_v2(t) + Q_v5(t))} sin β + (Q_h1(t) + Q_h2(t) + Q_h5(t)) cos β]
  + [{(W_3 + W_6 + W_8) − (Q_v3(t) + Q_v6(t) + Q_v8(t))} sin β + (Q_h3(t) + Q_h6(t) + Q_h8(t)) cos β]
  + [{(W_4 + W_7 + W_9 + W_10) − (Q_v4(t) + Q_v7(t) + Q_v9(t) + Q_v10(t))} sin β + (Q_h4(t) + Q_h7(t) + Q_h9(t) + Q_h10(t)) cos β]

Now, for segment 1 the horizontal and vertical seismic inertia forces are

Q_h1(t) = (γ_1,sat k_h/(4π² tan β))[λ_1² sin 2πξ_1 − λ_1² sin 2π(t/T) + 2π(19)λ_1 cos 2πξ_1]
Q_v1(t) = (γ_1,sat k_v/(4π² tan β))[η_1² sin 2πψ_1 − η_1² sin 2π(t/T) + 2π(19)η_1 cos 2πψ_1]

where λ_1 = T V_s1, ξ_1 = (t/T) − (19/V_s1 T), η_1 = T V_p1, ψ_1 = (t/T) − (19/V_p1 T), and V_s1 = 322 m/s and V_p1 = 603 m/s for the shell, V_s2 = 182 m/s and V_p2 = 338 m/s for the core. Similarly, for segments 2–10 the horizontal inertia forces (Q_h2(t)–Q_h10(t)) and vertical inertia forces (Q_v2(t)–Q_v10(t)) are determined. Using Eq. (1), for β from 2° to 12.3°, the FS values are determined, and using the optimization technique the minimum value of the pseudo-dynamic FS is obtained.

Results and Discussions

When the water table is at 3.5 m below the existing ground level (C-1), the results of the pseudo-static and pseudo-dynamic analyses are presented in Figs. 8 and 9 respectively. From Fig. 8, the minimum value of the pseudo-static FS is obtained as 1.52 for β = 3° and h = 41 m, and from
Fig. 9 the minimum value of the pseudo-dynamic FS is obtained as 1.49 for β = 4° and h = 40 m. When the water level in the reservoir is up to the top surface of the pond tailings portion (C-2), the results of the pseudo-static and pseudo-dynamic analyses are presented in Fig. 10. It should be mentioned that under this condition the seismic stability analysis is carried out with the assumption of a single planar failure surface. As a consequence of that assumption, the distance of the failure surface from the crest of the dam does not come into consideration; the factor of safety depends only on the angle β (i.e. the angle between the failure plane and the horizontal ground surface). However, for case C-1, the seismic stability analysis is carried out by considering a bilinear failure surface. As a consequence of that assumption, the distance (h, refer Fig. 2) of the failure surface from the crest of the dam does come into consideration, and the factor of safety depends on both the angle β and h. For a particular angle β there is a particular value of h for which the factor of safety becomes lowest, or critical. So, for case C-1, a figure similar to Fig. 10 is not possible to draw.

Fig. 8 Variation of FS values with the variation of the values of h for different values of β, with k_h = 0.15 and k_v = 0.075 (for C-1) using pseudo-static analysis
Fig. 9 Variation of FS values with the variation of the values of h for different values of β, with k_h = 0.15 and k_v = 0.075 (for C-1) using pseudo-dynamic analysis

From Fig. 10, the minimum value of the pseudo-static FS is obtained as 1.18 for β = 10°, and the minimum value of the pseudo-dynamic FS is obtained as 1.09 for β = 11.5°. The static factor of safety of the slope of the dam is obtained by setting the values of k_h and k_v equal to zero (0, 0). The minimum value of the static factor of safety of the slope of the tailings earthen dam when the water table is 3.5 m below the existing ground surface is obtained as 2.28 for β = 5° and h
= 40 m. Similarly, the minimum value of the static factor of safety of the slope of the tailings earthen dam when the water level in the reservoir is up to the top of the pond tailings portion is obtained as 2.2 for β = 12.3°. It can be noted that, with the consideration of the seepage effect, the reduction in the factor of safety values from static to seismic analysis is about 46.4% (for the pseudo-static analysis) to 50.5% (for the pseudo-dynamic analysis). However, in the absence of seepage, the reduction in the factor of safety values is about 33.3% (pseudo-static) to 34.7% (pseudo-dynamic). This clearly indicates the importance of considering the seepage effect when designing a tailings earthen dam under seismic loading conditions. This observation is along similar lines to that reported by Choudhury and Ahmad [17,18] for the design of waterfront retaining walls under seismic conditions.

Conclusions

Pseudo-static and pseudo-dynamic methods of analysis are used for the seismic stability analysis of a 44 m high water retention type tailings dam in eastern India, which stores non-radioactive nuclear waste material on the upstream side. For the case when the water table is at 3.5 m below the existing ground level, the pseudo-static factor of safety is obtained as 1.52 and the pseudo-dynamic factor of safety as 1.49 for k_h and k_v values of 0.15 and 0.075 respectively. When the water level in the reservoir is up to the top of the pond tailings portion, the factor of safety values are 1.18 and 1.09 using the pseudo-static and pseudo-dynamic approaches respectively. Now, following Seed [23], for safe design under seismic loading conditions, the factor of safety against slope failure should be greater than 1.15. Hence, when the water level in the reservoir is up to the top of the pond tailings portion, though the pseudo-static analysis indicates that the dam is safe under seismic loading, the pseudo-dynamic results infer that the dam is unsafe in
presence of seismic forces. This clearly shows the demerits of the conventional pseudo-static analysis compared to the recent pseudo-dynamic analysis, in which, in addition to the seismic accelerations, the duration and frequency of the earthquake and the body waves traveling during the earthquake are considered.

Fig. 10 Variation of the pseudo-static and pseudo-dynamic FS values with the variation of the values of β, for k_h = 0.15 and k_v = 0.075 (for C-2)

Acknowledgement The authors would like to acknowledge the financial support received from the Atomic Energy Regulatory Board (AERB), Mumbai, Government of India, for funding the sponsored research project no. AERB/CSRP/31/07 to carry out the present research study.

References
1. Choudhury D, Sitharam TG, Subba Rao KS (2004) Seismic design of earth retaining structures and foundations. Curr Sci 87:1417–1425
2. Clough RW, Chopra AK (1966) Earthquake stress analysis in earth dams. J Eng Mech ASCE 92(EM2):197–211
3. Seid-Karbasi M, Byrne P (2004) Embankment dams and earthquakes. Hydropower Dams 2:96–102
4. FLAC (2000) Fast Lagrangian analysis of continua user's guide, version 4. Itasca Consulting Group, Minneapolis
5. PLAXIS (2002) PLAXIS 2D: reference manual, version 8.0. Plaxis BV, Delft
6. TELDYN (1998) TELDYN—user's manual. TAGAsoft Limited, Lafayette
7. Zhu Y, Lee K, Collison GH (2005) A 2D seismic stability and deformation analysis. In: Geo-Frontiers 2005, ASCE, Austin, pp 1–15
8. Chakraborty D, Choudhury D (2009) Investigation of the behavior of tailings earthen dam under seismic conditions. American J Eng Appl Sci 2(3):559–564
9. Chakraborty D, Choudhury D (2011) Seismic behavior of tailings dam using FLAC3D. ASCE Geotechnical Special Publication No. 211:3138–3147
10. Chakraborty D, Choudhury D (2012) Seismic stability and liquefaction analysis of tailings dam. Disaster Adv 5(3):15–25
11. FLAC3D (2006) Fast Lagrangian analysis of continua in 3 dimensions user's guide, version 3.1. Itasca Consulting Group, Minneapolis
12. TALREN4 (2007) TALREN4 user's guide, version 2.0.3. Terrasol Geotechnical Consulting
Engineers, Montreuil Cedex
13. Zeng X, Wu J, Rohlf RA (1998) Seismic stability of coal-waste tailings dams. ASCE Geotechnical Special Publication No. 75:950–961
14. Steedman RS, Zeng X (1990) The influence of phase on the calculation of pseudo-static earth pressure on retaining wall. Geotechnique 40(1):103–112
15. Choudhury D, Nimbalkar S (2005) Seismic passive resistance by pseudo-dynamic method. Geotechnique 55(9):699–702
16. Choudhury D, Nimbalkar S (2006) Pseudo-dynamic approach of seismic active earth pressure behind retaining wall. Geotech Geol Eng 24(5):1103–1113
17. Choudhury D, Ahmad SM (2007) Design of waterfront retaining wall for the passive case under earthquake and tsunami. Appl Ocean Res 29:37–44
18. Choudhury D, Ahmad SM (2008) Stability of waterfront retaining wall subjected to pseudodynamic earthquake forces. J Waterway Port Coastal Ocean Eng ASCE 134(4):252–260
19. Choudhury D, Nimbalkar S (2008) Seismic rotational displacement of gravity walls by pseudodynamic method. Int J Geomech ASCE 8(3):169–175
20. IS:1893—Part 1 (2002) Indian standards criteria for earthquake resistant design of structures. Bureau of Indian Standards (BIS), New Delhi
21. IS:1893 (1984) Indian standard criteria for earthquake resistant design of structures. Reaffirmed 2003, fourth revision. Bureau of Indian Standards (BIS), New Delhi
22. SEEP/W (2007) SEEP/W user's guide, version 7.03. Geo-Slope International Ltd., Calgary
23. Seed HB (1979) Considerations in the earthquake-resistant design of earth and rockfill dams. Geotechnique 29(3):215–263