The Design of SMS Based Heterogeneous Mobile Botnet
Xiaoping Du Department of Mechanical and AerospaceEngineering,University of Missouri–Rolla,Rolla,MO65409e-mail:dux@Agus SudjiantoFord Motor Company,Dearborn,MI48121-4091e-mail:asudjian@Wei Chen* Department of Mechanical Engineering,Northwestern University,Evanston,IL60208-3111 e-mail:weichen@ An Integrated Framework for Optimization Under Uncertainty Using Inverse Reliability Strategy In this work,we propose an integrated framework for optimization under uncertainty that can bring both the design objective robustness and the probabilistic design constraints into account.The fundamental development of this work is the employment of an inverse reliability strategy that uses percentile performance for assessing both the objective ro-bustness and probabilistic constraints.The percentile formulation for objective robustness provides us an accurate evaluation of the variation of an objective performance and a probabilistic measurement of the robustness.We can obtain more reasonable compound noise combinations for a robust design objective compared to using the traditional ap-proach proposed by Taguchi.The proposed formulation is very efficient to solve since it only needs to evaluate the constraint functions at the required reliability levels.The other major development of this work is a new search algorithm for the Most Probable Point of Inverse Reliability(MPPIR)that can be used to efficiently evaluate percentile perfor-mances for both robustness and reliability assessments.Multiple strategies are employed in the MPPIR search,including using the steepest ascent direction and an arc search.The algorithm is applicable to general non-concave and non-convex performance functions of random variables following any continuous distributions.The effectiveness of the MPPIR search algorithm is verified using example problems.Overall,an engineering example on integrated robust and reliability design of a vehicle combustion engine piston is used to illustrate the benefits of our proposed method.͓DOI:10.1115/1.1759358͔1IntroductionRecent years have seen many developments of methods for design under uncertainty.Among these developments,Robust de-sign͓1–2͔and reliability-based design͓3͔represent two major paradigms for design under uncertainty.It should be pointed out that the emphases of these two paradigms are different.Robust design is a method for improving the quality of a product through minimizing the effect of variation without eliminating the causes ͓4͔.It emphasizes on achieving the robustness of performance͑for design objective͒.On the other hand,the reliability-based designapproach focuses on maintaining design feasibility͑for designconstraints͒at expected probabilistic levels.It is our belief that fordesign under uncertainty,the needs of robustness and reliabilityshould be integrated.A common challenge that designers face when using either ro-bust design or reliability-based design is the computational ex-pense.Although methods have been developed for improving thecomputational efficiency of either robust design or reliability-based design,these methods are distinctly different because of thedifferent emphases these two paradigms have.Specifically,underthe robust design paradigm,mean and variance of performanceare evaluated for assessing the design objective and the probabi-listic constraints are simplified using either the worst-case sce-nario͑sensitivity analysis based͒or the moment-matching formu-lation͓4–10͔.The commonly used method to evaluate theperformance deviation͑or variance͒in robust 
design is thefirstorder Taylor expansion.If the variances of random variables arelarge and the performance function is highly nonlinear,this ap-proach may result in large errors.The use of Monte Carlo simu-lation for evaluating probabilistic characteristics is generally notaffordable in many design applications.Under the reliability-based design paradigm,methods have been developed for efficiently assessing the probability of con-straints being feasible͑or called reliability͒.Many of these meth-ods are based on the concept of the Most Probable Point͓11–15͔, which emphasizes on assessing the tail performance of a probabi-listic constraint.In reliability-based design,deterministic objec-tives such as the performance at the mean values of random vari-ables are often used.To overcome the difficulties and inefficiency associated with nested double-loop procedures,sequential single-loop methods that separate the inner probabilistic assessment loop and the outer optimization loop have been proposed͓16–19͔.It is our belief that both robustness and reliability are desired characteristics for design under uncertainty.Therefore these two paradigms need to be integrated in a unified probabilistic optimi-zation formulation.Although attempts have been made to inte-grate the robustness into the reliability-based design͓20,21͔,none of the existing works addresses the development of efficient com-putational techniques to facilitate the assessments of both robust-ness and reliability characteristics in searching the probabilistic optimal solution.In this work,we propose an integrated frame-work for optimization under uncertainty that can efficiently bring both the design objective robustness and the probabilistic design constraints into account.Two major developments are involved. The fundamental development is the employment of an inverse reliability strategy͓17,18,22–24͔that uses percentile performance for assessing both the robustness objective and probabilistic con-straints.The probabilistic constraints are formulated as inverse reliability constraints,which are assessed by equivalent percentile performances͑inverse reliability formulation͒;while the robust-ness is achieved through a design objective in which the variation of a design performance is approximately evaluated through the percentile performance difference between the right and left tails of performance distribution.Corresponding to the use of percen-tile performance,the other major development is a new search algorithm of Most Probable Point of Inverse Reliability͑MPPIR͒. 
The new algorithm is used to efficiently evaluate the robustness and reliability in the proposed formulation.In the remaining part*Corresponding author.Contributed by the Design Automation Committee for publication in the J OUR-NAL OF M ECHANICAL D ESIGN.Manuscript received January2003;revised January2004.Associate Editor:G.M.Fadel.562ÕVol.126,JULY2004Copyright©2004by ASME Transactions of the ASMEof this paper,we will demonstrate the benefits of the proposed integrated framework for optimization under uncertainty and the effectiveness of the MPPIR search algorithm.2A General Design Model Under ConcertaintyA typical design model under uncertainty is given by:Minimize:f͑v gob j͒Design Variable DVϭ͕d,x͖(1) Subject to:Prob͕g i͑d,X,P͒р0͖у␣i,iϭ1,2,...,m, In the above model,f is the design objective,which is a func-tion of the probabilistic characteristic v gob j of the objective per-formance g ob j(d,X,P).The probabilistic characteristic v gob j couldinclude the mean,the standard deviation of g ob j,or the combina-tion of both.The probabilistic design objective is to minimize f.dis the vector of deterministic design variables or deterministiccontrol factors.X is the vector of random design variables orrandom control factors.P is the vector of random design param-eters or noise factors.The difference between a design variable ͑either deterministic or random͒and a design parameter is that the former is changeable and controllable by a designer in a designprocess while the latter is not.The decision variables in Eq.͑1͒are d and the distribution parametersx of random design vari-ables X.Examples of the distribution parametersx include the meanx,the standard deviationx,etc.g i(d,X,P)(i ϭ1,2,...,m)are constraint functions;Prob͕•͖denotes a prob-ability while␣i(iϭ1,2,...,m)stand for desired probabilities of constraint satisfaction;m is the number of constraints.Note that both the objective performance g ob j and constraint performance g i are performance variables.In the remainder of this paper,we use g to denote any performance variables.In the above design model,the design feasibility is formulatedas the probability of constraint satisfaction g(d,X,P)р0largerthan or equal to a desired probability␣.Usually this probability Prob͕g i(d,X,P)р0͖is called reliability.As shown in Fig.1,the probability of g(d,X,P)р0is the area underneath the curve of probability density function͑PDF͒of g for gр0,and this area should be greater than or equal to␣.In a robust design,the robustness of a design objective can beachieved by simultaneously‘‘optimizing the mean performance ob j’’and‘‘minimizing the performance varianceob j’’͓25͔.The performance g ob j(d,X,P)is a function of all random variables.Its mean valueob j and varianceob j2are to be minimized.The form of the objective can be expressed asmin f͑ob j,ob j͒.(2) Different from robust design,the emphasis of the reliability-based design is on maintaining the reliability of a constraint͑de-sign feasibility requirement͒.Usually only nominal values are considered for the objective,which is calculated at the means of random variables.Therefore in reliability-based design,the objec-tive is often represented by the nominal value of g ob j,i.e.,min g ob j͑d,X,P͒(3) In this work,a unified probabilistic optimization formulation is used to integrate the robustness and reliability considerations.As shown in the following model,the robustness requirement is cap-tured by the design objective while the reliability considerations are modeled with probabilistic constraints.min f͑ob 
j,ob j͒Design Variable DVϭ͕d,x͖(4) Subject to:Prob͕g i͑d,X,P͒р0͖у␣i,iϭ1,2,...,m, Equation͑4͒represents a multicriteria optimization problem where the tradeoff needs to be made between optimizing the mean performance and minimizing the performance variation͓25͔.How to construct an objective function representing designer’s prefer-ence in making the tradeoff is not the focus of this study.Here we assume that a single objective function is constructed based on both the mean and variance criteria.In this work,an inverse reli-ability strategy is proposed to reformulate the above optimization model,so that the robustness and reliability assessments can be treated in a unified manner.3An Inverse Reliability Strategy for Reformulating the Optimization Model Under UncertaintyAn inverse reliability strategy is proposed in this work to refor-mulate the probabilistic optimization formulation shown in Eq.͑4͒.This development is motivated by the need for developingcomputationally efficient techniques for solving the integrated probabilistic optimization model and the need for providing a more accurate assessment of performance dispersion in improving system robustness.In conventional reliability analysis,given a prespecified performance,which is called limit state͓26͔in the field of structural reliability,one is interested infinding the prob-ability(reliability)of the performance greater or less than that prespecified performance.With inverse reliability or called per-centile formulation,we will focus onfinding a specific perfor-mance that corresponds to a given reliability.This task is consid-ered as solving an inverse reliability problem͓18,19͔.In this work,we employ inverse reliability formulations for assessing both the robustness objective and the probabilistic constraints. 3.1Modeling Design Feasibility by Inverse Reliability Strategy.Du and Chen͓6͔discussed commonly used tech-niques for modeling design feasibility under uncertainty and they concluded that the ideal technique is the probabilistic formulation presented in Eq.͑4͒.However,to use Eq.͑4͒,we need to evaluate the reliability Prob͕g i(d,X,P)р0͖for each probabilistic function g i(d,X,P).In presence of multiple constraints,some constraints may never be active and consequently their reliabilities are ex-tremely high͑approaching1.0͒.Although these constraints are the least critical,the evaluations of these reliabilities will unfortu-nately dominate the computational effort in probabilistic optimi-zation.The solution to this problem is to perform the reliability assessment only up to the necessary level͓24͔.Hence,a formula-tion of percentile performance͑inverse reliability͒has been pro-posed to replace the reliability formulation͓17–19,24͔.The per-centile performance formulation is shown as:g␣р0,(5) where g␣is the␣-percentile performance of g(d,X,P),namely,Prob͕g͑d,X,P͒рg␣͖ϭ␣(6) Equation͑5͒indicates that the probability of g(d,X,P)less than or equal to the␣-percentile performance g␣is exactly equal to the desired reliability␣.The concept is demonstrated in Fig.2. 
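As a quick illustration of the percentile constraint in Eqs. (5) and (6), the α-percentile g^α can be estimated by brute-force Monte Carlo sampling: draw the random inputs, evaluate g, and take the empirical α-quantile; the constraint is satisfied when that quantile is non-positive. The sketch below is only for intuition and is far less efficient than the MPPIR-based evaluation developed later in the paper; the constraint function and input distribution used here are hypothetical stand-ins, not taken from the paper.

```python
import numpy as np

def percentile_constraint_satisfied(g, sample_inputs, alpha=0.99, n=100_000, seed=0):
    """Check g_alpha <= 0 by Monte Carlo: estimate the alpha-quantile of g(X, P).

    g             -- constraint function taking an (n, d) array of sampled inputs
    sample_inputs -- callable (rng, n) -> (n, d) array of random inputs X and P
    alpha         -- required reliability, Prob{g <= g_alpha} = alpha
    """
    rng = np.random.default_rng(seed)
    samples = sample_inputs(rng, n)          # draw X and P jointly
    g_vals = g(samples)                      # evaluate the performance function
    g_alpha = np.quantile(g_vals, alpha)     # empirical alpha-percentile of g
    return g_alpha, g_alpha <= 0.0

# Hypothetical example: g = X1 + 2*X2 - 6 with X1, X2 ~ N(1, 0.5)
g = lambda s: s[:, 0] + 2.0 * s[:, 1] - 6.0
draw = lambda rng, n: rng.normal(loc=1.0, scale=0.5, size=(n, 2))
g99, ok = percentile_constraint_satisfied(g, draw, alpha=0.99)
print(f"g_0.99 = {g99:.3f}, feasible at 99% reliability: {ok}")
```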
If the shaded area,the probability at the left side of Eq.͑6͒,is Fig.1The concept of reliabilityJournal of Mechanical Design JULY2004,Vol.126Õ563equal to the desired reliability ␣,then the point g ␣on g axis is called the ␣-percentile value of function g .From Fig.2we see that,g ␣р0indicates that Prob ͕g i (d ,X ,P )р0͖у␣,which means that the probabilistic constraint is feasible.With the transforma-tion to inverse reliability,the original constraints that require re-liability assessments are now converted to equivalent constraints that evaluate the ␣-percentile performance.Instead of checking the actual reliability,the location of g ␣will now determine the feasibility of a constraint.For simplicity,we use percentile perfor-mance to stand for ␣-percentile performance.It has also been shown that with the percentile formulation we can avoid singular-ity problems which may occur in solving a direct reliability model ͑Eq.͑4͒͒during the iterative reliability assessment procedure ͓24͔.3.2Modeling Design Objective Robustness by Inverse Re-liability Strategy.A robust design objective reflects the need for shrinking the dispersion of a performance.As mentioned in Section 1,the use of Taylor expansion is not very accurate in estimating the standard deviation of a performance.With the original Taguchi’s robust design method,‘‘compound noise’’is used to assess high or low quality performances based on combi-nations of several noise factors ͓2͔.The robust design objective is then represented by minimizing the difference of performances at two compound noise combinations.The strategy of compound noise could significantly reduce the number of experiments ͑or number of simulations ͒since the performance is evaluated only at two points which supposedly correspond to the highest and lowest quality.However,certain conditions need to be satisfied to use this ad hoc approach ͓2͔.For example,͑1͒we must know what the major noise factors are;͑2͒we must know the directionality ͑posi-tive or negative ͒of their effects on the performance;and ͑3͒the directionality of those effects of noise factors should not depend on the settings of the control factors.If the last two conditions are violated,the effect of one noise factor may get compensated for by another noise factor and then the robust design based on the compound noise can give confusing and misleading results.Tagu-chi suggested a typical value of Ϯͱ3/2for noise levels,which does not always generate the highest and lowest quality perfor-mance and can lead to wrong conclusion in design selection ͓2͔.To utilize the idea of compound noise but to overcome the aforementioned drawbacks,we propose to use percentile perfor-mance difference to represent the variation of a performance.The percentile performance difference is given by⌬g ob j ␣1␣2ϭg ob j ␣2Ϫg ob j␣1,(7)in which ␣1and ␣2are reliability levels or the cumulative distri-bution functions ͑CDFs ͒of g given byProb ͕f ͑d ,X ,P ͒рg ob j ␣i͖ϭ␣i͑i ϭ1,2͒(8)␣1is a left-tail CDF,for example,0.05or 0.01,which repre-sents the performance at the left tail of its distribution and ␣2is a right-tail CDF,for example,0.95or 0.99.Percentile performances g ob j ␣1(d ,X ,P )and g ob j␣2(d ,X ,P )represent high and low ͑or low and high ͒system qualities respectively.From Fig.3we see that the percentile performance difference is the distance between ␣1per-centile performance and ␣2percentile performance.As shown in Fig.3,minimizing the percentile performance difference helps to shrink the range of the distribution.It should be noted 
that when a performance distribution is not unimodal,minimizing the percen-tile performance difference at two tails may not decrease the vari-ance of performance distribution.Our method will not be appli-cable for such cases which are often rare in design applications.Different from Taguchi’s approach of compound noise factor,where only random parameters ͑noise factors ͒are involved,our approach of compound noise factor includes a combination of all random variables,namely,random control factors X and noise factors P ͑therefore we call a compound noise factor ‘‘compound noise setting’’in the subsequent sections ͒.There are several advantages of using percentile performance difference to replace the conventional performance variance ͑or standard deviation ͒for robustness assessment.One major advan-tage is that a percentile performance is related to the probability at the tail areas of a performance distribution and therefore it carries more information than the standard deviation such as it could indicate the skewness of a distribution ͑see the example in Section 5͒,while the standard deviation only captures the dispersion around the mean value.Also with percentile formulation,we can immediately know to what extent or at what confidence level the design robustness is achieved.This confidence level is given by ␣2Ϫ␣1.The other major advantage is related to the computa-tional efficiency achieved by using inverse reliability assessments ͑percentile evaluations ͒for both robustness objective and proba-bilistic constraints ͓19͔,with more details in Section bined with the concept of the Most Probable Point ͑MPP ͒,the percentile formulation gives us reasonable compound noise settings in ro-bustness evaluations.In summary,using the inverse reliability strategy,the unified probabilistic optimization model for integrated robustness and re-liability design becomesmin ͕g ob j ,⌬g ob j ␣1␣2͖Design Variable DV ϭ͕d ,x ͖(9)Subject to:g i ␣͑d ,X ,P ͒р0,i ϭ1,2,...,m .4The Inverse Reliability Assessment MethodTo solve Eq.͑9͒efficiently,an efficient search algorithm for the Most Probable Point of Inverse Reliability ͑MPPIR ͒is developed in this work to evaluate the percentile performances.We will first discuss our proposed MPPIR search algorithm and then explain how the MPPIR is related to the compound noise setting.4.1Background of MPP Search for the Inverse Reliability Problem.The MPP concept was originally developed in the structural reliability area ͓3͔with the purpose of reliability assess-ment.With the MPP approach,the random variables Y ϭ(X ,P)Fig.2An ␣-percentile of a constraintfunctionFig.3Percentile difference for shrinking the distribution564ÕVol.126,JULY 2004Transactions of the ASMEare transformed into an independent and standardized normal space Uϭ(U X,U P).The transformation is given by͓27͔,U iϭ⌽Ϫ1͓F Y i͑Y i͔͒,(10) where⌽Ϫ1is the inverse of a standard normal distribution and F is a CDF of a general random variable Y.Equation͑10͒implies that the transformation maintains the CDFs being identical both in the original random space(Y-space͒and the U-space.The MPP is formally defined in the standardized normal space as the minimum distance point on the constraint boundary g(d,X,P)ϭg(d,U X,U P)ϭ0to the origin.The minimum distance is called reliability index.When the First Order ReliabilityMethod͑FORM͓͒28͔is used,the reliability is given by␣ϭProb͕g͑d,X,P͒р0͖ϭ⌽͑͒,(11) where⌽is the standard normal distribution function.Finding the MPP and the reliability index is a minimization problem,which usually 
involves an iterative search process.For details about the MPP based method,refer to͓26͔.In an inverse reliability problem,the required reliability␣is given and the percentile performance corresponding to␣is to be evaluated.Form Eq.͑11͒,the reliability indexis given byϭ⌽Ϫ1͑␣͒.(12) Note that Eq.͑12͒is applicable for␣у0.5.When␣Ͻ0.5,it becomesϭ⌽Ϫ1͑1Ϫ␣͒.(13) As shown in Fig.4,the MPPIR becomes the common point ͑tangent point͒of a hyper sphere with radius͑-sphere͒in U-space and the contour of g(U).At this point g(U)reaches its minimum͑or maximum͒.Whether the function is minimum or maximum at the MPPIR depends on to which tail the MPPIR corresponds.When the MPPIR corresponds to the left tail,we have a minimization problem otherwise we have a maximization problem.We will only discuss the maximization,but the same principle can be applied to a minimization problem.Fig.4shows the MPPIR where g(U)is to be maximized.An MPPIR problem is modeled as a maximization problem:ͭmaxmize g͑u͒subject to͑u T u͒1/2ϭ(14) Once the MPPIR is identified,the percentile performance is cal-culated byg␣ϭg͑u M PPIR͒ϭg͑x M PPIR͒,(15) which is the g function evaluated at the MPPIR.Several existing methods can be used to solve Eq.͑14͒,includ-ing optimization techniques͓29͔,the traditional MPP search algo-rithm͓26͔that is based on the concept of the steepest ascent direction͑we will discuss it later in Eq.17͒,the Diagonal Direc-tion Method͓30͔,and the Hybrid Mean Value͑HMV͒Method ͓31͔.Solving Eq.͑14͒using conventional optimization techniques is a generic method but may not be efficient to solve the special type of minimization problem in Eq.͑10͒.Other specialized meth-ods are mostly gradient-based but there is no guarantee of conver-gence.The solution found could be a saddle point or a minimum point instead.Some of the existing MPP or MPPIR search algo-rithms have convergence difficulties for non-concave and non-convex problems.It is our goal in this research to develop a new efficient MPPIR search algorithm that can be used for any types of performance functions and is robust in its convergence behavior.4.2The Proposed MPPIR Search Algorithm.In develop-ing an improved MPPIR search algorithm,we aim to improve the performance of the algorithm in two categories:1͒efficiency:to find the MPPIR with the number of function evaluations as small as possible for any regular͑well-behaved͒functions and2͒ro-bustness:to avoid divergence caused by irregular performance functions.Our proposed algorithm starts from the same vector overlap-ping condition which is used in the traditional search algorithm. Referring to Fig.4for a two dimensional case,the MPPIR is the tangent point of the-sphere and the limit state surface in the U-space.At this tangent point,the vector u M PPIR connecting the MPPIR u M PPIR and the origin O should overlap with the gradient ٌg(u M PPIR)of the function g(U).The angle between u k andٌg(u k)is calculated by␥kϭcosϪ1u k•ٌg͑u k͒ʈu kʈ•ʈٌg͑u k͒ʈ(16)At the MPPIR,the angle␥k should be zero.A sufficiently small angle␥k is considered as the stopping criterion.To satisfy this condition,the search process starts from the steepest ascent direc-tion and this is what a traditional MPPIR search algorithm does. 
When this direction leads to a decreasing performance function value due to the irregular function behavior, the second measure, an arc search procedure, will be performed. It is called an arc search because the search for the MPPIR is along an arc of the β-sphere. The arc search can avoid converging to a minimum point or a saddle point. Note that for convenience, the search procedure is illustrated here in a two-dimensional space. For higher dimensional problems, a plane, a curve, or a circle discussed for the two-dimensional case will be a hyper plane, a hyper surface, or a hyper sphere, respectively. Suppose the current point is u_k (k stands for the kth iteration in searching for the MPPIR). At first, the steepest ascent direction ∇g(u_k) is used to obtain the new point u_{k+1} on the β-sphere by the following equation [28]:

u_{k+1} = β ∇g(u_k) / ‖∇g(u_k)‖    (17)

Since the steepest ascent direction ∇g(u_k) is valid only locally around u_k, there is a need to check the performance function value to see whether there is progress when using Eq. (17). If g(u_{k+1}) > g(u_k), there indeed is progress and the next iteration will follow Eq. (17) again. If g(u_{k+1}) ≤ g(u_k), it indicates that u_{k+1} is not improved compared with u_k. An arc search will then be performed to identify a new u_{k+1} that leads to an increasing value of the performance. As shown in Fig. 5, the arc search is to find the maximum function value point on the intersection of the β-sphere and the plane determined by the vectors u_k and ∇g(u_k).

Fig. 4 Inverse most probable point

Apparently, the plane passes through the origin O and the search path is an arc of the β-sphere. Let δ_k denote the angle between the next point u_{k+1} and the current point u_k (see Fig. 5). u_{k+1} is expressed by a linear combination of u_k and ∇g(u_k) as follows:

u_{k+1} = (β / sin γ_k) [ sin(γ_k − δ_k) u_k / ‖u_k‖ + sin(δ_k) ∇g(u_k) / ‖∇g(u_k)‖ ]    (18)

The arc search is then formulated as a one-dimensional maximization problem represented by

Find: the angle δ_k
Maximize: g(u_{k+1}) = g( (β / sin γ_k) [ sin(γ_k − δ_k) u_k / ‖u_k‖ + sin(δ_k) ∇g(u_k) / ‖∇g(u_k)‖ ] )    (19)

Once the optimal angle δ_k is found, the new point u_{k+1} is calculated by Eq. (18). To further illustrate the procedure of an arc search, the progress of the proposed method is demonstrated in Fig. 6 for a three-dimensional case. Suppose that from the kth iteration an arc search is needed and the current point is u_k; the new point u_{k+1} is determined in the plane spanned by the vector u_k and the vector ∇g(u_k). Geometrically, this new point is the tangent point of the projection of ∇g(u_k) on the β-sphere (an arc segment) and the projection of g(u_k) on the β-sphere (contours on the β-sphere). The value of g(U) at u_{k+1} is larger than the one at u_k. Analogously, the same procedure is conducted to find points u_{k+2}, u_{k+3}, ..., etc., until the vector u_i overlaps with the vector ∇g(u_i). If we let the current point be u_k on the β-sphere, the search process is summarized as follows:

1. Calculate the gradient ∇g(u_k) at u_k.
2. Calculate the angle γ_k between ∇g(u_k) and u_k using Eq. (16).
3. If γ_k ≤ ε, u_k is the MPPIR; go to 4. Otherwise go to 5. ε is a small angle, for example, 0.1°.
4. Calculate the percentile performance g(u_k) and stop.
5. If g(u_k) > g(u_{k−1}), update the point by u_{k+1} = β ∇g(u_k) / ‖∇g(u_k)‖, set k = k + 1, and go to 1. Otherwise, use Eqs. (18) and (19) to perform the arc search to locate the new point u_{k+1}, update k by k = k + 1, and then go to 1.

To make the search process robust and efficient, an adaptive step size is also employed for the finite difference derivative evaluations if analytical derivatives are unavailable. The step size in one axis is 1% of the magnitude of the corresponding component of the current U point. For a "well-behaved" performance function, for example, a convex function, the steepest ascent direction method works well and the function g increases constantly. In this case the efficiency of our method is as good as that of the traditional method. When the function g is convex, or non-concave and non-convex, the arc search in our proposed method guarantees the ascent of the performance function and therefore the convergence. Hence our proposed method is robust to various types of limit-state functions. We will further verify our algorithm through comparative studies.

4.3 Verifications of the MPPIR Search Algorithm. Three examples are presented in this paper for the purpose of verification. In all examples, a finite difference method is used for derivative evaluations. The efficiency and robustness of the proposed algorithm are compared with the traditional MPP search algorithm (based on the concept of the steepest ascent direction) as well as with using Sequential Quadratic Programming (SQP) to solve Eq. (14) directly. The SQP algorithm is chosen because it is a widely accessible and mature optimization solver. All examples involve functions taken from engineering applications, but the detailed background is omitted.

Example 1:

g(X) = X_2 + (X_1 + 0.25)^2 − (X_1 + 0.25)^3 − (X_1 + 0.25)^4 − 4    (20)

where X_1 ~ N(0.0, 1.0) and X_2 ~ N(0.0, 1.0); N(μ, σ) stands for a normal distribution with mean μ and standard deviation σ.

Fig. 5 Arc search
Fig. 6 Arc search procedure
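To make the search procedure above concrete, here is a minimal numerical sketch of the MPPIR search applied to the function of Example 1 (Eq. (20)), whose variables are already standard normal, so no U-space transformation is needed. It follows the steepest-ascent update of Eq. (17) with the arc search of Eqs. (18) and (19) as a fallback, but it replaces the one-dimensional optimization over δ_k with a simple grid scan, uses a fixed finite-difference step instead of the adaptive 1% step, and maximizes g (the right-tail case) only. It is an illustrative sketch under these simplifications, not the authors' implementation.

```python
import numpy as np
from scipy.stats import norm

def g(u):
    """Example 1, Eq. (20): the variables are already standard normal (U-space = X-space)."""
    x1, x2 = u
    return x2 + (x1 + 0.25) ** 2 - (x1 + 0.25) ** 3 - (x1 + 0.25) ** 4 - 4.0

def grad(f, u, h=1e-4):
    """Central-difference gradient (a fixed step; the paper uses an adaptive 1% step)."""
    u = np.asarray(u, dtype=float)
    out = np.zeros_like(u)
    for i in range(u.size):
        e = np.zeros_like(u)
        e[i] = h
        out[i] = (f(u + e) - f(u - e)) / (2.0 * h)
    return out

def mppir_search(f, ndim, alpha, eps_deg=0.1, max_iter=100):
    beta = norm.ppf(alpha) if alpha >= 0.5 else norm.ppf(1.0 - alpha)   # Eqs. (12)-(13)
    u = beta * np.ones(ndim) / np.sqrt(ndim)                            # arbitrary start on the beta-sphere
    for _ in range(max_iter):
        gr = grad(f, u)
        cosg = np.dot(u, gr) / (np.linalg.norm(u) * np.linalg.norm(gr))
        gamma = np.arccos(np.clip(cosg, -1.0, 1.0))                     # angle of Eq. (16)
        if np.degrees(gamma) <= eps_deg:                                # u overlaps grad g: converged
            break
        cand = beta * gr / np.linalg.norm(gr)                           # steepest-ascent update, Eq. (17)
        if f(cand) > f(u):
            u = cand
        else:                                                           # arc search, Eqs. (18)-(19)
            deltas = np.linspace(0.0, gamma, 60)[1:]
            arc = lambda d: (beta / np.sin(gamma)) * (
                np.sin(gamma - d) * u / np.linalg.norm(u)
                + np.sin(d) * gr / np.linalg.norm(gr))
            u = max((arc(d) for d in deltas), key=f)                    # best point along the arc
    return u, f(u)                                                      # MPPIR and percentile performance, Eq. (15)

u_star, g_alpha = mppir_search(g, ndim=2, alpha=0.95)
print("MPPIR:", u_star, " percentile performance g_alpha:", g_alpha)
```

With α = 0.95 the search stays on the β-sphere of radius Φ⁻¹(0.95) ≈ 1.645 and reports the percentile performance of Eq. (15) at the converged point.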
第48卷第4期2023年7月㊀林㊀业㊀调㊀查㊀规㊀划Forest Inventory and PlanningVol.48㊀No.4July2023doi:10.3969/j.issn.1671-3168.2023.04.003基于高分辨率航空遥感影像的林分因子智能识别技术研究李琦1,辛亮2,孟陈3(1.上海市林业总站,上海200072;2.上海市测绘院,上海200063;3.景遥信息技术有限公司,上海201109)摘要:森林资源监测的数字化和智能化是未来发展的主要趋势㊂基于高分辨率航空㊁多光谱遥感数据和数字地表模型(DSM)等数据,利用计算机深度学习方法,研究乔木林小班的郁闭度㊁平均树高㊁总株数3项主要林分调查因子的数字化智能提取方法㊂结果表明,郁闭度判读的平均准确率可达到98.6%;平均树高判读的平均准确率可达到90%;株数判读的平均准确率可达到82.36%㊂关键词:智能识别技术;高分辨率航空遥感影像;林分调查因子;自动判读中图分类号:S771.8;TP75;TP18㊀㊀文献标识码:A㊀㊀文章编号:1671-3168(2023)04-0024-04引文格式:李琦,辛亮,孟陈.基于高分辨率航空遥感影像的林分因子智能识别技术研究[J].林业调查规划,2023, 48(4):24-27.doi:10.3969/j.issn.1671-3168.2023.04.003LI Qi,XIN Liang,MENG Chen.Intelligent Recognition Technology of Forest Stand Factors Based on High-resolution Aerial Remote Sensing Images[J].Forest Inventory and Planning,2023,48(4):24-27.doi:10.3969/j.issn.1671-3168.2023.04.003Intelligent Recognition Technology of Forest Stand Factors Based onHigh-resolution Aerial Remote Sensing ImagesLI Qi1,XIN Liang2,MENG Chen3(1.Shanghai Forestry Station,Shanghai200072,China;2.Shanghai Surveying and Mapping Institute,Shanghai200063,China;3.Jingyao(Shanghai)Information Technology Co.,Ltd.,Shanghai201109,China) Abstract:The digitization and intelligence of forest resource monitoring is the main trend in future devel-opment.Based on high-resolution aerial,multispectral remote sensing data,and digital surface model (DSM)data,this paper studied the digital intelligent extraction method for three main forest stand survey factors,namely canopy density,average tree height,and total plant number,in the subcompartment of arboreal forest by using computer deep learning.The results showed that the average accuracy of canopy density interpretation reached98.6%;the average accuracy of average tree height interpretation reached 90%;the average accuracy of plant number interpretation reached82.36%.Key words:intelligent identification technology;high-resolution aerial remote sensing images;forest stand survey factors;automatic interpretation㊀㊀森林是陆地生态系统中的重要主体,为人类提供赖以生存和发展的重要物质基础,对人类生存和经济社会可持续发展起着不可替代的作用[1]㊂上海市是最早进行城市森林规划和建设的城市之收稿日期:2022-03-08;修回日期:2022-06-20.基金项目:上海市绿化和市容管理局科学技术项目(G201209).第一作者:李琦(1969-),男,浙江金华人,高级工程师.主要从事森林资源监测工作.李琦,等:基于高分辨率航空遥感影像的林分因子智能识别技术研究一[2-3]㊂城市森林在固碳[4-5]㊁生物多样性保护[6]和缓解城市热岛[7-8]等方面均具有重要作用㊂森林资源监测是对森林资源的数量㊁质量㊁空间分布㊁利用状况等现状及其动态消长变化进行观测㊁分析和评价的一项林业基础性工作,为实现森林资源科学管理和合理利用提供重要的基础数据保障㊂森林资源监测的主要工作内容之一,即是对森林资源的树种㊁胸径㊁树高㊁郁闭度㊁株数等林分因子状况进行调查㊂传统的人工调查方式存在着诸多不足之处,主要表现在两个方面:(1)大量的监测数据需要调查人员实地获取,工作任务繁重,人力成本高,监测效率较低;(2)监测数据的精度依赖于调查人员的专业素质和工作责任心,主观因素影响较大,监测成果的质量不稳定㊂随着科学技术水平的发展,无人机㊁激光雷达等新装备以及遥感信息自动识别㊁计算机深度学习等新技术已逐步在森林资源监测领域得到重视和广泛研究[9-15]㊂2020年以来,上海市林业总站联合上海市测绘院㊁国家林业和草原局华东调查规划院等多家单位,基于高分辨率航空遥感影像㊁多光谱遥感影像㊁数字地表模型(DSM)等数据,利用计算机深度学习方法,开展了以乔木林小班郁闭度㊁平均树高㊁总株数3项主要林分调查因子为重点研究内容之一的自动估测方法研究,以实现减少外业实地调查工作量,提高监测效率和调查精度,提升上海市森林资源监测智能化水平的目标㊂1研究区域概况上海市地处长江入海口,位于长江三角洲以太湖为中心的碟形洼地东缘,地势低平㊂是一座具有世界影响力的现代化国际大都市㊂据上海市2020年度森林资源监测成果显示,全市森林总面积为117258hm2,森林覆盖率为18.49%㊂其中,乔木林面积为104028hm2,占森林总面积的75.87%㊂上海市森林资源具有典型的城市森林特点,全部为人工林,混交林分占比超过50%,林分组成较为复杂,其功能以生态防护或景观游憩为主,生态公益林面积占比超过80%㊂由于境内水网密布,道路纵横,城镇化水平高等因素影响,森林资源小班的尺度较小,细碎小班多,分布零散㊂全市共有各类森林资源小班约35万个,其中0.5hm2以下小班约28.5万个,占比达81.81%㊂四旁树的分布在全市占有较大比重㊂2研究方法2.1数据源根据上海市森林资源分布特点,本项研究以2020年度森林资源监测成果数据库中乔木林小班为研究对象,基于小班现状分布及其边界范围开展郁闭度㊁平均树高㊁总株数3项主要林分调查因子的自动估测技术研究㊂研究采用的影像数据主要包括上海市2019年11月至2020年2月大飞机DMC III (digital mapping 
camera)航空遥感影像,分辨率为0.1m,波段为RGBN(红㊁绿㊁蓝㊁近红外)4个波段;以及依据航空摄影测量影像通过空三解算的数字地表模型(DSM)数据(分辨率为0.1m)㊂正射影像和DSM模型经布设于地面的检查点核验,平面㊁高程精度分别为ʃ0.12m和ʃ0.19m㊂辅助数据源为2020年第三季度高景一号卫星影像,分辨率为0.5m㊂2.2技术路线基于现有乔木林小班边界范围,利用本市高精度四波段航空影像提取小班内植被覆盖范围,然后利用DSM数据进行切片处理,剃除低矮植被后提取出乔木分布和冠幅信息,进而获取小班郁闭度㊁平均树高㊁总株数等林分调查因子信息(图1)㊂图1㊀技术路线Fig.1㊀Technical route2.3研究方法2.3.1郁闭度判读郁闭度是指林木冠层的投影面积与该林分林地㊃52㊃第4期林业调查规划总面积的比值㊂郁闭度准确判读的关键在于乔木层林冠范围的准确识别与提取㊂本研究采用归一化植被指数算法进行小班郁闭度的自动判读㊂归一化植被指数是一种常用的植被提取算法,利用归一化植被指数算法可以提取小班内所有的植被信息,然后利用DSM点云数据,通过设置高度阀值剔除草地㊁农用地㊁绿化地表等低矮植被信息,即可获得较为准确的乔木冠层信息㊂由于上海市航空影像的获取时间为冬季,基于光谱信息的冠幅提取方法对落叶树种并不完全适用,判读精度不足㊂因此,本研究中同时利用了植被生长状态较好的夏季卫星影像来进行弥补,由于卫星传感器无法获取立体影像对,基于DSM数据的林冠提取方法并不适用,因此,在已有数据的基础上,通过制作林地样本,利用计算机深度学习技术提取乔木冠幅信息,进而获得了较为真实的小班郁闭度㊂随机选取46个乔木林小班,利用判读数据与人工在航片上区划产生的数据进行精度比对,结果显示,郁闭度判读的平均准确率可达98.6%㊂2.3.2平均树高判读平均树高是反映林分中所有林木高度平均水平的测树因子,是森林资源调查中重要的调查因子之一㊂基于遥感影像开展林分平均树高估测的研究方法很多[13-15],本研究主要利用数字地表模型(DSM)点云技术进行判读㊂DSM是指包含了地表建筑物㊁桥梁和树木等高度的地面高程模型,能够真实地表达地面起伏情况,近年来在城市规划㊁林业等部门已得到较广泛应用㊂由于DSM数据记录了地表不同位置的高程信息,因此对DSM数据进行切片处理可得到不同高度平面上的地物信息㊂本研究通过DSM与数字高程模型(DEM)的差值得到乔木林冠层高度模型(CHM),然后利用判读获取的小班内所有乔木的单木位置,在CHM中自动提取小班内每株乔木的高度数据,最后汇总计算得到小班的平均树高㊂随机选取46个乔木林小班,利用判读数据与小班实地调查数据进行精度比对,结果显示,平均树高判读的平均准确率可达到90%㊂2.3.3株数判读根据遥感影像进行单株定位和识别是本项研究的重点和难点㊂最终研究确定的方法是:将遥感获得的可见光㊁多光谱影像和冠层高度模型(CHM)作为输入源,人工标记常绿㊁落叶和常绿落叶混交3种林分的单株乔木样本,并通过构建全卷积深度学习U-Net神经网络进行株数判读㊂U-net神经网络通过波段组合㊁归一化㊁图像增强㊁交叉熵函数和Adam 优化等算法,能够获得U-net模型最优参数组合,并获得单株乔木分布的概率点阵图,根据概率点阵图可以实现单株乔木识别和位置数据提取,最后根据现状小班边界范围,即可获取小班内乔木总株数㊂随机选取100个乔木林小班,利用判读数据与人工在航片上标记产生的数据进行精度比对,结果显示,株数判读的平均准确率可达到82.36%㊂不同林分类型的株数识别准确率不同,主要表现为常绿林分>混交林分>落叶林分的趋势,其中,常绿林分小班的平均准确率为90.77%,混交林分小班的平均准确率为79.15%,落叶林分小班的平均准确率为69.88%㊂3结㊀论在小班郁闭度㊁平均树高的自动判读研究中,由于研究所使用的航空遥感影像及DSM等数据均具有较高的分辨率,再辅助以夏季卫星影像数据进行判读补充,解决了冬季树木落叶的影响,两项研究内容的自动判读结果均符合小班调查精度要求,基本实现了预期目标,证明了其研究方法的有效性㊂在小班林木总株数的自动判读研究中,由于受航空遥感影像拍摄季节㊁树木落叶等因素影响,以及小班破碎化程度㊁树种组成复杂程度㊁林分郁闭度过高㊁环境影响等因素对判读精度造成的干扰,其判读结果距离小班调查精度要求仍有一定差距,说明其研究方法仍有进一步探索和改进的空间㊂研究结果表明,基于高分辨率航空遥感影像的林分调查因子自动判读技术,能够较为快速㊁客观㊁准确地获取乔木林小班的部分主要林分调查因子,对于提高森林资源监测效率和调查精度具有重要的实践意义㊂由于上海市森林资源存在着林相复杂㊁小班尺度较小且破碎化程度高等特点,目前的试验性研究与未来大范围的实践应用之间仍存在较多不可预见性,因此,本研究工作仍有待进一步深化和拓展㊂本项目的研究成果结合主要树种(组)智能识别㊁主要树种(组)胸径 树高关系模型研建等相关研究成果的综合运用,智能识别技术才能在上海市森林资源监管工作中得到广泛应用,提升全市森林资源监测智能化水平,实现森林资源监测技术质的飞越㊂利用本项目判读获取的小班平均树高及其它研㊃62㊃第48卷李琦,等:基于高分辨率航空遥感影像的林分因子智能识别技术研究究获取的胸径 树高关系模型,可以换算获得小班平均胸径这项重要林分调查因子,结合判读获取的小班乔木总株数,即可自动估算出小班的林木蓄积量,从而为上海市开展林长制考核以及为实现碳中和碳达峰而开展的相关工作提供快捷而准确的数据支撑㊂随着林长制在上海市的全面推行,加强对森林资源更新变化情况的实时监控必将成为各级政府和林业部门的工作重心之一㊂计算机智能识别技术高效的监测效率及其全面㊁客观㊁准确的判读精度,将在实施森林抚育㊁林相改造㊁林地征占用审批㊁森林资源灾后普查等工作中提供强有力的技术支持㊂参考文献:[1]宋永昌.植被生态学[M].上海:华东师范大学出版社,2001.[2]达良俊,杨同辉,宋永昌.上海城市生态分区与城市森林布局研究[J].林业科学,2004(4):84-88. [3]宋永昌.城市森林研究中的几个问题[J].中国城市林业,2004(1):4-9.[4]高业林.基于3S技术的城市森林碳汇能力研究[D].济南:山东建筑大学,2021.[5]张桂莲.基于遥感估算的上海城市森林碳储量空间分布特征[J].生态环境学报,2021,30(9):1777-1786. [6]宋永昌.城市森林研究中的几个问题[J].中国城市林业,2004(1):4-9.[7]陈朱,陈方敏,朱飞鸽,等.面积与植物群落结构对城市公园气温的影响[J].生态学杂志,2011,30(11):2590-2596.[8]曹璐,胡瀚文,孟宪磊,等.城市地表温度与关键景观要素的关系[J].生态学杂志,2011,30(10):2329-2334.[9]陈芸芝,陈崇成,汪小钦,等.多源数据在森林资源动态变化监测中的应用[J].资源科学,2004(4):146-152.[10]覃先林,李增元,易浩若.高空间分辨率卫星遥感影像树冠信息提取方法研究[J].遥感技术与应用,2005(2):228-232.[11]张巍巍,冯仲科,汪笑安,等.基于TM影像的林木参数提取和树高估测[J].中南林业科技大学学报,2013,33(9):27-31.[12]张志超.多源遥感森林资源二类调查主要林分因子估测关键技术研究及实现[D].西安:西安科技大学,2020.[13]江志向,陈紫璇,练一宁,等.基于航模飞行器摄影数据的森林信息提取方法[J].北京测绘,2017(3):153-157.[14]赵芳.测树因子遥感获取方法研究[D].北京:北京林业大学,2014.[15]韩学锋.基于高分辨率遥感林分调查因子的提取研究[D].福州:福建师范大学,2008.责任编辑:许易琦㊃72㊃第4期。
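The canopy-closure and mean-height workflow described in the study above (an NDVI vegetation mask, a canopy height model obtained as DSM minus DEM, and a height threshold to discard grass and other low vegetation) can be sketched with plain array operations. The NDVI threshold, the 2 m height cutoff, and the synthetic rasters below are illustrative assumptions, not values from the study; the study also averages per-tree heights extracted at detected stem positions, whereas this sketch simply averages the canopy height model over canopy pixels.

```python
import numpy as np

def stand_metrics(red, nir, dsm, dem, stand_mask,
                  ndvi_thresh=0.3, tree_height_min=2.0):
    """Estimate canopy closure and mean tree height for one sub-compartment.

    red, nir   -- reflectance bands of the aerial image (2-D arrays)
    dsm, dem   -- digital surface / elevation models on the same grid (metres)
    stand_mask -- boolean array marking pixels inside the sub-compartment
    The NDVI and height thresholds are illustrative, not the study's values.
    """
    ndvi = (nir - red) / (nir + red + 1e-9)           # normalized difference vegetation index
    chm = dsm - dem                                    # canopy height model
    vegetated = ndvi > ndvi_thresh                     # all vegetation
    canopy = vegetated & (chm >= tree_height_min)      # drop grass and low shrubs by height
    inside = stand_mask & canopy
    closure = inside.sum() / max(stand_mask.sum(), 1)  # canopy closure = crown area / stand area
    mean_height = float(chm[inside].mean()) if inside.any() else 0.0
    return closure, mean_height

# Tiny synthetic example (random rasters stand in for real imagery)
rng = np.random.default_rng(1)
shape = (200, 200)
red, nir = rng.random(shape), rng.random(shape)
dem = np.zeros(shape)
dsm = rng.uniform(0.0, 12.0, shape)
mask = np.ones(shape, dtype=bool)
print(stand_metrics(red, nir, dsm, dem, mask))
```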
Intermediate Communication Engineer Examination (Internet Technology), Year 20xx, Morning Session (Past Paper)
Total score: ___    Time allowed: 120 minutes
Single-choice questions: for each of the following questions, select the one option that best fits the question.

1. The ____ of digital communications means transmitting and switching all kinds of information, from voice to data and even images, at rates of several hundred megabits per second or more.
A. Mass adoption  B. Intelligentization  C. Analog operation  D. Broadbandization

2. One element of the professional ethics of communications technicians is to establish a service-assurance mindset and not to pursue fame, profit, or position. Items that belong to this element include ____.
A. Quality first, ensuring a high equipment serviceability rate  B. Promoting a spirit of cooperation  C. Not withholding technical know-how  D. Establishing an overall perspective

3. Telecommunications professional ethics and telecommunications law ____.
A. Are no different  B. Are related but distinct  C. Are unrelated  D. Are unrelated but distinct

4. One of the legislative purposes of the Telecommunications Regulations is to ____.
A. Safeguard the lawful interests of telecom users and telecom business operators  B. Encourage competition  C. Implement interconnection  D. Improve telecom service quality

5. The basic principle of telecommunications regulation is ____.
A. Universal service  B. Regulating market order  C. Openness, fairness, and impartiality  D. Safeguarding the security of telecom networks and information

6. The basic principle governing the business activities of telecom operators is ____.
A. Operating according to law and ensuring service  B. Ensuring service and observing business ethics  C. Fairness, openness, impartiality, and accepting supervision and inspection  D. Operating according to law and observing business ethics

7. Among the conditions for operating basic telecommunications services, the operator must be a company established according to law that specializes in basic telecom services, and in its equity structure the state-owned equity or shares must not be ____.
A. Less than 49%  B. Less than 50%  C. Less than 51%  D. Less than 60%

8. Telecommunications service quality refers to ____.
A. Service performance quality  B. Service performance quality and network performance quality  C. Business performance quality and network performance quality  D. Service quality and business quality

9. To internalize network externalities and increase overall social welfare, the solution is to implement interconnection and ____.
A. Increase the variety of services  B. Raise market entry barriers  C. Lower market entry barriers  D. Subsidize users

10. The principles of contract performance include proper performance and the principle of ____.
A. Actual performance  B. Comprehensive performance  C. Improper performance  D. Substitute performance

11. Terminal equipment is indispensable to a communication network. Among the following terminals, the ____ is not a digital terminal.
A. PSTN telephone  B. ISDN telephone  C. PC  D. GSM handset

12. To improve the utilization of physical lines, the transmission systems of telecom networks generally adopt multiplexing; for example, the inter-office trunk transmission of both the PSTN and GSM networks uses ____ multiplexing.
外文原文Implementing and Optimizing an Encryption on AndroidZhaohui Wang, Rahul Murmuria, Angelos StavrouDepartment of Computer ScienceGeorge Mason UniversityFairfax, V A 22030, USA, ,Abstract—The recent surge in popularity of smart handheld devices, including smart-phones and tablets, has given rise to new challenges in protection of Personal Identifiable Information (PII). Indeed, modern mobile devices store PII for applications that span from email to SMS and from social media to location-based services increasing the concerns of the end user’s privacy. Therefore, there is a clear need and expectation for PII data to be protected in the case of loss, theft, or capture of the portable device. In this paper, we present a novel FUSE ( in USErspace) encryption to protect the removable and persistent storage on heterogeneous smart gadget devices running the Android platform. The proposed leverages NIST certified cryptographic algorithms to encrypt the data- at-rest. We present an analysis of the security and performance trade-offs in a wide-range of usage and load scenarios. Using existing known micro benchmarks in devices using encryption without any optimization, we show that encrypted operations can incur negligible overhead for read operations and up to twenty (20) times overhead for write operations for I/Ointensive programs. In addition, we quantified the database transaction performance and we observed a 50% operation time slowdown on average when using encryption. We further explore generic and device specific optimizations and gain 10% to 60% performance for different operations reducing the initial cost of encryption. Finally, we show that our approach is easy to install and configure acrossall Android platforms including mobile phones, tablets, and small notebooks without any user perceivable delay for most of the regular Android applications.Keywords-Smart handheld devices, Full disk encryption, Encrypted , I/O performance.I. BACKGROUND & THREAT MODELA.BackgroundGoogle’s Android is a comprehensive software framework for mobile devices (i.e., smart phones, PDAs), tablet computers and set-top-boxes. The Android operating system includes the system library files, middle-ware, and a set of standard applications for telephony, personal information management, and Internet browsing. The device resources, like the camera, GPS, radio, and Wi-Fi are all controlled through the operating system. Android kernel is based on an enhanced Linux kernel to better address the needs of mobile platforms with improvements on power management, better handling of limited system resources and a special IPC mechanism to isolate the processes. Some of the system libraries included are: a custom C standard library (Bionic), cryptographic (OpenSSL) library, and libraries for media and 2D/3D graphics. The functionality of these libraries are exposed to applications by the Android Application Framework. Many libraries are inherited from open source projects such as WebKit and SQLite. The Android runtime comprises of the Dalvik, a register-based Java virtual machine. Dalvik runs Java code compiled into a dex format, which is optimized for low memory footprint. Everything that runs within the Dalvik environment is considered as an application, which is written in Java. For improved performance, applications can mix native code written in the C language through Java Native Interface (JNI). Both Dalvik and native applications run within the same security environment, contained within the ‘Application Sandbox’. 
However, native code does not benefit from the Java abstractions (type checking, automated memory management, garbage collection). Table I lists the hardware modules of Nexus S, which is a typical Google branded Android device.Android’s security model differs significantly from the traditional desktopsecurity model [2]. Android applications are treated as mutually distrusting principals; they are isolated from each other and do not have access to each others’ private dat a. Each application runs within their own distinct system identity (Linux user ID and group ID). Therefore, standard Linux kernel facilities for user management is leveraged for enforcing security between applications. Since the Application Sandbox is in the kernel, this security model extends to native code. For applications to use the protected device resources like the GPS, they must request for special permissions for each action in their Manifest file, which is an agreement approved during installation time.Android has adopted SQLite [12] database to store structured data in a private database. SQLite supports standard relational database features and requires only little memory at runtime. SQLite is an Open Source database software library that implements a self-contained, server-less, zeroconfiguration, transactional SQL database engine. Android provides full support for SQLite databases. Any databases you create will be accessible by name to any java class in the application, but not outside the application. The Android SDK includes a sqlite3 database tool that allows you to browse table contents, run SQL commands, and perform other useful functions on SQLite databases. Applications written by 3rd party vendors tend to use these database features extensively in order to store data on internal memory. The databases are stored as single files in the and carry the permissions for only the application that created the be able to access it. Working with databases in Android, however, can be slow due to the necessary I/O.EncFS is a FUSE-based offering encryption on traditional desktop operating systems. FUSE is the supportive library to implement a fully functional in a userspace program [5]. EncFS uses the FUSE library and FUSE kernel module to provide the interface and runs without any special permissions. EncFS runs over an existing base (for example,ext4,yaffs2,vfat) and offers the encrypted . OpenSSL is integrated in EncFS for offering cryptographic primitives. Any data that is written to t he encrypted is encrypted transparently from the user’s perspective and stored onto the base . Reading operations will decrypt the data transparently from thebase and then load it into memory.B.Threat ModelHandheld devices are being manufactured all over the world and millions of devices are being sold every month to the consumer market with increasing expectation for growth and device diversity. The price for each unit ranges from free to eight hundred dollars with or without cellular services. In addition, new smartphone devices are constantly released to the market which results the precipitation of the old models within months of their launch. With the rich set of sensors integrated with these devices, the data collected and generated are extraordin arily sensitive to user’s privacy. Smartphones are therefore data-centric model, where the cheap price of the hardware and the significance of the data stored on the device challenge the traditional security provisions. 
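To make the EncFS idea concrete, the sketch below illustrates transparent encrypt-on-write and decrypt-on-read using AES in CTR mode via the third-party cryptography package. It is a conceptual toy, not EncFS's actual on-disk format, block handling, or key management; the file path and key handling are placeholders.

```python
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

class EncryptedFileStore:
    """Toy illustration of write-encrypt / read-decrypt, loosely analogous to EncFS."""

    def __init__(self, key: bytes):
        if len(key) not in (16, 32):          # AES-128 or AES-256, the key lengths benchmarked in the paper
            raise ValueError("key must be 16 or 32 bytes")
        self.key = key

    def write(self, path: str, plaintext: bytes) -> None:
        nonce = os.urandom(16)                             # fresh nonce per file
        enc = Cipher(algorithms.AES(self.key), modes.CTR(nonce)).encryptor()
        with open(path, "wb") as f:                        # store nonce alongside ciphertext
            f.write(nonce + enc.update(plaintext) + enc.finalize())

    def read(self, path: str) -> bytes:
        with open(path, "rb") as f:
            blob = f.read()
        nonce, ciphertext = blob[:16], blob[16:]
        dec = Cipher(algorithms.AES(self.key), modes.CTR(nonce)).decryptor()
        return dec.update(ciphertext) + dec.finalize()

store = EncryptedFileStore(os.urandom(32))                 # AES-256 key
store.write("/tmp/secret.bin", b"location log entry")      # placeholder path and data
assert store.read("/tmp/secret.bin") == b"location log entry"
```

CTR mode is used here only to keep the toy short; EncFS's real block handling and its filename-encryption options are more involved.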
Due to high churn of new devices it is compelling to create new security solutions that are hardware-agnostic.While the Application Sandbox protects applicationspecific data from other applications on the phone, sensitive data may be leaked accidentally due to improper placement, resale or disposal of the device and its storage media (e.g. removable sdcard). It also can be intentionally exfiltrated by malicious programs via one of the communication channels such as USB, WiFi, Bluetooth, NFC, cellular network etc.Figure 1. Abstraction of Encryption on AndroidFor example, an attacker can compromise a smartphone and gain full control of it by connecting another computing device to it using the USB physical link [33]. Moreover, by simply capturing the smartphones physically, adversaries have access to confidential or even classified data if the owners are the government officials ormilitary personnels. Considering the cheap price of the hardware, the data on the devices are more critical and can cause devastating consequences if not well protected. To protect the secrecy of the data of its entire lifetime, we must have robust techniques to store and delete data while keeping confidentiality.In our threat model, we assume that an adversary is already in control of the device or the bare storage media. The memory-borne attacks and defences are out of the scope of this paper and addressed by related researches in Section II. A robust data encryption infrastructure provided by the operating system can help preserve the confidentiality of all data on the smartphone, given that the adversary cannot obtain the cryptographic key. Furthermore, by destroying the cryptographic key on the smartphone we can make the data practically irrecoverable. Having established a threat model and listed our assumptions, we detail the steps to build encryption on Android in the following sections.V. PERFORMANCEA. ExperimentalSetup For our experiments, we use the Google’s Nexus S smartphone device with Android version 2.3 (codename Gingerbread). The bootloader of the device is unlocked and the device is rooted. The persistent storage on Nexus S smartphones is a 507MB MTD (Memory Technology Device). MTD is neither a block device not a character device, and was designed for flash memory to behave like block devices. In addition to the MTD device, Nexus S has a dedicated MMC (MultiMediaCard, which is also a NAND flash storage technique) device dedicated to system and userdata partition, which is 512MB and 1024MB respectively. Table II provides the MTD device and MMC device partition layout.In order to evaluate this setup for performance, we installed two different types of benchmarking tools. We used the SQLite benchmarking application created by RedLicense Labs - RL Benchmark Sqlite. To better understand finegrained low level operations under different I/O patterns, we use IOzone [7], which is a popular open source micro benchmarking tool. It is to be noted that these tools are both a very good case study for ‘real-use’ as well. RL Benchmark Sqlite behaves as anyapplication that is database-heavy would behave. IOzone uses the direct intensively just like any application would, if it was reading or writing files to the persistant storage. All other applications which run in memory and use the CPU, graphics, GPS or other device drivers are irrelevant for our storage media tests and the presence of encrypted will not affect their performance.IOzone is a benchmark tool [7]. 
The benchmark generates and measures a variety of and has been widely used in research work for benchmarking various on different platforms. The benchmark tests performance for the generic , such as Read, write, re-read, re-write, read backwards, read strided, fread, fwrite, random read, pread ,mmap, aio read, aio write.IOzone has been ported to many platforms and runs under various operating systems. Here in our paper, we use ARM-Linux version (Android compatible) of latest IOzone available and focus on the encryption overhead. The cache effect is eliminated by cold rebooting the device for each run of IOzone and RL Benchmark Sqlite. The device is fully charged and connected to external USB power while in experiments. We collect the data and plot the average results of the 5 runs in the figures in all the following experiments.A.ThroughputPerformance of EncFS In this section, we present the IOzone performance results for random read and write operations on userdata partition. The benchmark is run for different and for each , with different record lengths. The maximumTable IIISQLITE PERFORMANCE ON GOOGLE NEXUS Sis selected as 4MB due to the observation that 95% of the user data files are smaller than 4MB on a typical Android system.Fig 3 compares the throughput for four typical operations, namely read, random read, write and random write. The IOzone experiments are run on the original ext4 and EncFS with different AES key lengths. Fig 3 shows for read operation, EncFS performs the same with original ext4. However, for random read, write, random write, EncFS only gives 3%, 5%, 4% of the original throughput respectively. Our analysis shows the encryption/decryption contributes the overhead and is the expected trade-off between security and performance. The buffered read in EncFS makes the read operation only incur marginal overhead. However, for random read, the need for the data blocks alignment during decryption results in slower throughput. For different key length, the 256-bits key only incurs additional 10% overhead comparing to 128-bits key for better security. In particular, AES-256 runs 12866KB/s,8915KB/s, 9804KB/s at peak for random read,write and random write respectively while AES-128 runs 14378KB/s, 9808KB/s, 10922KB/s. The performance loss of a longer key length trading better security properties is only marginal to the performance loss of the encryption scheme. Optimizations can compensate such key-length overhead as illustrated in Section V-D. Based on this observation, AES-256 is recommended and used as default in the following subsection unless otherwise mentioned explicitly.Similarly, sdcard partition gives the identical pattern with slightly different value. Due to the fact that the sdcard partition shares the same underlying physical MMC device with userdata partition as listed in Table II, our experiment results demonstrates the original vfat performs 16% faster than ext4 for read and random read operation while ext4 outperforms vfat 80% and 5% for write and random write operations respectively. However, comparing different is out of our focus in this paper. We observed different throughput values and overhead patterns on other devices such as Nexus One, HTC Desire and Dell Streak which use a removable sdcard as separate physical medium to internal NAND device. Both AES-128 and AES-256 throughput on sdcard are statistically identical to the ones on userdata partition given a 95% confidence interval. Such results show that the scheme of encryption in EncFS(e.g. 
internal data block size, key length) and its FUSE IO primitives are the bottleneck of the performance regardless of the underlying . We suggest corresponding optimizations in Section V-D.In addition to the basic I/O operations, we look at the read operation in detail under different record size before and after encryption. In particular, we plot the 3D surface view and contour view. In the 3D surface graph, the x-axis is the record size, the y-axis is the throughput in Kilobytes per second, and the z-axis is the . The contour view presents the distribution of the throughput across different record sizes and . In a sense, this is a top-view of the 3D surface graph. Figure 4 and 5 show the throughput when IOzone read partial of the the beginning. Figure 4 shows the default ext4 in Android 2.3 favors bigger record size and for better throughput. The performance peak centers in the top-right corner in the contour view of the 3-D graph. However, after placing EncFS, the performance spike shifts to the diagonal where the record size equals to . This is an interesting yet expected result because of the internal alignment of the in decryption.To better understand the performance of our encryption under Android’s SQLite IO access pattern, we present the database transactions benchmark in the next subsection, which is more related to the users’ experiences.C.SQLite Performance BenchmarkingIn addition to the IOzone micro benchmark results in last subsection, we measure the time for various typical database transactions using the RL Benchmark SQLite Performance Application in the Android market [11]. Table III groups the read and write operations and lists the results in detail.We consider that random read and write is a fair representation of database I/O operations in our scenario. This is due to the fact that for SQLite, the database of one or more pages. All reads from and writes to the database at a page boundary and all reads/writes are an integer number of pages in size. Since the exact page is managed by the database engine, only observe random I/O operations.After incorporating the encryption , the database-transactions-intensive apps slows down from 81.68 seconds to 128.66 seconds for the list of operations as described in the Table III. The read operations reflected by select database transactions shows the consistent results with IOzone result: the EncFS buffers help the performance. However, any write operations resulting from insert, update, or drop database transactions will incur 3% to 401% overhead. The overall overhead is 58%. This is the trade-off between security and performance.中文翻译实施和优化android上的加密文件系统王朝晖,拉胡尔Murmuria,安吉罗斯Stavrou计算机科学系乔治·梅森大学费尔法克斯,VA 22030,USA,,摘要:比来激增的智能手持设备,包孕智能手机和平板电脑的普及,已经引起了在庇护个人身份信息(PII),新的挑战。
基于深度信念网络的苹果霉心病病害程度无损检测周兆永1,2,何东健1,*,张海辉1,雷 雨1,苏 东1,陈克涛1(1.西北农林科技大学机械与电子工程学院,陕西杨凌 712100;2.西北农林科技大学网络与教育技术中心,陕西杨凌 712100)摘 要:针对现有霉心病无损检测只能检测出有无病害,无法对病害程度进行判断的问题,研究并提出一种基于深度信念网络(deep belief net,DBN)的无监督检测模型。
该模型由多层限制玻尔兹曼机(restricted Boltzmann machine,RBM)网络和1层反向传播(back propagation,BP)神经网络组成,RBM网络实现最优特征向量映射,输出的特征向量由BP神经网络对霉心病病害程度分类。
对225 个苹果样本在波长200~1 025 nm获取其透射光谱后,根据腐烂面积占横截面比例将霉心病害程度分为健康、轻度、中度和重度4 种,分别用150 个和75 个样本作为训练集和测试集,以全光谱数据和基于连续投影算法提取的特征波长数据为输入构建病害程度判别模型,并比较DBN模型与偏最小二乘判别分析、BP神经网络和支持向量机模型的识别效果,实验结果表明,DBN模型病害判别准确率达到88.00%,具有较好的识别效果。
关键词:苹果霉心病;病害程度;透射光谱;深度信念网络(DBN);限制玻尔兹曼机(RBM)Non-Destructive Detection of Moldy Core in Apple Fruit Based on Deep Belief NetworkZHOU Zhaoyong1,2, HE Dongjian1,*, ZHANG Haihui1, LEI Yu1, SU Dong1, CHEN Ketao1(1. College of Mechanical and Electronic Engineering, Northwest A&F University, Yangling 712100, China;2. Network and Education Technology Center, Northwest A&F University, Yangling 712100, China)Abstract: Apple moldy core is a major disease affecting the internal quality of apple fruit. However, due to the lack of effective means to accurately detect moldy core in apple fruits, detection of moldy core in apples has become a major problem to be solved in the apple industry. To date, there have been no reports on the use of spectroscopy for distinguishing various degrees of moldy core decay in apple fruits. The objective of this study was to develop a non-destructive method for the detection of various degrees of moldy core decay in apple fruits using near infrared transmittance spectroscopy, successive projections algorithm (SPA), and multi-class classification algorithms partial least square-discriminant analysis (PLS-DA), back propagation artificial neural network (BP-ANN), support vector machine (SVM) and deep belief network (DBN). For developing a model to determine the degree of moldy core in apples, 225 samples were selected including a training set of 150 samples and a test set of 75 samples. The model consisted of several layers of restricted Boltzmann machine (RBM) network, which achieved eigenvector projection, and one layer of BP network, which allowed the classification of various degrees of moldy core based on the output eigenvector. The S d value was calculated by dividing the lesion area by the total cross-sectional area. It was proposed that S d = 0, 0 < S d ≤ 10%, 10% < S d < 30% and S d ≥ 30 indicated health, mild, moderate and severe degrees, respectively. The classification accuracy of the DBN model was 88.00%, suggesting good performance of the model, and it was compared with those of the BP-ANN and SVM models.Key words: moldy core in apples; degree of disease; transmittance spectroscopy; deep belief network (DBN); restricted Boltzmann machine (RBM)DOI:10.7506/spkx1002-6630-201714046中图分类号:S24 文献标志码:A 文章编号:1002-6630(2017)14-0297-07收稿日期:2016-09-12基金项目:国家高技术研究发展计划(863计划)项目(2013AA10230402);国家自然科学基金面上项目(61473235);陕西省重大农技推广服务试点项目(2016XXPT-05)作者简介:周兆永(1980—),男,工程师,博士研究生,主要从事智能化检测、模式识别研究。
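The model described in the abstract stacks restricted Boltzmann machines and tops them with a back-propagation classifier. A rough analogue can be put together with scikit-learn by chaining BernoulliRBM feature extractors into a logistic-regression head. This is only an approximation of a DBN (inputs must be scaled to [0, 1] for a Bernoulli RBM, the spectra here are synthetic random data, and the layer sizes and learning rates are invented), not the authors' network or data.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 225 transmittance spectra (200-1025 nm) and 4 decay classes
rng = np.random.default_rng(0)
X = rng.random((225, 826))                 # one value per wavelength (hypothetical resolution)
y = rng.integers(0, 4, size=225)           # 0=healthy, 1=mild, 2=moderate, 3=severe

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=150, stratify=y, random_state=0)

model = Pipeline([
    ("scale", MinMaxScaler()),                                   # RBM expects inputs in [0, 1]
    ("rbm1", BernoulliRBM(n_components=128, learning_rate=0.05,
                          n_iter=30, random_state=0)),           # first unsupervised layer
    ("rbm2", BernoulliRBM(n_components=64, learning_rate=0.05,
                          n_iter=30, random_state=0)),           # second unsupervised layer
    ("clf", LogisticRegression(max_iter=2000)),                  # supervised top layer
])
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```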
English-language remote sensing journals and GIS-related SCI (EI) journals

English-language remote sensing journals

[1]. REMOTE SENSING OF ENVIRONMENT
ISSN: 0034-4257
Edition: SCI-CDE
Frequency: Monthly
Publisher: ELSEVIER SCIENCE INC, 360 PARK AVE SOUTH, NEW YORK, NY, 10010-1710
Publisher website: http://www.elsevier.nl/
Journal website: http://www.elsevier.nl/inca/publications/store/5/0/5/7/3/3/index.htt
Impact factor: 1.697 (2001), 1.992 (2002)
Subject categories: REMOTE SENSING; ENVIRONMENTAL SCIENCES; IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY

[2]. International Journal of Remote Sensing
A professional remote sensing journal published in the United Kingdom by Taylor & Francis. I have not read it much, so I cannot say a great deal about it; you can look up its tables of contents and abstracts at / , and full texts can be photocopied in the departmental reading room. The registered Taylor & Francis email address is cumtlp#.

[3]. PHOTOGRAMMETRIC ENGINEERING AND REMOTE SENSING
ISSN: 0099-1112
Edition: SCI-CDE
Frequency: Monthly
Publisher: AMER SOC PHOTOGRAMMETRY, 5410 GROSVENOR LANE, SUITE 210, BETHESDA, MD, 20814-2160
Publisher website: /
Journal website: /publications.html
Impact factor: 0.841 (2001); 1.176 (2002)
Subject categories: GEOGRAPHY, PHYSICAL; GEOSCIENCES, MULTIDISCIPLINARY; REMOTE SENSING; PATHOLOGY
This is the journal of the American Society for Photogrammetry and Remote Sensing (Photogrammetric Engineering and Remote Sensing). Abstracts can be viewed online, and full texts can be photocopied in the departmental library. Nanjing University holds a nearly complete run covering the past 50 years, and you will find that many of the references in Acta Geodaetica et Cartographica Sinica, the Journal of Remote Sensing, the Journal of the Wuhan Technical University of Surveying and Mapping, and even in foreign papers come from this journal. It ranked third among SCI geoscience journals in 2000, which is no small achievement. Nanjing University alumnus Peng Gong, an associate professor at the University of California, Berkeley, has won its best paper of the year award three times, which established his standing in the international remote sensing community.
Vol. 14, No. 3, Journal of Netshape Forming Engineering
LIU Jun-jian1, HU Hao-sheng2a,2b, ZHOU Lei1, LI Wei1,2a
(1. SAIC GM Wuling Automobile Co., Ltd., Liuzhou 545007, China; 2. Wuhan University of Technology: a. Hubei Key Laboratory of Advanced Technology for Automotive Components; b. Hubei Collaborative Innovation Center for Automotive Components Technology, Wuhan 430070, China)
Abstract: Objective: To study the stiffness of a bumper fascia made of a heterogeneous straw fiber composite.
Methods: Testing and simulation were combined. Micro-foamed straw fiber/polypropylene (SF/PP) composite specimens were prepared by blending extrusion and chemical foaming injection molding; the mechanical properties and microstructure of the heterogeneous specimens were measured experimentally; a structural analysis model of the heterogeneous micro-foamed SF/PP composite was established by finite element analysis; and the stiffness of the heterogeneous bumper fascia was analyzed.
Results: The microstructure of the micro-foamed SF/PP composite shows a clear "sandwich" character: the straw fibers are concentrated in the outer skin layers and the foam cells in the core layer.
Approximating the heterogeneous straw fiber composite bumper fascia as a three-layer composite plate, the stiffness predicted by the model differs from the experimental measurement by about 6%.
Conclusion: Injection-molded automotive parts made of heterogeneous straw fiber composites can be approximated as a three-layer composite plate for numerical analysis, which simplifies the analysis; the results can be used to guide product performance evaluation and to improve product development efficiency.
Keywords: micro-foaming; plant fiber composites; heterogeneous; mechanical properties
DOI: 10.3969/j.issn.1674-6457.2022.03.014
Chinese Library Classification number: U465.4; Document code: A; Article ID: 1674-6457(2022)03-0107-09

Stiffness of Heterogeneous Bumper Fascia Made by Straw Fiber Composites
LIU Jun-jian1, HU Hao-sheng2a,2b, ZHOU Lei1, LI Wei1,2a
(1. SAIC GM Wuling Automobile Co., Ltd., Liuzhou 545007, China; 2. a. Hubei Key Laboratory of Advanced Technology for Automotive Components; b. Hubei Collaborative Innovation Center for Automotive Components Technology, Wuhan University of Technology, Wuhan 430070, China)
ABSTRACT: The work aims to research the stiffness of a bumper fascia made of micro-foamed straw fiber/polypropylene (SF/PP) composites. Micro-foamed SF/PP composites were prepared by blending extrusion and chemical foaming injection processes. The mechanical properties and microstructure of heterogeneous samples were tested by experiment. A structural analysis model of the heterogeneous micro-foamed straw fiber/PP composite was established through finite element analysis, and the stiffness performance of the composite bumper fascia was analyzed. The results showed that the microstructure of the micro-foamed SF/PP composites had obvious "sandwich" structure characteristics: straw fibers were mainly distributed in the outer skin layer and bubbles were mainly distributed in the core layer. The difference between the analysis results and the experimental test was about 6%. Automobile injection parts of heterogeneous straw fiber composites can be approximated as a three-layer composite plate structure for numerical analysis, which simplifies the analysis process. The research results can be used to guide product performance evaluation and improve product development efficiency.
KEY WORDS: micro foamed; plant fiber composite; heterogeneous; mechanical property
Received: 2021-09-02
Funding: National Natural Science Foundation of China (51605356); Fundamental Research Funds for the Central Universities (WUT 2019Ⅲ116CG)
Author biography: LIU Jun-jian (1982—), male, master's degree, senior engineer; main research interest: advanced manufacturing of automotive components.
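The three-layer (skin/core/skin) idealization justified above lends itself to a classical sandwich-section stiffness estimate. The sketch below computes the equivalent flexural rigidity per unit width of a symmetric skin-core-skin cross section; this is a generic sandwich-plate formula offered as an illustration of the simplification, and the moduli and thicknesses are assumed values, not measurements from the paper.

```python
def flexural_rigidity_per_width(e_skin: float, e_core: float,
                                t_skin: float, t_core: float) -> float:
    """Equivalent bending stiffness D (N*mm per mm width) of a symmetric
    skin/core/skin section, ignoring Poisson coupling between layers."""
    h = t_core + 2.0 * t_skin  # total thickness of the three-layer section
    return e_skin * (h**3 - t_core**3) / 12.0 + e_core * t_core**3 / 12.0

# Illustrative values: solid SF/PP skins (MPa) and a foamed core with reduced modulus.
d_eq = flexural_rigidity_per_width(e_skin=2500.0, e_core=1200.0,
                                   t_skin=0.5, t_core=1.5)  # MPa and mm
print(f"equivalent flexural rigidity: {d_eq:.1f} N*mm per mm width")
```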
Semiactive Seismic Response Control of Buildingswith Podium StructureY.L.Xu,M.ASCE1;J.Chen2;C.L.Ng3;and W.L.Qu4Abstract:A multistory building with a large podium structure under earthquake excitation may suffer from whipping effect due to the sudden change of building lateral stiffness and mass at the top of the podium structure.An experimental investigation was carried out in this paper to explore the possibility of using magnetorheological͑MR͒dampers to connect the podium structure to the multistory building to prevent the whipping effect.The multistory building was constructed as a slender12-story building model,whereas the podium structure was built as a relatively stiff three-story building model.A MR damper together with a current controller was used to link the three-story building to the12-story building.The dynamic characteristics of the two buildings without any connection and with a rigid connection werefirst identified.The two building models without any connection and with the rigid connection were then tested under the scaled El Centro1940north–south ground motion.Finally,the two building models connected by the MR damper manipulated by a multilevel logic control algorithm were tested under the specified ground motion.The experimental results show that the MR damper with the multilevel logic control algorithm could significantly mitigate the seismic whipping effect and reduce the seismic responses of both the multistory building and podium structure.DOI:10.1061/͑ASCE͒0733-9445͑2005͒131:6͑890͒CE Database subject headings:Buildings,multistory;Structural models;Seismic response;Vibration control;Shake table tests.IntroductionOwing to increasing population,limited availability of land,and preference for centralized services,modern cities often need many multistory and tall buildings.Some multistory and tall buildings are built with a large podium structure to achieve a large open space for parking,shops,restaurants,and hotel lobbies at ground level.In most cases,the building and the podium struc-ture are built on either a common box foundation or a common raft foundation,and there are no settlement joints or anti-earthquake joints between the building and the podium structure. 
The presence of the podium structure,whose lateral stiffness may be much larger than that of the building,leads to a sudden large change in the lateral stiffness of the building at the top of the podium structure.Consequently,the seismic response of the upper part of the building may be significantly amplified,leading to the so-called whipping effect.Such a problem cannot be easily solved through the conventional structural modification.Recently,Qu and Xu͑2001͒carried out a theoretical study to explore the possibility of using magnetorheological͑MR͒damp-ers to connect a podium structure to a multistory building to pre-vent the whipping effect.A20-story building with afive-story podium structure subjected to the El Centro north–south͑N–S͒1940ground motion was investigated.They concluded that MR dampers could significantly mitigate the seismic whipping effect on the building and reduce the seismic responses of both the building and the podium structure if the control algorithm was selected properly.To confirm the aforementioned theoreticalfinding,this paper presents an experimental study of using a MR damper to connect a podium structure model to a building model,both of which are mounted on a seismic simulator.The design of the two building models and the features of the MR damper with a current control-ler arefirst introduced.The measured dynamic characteristics of the two building models without any connection and with a rigid connection are then presented.Followed are the seismic responses of the two building models without any connection and with the rigid connection under the scaled El Centro1940N–S ground motion generated by the seismic simulator.Finally,the seismic responses of the two building models connected by the MR damper manipulated by a multilevel logic control algorithm are compared with those of the two building models without any connection and with the rigid connection.Experimental ArrangementBuilding ModelsThe multistory building and the podium structure were designed and constructed as a slender12-story shear building model and a relatively stiff three-story shear building model,respectively,in1Chair Professor,Dept.of Civil and Structural Engineering,The Hong Kong Polytechnic Univ.,Hung Hom,Kowloon,Hong Kong.2Professor,College of Automation,Wuhan Univ.of Technology, Wuhan430070,China.3PhD Candidate,Dept.of Civil and Structural Engineering,The Hong Kong Polytechnic Univ.,Hung Hom,Kowloon,Hong Kong.4Professor,College of Civil Engineering and Architecture,Wuhan Univ.of Technology,Wuhan430070,China.Note.Associate Editor:Satish Nagarajaiah.Discussion open until November1,2005.Separate discussions must be submitted for individual papers.To extend the closing date by one month,a written request mustbefiled with the ASCE Managing Editor.The manuscript for this paper was submitted for review and possible publication on April16,2003; approved on November12,2004.This paper is part of the Journal of Structural Engineering,V ol.131,No.6,June1,2005.©ASCE,ISSN 0733-9445/2005/6-890–899/$25.00.890/JOURNAL OF STRUCTURAL ENGINEERING©ASCE/JUNE2005order to facilitate the system identification ͑see Fig.1͒.The steel frame of the 12-story building consisted of 12rigid plates of 600mm ϫ400mm ϫ16mm and four equally sized rectangular columns of the cross section of 6mm ϫ50mm.The plates and columns were properly welded to form rigid joints.The total height of the 12-story building was measured at 2,400mm.The steel frame of the three-story building was made of three rigid plates of 1,500mm ϫ710mm 
ϫ32mm and 8equally sized rect-angular columns of the cross section of 6mm ϫ50mm.The total height of the three-story building was measured at 600mm.There was a rectangular opening of 700mm ϫ500mm made at the cen-ter of each rigid plate of the three-story building so that the 12-story building could be arranged inside the middle of the three-story building with the 50mm gap between the four sides of the two buildings.The two buildings were then welded on a steel base plate of 25mm thickness,which was in turn bolted firmly on the seismic simulator using a total of 14bolts of high tensile strength.All the columns were made of high strength steel of 435MPa yield stress and 200GPa modulus of elasticity.The 6mm ϫ50mm cross section of the column was arranged in such a way that the first natural frequency of each building was much lower in the x direction than in the y direction.This arrangement allowed the building to move in the x direction only and thus the two buildings were effectively reduced to planar frames in the x –z plane.Each steel floor could be regarded as a rigid plate in horizontal,leading to a shearing type of deformation.Rigid ConnectionsTo create a rigid connection between the 12-story building and the three-story building,high strength steel bars were used to link the two buildings along the middle line of the two buildings in the x direction at the first,second,and third floors,respectively.Eachbar was fixed on the two building floors by a total of four high strength bolts.The assumption of rigid connection was verified by examining the measured response of each floor of each building to see if there was any relative movement between the two buildings.Magnetorheological Damper and ControllerOne MR damper ͑RD-1097-01X ͒manufactured by the Lord Cor-poration ͑Cary,NC ͒,was used to link the two buildings at their third floors along the middle line of the two buildings in the x direction ͑see Fig.1͒.The maximum allowable input current to the damper is 0.5and 1.0A,respectively,for continuous and intermittent application.More than 150N force can be produced by the damper at 1.0A,whereas an inherent damper force at 0.0A is less than 9N.The MR damper was operated under the maximum allowable temperature of 70°C and for no more than 30s continuous application at 0.6A in the experiment.To achieve the best performance of the MR damper,the Rheonetic Wonder Box device controller kit ͑RD-3002-03͒designed by the Lord Corporation was used together with the MR damper.The controller kit ͑the current controller afterwards ͒provided a closed loop current control.The output current ͑i.e.,the input current to the MR damper ͒was almost linearly proportional to the input voltage ͑i.e.,the command voltage from the dSPACE ͒to the cur-rent controller.A 0–5V input signal can be switched up to 1kHz.Seismic Simulator and Ground MotionThe experiments were carried out on the seismic simulator of The Hong Kong Polytechnic Univ.The seismic simulator was manu-factured by the MTS Corporation ͑Eden Prairie,MN ͒with the table size of 3.0m ϫ3.0m in plane ͑MTS 469DU ͒.The maxi-mum acceleration of the seismic simulator was measured at ±1g with the maximum proof specimen mass of 10t,where g is the acceleration due to gravity in meters/second 2.The simulator could be operated within the frequency range from 0.1to 50Hz and the maximum stroke range of ±100mm.The El Centro 1940earthquake ground acceleration ͑N–S component ͒with a time scale of 1:3and a peak acceleration of 0.13g was used as a major input excitation.Figs.2͑a 
and b ͒display,respectively,the time–history and the power spectrum of the scaled El Centro ground motion measured from the table surface of the simulator.The seismic simulator was operated under a closed-up feedback con-trol.The acceleration time histories of the simulator ͑the input ground motion to the base of the buildings ͒directly measured from the table surface of the simulator were carefully monitored to ensure that the acceleration time–histories used in different test cases were repeatable.Instrumentation and MeasurementTo identify dynamic characteristics of the buildings and to form a semiactive control system for the two buildings linked by the MR damper,a total of 12accelerometers ͑B&K 4370,Demark ͒were installed on the two buildings in the x direction to measure their responses,as shown in Fig.1.The accelerometers were of piezo-electric type having a measured frequency range from 0.2to 100Hz.Each of the first,second,third,fourth,sixth,eighth,tenth,and twelfth floors of the 12-story building had one accelerometer placed at its floor center whereas each floor of the three-story building had one accelerometer mounted at thefloorFig. 1.Configuration and instrumentation of building-podium structure system:͑a ͒plan view;͑b ͒section A-A;and ͑c ͒section B-BJOURNAL OF STRUCTURAL ENGINEERING ©ASCE /JUNE 2005/891center.To monitor the motion of the simulator in the x direction,an accelerometer was mounted on the table surface of the simu-lator to directly measure the ground acceleration at the base of the buildings.A force transducer was placed in series with the MR damper to measure the control force applied to the buildings.The signal from each accelerometer and the force transducer was transferred to the signal conditioner ͑B&K 2635,Demark ͒by which the signal could be amplified and filtered properly.The signal from the signal conditioner was then passed to a data acquisition/processing system ͑INV300E,Demark ͒where the analog signal was sampled at 1,000Hz and converted to the digi-tal data.The digital data were further analyzed to obtain dynamic characteristics of the buildings.The exciting force was applied to the top floor of either the 12-story building or the three-story building for the case of no connection using an instrumented hammer of plastic head randomly impacting the building for ap-proximately 60s.For the rigid connection of the two buildings,the random force was applied to the fifth floor of the 12-story building only.A block diagram for the measurement of dynamic characteristics of the two buildings is shown in Fig.3.To obtain the seismic responses of the two buildings without any connection and with the rigid connection,the signal from the signal conditioner was passed to the dSPACE real-time simulator system ͑made in Germany ͒.The dSPACE real-time simulator sys-tem comprised a DS1005PowerPC controller ͑PPC ͒processorboard,a DS2003multichannel analog to digital ͑A/D ͒board,a DS2102high resolution digital to analog ͑D/A ͒board,and a DS4003digital input–output ͑I/O ͒board.The DS1005PPC pro-cessor board embraced a Motorola PowerPC 750microprocessor running at 480MHz as its central processing unit ͑CPU ͒and resided in a triple-wide Industry Standard Architecture ͑ISA ͒slot in a host computer.It provided the basis for operating modular hardware of dSPACE,gave the real-time calculation power to the system,and rendered the interface to the I/O boards and host computer.The DS2003A/D board provided 32channels with the resolution and the sampling time up to 16bits and around 
5.7–72.5s,performing instantaneous A/D conversion of the measured voltage signals within the range of either ±5or ±10V.The control of the dSPACE CPU and the access to its memory were executed by the main program ControlDesk of the dSPACE,which offered an automatic implementation of the MATLAB /Simulink block program on the host computer via real time inter-face ͑RTI ͒and provided a real time interactive data display and visualization.The RTI possessed compiling,linking,download-ing,and configuring capacities.In this type of experiment,the analog signal of 30s duration,transferred from the signal condi-tioner to the dSPACE real-time simulator system,was sampled at 1,000Hz by the DS2003multichannel A/D board to obtain the digital data.The digital data were then further analyzed using the PPC controller together with the MATLAB/Simulink program on the host computer via RTI.A block diagram for the measurement of seismic responses of the two buildings without control is dis-played in Fig.4.For the semiactive control of the two buildings linked by the MR damper,the relative structural responses of the two buildings at the third floors were measured and transferred to the DS2003A/D board to have digital signals.The digital signals were then analyzed based on the multilevel logic control algorithm and sent to the DS2102D/A board in the dSPACE to apply the control voltage signal to the MR damper via the current controller so as to form a closed loop semiactive control system.The logic control algorithm was implemented by the PPC controller via the MATLAB/Simulink program.The DS2102D/A board consisted of six parallel channels at 16-bit resolution.The setting time is 1.6s and the programmable output voltage was ±5or ±10V or 0–10V.A block diagram for the measurement of seismic re-sponses of the two buildings with semiactive control is depicted in Fig.5.Dynamic Characteristics of BuildingsThe recorded acceleration responses of the two buildings at dif-ferent floors due to random hammering forces were processedtoFig.2.Time history and power spectrum of scaled El Centro north–south ground motion:͑a ͒time history;and ͑b ͒powerspectrumFig. 3.Block diagram of dynamic characteristics measurementsystemFig.4.Block diagram of dynamic response measurement system892/JOURNAL OF STRUCTURAL ENGINEERING ©ASCE /JUNE 2005obtain auto spectra,cross spectra,and phase information,by which the natural frequencies,mode shapes,and modal damping ratios of the two buildings without any connection and with the rigid connection were identified.In the spectral analysis,each acceleration response time–history was divided into128segments with an overlapped length.Each segment contained4,096points. 
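The modal identification just described is a segment-averaged (Welch-type) spectral estimate: the record is split into overlapped 4,096-point segments, windowed (the Hamming window is mentioned in the next sentence), and the segment spectra are averaged. Below is a minimal sketch with SciPy, assuming the 1,000 Hz sampling rate stated for the data acquisition system and using a synthetic two-mode signal in place of a measured floor acceleration; the 50% overlap is an assumption of this example, since only the segment count is reported.

```python
import numpy as np
from scipy.signal import welch

fs = 1000.0                                  # sampling rate of the acquisition system, Hz
t = np.arange(0, 60.0, 1.0 / fs)             # roughly 60 s of random-hammer response
# Synthetic stand-in for a measured acceleration: first two 12-story modes plus noise.
acc = (np.sin(2 * np.pi * 3.91 * t) + 0.3 * np.sin(2 * np.pi * 11.24 * t)
       + 0.1 * np.random.randn(t.size))

# Segment-averaged auto spectrum: 4,096-point Hamming-windowed segments, 50% overlap.
f, pxx = welch(acc, fs=fs, window="hamming", nperseg=4096, noverlap=2048)
print("dominant frequency: %.2f Hz" % f[np.argmax(pxx)])
```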
The piecewise smooth method and the hamming window were applied in the spectral analysis to reduce random error and leakage.The identified natural frequencies and modal damping ratios of the three-story building only,the12-story building only,and the rigidly connected two buildings are listed in Table1.It is seen that for the three-story building only,thefirst three natural fre-quencies are9.08,27.86,and40.08Hz,respectively.Thefirst three natural frequencies of the12-story building only are3.91, 11.24,and18.57Hz,respectively.Relatively large differences in thefirst three natural frequencies between the two buildings indi-cate relatively large motions between the two buildings at thefirst three stories when they are subjected to ground motion.Corre-spondingly,the motion of the damper linking the two buildings is larger,and it is thus expected that the damper can function well. Furthermore,the relatively large differences in thefirst three natu-ral frequencies between the two buildings provide a basis for application of the multilevel logic control algorithm proposed in this study.After the two buildings are rigidly connected together, thefirst three natural frequencies become 4.39,10.23,and 15.59Hz,respectively.Thefirst natural frequency of the rigidly connected two buildings is greater than thefirst natural frequency of the isolated12-story building while the second and third natu-ral frequencies are smaller than those of the isolated12-story building.This is because thefirst natural frequency of the three-story building is higher than thefirst natural frequency of the 12-story building but it is smaller than the second natural fre-quency of the12-story building.Thefirst three modal damping ratios are0.93,0.39,and1.02for the three-story building only, 1.29,1.02,and0.46for the12-story building only,and1.67,0.55, and0.51for the rigidly connected two buildings.Clearly,thefirst three natural frequencies and modal damping ratios of the12-story building with the three-story building connected are differ-ent from those of the12-story building only.Figs.6͑a–c͒depict thefirst three modal shapes of the three-story building only,the 12-story building only,and the rigidly connected two buildings, respectively.Thefirst mode shape of the rigidly connected two buildings is significantly different from that of the12-story build-ing only,which will be seen that this change actually causes the whipping effect when the rigidly connected two buildings are subjected to ground motion.Seismic Response of Buildings without ControlTo obtain the absolute peak and root mean square͑RMS͒accel-eration responses and the relative peak and RMS displacement responses of the two buildings with respect to the ground,several computer programs were designed using the MATLAB/Simulink as a platform.These programs were then converted to the PPC controller of the dSPACE via RTI.To obtain the displacement response of the building,the real-time integration of the accelera-tion response was performed and then passed through a high-pass filter with the lowest frequency set as0.64Hz.To justify this approach,a linear variable differential transducer displacementFig.5.Block diagram of real-time control system Table1.Measured Natural Frequencies and Modal Damping RatiosMode numberThree-story building12-story buildingTwo buildings rigidlyconnected DampingratioFrequency͑Hz͒DampingratioFrequency͑Hz͒DampingratioFrequency͑Hz͒1st0.939.08 1.29 3.91 1.67 4.39 2nd0.3927.86 1.0211.240.5510.23 3rd 1.0240.080.4618.570.5115.59 4th——0.5325.910.5524.37 
5th——0.3732.750.3729.73 6th——0.4339.590.3834.11 7th——0.3845.940.5342.40 8th——0.3451.320.4144.35 9th——0.3456.210.3550.68 10th——0.3760.120.3657.02 11th——0.4563.540.4661.89 12th——0.9465.010.8364.81JOURNAL OF STRUCTURAL ENGINEERING©ASCE/JUNE2005/893transducer was used to record the dynamic displacement of the simulator in the x direction.The directly measured displacement was then compared with that obtained from the acceleration time history through double integration.The comparison result was quite satisfactory and thus,in the late analysis,the displacement response was obtained from the double integration of the accel-eration response with the filter.Subsequently,the relative dy-namic displacement of each floor to the simulator was acquired by subtracting the simulator displacement from the absolute floor displacement.Figs.7͑a and b ͒show the absolute peak and RMS acceleration responses,respectively,of the 12-story building with and without the three-story building being rigidly connected.It is seen that both the absolute peak and RMS acceleration responses of the 12-story building have a moderate increase in the first three floors after the two buildings were rigidly connected to each other,but the increase becomes larger and larger with increasing height.The absolute peak and RMS acceleration responses of the connected 12-story building at the top floor increase more than double com-pared with those of the 12-story building without any connection.Figs.8͑a and b ͒display the relative peak and RMS displacementresponses,respectively,of the 12-story building with and without the three-story building being pared with the separated 12-story building,the relative peak and RMS displace-ment responses of the connected 12-story building at the first three floors were reduced.However,both the peak and RMS dis-placement responses of the connected 12-story building increase rapidly and become increasingly larger than those of the separated 12-story building.At the top of the building,about 46and 60%increases are observed for the peak and RMS displacement re-sponses,respectively.It is interesting to see that the relative dis-placement response curve of either the separated or the connected 12-story building is similar to the corresponding first mode shape of the building:the change in the mode shape does indicate the occurrence of the whipping effect.The results presented in Figs.8͑a and b ͒also indicate that the story drift between the third and fourth floors of the connected 12-story building is significantly increased compared with that of the separated 12-story building.The shear force in the fourth floor of the separated 12-story build-ing is 1,126N,whereas the shear force in the fourth floor of the connected 12-story building is 1,966N.The significant increases of the shear force in the fourth floor and the acceleration and displacement responses at the top floor of the 12-story building are regarded as the whipping effect.Figs.9͑a and b ͒show the relative RMS displacement andtheparison of first three modal shapes of buildings:͑a ͒three-story building;͑b ͒12-story building;and ͑c ͒rigidly connected twobuildingsparison of acceleration responses of 12-story building:͑a ͒peak acceleration response;and ͑b ͒root mean square acceleration response894/JOURNAL OF STRUCTURAL ENGINEERING ©ASCE /JUNE 2005absolute RMS acceleration responses,respectively,of the three-story building without and with rigidly connected to the 12-story building.The relative displacement responses of the connected three-story 
building are almost the same as those of the separated three-story building,but the absolute acceleration responses of the connected three-story building are much less than those of the separated three-story building.Seismic Response of Buildings with Semiactive ControlMultilevel Logic Control AlgorithmSemiactive control system compromises between passive control system and active control system.It appears to be particularly promising in offering the reliability of passive control system,yet maintaining the versatility and adaptability of active control sys-tem with extremely low power requirement that is always a criti-cal design consideration under an earthquake.The semiactive control system using MR dampers appears to be the most attrac-tive and effective control system.The essential characteristic of MR fluid is its ability to reversibly change from a free-flowing,linear viscous fluid to a semisolid with controllable yield strengthin milliseconds when exposed to a magnetic field ͑Spencer et al.1997͒.Nevertheless,owing to the nonlinear characteristic of MR fluid,an appropriate control strategy plays a key role in the ulti-mate capability of a MR damper that subsequently leads to an-other challenging task to the civil engineering profession.A variety of control algorithms adopting MR dampers as ac-tuators for structural control have been proposed.One of the con-trol algorithms suggested by Dyke and Spencer ͑1996͒was a clipped-optimal force control algorithm using the structural accel-eration response as a feedback.Instantaneous optimal active con-trol algorithm together with optimal linear passive control strat-egy was studied by Ribakov and Gluck ͑1999͒.A semiactive suboptimal displacement control algorithm was proposed by Xu et al.͑2000͒for MR dampers implemented in a building including the stiffness of brace system supporting the MR damper.In con-sideration that the motion of the MR damper in this application was related to the relative motion of the two buildings at the third floor,a multilevel logic control algorithm was implemented in the experiment ͑Chen et al.2002͒.The multilevel logic control algorithm possesses the advantage of quick and simple control decision and avoids the requirement of the accurate mathematical models for the control system and the MR damper.Only the relative velocity and displacement be-tween the two buildings at the third floor,where the MR damper was installed,are considered as feedbacks for thedeterminationparison of displacement responses of 12-story building:͑a ͒peak displacement response;and ͑b ͒root mean square displacementresponseparison of root mean square responses of three-story building:͑a ͒root mean square displacement response;and ͑b ͒root mean square acceleration responseJOURNAL OF STRUCTURAL ENGINEERING ©ASCE /JUNE 2005/895of control force by complying with the logic rules of PanBoolean algebra.The core concept of the logic control is to switch the control damper force to a corresponding prespecified actuated force region based on different states of feedback responses by either increase or decrease of applied current.Obviously,the magnitude of control force should be a function of the level of deviation of structure from its static equilibrium,which is re-flected by the relative velocity and displacement of the two build-ings at the thirdfloor.The larger the magnitude of deviation,the greater the damper force that should be applied.Let us define the relativefloor displacement and velocity as x r and x˙r.The pre-specified static 
equilibrium of the two buildings corresponding to x r and x˙r are denoted e0and c0,where both e0and c0are a real number with a value greater than or equal to zero.There are three possible states concerning the relativefloor displacement x r.They are x11,x12,and x13,corresponding to x rϽ−e0,͉x r͉ഛe0,and x rϾe0. Similarly,there are three different states of relativefloor velocity x˙r,denoted by x21,x22,and x23corresponding to x˙rϽ−c0,͉x˙r͉ഛc0,and x˙rϾc0,respectively.Therefore,there are nine combinations for the states of x r and x˙r,and different combinations of states disclose the level of deviation of the buildings.When both x r and x˙r are within the range of e0and c0,no additional current͑damper force͒will be applied.Once either one of the responses,x r or x˙r, falls out the prespecified equilibrium range while the other still falls inside that range,the control force will be augmented by moderately increasing the applied current.In the situation where both x r and x˙r have already deviated from the prespecified static equilibrium region but they have the opposite sign,the damper force will remain the same magnitude as the previous one.On the contrary,when both x r and x˙r depart from the equilibrium region and they have the same sign,the greatest control force will be applied by supplying a higher current.As a result,there are a total of four states of control force region y11,y12,y13,and y14,which correlate with high current͑HC͒supply,low current͑LC͒supply, constant current͑CC͒supply,and no current͑NC͒supply.Table2 and Fig.10are provided to illustrate the division of various re-sponse combinations and the corresponding control states.Ac-cording to the PanBoolean algebra,the following relationships exist:y11=x11·x21+x13·x23͑1͒y13=x11·x23+x13·x21͑2͒y14=x12·x22͑3͒y12=1−͑y11+y13+y14͒͑4͒Seismic Responses of Buildings with Semiactive ControlThe proposed semiactive control was programmedfirst by the MATLAB/Simulink and then converted to the executive program in the dSPACE via RTI,which consisted of three major parts:͑1͒converting the measured voltage signals of the two building re-sponses at the thirdfloor to the digital responses and calculating the relative velocity and displacement as feedbacks;͑2͒symbol-izing the feedback responses into the Boolean values,according to Eqs.͑1͒–͑4͒,that are readable by the PPC controller for deci-sion making;and͑3͒sending a command voltage signal to the current controller to adjust the control force based on the multi-level logic control algorithm.In the semiactive control,the index of equilibrium state was selected as e0=0.3mm and c0 =2mm/s.The states of applied control current were chosen as 0.6A for HC,0.45A for MC,0.35A for LC,and0.0A for NC. 
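Below is a compact sketch of the decision rule described above, following the logic states of Eqs. (1)-(4): the relative displacement and velocity at the damper floor are compared with the dead-band thresholds e0 and c0 and the command current is switched accordingly. The function name and the treatment of the "constant current" state as "hold the previous command" are my reading of the extracted text (which also quotes an unused 0.45 A level), so the mapping from logic states to ampere values should be taken as an assumption.

```python
def multilevel_logic_current(x_r: float, v_r: float, i_prev: float = 0.0,
                             e0: float = 0.3e-3, c0: float = 2.0e-3) -> float:
    """Command current (A) for the MR damper from the relative displacement x_r (m)
    and relative velocity v_r (m/s) of the two buildings at the damper floor."""
    out_d = abs(x_r) > e0        # displacement outside its dead band
    out_v = abs(v_r) > c0        # velocity outside its dead band
    if out_d and out_v:
        # y11: same sign, largest deviation -> high current;
        # y13: opposite sign -> keep the previous (constant) current.
        return 0.60 if x_r * v_r > 0 else i_prev
    if not out_d and not out_v:
        return 0.0               # y14: inside both dead bands -> no current
    return 0.35                  # y12: only one response outside -> low current

# Drifting away from equilibrium (same sign) -> 0.6 A; drifting back -> current held.
print(multilevel_logic_current(0.5e-3, 4.0e-3))                 # 0.6
print(multilevel_logic_current(0.5e-3, -4.0e-3, i_prev=0.35))   # 0.35
```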
The passive-off mode in which the applied current is zero was also investigated in the study.The MR damper in the passive-off mode provided information on the minimum effectiveness of the damper if the external power was cut off.The passive-on mode with the maximum constant current and the passive control mode with an optimal level of constant current were not considered in this study because this study is mainly a feasibility study of using MR dampers to connect a podium structure to a multistory build-ing to prevent whipping effect and reduce seismic responses of both buildings.A comparison of control performance between the semiactive control and the optimal passive control will be per-formed at a later stage.Figs.7͑a and b͒show the absolute peak and RMS acceleration responses,respectively,of the12-story building for the four cases:the two buildings separated,the two buildings rigidly con-nected,the two building connected by the MR damper in the passive off mode,and the two buildings connected by the MR damper in the semiactive control mode.Figs.8͑a and b͒show the relative peak and RMS displacement responses,respectively of the12-story building for the four cases.It is seen from Figs.8͑a and b͒that both the peak and RMS displacement responses of the 12-story building in the passive-off mode are smaller than those of the12-story building separated from the three-story building. Compared with the case of the two buildings rigidly connected,it is clearly seen that the MR damper in the passive-off mode totally eliminates the whipping effect that exists in the rigidly connected building.When the MR damper worked in the semiactive control mode with the logic control algorithm,both the peak and RMS displacement responses of the12-story building are further re-duced compared with those when the MR damper worked in the passive-off mode.For the peak displacement response at the top of the12-story building,the peak displacement response is 10.24mm for the rigidly connected building,7.02mm for theTable2.Principle of Multilevel Logic Control AlgorithmResponse state x rResponse state x˙rx˙rϽ−c0͉x˙r͉ഛc0x˙rϾc0x rϽ−e0I2͑HC͒II3͑LC͒III2͑CC͉͒x r͉ഛe0II2͑LC͒O͑NC͒II4͑CC͒x rϾe0III1͑CC͒II1͑LC͒I1͑HC͒Fig.10.Diagram of multilevel logic control algorithm 896/JOURNAL OF STRUCTURAL ENGINEERING©ASCE/JUNE2005。
Viscoelastic Functionally Graded Finite-Element MethodUsing Correspondence PrincipleEshan V.Dave,S.M.ASCE1;Glaucio H.Paulino,Ph.D.,M.ASCE2;andWilliam G.Buttlar,Ph.D.,P.E.,A.M.ASCE3Abstract:Capability to effectively discretize a problem domain makes thefinite-element method an attractive simulation technique for modeling complicated boundary value problems such as asphalt concrete pavements with material non-homogeneities.Specialized “graded elements”have been shown to provide an efficient and accurate tool for the simulation of functionally graded materials.Most of the previous research on numerical simulation of functionally graded materials has been limited to elastic material behavior.Thus,the current work focuses onfinite-element analysis of functionally graded viscoelastic materials.The analysis is performed using the elastic-viscoelastic correspondence principle,and viscoelastic material gradation is accounted for within the elements by means of the generalized iso-parametric formulation.This paper emphasizes viscoelastic behavior of asphalt concrete pavements and several examples, ranging from verification problems tofield scale applications,are presented to demonstrate the features of the present approach.DOI:10.1061/͑ASCE͒MT.1943-5533.0000006CE Database subject headings:Viscoelasticity;Asphalt pavements;Concrete pavements;Finite-element method.Author keywords:Viscoelasticity;Functionally graded materials;Asphalt pavements;Finite-element method;Correspondence principle.IntroductionFunctionally Graded Materials͑FGMs͒are characterized by spa-tially varied microstructures created by nonuniform distributionsof the reinforcement phase with different properties,sizes andshapes,as well as,by interchanging the role of reinforcement andmatrix materials in a continuous manner͑Suresh and Mortensen1998͒.They are usually engineered to produce property gradientsaimed at optimizing structural response under different types ofloading conditions͑thermal,mechanical,electrical,optical,etc.͒͑Cavalcante et al.2007͒.These property gradients are produced in several ways,for example by gradual variation of the content ofone phase͑ceramic͒relative to another͑metallic͒,as used in ther-mal barrier coatings,or by using a sufficiently large number ofconstituent phases with different properties͑Miyamoto et al.1999͒.Designer viscoelastic FGMs͑VFGMs͒can be tailored tomeet design requirements such as viscoelastic columns subjectedto axial and thermal loads͑Hilton2005͒.Recently,Muliana ͑2009͒presented a micromechanical model for thermoviscoelastic behavior of FGMs.Apart from engineered or tailored FGMs,several civil engi-neering materials naturally exhibit graded material properties. 
Silva et al.͑2006͒have studied and simulated bamboo,which is a naturally occurring graded material.Apart from natural occur-rence,a variety of materials and structures exhibit nonhomoge-neous material distribution and constitutive property gradations as an outcome of manufacturing or construction practices,aging, different amount of exposure to deteriorating agents,etc.Asphalt concrete pavements are one such example,whereby aging and temperature variation yield continuously graded nonhomogeneous constitutive properties.The aging and temperature induced prop-erty gradients have been well documented by several researchers in thefield of asphalt pavements͑Garrick1995;Mirza and Witc-zak1996;Apeagyei2006;Chiasson et al.2008͒.The current state-of-the-art in viscoelastic simulation of asphalt pavements is limited to either ignoring non-homogeneous property gradients ͑Kim and Buttlar2002;Saad et al.2006;Baek and Al-Qadi2006; Dave et al.2007͒or considering them through a layered ap-proach,for instance,the model used in the American Association of State Highway and Transportation Officials͑AASHTO͒Mechanistic Empirical Pavement Design Guide͑MEPDG͒͑ARA Inc.,EC.2002͒.Significant loss of accuracy from the use of the layered approach for elastic analysis of asphalt pavements has been demonstrated͑Buttlar et al.2006͒.Extensive research has been carried out to efficiently and ac-curately simulate functionally graded materials.For example, Cavalcante et al.͑2007͒,Zhang and Paulino͑2007͒,Arciniega and Reddy͑2007͒,and Song and Paulino͑2006͒have all reported onfinite-element simulations of FGMs.However,most of the previous research has been limited to elastic material behavior.A variety of civil engineering materials such as polymers,asphalt concrete,Portland cement concrete,etc.,exhibit significant rate and history effects.Accurate simulation of these types of materi-als necessitates the use of viscoelastic constitutive models.1Postdoctoral Research Associate,Dept.of Civil and EnvironmentalEngineering,Univ.of Illinois at Urbana-Champaign,Urbana,IL61801͑corresponding author͒.2Donald Biggar Willett Professor of Engineering,Dept.of Civil andEnvironmental Engineering,Univ.of Illinois at Urbana-Champaign,Ur-bana,IL61801.3Professor and Narbey Khachaturian Faculty Scholar,Dept.of Civiland Environmental Engineering,Univ.of Illinois at Urbana-Champaign,Urbana,IL61801.Note.This manuscript was submitted on April17,2009;approved onOctober15,2009;published online on February5,2010.Discussion pe-riod open until June1,2011;separate discussions must be submitted forindividual papers.This paper is part of the Journal of Materials in CivilEngineering,V ol.23,No.1,January1,2011.©ASCE,ISSN0899-1561/2011/1-39–48/$25.00.JOURNAL OF MATERIALS IN CIVIL ENGINEERING©ASCE/JANUARY2011/39The current work presents afinite element͑FE͒formulation tailored for analysis of viscoelastic FGMs and in particular,as-phalt concrete.Paulino and Jin͑2001͒have explored the elastic-viscoelastic correspondence principle͑CP͒in the context of FGMs.The CP-based formulation has been used in the current study in conjunction with the generalized iso-parametric formu-lation͑GIF͒by Kim and Paulino͑2002͒.This paper presents the details of thefinite-element formulation,verification,and an as-phalt pavement simulation example.Apart from simulation of as-phalt pavements,the present approach could also be used for analysis of other engineering systems that exhibit graded vis-coelastic behavior.Examples of such systems include metals and metal composites at high 
temperatures͑Billotte et al.2006;Koric and Thomas2008͒;polymeric and plastic based systems that un-dergo oxidative and/or ultraviolet hardening͑Hollaender et al. 1995;Hale et al.1997͒and gradedfiber reinforced cement and concrete structures.Other application areas for the graded vis-coelastic analysis include accurate simulation of the interfaces between viscoelastic materials such as the layer interface between different asphalt concrete lifts or simulations of viscoelastic glu-ing compounds used in the manufacture of layered composites ͑Diab and Wu2007͒.Functionally Graded Viscoelastic Finite-Element MethodThis section describes the formulation for the analysis of vis-coelastic functionally graded problems using FE framework and the elastic-viscoelastic CP.The initial portion of this section es-tablishes the basic viscoelastic constitutive relationships and the CP.The subsequent section provides the FE formulation using the GIF.Viscoelastic Constitutive RelationsThe basic stress-strain relationships for viscoelastic materials have been presented by,among other writers,Hilton͑1964͒and Christensen͑1982͒.The constitutive relationship for quasi-static, linear viscoelastic isotropic materials is given asij͑x,t͒=2͵tЈ=−ϱtЈ=t G͓x,͑t͒−͑tЈ͔͒ͫij͑x,tЈ͒−13␦ijkkͬdtЈ+͵tЈ=−ϱtЈ=t K͓x,͑t͒−͑tЈ͔͒␦ijkk dtЈ͑1͒whereij=stresses;ij=strains at any location x.The parameters G and K=shear and bulk relaxation moduli;␦ij=Kronecker delta; and tЈ=integration variable.Subscripts͑i,j,k,l=1,2,3͒follow Einstein’s summation convention.The reduced timeis related to real time t and temperature T through the time-temperature super-position principle͑t͒=͵0t a͓T͑tЈ͔͒dtЈ͑2͒For a nonhomogeneous viscoelastic body in quasi-static condi-tion,assume a boundary value problem with displacement u i on volume⍀u,traction P i on surface⍀and body force F i,the equilibrium and strain-displacement relationships͑for small de-formations͒are as shown in Eq.͑3͒ij,j+F i=0,ij=12͑u i,j+u j,i͒͑3͒respectively,where,u i=displacement and͑•͒,j=ץ͑•͒/ץx j.CP and Its Application to FGMsThe CP allows a viscoelastic solution to be readily obtained by simple substitution into an existing elastic solution,such as a beam in bending,etc.The concept of equivalency between trans-formed viscoelastic and elastic boundary value problems can be found in Read͑1950͒.This technique been extensively used by researchers to analyze variety of nonhomogeneous viscoelastic problems including,but not limited to,beam theory͑Hilton and Piechocki1962͒,finite-element analysis͑Hilton and Yi1993͒, and boundary element analysis͑Sladek et al.2006͒.The CP can be more clearly explained by means of an ex-ample.For a simple one-dimensional͑1D͒problem,the stress-strain relationship for viscoelastic material is given by convolution integral shown in Eq.͑4͒.͑t͒=͵0t E͑t−tЈ͒ץ͑tЈ͒ץtЈdtЈ͑4͒If one is interested in solving for the stress and material properties and imposed strain conditions are known,using the elastic-viscoelastic correspondence principle the convolution integral can be reduced to the following relationship using an integral trans-form such as the Laplace transform:˜͑s͒=sE˜͑s͒˜͑s͒͑5͒Notice that the above functional form is similar to that of the elastic problem,thus the analytical solution available for elastic problems can be directly applied to the viscoelastic problem.The transformed stress quantity,˜͑s͒is solved with known E˜͑s͒and ˜͑s͒.Inverse transformation of˜͑s͒provides the stress͑t͒.Mukherjee and Paulino͑2003͒have demonstrated limitations on the use of the correspondence 
principle in the context of func-tionally graded͑and nonhomogenous͒viscoelastic boundary value problems.Their work establishes the limitation on the func-tional form of the constitutive properties for successful and proper use of the CP.Using correspondence principle,one obtains the Laplace trans-form of the stress-strain relationship described in Eq.͑1͒as ˜ij͑x,s͒=2G˜͓x,˜͑s͔͒˜ij͑x,s͒+K˜͓x,˜͑s͔͒␦ij˜kk͑x,s͒͑6͒where s=transformation variable and the symbol tilde͑ϳ͒on top of the variables represents transformed variable.The Laplace transform of any function f͑t͒is given byL͓f͑t͔͒=f˜͑s͒=͵0ϱf͑t͒Exp͓−st͔dt͑7͒Equilibrium͓Eq.͑3͔͒for the boundary value problem in the trans-formed form becomes˜,j͑x,s͒=2G˜͑x,s͒˜,j d͑x,s͒+2G˜,j͑x,s͒˜d͑x,s͒+K˜͑x,s͒˜,j͑x,s͒+K˜,j͑x,s͒˜͑x,s͒͑8͒where superscript d indicates the deviatoric component of the quantities.Notice that the transformed equilibrium equation for a nonho-mogeneous viscoelastic problem has identical form as an elastic40/JOURNAL OF MATERIALS IN CIVIL ENGINEERING©ASCE/JANUARY2011nonhomogeneous boundary value problem.This forms the basis for using CP-based FEM for solving graded viscoelastic problems such as asphalt concrete pavements.The basic FE framework for solving elastic problems can be readily used through the use of CP,which makes it an attractive alternative when compared to more involved time integration schemes.However,note that due to the inapplicability of the CP for problems involving the use of the time-temperature superposition principle,the present analysis is applicable to problems with nontransient thermal conditions.In the context of pavement analysis,this makes the present proce-dure applicable to simulation of traffic ͑tire ͒loading conditions for given aging levels.The present approach is not well-suited for thermal-cracking simulations,which require simulation of con-tinuously changing and nonuniform temperature conditions.FE FormulationThe variational principle for quasi-static linear viscoelastic mate-rials under isothermal conditions can be found in Gurtin ͑1963͒.Taylor et al.͑1970͒extended it for thermoviscoelastic boundary value problem͟=͵⍀u͵t Ј=−ϱt Ј=t ͵t Љ=−ϱt Љ=t −t Ј12C ijkl ͓x ,ijkl ͑t −t Љ͒−ijklЈ͑t Ј͔͒ϫץij ͑x ,t Ј͒ץt Јץkl ͑x ,t Љ͒ץt Љdt Јdt Љd ⍀u−͵⍀u͵t Ј=−ϱt Ј=t ͵t Љ=−ϱt Љ=t −t ЈC ijkl ͓x ,ijkl ͑t −t Љ͒−ijklЈ͑t Ј͔͒ϫץij ء͑x ,t Ј͒ץt Јץkl ء͑x ,t Љ͒ץt Љdt Јdt Љd ⍀u −͵⍀͵t Љ=−ϱt Љ=tP i ͑x ,t −t Љ͒ץu i ͑x ,t Љ͒ץt Љdt Љd ⍀=0͑9͒where ⍀u =volume of a body;⍀=surface on which tractions P iare prescribed;u i =displacements;C ijkl =space and time depen-dent material constitutive properties;ij =mechanical strains and ij ء=thermal strains;while ijkl =reduced time related to real time t and temperature T through time-temperature superposition prin-ciple of Eq.͑2͒.The first variation provides the basis for the FE formulation␦͟=͵⍀u ͵t Ј=−ϱt Ј=t ͵t Љ=−ϱt Љ=t −t ЈͭC ijkl ͓x ,ijkl ͑t −t Љ͒−ijklЈ͑t Ј͔͒ץץt Ј͑ij ͑x ,t Ј͒−ij ء͑x ,t Ј͒͒ץ␦kl ͑x ,t Љ͒ץt Љͮdt Јdt Љd ⍀u−͵⍀͵t Љ=−ϱt Љ=tP i ͑x ,t −t Љ͒ץ␦u i ͑x ,t Љ͒ץt Љdt Љd ⍀=0͑10͒The element displacement vector u i is related to nodal displace-ment degrees of freedom q j through the shape functions N iju i ͑x ,t ͒=N ij ͑x ͒q j ͑t ͒͑11͒Differentiation of Eq.͑11͒yields the relationship between strain i and nodal displacements q i through derivatives of shape func-tions B iji ͑x ,t ͒=B ij ͑x ͒q j ͑t ͒͑12͒Eqs.͑10͒–͑12͒provide the equilibrium equation for each finite element͵tk ij ͓x ,͑t ͒−͑t Ј͔͒ץq j ͑t Ј͒ץt Јdt Ј=f i ͑x ,t ͒+f i th ͑x ,t ͒͑13͒where k ij =element stiffness matrix;f i =mechanical force 
vector;and f i th =thermal force vector,which are described as follows:k ij ͑x ,t ͒=͵⍀uB ik T ͑x ͒C kl ͓x ,͑t ͔͒B lj ͑x ͒d ⍀u ͑14͒f i ͑x ,t ͒=͵⍀N ij ͑x ͒P j ͑x ,t ͒d ⍀͑15͒f i th ͑x ,t ͒=͵⍀u͵−ϱtB ik ͑x ͒C kl ͓x ,͑t ͒−͑t Ј͔͒ץl ء͑x ,t Ј͒ץt Јdt Јd ⍀u͑16͒l ء͑x ,t ͒=␣͑x ͒⌬T ͑x ,t ͒͑17͒where ␣=coefficient of thermal expansion and ⌬T =temperaturechange with respect to initial conditions.On assembly of the individual finite element contributions for the given problem domain,the global equilibrium equation can be obtained as͵tK ij ͓x ,͑t ͒−͑t Ј͔͒ץU j ͑t Ј͒ץt Јdt Ј=F i ͑x ,t ͒+F i th ͑x ,t ͒͑18͒where K ij =global stiffness matrix;U i =global displacement vec-tor;and F i and F i th =global mechanical and thermal force vectors respectively.The solution to the problem requires solving the con-volution shown above to determine nodal displacements.Hilton and Yi ͑1993͒have used the CP-based procedure for implementing the FE formulation.However,the previous re-search efforts were limited to use of conventional finite elements,while in the current paper graded finite elements have been used to efficiently and accurately capture the effects of material non-homogeneities.Graded elements have benefit over conventional elements in context of simulating non-homogeneous isotropic and orthotropic materials ͑Paulino and Kim 2007͒.Kim and Paulino ͑2002͒proposed graded elements with the GIF,where the consti-tutive material properties are sampled at each nodal point and interpolated back to the Gauss-quadrature points ͑Gaussian inte-gration points ͒using isoparametric shape functions.This type of formulation allows for capturing the material nonhomogeneities within the elements unlike conventional elements which are ho-mogeneous in nature.The material properties,such as shear modulus,are interpolated asG Int.Point =͚i =1mG i N i ͑19͒where N i =shape functions;G i =shear modulus corresponding tonode i ;and m =number of nodal points in the element.JOURNAL OF MATERIALS IN CIVIL ENGINEERING ©ASCE /JANUARY 2011/41A series of weak patch tests for the graded elements have been previously established ͑Paulino and Kim 2007͒.This work dem-onstrated the existence of two length scales:͑1͒length scale as-sociated with element size,and ͑2͒length scale associated with material nonhomogeneity.Consideration of both length scales is necessary in order to ensure convergence.Other uses of graded elements include evaluation of stress-intensity factors in FGMs under mode I thermomechanical loading ͑Walters et al.2004͒,and dynamic analysis of graded beams ͑Zhang and Paulino 2007͒,which also illustrated the use of graded elements for simulation of interface between different material layers.In a recent study ͑Silva et al.2007͒graded elements were extended for multiphys-ics applications.Using the elastic-viscoelastic CP,the functionally graded vis-coelastic finite element problem could be deduced to have a func-tional form similar to that of elastic place transform of the global equilibrium shown in Eq.͑18͒isK ˜ij ͑x ,s ͒U ˜j ͑s ͒=F ˜i ͑x ,s ͒+F ˜i th ͑x ,s ͒͑20͒Notice that the Laplace transform of hereditary integral ͓Eq.͑18͔͒led to an algebraic relationship ͓Eq.͑23͔͒,this is major benefit of using CP as the direct integration for solving hereditary integrals will have significant computational cost.As discussed in a previ-ous section,the applicability of correspondence principle for vis-coelastic FGMs imposes limitations on the functional form of constitutive model.With this knowledge,it is possible to further customize the FE formulation 
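Eq. (19) is the core of the generalized isoparametric formulation: nodal material properties are interpolated to the Gauss points with the same shape functions used for the geometry, so the gradation is captured inside each element. Below is a minimal sketch for a four-node quadrilateral; the counter-clockwise node ordering and the 2x2 Gauss rule are choices of this example, not prescriptions from the paper.

```python
import numpy as np

def q4_shape_functions(xi: float, eta: float) -> np.ndarray:
    """Bilinear shape functions of a 4-node quad, nodes ordered CCW from (-1, -1)."""
    return 0.25 * np.array([(1 - xi) * (1 - eta),
                            (1 + xi) * (1 - eta),
                            (1 + xi) * (1 + eta),
                            (1 - xi) * (1 + eta)])

def modulus_at_gauss_points(nodal_modulus: np.ndarray) -> list:
    """Interpolate nodal shear moduli to the 2x2 Gauss points, as in Eq. (19)."""
    g = 1.0 / np.sqrt(3.0)
    gauss_points = [(-g, -g), (g, -g), (g, g), (-g, g)]
    return [float(q4_shape_functions(xi, eta) @ nodal_modulus)
            for xi, eta in gauss_points]

# Example: shear modulus graded across the element from 1.0 to 2.0 (arbitrary units).
print(modulus_at_gauss_points(np.array([1.0, 2.0, 2.0, 1.0])))
```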
for the generalized Maxwell model. The material constitutive properties for the generalized Maxwell model are given as

C_{ij}(x,t) = \sum_{h=1}^{n} [C_{ij}(x)]_h \, \exp\!\left(-\frac{t}{(\tau_{ij})_h}\right)   (no sum)   (21)

where (C_{ij})_h = elastic contributions (spring coefficients); (\tau_{ij})_h = viscous contributions from the individual Maxwell units, commonly called relaxation times; and n = number of Maxwell units. Fig. 1 illustrates the simplified 1D form of the generalized Maxwell model represented in Eq. (21). Notice that the generalized Maxwell model discussed herein follows the recommendations made by Mukherjee and Paulino (2003) for ensuring success of the correspondence principle.

Fig. 1. Generalized Maxwell model (spring stiffnesses C_1(x), C_2(x), ..., C_n(x) with relaxation times \tau_1, \tau_2, ..., \tau_n)

For the generalized Maxwell model, the global stiffness matrix K of the system can be rewritten as

K_{ij}(x,t) = K_{ij}^{0}(x)\,\exp\!\left(-\frac{t}{\tau_{ij}}\right) = K_{ij}^{0}(x)\,K_{ij}^{t}(t)   (no sum)   (22)

where K_{ij}^{0} = elastic contribution of the stiffness matrix and K^{t} = time-dependent portion. Using Eqs. (20) and (22), one can summarize the problem as

K_{ij}^{0}(x)\,\tilde{K}_{ij}^{t}(s)\,\tilde{U}_{j}(s) = \tilde{F}_{i}(x,s) + \tilde{F}_{i}^{th}(x,s)   (no sum)   (23)

FE Implementation

The FE formulation described in the previous section was implemented and applied to two-dimensional plane and axisymmetric problems. This section provides the details of the implementation of the formulation along with a brief description of the method chosen for numerical inversion from the Laplace domain to the time domain. The implementation was coded in the commercially available software Matlab. The implementation of the analysis code is divided into five major steps as shown in Fig. 2.

Fig. 2. Outline of the finite-element analysis procedure: (1) define the problem in the time domain (evaluate the load vector F(t) and the stiffness matrix components K^0(x) and K^t(t)); (2) perform the Laplace transform to evaluate \tilde{F}(s) and \tilde{K}^t(s); (3) solve the linear system of equations to evaluate the nodal displacements \tilde{U}(s); (4) perform the inverse Laplace transform to get the solution U(t); (5) post-process to evaluate the field quantities of interest

The first step is very similar to the FE method for a time-dependent nonhomogeneous problem, whereby local contributions from various elements are assembled to obtain the force vector and stiffness matrix for the system. Notice that due to the time-dependent nature of the problem the quantities are evaluated throughout the time duration of the analysis. The next step is to transform the quantities to the Laplace domain from the time domain. For the generalized Maxwell model, the Laplace transform of the time-dependent portion of the stiffness matrix, K^t, can be directly (and exactly) determined using the analytical transform of the exponential in Eq. (22):

\tilde{K}_{ij}^{t}(s) = \mathcal{L}\{\exp(-t/\tau_{ij})\} = \frac{\tau_{ij}}{1 + s\,\tau_{ij}}   (no sum)   (24)

The Laplace transform of quantities other than the stiffness matrix can be evaluated using the trapezoidal rule, assuming that the quantities are piecewise linear functions of time. Thus, for a given time-dependent function F(t), the Laplace transform \tilde{F}(s) is estimated as

\tilde{F}(s) = \sum_{i=1}^{N-1} \frac{1}{s^{2}\,\Delta t}\left\{ s\,\Delta t\left[F(t_i)\,e^{-s t_i} - F(t_{i+1})\,e^{-s t_{i+1}}\right] + \Delta F\left[e^{-s t_i} - e^{-s t_{i+1}}\right] \right\}   (25)

where \Delta t = time increment; N = total number of increments; and \Delta F = change in the function F over the given increment (a short numerical sketch of this transform is given at the end of this section). Once the quantities are calculated in the transformed domain, the system of linear equations is solved to determine the solution, which in this case produces the nodal displacements in the transformed domain, \tilde{U}(s). The inverse transform provides the solution to the problem in the time domain. It should be noted that the formulation as well as its implementation is relatively straightforward using the correspondence principle based transformed approach when compared to numerically solving the convolution integral.

The inverse Laplace transform is of greater importance in the current problem, as the problem is ill posed due to the absence of a functional description in the imaginary domain. Comprehensive comparisons of various numerical inversion techniques have been previously presented (Beskos and Narayanan 1983). In the current study, the collocation method (Schapery 1962; Schapery 1965) was used on the basis of the recommendations from previous work (Beskos and Narayanan 1983; Yi 1992).

For the current implementation the numerical inverse transform is compared with the exact inversion using the generalized Maxwell model [c.f. Eq. (21)] as the test function. The results, shown in Fig. 3, compare the exact analytical inversion with the numerical inversion results. The numerical inversion was carried out using 20 and 100 collocation points. With 20 collocation points, the average relative error in the numerical estimate is 2.7 %, whereas with 100 collocation points, the numerical estimate approaches the exact inversion.

Fig. 3. Numerical Laplace inversion of the test function f(t) using the collocation method

Verification Examples

In order to verify the present formulation and its implementation, a series of verifications were performed. The verification was divided into two categories: (1) verification of the implementation of the GIF elements to capture material nonhomogeneity, and (2) verification of the viscoelastic portion of the formulation to capture time- and history-dependent material response.

Verification of Graded Elements

A series of analyses were performed to verify the implementation of the graded elements. The verifications were performed for fixed grip, tension, and bending (moment) loading conditions. The material properties were assumed to be elastic with exponential spatial variation. The numerical results were compared with exact analytical solutions available in the literature (Kim and Paulino 2002). The comparisons for fixed grip loading, tensile loading, and bending were performed. The results for all three cases show a very close match with the analytical solution, verifying the implementation of the GIF graded elements. The comparison for the bending case is presented in Fig. 4.

Fig. 4. Comparison of exact (line) and numerical solution (circular markers) for bending of the FGM bar; the stress in the y-direction is plotted over x (insert illustrates the boundary value problem along with the material gradation E(x))

Verification of Viscoelastic Analysis

Verification results for the implementation of the correspondence principle based viscoelastic functionally graded analysis were performed and are provided. The first verification example represents a functionally graded viscoelastic bar undergoing creep deformation under a constant load. The analysis was conducted for the Maxwell model. Fig. 5 compares analytical and numerical results for this verification problem. The analytical solution (Mukherjee and Paulino 2003) was used for this analysis. It can be observed that the numerical results are in very good agreement with the analytical solution.

Fig. 5. Comparison of exact and numerical solution for the creep of the exponentially graded viscoelastic bar; the displacement in the y-direction is plotted over x (insert illustrates the material gradation E(x))

The second verification example was simulated for fixed grip loading of an exponentially graded viscoelastic bar. The numerical results were compared with the available analytical solution (Mukherjee and Paulino 2003) for a viscoelastic FGM. Fig. 6 compares analytical and numerical results for this verification problem. Notice that the results are presented as a function of time, and in this boundary value problem the stresses in the y-direction are constant over the width of the bar. The excellent agreement between the numerical results and the analytical solution further verifies the veracity of the viscoelastic graded FE formulation derived herein and its successful implementation.

Fig. 6. Comparison of exact and numerical solution for the exponentially graded viscoelastic bar in fixed grip loading; the stress is plotted over time

Application Examples

In this section, two sets of simulation examples using the graded viscoelastic analysis scheme discussed in this paper are presented. The first example is a simply supported functionally graded viscoelastic beam in a three-point bending configuration. In order to demonstrate the benefits of the graded analysis approach, comparisons are made with analyses performed using the commercially available software ABAQUS. In the case of the ABAQUS simulations, the material gradation is approximated using a layered approach and different refinement levels. The second example is that of an aged conventional asphalt concrete pavement loaded with a truck tire.

Simply Supported Graded Viscoelastic Beam

Fig. 7 shows the geometry and boundary conditions for the graded viscoelastic simply supported beam. A creep load, P(t), is imposed at midspan:

P(t) = P_0\,h(t)   (26)

where h(t) is the Heaviside step function (c.f. Fig. 7).

Fig. 7. Graded viscoelastic beam problem configuration (span x_0 = 10, depth y_0 = 1, load P(t) = P_0 h(t) applied at x_0/2)

The viscoelastic relaxation moduli on the top (y = y_0) and bottom (y = 0) of the beam are shown in Fig. 8. The moduli are assumed to vary linearly from top to bottom as follows:

E(y,t) = \left(\frac{y}{y_0}\right) E_{Top}(t) + \left(\frac{y_0 - y}{y_0}\right) E_{Bottom}(t)   (27)

Fig. 8. Normalized relaxation moduli E(t)/E_0 on the top and bottom of the graded beam, plotted over time

The problem was solved using three approaches, namely: (1) the graded viscoelastic analysis procedure (present paper); (2) the commercial software ABAQUS with different levels of mesh refinement and averaged material properties assigned in a layered manner; and (3) assuming averaged material properties for the whole beam. In the case of the layered approach using the commercial software ABAQUS, three levels of discretization were used. A sample of the mesh discretization used for each of the simulation cases is shown in Fig. 9. Table 1 presents the mesh attributes for each of the simulation cases.

Fig. 9. Mesh discretization for the various simulation cases: FGM, Average, 6-layer, 9-layer, and 12-layer (1/5th of the beam span shown for each case)

Table 1. Mesh Attributes for Different Analysis Options
Simulation case — Number of elements — Number of nodes — Total degrees of freedom
FGM / average / 6-layer — 720 — 1,573 — 3,146
9-layer — 1,620 — 3,439 — 6,878
12-layer — 2,880 — 6,025 — 12,050

The parameter selected for comparing the various analysis options is the midspan deflection for the beam problem discussed earlier (c.f. Fig. 7). The results from all four simulation options are presented in Fig. 10. Due to the viscoelastic nature of the problem, the beam continues to undergo creep deformation with increasing loading time. The results further illustrate the benefit of using the graded analysis approach as a finer level of mesh
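Referring back to Eq. (25), the piecewise-linear Laplace transform used for the load vector is simple to script. The following is a minimal sketch only; the function name and the use of Python/NumPy are choices made here for illustration and are not taken from the paper's Matlab implementation.

```python
import numpy as np

def laplace_piecewise_linear(t, F, s):
    """Laplace transform of a sampled function F(t), assumed piecewise linear
    between the sample times t, following the summation form of Eq. (25)."""
    t = np.asarray(t, dtype=float)
    F = np.asarray(F, dtype=float)
    Fs = 0.0
    for i in range(len(t) - 1):
        dt = t[i + 1] - t[i]          # time increment, Delta t
        dF = F[i + 1] - F[i]          # change in F over the increment, Delta F
        e0 = np.exp(-s * t[i])
        e1 = np.exp(-s * t[i + 1])
        # Eq. (25): exact contribution of one linear segment to F~(s)
        Fs += (s * dt * (F[i] * e0 - F[i + 1] * e1) + dF * (e0 - e1)) / (s ** 2 * dt)
    return Fs

# Sanity check: for the ramp F(t) = t on a long, fine grid the result should
# approach the known transform 1/s**2.
t = np.linspace(0.0, 50.0, 2001)
print(laplace_piecewise_linear(t, t, 0.5), 1.0 / 0.5 ** 2)
```

In the analysis pipeline described above, such a transform would be applied to the assembled time-domain quantities at the collocation values of s before the inverse transform step; the check at the end simply compares the ramp F(t) = t against its known transform 1/s^2.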
Instructional design
From Wikipedia, the free encyclopedia

Instructional Design (also called Instructional Systems Design (ISD)) is the practice of maximizing the effectiveness, efficiency and appeal of instruction and other learning experiences. The process consists broadly of determining the current state and needs of the learner, defining the end goal of instruction, and creating some "intervention" to assist in the transition. Ideally the process is informed by pedagogically (process of teaching) and andragogically (adult learning) tested theories of learning and may take place in student-only, teacher-led or community-based settings. The outcome of this instruction may be directly observable and scientifically measured or completely hidden and assumed. There are many instructional design models but many are based on the ADDIE model with the five phases: 1) analysis, 2) design, 3) development, 4) implementation, and 5) evaluation. As a field, instructional design is historically and traditionally rooted in cognitive and behavioral psychology.

History
Much of the foundation of the field of instructional design was laid in World War II, when the U.S. military faced the need to rapidly train large numbers of people to perform complex technical tasks, from field-stripping a carbine to navigating across the ocean to building a bomber—see "Training Within Industry (TWI)". Drawing on the research and theories of B.F. Skinner on operant conditioning, training programs focused on observable behaviors. Tasks were broken down into subtasks, and each subtask treated as a separate learning goal. Training was designed to reward correct performance and remediate incorrect performance. Mastery was assumed to be possible for every learner, given enough repetition and feedback. After the war, the success of the wartime training model was replicated in business and industrial training, and to a lesser extent in the primary and secondary classroom. The approach is still common in the U.S. military.[1]

In 1956, a committee led by Benjamin Bloom published an influential taxonomy of what he termed the three domains of learning: Cognitive (what one knows or thinks), Psychomotor (what one does, physically) and Affective (what one feels, or what attitudes one has). These taxonomies still influence the design of instruction.[2]

During the latter half of the 20th century, learning theories began to be influenced by the growth of digital computers. In the 1970s, many instructional design theorists began to adopt an information-processing-based approach to the design of instruction. David Merrill for instance developed Component Display Theory (CDT), which concentrates on the means of presenting instructional materials (presentation techniques).[3] Later in the 1980s and throughout the 1990s cognitive load theory began to find empirical support for a variety of presentation techniques.[4]

Cognitive load theory and the design of instruction
Cognitive load theory developed out of several empirical studies of learners as they interacted with instructional materials.[5] Sweller and his associates began to measure the effects of working memory load, and found that the format of instructional materials has a direct effect on the performance of the learners using those materials.[6][7][8] While the media debates of the 1990s focused on the influences of media on learning, cognitive load effects were being documented in several journals.
Rather than attempting to substantiate the use of media, these cognitive load learning effects provided an empirical basis for the use of instructional strategies. Mayer asked the instructional design community to reassess the media debate, to refocus their attention on what was most important: learning.[9]

By the mid- to late-1990s, Sweller and his associates had discovered several learning effects related to cognitive load and the design of instruction (e.g. the split attention effect, redundancy effect, and the worked-example effect). Later, other researchers like Richard Mayer began to attribute learning effects to cognitive load.[9] Mayer and his associates soon developed a Cognitive Theory of Multimedia Learning.[10][11][12]

In the past decade, cognitive load theory has begun to be internationally accepted[13] and has begun to revolutionize how practitioners of instructional design view instruction. Recently, human performance experts have even taken notice of cognitive load theory, and have begun to promote this theory base as the science of instruction, with instructional designers as the practitioners of this field.[14] Finally, Clark, Nguyen and Sweller[15] published a textbook describing how instructional designers can promote efficient learning using evidence-based guidelines of cognitive load theory.

Instructional designers use various instructional strategies to reduce cognitive load. For example, they think that the onscreen text should not be more than 150 words, or that the text should be presented in small meaningful chunks. The designers also use auditory and visual methods to communicate information to the learner.

Learning design
The concept of learning design arrived in the literature of technology for education in the late nineties and early 2000s [16] with the idea that "designers and instructors need to choose for themselves the best mixture of behaviourist and constructivist learning experiences for their online courses" [17]. But the concept of learning design is probably as old as the concept of teaching. Learning design might be defined as "the description of the teaching-learning process that takes place in a unit of learning (eg, a course, a lesson or any other designed learning event)" [18].

As summarized by Britain [19], learning design may be associated with:
∙The concept of learning design
∙The implementation of the concept made by learning design specifications like PALO, IMS Learning Design [20], LDL, SLD 2.0, etc.
∙The technical realisations around the implementation of the concept like TELOS, RELOAD LD-Author, etc.

Instructional design models

ADDIE process
Perhaps the most common model used for creating instructional materials is the ADDIE Process.
This acronym stands for the 5 phases contained in the model:
∙Analyze – analyze learner characteristics, task to be learned, etc. Identify Instructional Goals, Conduct Instructional Analysis, Analyze Learners and Contexts
∙Design – develop learning objectives, choose an instructional approach. Write Performance Objectives, Develop Assessment Instruments, Develop Instructional Strategy
∙Develop – create instructional or training materials. Design and selection of materials appropriate for the learning activity, Design and Conduct Formative Evaluation
∙Implement – deliver or distribute the instructional materials
∙Evaluate – make sure the materials achieved the desired goals. Design and Conduct Summative Evaluation

Most of the current instructional design models are variations of the ADDIE process.[21] Dick, W.O., Carey, L., & Carey, J.O. (2004). Systematic Design of Instruction. Boston, MA: Allyn & Bacon.

Rapid prototyping
A sometimes utilized adaptation of the ADDIE model is a practice known as rapid prototyping. Proponents suggest that through an iterative process the verification of the design documents saves time and money by catching problems while they are still easy to fix. This approach is not novel to the design of instruction, but appears in many design-related domains including software design, architecture, transportation planning, product development, message design, user experience design, etc.[21][22][23] In fact, some proponents of design prototyping assert that a sophisticated understanding of a problem is incomplete without creating and evaluating some type of prototype, regardless of the analysis rigor that may have been applied up front.[24] In other words, up-front analysis is rarely sufficient to allow one to confidently select an instructional model. For this reason many traditional methods of instructional design are beginning to be seen as incomplete, naive, and even counter-productive.[25]

However, some consider rapid prototyping to be a somewhat simplistic type of model. As this argument goes, at the heart of instructional design is the analysis phase. After you thoroughly conduct the analysis, you can then choose a model based on your findings. That is the area where most people get snagged—they simply do not do a thorough-enough analysis. (Part of article by Chris Bressi on LinkedIn)

Dick and Carey
Another well-known instructional design model is the Dick and Carey Systems Approach Model.[26] The model was originally published in 1978 by Walter Dick and Lou Carey in their book entitled The Systematic Design of Instruction. Dick and Carey made a significant contribution to the instructional design field by championing a systems view of instruction as opposed to viewing instruction as a sum of isolated parts. The model addresses instruction as an entire system, focusing on the interrelationship between context, content, learning and instruction.
According to Dick and Carey, "Components such as the instructor, learners, materials, instructional activities, delivery system, and learning and performance environments interact with each other and work together to bring about the desired student learning outcomes".[26] The components of the Systems Approach Model, also known as the Dick and Carey Model, are as follows:
∙Identify Instructional Goal(s): the goal statement describes a skill, knowledge or attitude (SKA) that a learner will be expected to acquire
∙Conduct Instructional Analysis: identify what a learner must recall and what a learner must be able to do to perform a particular task
∙Analyze Learners and Contexts: general characteristics of the target audience, characteristics directly related to the skill to be taught, analysis of the performance setting, analysis of the learning setting
∙Write Performance Objectives: objectives consist of a description of the behavior, the condition and the criteria; the criteria component describes how the learner's performance will be judged
∙Develop Assessment Instruments: purpose of entry behavior testing, purpose of pretesting, purpose of posttesting, purpose of practice items/practice problems
∙Develop Instructional Strategy: pre-instructional activities, content presentation, learner participation, assessment
∙Develop and Select Instructional Materials
∙Design and Conduct Formative Evaluation of Instruction: the designer tries to identify areas of the instructional materials that are in need of improvement
∙Revise Instruction: to identify poor test items and to identify poor instruction
∙Design and Conduct Summative Evaluation

With this model, components are executed iteratively and in parallel rather than linearly.[26]

Instructional Development Learning System (IDLS)
Another instructional design model is the Instructional Development Learning System (IDLS).[27] The model was originally published in 1970 by Peter J. Esseff, PhD and Mary Sullivan Esseff, PhD in their book entitled IDLS—Pro Trainer 1: How to Design, Develop, and Validate Instructional Materials.[28]

Peter (1968) and Mary (1972) Esseff both received their doctorates in Educational Technology from the Catholic University of America under the mentorship of Dr. Gabriel Ofiesh, a founding father of the military model mentioned above. Esseff and Esseff synthesized existing theories to develop their approach to systematic design, the "Instructional Development Learning System" (IDLS).

The components of the IDLS Model are:
∙Design a Task Analysis
∙Develop Criterion Tests and Performance Measures
∙Develop Interactive Instructional Materials
∙Validate the Interactive Instructional Materials

Other models
Some other useful models of instructional design include the Smith/Ragan Model, the Morrison/Ross/Kemp Model and the OAR model, as well as Wiggins' theory of backward design. Learning theories also play an important role in the design of instructional materials. Theories such as behaviorism, constructivism, social learning and cognitivism help shape and define the outcome of instructional materials.

Influential researchers and theorists
Alphabetic by last name
∙Bloom, Benjamin – Taxonomies of the cognitive, affective, and psychomotor domains – 1955
∙Bonk, Curtis – Blended learning – 2000s
∙Bransford, John D. – How People Learn: Bridging Research and Practice – 1999
∙Bruner, Jerome – Constructivism
∙Carr-Chellman, Alison – Instructional Design for Teachers ID4T – 2010
∙Carey, L. – "The Systematic Design of Instruction"
∙Clark, Richard – Clark-Kosma "Media vs Methods debate", "Guidance" debate
∙Clark, Ruth – Efficiency in Learning: Evidence-Based Guidelines to Manage Cognitive Load / Guided Instruction / Cognitive Load Theory
∙Dick, W. – "The Systematic Design of Instruction"
∙Gagné, Robert M. – Nine Events of Instruction (Gagné and Merrill Video Seminar)
∙Heinich, Robert – Instructional Media and the New Technologies of Instruction 3rd ed. – Educational Technology – 1989
∙Jonassen, David – problem-solving strategies – 1990s
∙Langdon, Danny G – The Instructional Designs Library: 40 Instructional Designs, Educational Tech. Publications
∙Mager, Robert F. – ABCD model for instructional objectives – 1962
∙Merrill, M. David – Component Display Theory / Knowledge Objects
∙Papert, Seymour – Constructionism, LOGO – 1970s
∙Piaget, Jean – Cognitive development – 1960s
∙Piskurich, George – Rapid Instructional Design – 2006
∙Simonson, Michael – Instructional Systems and Design via Distance Education – 1980s
∙Schank, Roger – Constructivist simulations – 1990s
∙Sweller, John – Cognitive load, Worked-example effect, Split-attention effect
∙Roberts, Clifton Lee – From Analysis to Design, Practical Applications of ADDIE within the Enterprise – 2011
∙Reigeluth, Charles – Elaboration Theory, "Green Books" I, II, and III – 1999–2010
∙Skinner, B.F. – Radical Behaviorism, Programed Instruction
∙Vygotsky, Lev – Learning as a social activity – 1930s
∙Wiley, David – Learning Objects, Open Learning – 2000s

See also
Since instructional design deals with creating useful instruction and instructional materials, there are many other areas that are related to the field of instructional design.
∙educational assessment
∙confidence-based learning
∙educational animation
∙educational psychology
∙educational technology
∙e-learning
∙electronic portfolio
∙evaluation
∙human–computer interaction
∙instructional design context
∙instructional technology
∙instructional theory
∙interaction design
∙learning object
∙learning science
∙m-learning
∙multimedia learning
∙online education
∙instructional design coordinator
∙storyboarding
∙training
∙interdisciplinary teaching
∙rapid prototyping
∙lesson study
∙Understanding by Design

References
1. MIL-HDBK-29612/2A Instructional Systems Development/Systems Approach to Training and Education
2. Bloom's Taxonomy
3. TIP: Theories
4. Lawrence Erlbaum Associates, Inc. – Educational Psychologist – 38(1):1 – Citation
5. Sweller, J. (1988). "Cognitive load during problem solving: Effects on learning". Cognitive Science 12 (1): 257–285. doi:10.1016/0364-0213(88)90023-7.
6. Chandler, P. & Sweller, J. (1991). "Cognitive Load Theory and the Format of Instruction". Cognition and Instruction 8 (4): 293–332. doi:10.1207/s1532690xci0804_2.
7. Sweller, J., & Cooper, G.A. (1985). "The use of worked examples as a substitute for problem solving in learning algebra". Cognition and Instruction 2 (1): 59–89. doi:10.1207/s1532690xci0201_3.
8. Cooper, G., & Sweller, J. (1987). "Effects of schema acquisition and rule automation on mathematical problem-solving transfer". Journal of Educational Psychology 79 (4): 347–362. doi:10.1037/0022-0663.79.4.347.
9. Mayer, R.E. (1997). "Multimedia Learning: Are We Asking the Right Questions?". Educational Psychologist 32 (41): 1–19. doi:10.1207/s1*******ep3201_1.
10. Mayer, R.E. (2001). Multimedia Learning. Cambridge: Cambridge University Press. ISBN 0-521-78239-2.
11. Mayer, R.E., Bove, W., Bryman, A., Mars, R. & Tapangco, L. (1996). "When Less Is More: Meaningful Learning From Visual and Verbal Summaries of Science Textbook Lessons". Journal of Educational Psychology 88 (1): 64–73. doi:10.1037/0022-0663.88.1.64.
12. Mayer, R.E., Steinhoff, K., Bower, G. and Mars, R. (1995). "A generative theory of textbook design: Using annotated illustrations to foster meaningful learning of science text". Educational Technology Research and Development 43 (1): 31–41. doi:10.1007/BF02300480.
13. Paas, F., Renkl, A. & Sweller, J. (2004). "Cognitive Load Theory: Instructional Implications of the Interaction between Information Structures and Cognitive Architecture". Instructional Science 32: 1–8. doi:10.1023/B:TRUC.0000021806.17516.d0.
14. Clark, R.C., Mayer, R.E. (2002). e-Learning and the Science of Instruction: Proven Guidelines for Consumers and Designers of Multimedia Learning. San Francisco: Pfeiffer. ISBN 0-7879-6051-9.
15. Clark, R.C., Nguyen, F., and Sweller, J. (2006). Efficiency in Learning: Evidence-Based Guidelines to Manage Cognitive Load. San Francisco: Pfeiffer. ISBN 0-7879-7728-4.
16. Conole, G., and Fill, K., "A learning design toolkit to create pedagogically effective learning activities". Journal of Interactive Media in Education, 2005 (08).
17. Carr-Chellman, A. and Duchastel, P., "The ideal online course," British Journal of Educational Technology, 31(3), 229–241, July 2000.
18. Koper, R., "Current Research in Learning Design," Educational Technology & Society, 9 (1), 13–22, 2006.
19. Britain, S., "A Review of Learning Design: Concept, Specifications and Tools." A report for the JISC E-learning Pedagogy Programme, May 2004.
20. IMS Learning Design webpage
21. Piskurich, G.M. (2006). Rapid Instructional Design: Learning ID fast and right.
22. Saettler, P. (1990). The evolution of American educational technology.
23. Stolovitch, H.D., & Keeps, E. (1999). Handbook of human performance technology.
24. Kelley, T., & Littman, J. (2005). The ten faces of innovation: IDEO's strategies for beating the devil's advocate & driving creativity throughout your organization. New York: Doubleday.
25. Hokanson, B., & Miller, C. (2009). Role-based design: A contemporary framework for innovation and creativity in instructional design. Educational Technology, 49(2), 21–28.
26. Dick, Walter, Lou Carey, and James O. Carey (2005) [1978]. The Systematic Design of Instruction (6th ed.). Allyn & Bacon. pp. 1–12. ISBN 020*******.
27. Esseff, Peter J. and Esseff, Mary Sullivan (1998) [1970]. Instructional Development Learning System (IDLS) (8th ed.). ESF Press. pp. 1–12. ISBN 1582830371.
28. /Materials.html

External links
∙Instructional Design – An overview of Instructional Design
∙ISD Handbook
∙Edutech wiki: Instructional design model [1]
∙Debby Kalk, Real World Instructional Design Interview
A doctoral student's essay on information fusion

Information Fusion: Enhancing Decision-Making through the Integration of Data and Knowledge

Introduction:
Information fusion, also known as data fusion or knowledge fusion, is a rapidly evolving field in the realm of decision-making. It involves the integration and analysis of data and knowledge from various sources to generate meaningful and accurate information. In this article, we will delve into the concept of information fusion, explore its key components, discuss its application in different domains, and highlight its significance in enhancing decision-making processes.

1. What is Information Fusion?
Information fusion is the process of combining data and knowledge from multiple sources to provide a comprehensive and accurate representation of reality. The goal is to overcome the limitations inherent in individual sources and derive improved insights and predictions. By assimilating diverse information, information fusion enhances situational awareness, reduces uncertainty, and enables intelligent decision-making.

2. Key Components of Information Fusion:
a. Data Sources: Information fusion relies on various data sources, which can include sensors, databases, social media feeds, and expert opinions. These sources provide different types of data, such as text, images, audio, and numerical measurements.
b. Data Processing: Once data is collected, it needs to be processed to extract relevant features and patterns. This step involves data cleaning, transformation, normalization, and aggregation to ensure compatibility and consistency.
c. Information Extraction: Extracting relevant information is a crucial step in information fusion. This includes identifying and capturing the crucial aspects of the data, filtering out noise, and transforming data into knowledge.
d. Knowledge Representation: The extracted information needs to be represented in a meaningful way for integration and analysis. Common methods include ontologies, semantic networks, and knowledge graphs.
e. Fusion Algorithms: To integrate the information from various sources, fusion algorithms are employed. These algorithms can be rule-based, model-based, or data-driven, and they combine multiple pieces of information to generate a unified and coherent representation (a minimal illustrative sketch of one such rule appears at the end of this article).
f. Decision-Making Processes: The ultimate goal of information fusion is to enhance decision-making. This requires the fusion of information with domain knowledge and decision models to generate insights, predictions, and recommendations.

3. Applications of Information Fusion:
a. Defense and Security: Information fusion plays a critical role in defense and security applications, where it improves intelligence analysis, surveillance, threat detection, and situational awareness. By integrating information from multiple sources, such as radars, satellites, drones, and human intelligence, it enables effective decision-making in complex and dynamic situations.
b. Health Monitoring: In healthcare, information fusion is used to monitor patient health, combine data from different medical devices, and provide real-time decision support to medical professionals. By fusing data from wearables, electronic medical records, and physiological sensors, it enables early detection of health anomalies and improves patient care.
c. Smart Cities: Information fusion offers enormous potential for the development of smart cities.
By integrating data from multiple urban systems, such as transportation, energy, and public safety, it enables efficient resource allocation, traffic management, and emergency response. This improves the overall quality of life for citizens.
d. Financial Markets: In the financial sector, information fusion helps in the analysis of large-scale and diverse datasets. By integrating data from various sources, such as stock exchanges, news feeds, and social media mentions, it enables better prediction of market trends, risk assessment, and investment decision-making.

4. Significance of Information Fusion:
a. Enhanced Decision-Making: Information fusion enables decision-makers to obtain comprehensive and accurate information, reducing uncertainty and improving the quality of decisions.
b. Improved Situational Awareness: By integrating data from multiple sources, information fusion enhances situational awareness, enabling timely and informed responses to dynamic and complex situations.
c. Risk Reduction: By combining information from diverse sources, information fusion improves risk assessment capabilities, enabling proactive and preventive measures.
d. Resource Optimization: Information fusion facilitates the efficient utilization of resources by providing a holistic view of the environment and enabling optimization of resource allocation.

Conclusion:
In conclusion, information fusion is a powerful approach to enhance decision-making by integrating data and knowledge from multiple sources. Its key components, including data sources, processing, extraction, knowledge representation, fusion algorithms, and decision-making processes, together create a comprehensive framework for generating meaningful insights. By applying information fusion in various domains, such as defense, healthcare, smart cities, and financial markets, we can maximize the potential of diverse information sources to achieve improved outcomes.
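To make the "fusion algorithms" component above concrete, here is a minimal sketch of one classic data-driven rule — inverse-variance weighting of independent estimates of the same quantity. It is an illustration only, not a method prescribed by the essay; the function name and the Python/NumPy implementation are assumptions of this sketch.

```python
import numpy as np

def fuse_estimates(estimates, variances):
    """Fuse independent estimates of one quantity by inverse-variance weighting.

    Each source contributes in proportion to its confidence (1 / variance);
    the fused variance is smaller than that of any single source.
    """
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    fused_value = np.sum(weights * estimates) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)
    return fused_value, fused_variance

# Example: a low-noise radar track fused with a noisy social-media geotag.
value, var = fuse_estimates(estimates=[10.2, 11.5], variances=[0.5, 4.0])
print(value, var)   # the fused value stays close to the more reliable source
```

More elaborate rule-based or model-based schemes (for example Kalman filtering or Dempster–Shafer combination) follow the same underlying pattern: each source is weighted by an explicit model of its reliability before the pieces are combined.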
Patent title: Method and Apparatus for Heterogeneous Small Cells Self-Organization in LTE Networks Based on Internet Protocol Multimedia Subsystems
Inventors: Kaveh GHABOOSI, Manish Vemulapalli, Jim Oates, Yuqiang Tang
Application No.: US14039967
Filing date: 20130927
Publication No.: US20150093996A1
Publication date: 20150402
Abstract: A wireless network includes a plurality of small cells. The small cells each register with the network, and an application server tracks the locations of the small cells within the network. Upon entry to the network, the small cell registers with various event packages managed by the application server. When a small cell in the network updates its configurations and/or parameters, the small cell publishes those changes to the application server. The application server determines other nearby small cells that are "interested" in those changes based on their proximity to the changed cell. The application server then notifies those proximate cells of the changes made to the changed cell so that the proximate cells can update their own configurations.
Applicant: Broadcom Corporation
Address: Irvine CA US
Nationality: US
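As a rough illustration of the publish-and-notify flow described in the abstract — and only that; the patent discloses no code, and the class, method and parameter names below (including the interest radius) are invented for this sketch — a proximity-filtered notification service might look like the following:

```python
import math

class ApplicationServer:
    """Toy model of the abstract's application server: it tracks small-cell
    locations and notifies cells near a cell that published a change."""

    def __init__(self, interest_radius_m=500.0):
        self.interest_radius_m = interest_radius_m
        self.cells = {}      # cell_id -> (x, y) position in metres
        self.configs = {}    # cell_id -> latest published configuration

    def register(self, cell_id, position):
        """Called when a small cell enters the network."""
        self.cells[cell_id] = position

    def publish(self, cell_id, new_config):
        """Called when a cell updates its configuration/parameters."""
        self.configs[cell_id] = new_config
        x0, y0 = self.cells[cell_id]
        notified = []
        for other_id, (x, y) in self.cells.items():
            if other_id == cell_id:
                continue
            # "Interested" neighbours = cells within the interest radius.
            if math.hypot(x - x0, y - y0) <= self.interest_radius_m:
                notified.append(other_id)   # a real system would send a notification here
        return notified

server = ApplicationServer()
server.register("cellA", (0.0, 0.0))
server.register("cellB", (300.0, 0.0))
server.register("cellC", (2000.0, 0.0))
print(server.publish("cellA", {"tx_power_dbm": 20}))   # ['cellB']
```

The actual invention layers this behaviour on IMS/SIP event packages rather than direct method calls, but the proximity-filtered notification is the core idea the abstract describes.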
Patent title: Method of identifying an extreme interaction pitch region, methods of designing mask patterns and manufacturing masks, device manufacturing methods and computer programs
Inventors: Shi, Xuelong; Chen, Jang Fung; Hsu, Duan-Fu Stephen
Application No.: EP02251365.9
Filing date: 20020227
Publication No.: EP1237046A3
Publication date: 20031119
Abstract: Optical proximity effects (OPEs) are a well-known phenomenon in photolithography. OPEs result from the structural interaction between the main feature and neighboring features. It has been determined by the present inventors that such structural interactions not only affect the critical dimension of the main feature at the image plane, but also the process latitude of the main feature. Moreover, it has been determined that the variation of the critical dimension as well as the process latitude of the main feature is a direct consequence of light field interference between the main feature and the neighboring features. Depending on the phase of the field produced by the neighboring features, the main feature critical dimension and process latitude can be improved by constructive light field interference, or degraded by destructive light field interference. The phase of the field produced by the neighboring features is dependent on the pitch as well as the illumination angle. For a given illumination, the forbidden pitch region is the location where the field produced by the neighboring features interferes with the field of the main feature destructively. The present invention provides a method for determining and eliminating the forbidden pitch region for any feature size and illumination condition. Moreover, it provides a method for performing illumination design in order to suppress the forbidden pitch phenomena, and for optimal placement of scattering bar assist features.
Applicant: ASML Masktools B.V.
Address: De Run 1110 5503 LA Veldhoven NL
Nationality: NL
Agent: Leeming, John Gerard
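To show only the bookkeeping implied by the abstract — not ASML's actual algorithm, which the abstract does not disclose — a pitch scan could flag "forbidden" pitches as those where the neighbour field arrives out of phase with the main-feature field. The phase model below is a placeholder argument that a real implementation would obtain from a lithography simulator; the demo model at the bottom is purely illustrative.

```python
import math

def forbidden_pitches(pitches_nm, illumination_angle_rad, neighbor_field_phase):
    """Flag pitches whose neighbour field interferes destructively with the main feature.

    neighbor_field_phase(pitch_nm, illumination_angle_rad) must return the phase
    (in radians) of the neighbour field relative to the main-feature field; it is
    a stand-in for the imaging model the patent's method relies on.
    """
    flagged = []
    for pitch in pitches_nm:
        phase = neighbor_field_phase(pitch, illumination_angle_rad)
        if math.cos(phase) < 0.0:   # destructive: the fields partially cancel
            flagged.append(pitch)
    return flagged

# Purely illustrative phase model: phase grows linearly with pitch.
demo_phase = lambda pitch, angle: 2 * math.pi * pitch / 370.0 + angle
print(forbidden_pitches(range(200, 801, 50), 0.0, demo_phase))
```

A design flow along the lines of the abstract would then avoid placing features at the flagged pitches, or adjust the illumination and scattering-bar placement until the destructive-interference condition no longer occurs at the pitches of interest.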
SpringerBriefs in Materials

Hamid Reza Rezaie · Leila Bakhtiari · Andreas Öchsner

Biomaterials and Their Applications

More information about this series at /series/10111

Hamid Reza Rezaie, Ceramic and Biomaterial Division, Department of Engineering Materials, Iran University of Science and Technology, Tehran, Iran
Leila Bakhtiari, Ceramic and Biomaterial Division, Department of Engineering Materials, Iran University of Science and Technology, Tehran, Iran
Andreas Öchsner, School of Engineering, Griffith University, Southport, QLD, Australia

ISSN 2192-1091 / ISSN 2192-1105 (electronic)
SpringerBriefs in Materials
ISBN 978-3-319-17845-5 / ISBN 978-3-319-17846-2 (eBook)
DOI 10.1007/978-3-319-17846-2
Library of Congress Control Number: 2015937750
Springer Cham Heidelberg New York Dordrecht London
© The Author(s) 2015

This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission or information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed. The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. The publisher, the authors and the editors are safe to assume that the advice and information in this book are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or the editors give a warranty, express or implied, with respect to the material contained herein or for any errors or omissions that may have been made.

Printed on acid-free paper. Springer International Publishing AG Switzerland is part of Springer Science+Business Media.

Contents
1 Introduction
  1.1 The History of Biomaterials
  1.2 Classification of Biomaterials
    1.2.1 Metallic Materials
    1.2.2 Polymers
    1.2.3 Ceramics
    1.2.4 Composites
    1.2.5 Natural Materials
  1.3 Biocompatibility
  References
2 Application of Biomaterials
  References
3 Bio-implants and Bio-devices
  3.1 Design of Biomedical Implants Surface
    3.1.1 Surface Coatings Methods
    3.1.2 Surface Modification Methods
  References
4 New Trends in Biomaterials
  4.1 Mesoporous Structures
  4.2 Nano Biomaterials
    4.2.1 Role of Nanomaterials in Drug Delivery Systems
    4.2.2 Role of Nanomaterials in Tissue Engineering
    4.2.3 Role of Nanomaterials in Biosensors Applications
  4.3 Smart Biomaterials
  References
5 Tissue Response in Biomaterials
  Reference

Abstract
Biomaterials are synthetic or natural materials that are used to directly replace, change or improve living tissues which have been damaged by diseases, trauma, accidents, etc. Such a biomaterial can be used alone or in combination with other materials in living tissues at different times. Although a lot of effort has been put into improving the quality and efficiency of biomaterials, there is still a long way to go to achieve an ideal biomaterial with minimum side effects.
This book gives an overview of the types of biomaterials (such as bioceramics, biopolymers, metals and biocomposites) and especially nano-biomaterials, their applications in different tissues (drug delivery systems, tissue engineering, implants, etc.), and the new trends in the biomaterials field during the last decades.

Keywords: Biomaterial · Biocompatibility · Tissue engineering · Drug delivery systems · Mesoporous materials · Biocomposite · Surface modification · Implants · Dental material

Chapter 1
Introduction

Biomaterials are synthetic or natural materials intended to function appropriately in a biological environment, in which they are used to direct, supplement or replace the functions of living tissues of the human body. After the invention of the first generation of materials for use inside a human body during 1960–1970, synthetic biomaterials became a subject of interest [1]. The use of biomaterials dates back to ancient civilizations. Artificial eyes, ears, teeth and noses were found on Egyptian mummies. Chinese and Indians used waxes, glues, and tissues in reconstructing missing or defective parts of the body. Over the centuries, advancements in synthetic materials, surgical techniques, and sterilization methods have permitted the use of biomaterials in many new ways. Medical practice today utilizes a large number of devices and implants. Biomaterials in the form of implants (sutures, bone plates, joint replacements, ligaments, vascular grafts, heart valves, intraocular lenses, dental implants, etc.) and medical devices (pacemakers, biosensors, artificial hearts, blood tubes, etc.) are widely used to replace and/or restore the function of traumatized or degenerated tissues or organs, to assist in healing, to improve function, to correct abnormalities and thus improve the quality of life of the patients.

In the early days all kinds of natural materials such as wood, glue and rubber, tissues from living forms, and manufactured materials such as iron, zinc, gold and glass were used as biomaterials based on trial and error. The host responses to these materials were extremely varied. Over the last 30 years considerable progress has been made in understanding the interactions between non-living and living materials. Researchers have coined the words "biomaterial" and "biocompatibility" to indicate the biological performance of materials. Materials that are compatible are called biomaterials, and biocompatibility is a descriptive term which indicates the ability of a material to perform with an appropriate host response in a specific application. Table 1.1 summarizes various important factors that are considered in selecting a material for a biomedical application [2].

1.1 The History of Biomaterials

The introduction of non-biological materials into the human body took place throughout history.
After World War II, the physician was implicitly entrusted with the life and health of the patient and had much more freedom than is seen today to take heroic action when other options were exhausted. These medical practitioners had read about the post-World War II marvels of materials science. Looking at a patient open on the operating table, they could imagine replacements, bridges, conduits, and even organ systems based on such materials. Many materials were tried on the spur of the moment. Some fortuitously succeeded. These were high-risk trials, but usually they took place where other options were not available. Intraocular lenses, hip and knee prostheses, dental implants, artificial kidneys, artificial hearts, breast implants, vascular grafts, stents, pacemakers, heart valves, drug delivery systems, etc. are examples of the outcome [3].

Table 1.1 Various factors of importance in material selection for biomedical applications [2]
- 1st level material properties — chemical/biological characteristics: chemical composition (bulk and surface); physical characteristics: density; mechanical/structural characteristics: elastic modulus, Poisson's ratio, yield strength, tensile strength, compressive strength
- 2nd level material properties — chemical/biological characteristics: adhesion; physical characteristics: surface topology (texture and roughness); mechanical/structural characteristics: hardness, shear modulus, shear strength, flexural modulus, flexural strength
- Specific functional requirements (based on application) — chemical/biological: biofunctionality (non-thrombogenic, cell adhesion, etc.), bioinert (non-toxic, non-irritant, non-allergic, non-carcinogenic, etc.), bioactive, biostability (resistant to corrosion, hydrolysis, oxidation, etc.), biodegradation; physical: form (solid, porous, coating, film, fiber, mesh, powder), geometry, coefficient of thermal expansion, electrical conductivity, color/aesthetics, refractive index, opacity or translucency; mechanical/structural: stiffness or rigidity, fracture toughness, fatigue strength, creep resistance, friction and wear resistance, adhesion strength, impact strength, proof stress, abrasion resistance
- Processing and fabrication — reproducibility, quality, sterilizability, packaging, secondary processability
- Characteristics of host — tissue, organ, species, age, sex, race, health condition, activity, systemic response
- Other — medical/surgical procedure, period of application/usage, cost

1.2 Classification of Biomaterials

The different classes of materials used for the fabrication of bio-implants and bio-devices can be broadly classified as (1) metallic materials, (2) polymers, (3) ceramics, (4) composites and (5) natural materials.

1.2.1 Metallic Materials

Metallic materials are most commonly used for load bearing implants and internal fixation devices. The processing method and purity of the metal determine its properties. Some featured properties of metallic materials are their high tensile strength, high yield strength, resistance to cyclic loading (fatigue), resistance to time-dependent deformation (creep) and their corrosion resistance. They generally find applications in the fabrication of implant devices such as hip joint prostheses, knee joint prostheses, dental implants, cardiovascular devices, surgical instruments, etc. The most commonly used metals and alloys for medical device applications include stainless steels, commercially pure titanium and its alloys, and cobalt-based alloys, which are briefly discussed below [1].

316L Stainless Steel. Stainless steels are iron-based alloys which were first used in orthopaedic surgery in 1926.
These alloys have a minimum of 10.5 % Cr as an alloying element, needed to prevent the formation of rust. There are two strengthening methods for 316L stainless steels: cold-working and controlling grain size. Each of these methods helps to increase the difficulty of dislocation slip [3]. Apart from implant applications, commercial grade stainless steels are also widely used for the manufacture of surgical and dental instruments. Although there are several types of stainless steels (see Table 1.2) in use for medical applications, 316L (18Cr–14Ni–2.5Mo) single-phase austenitic (FCC) stainless steel is the most popular one for implant applications. The "L" in the designation denotes its low carbon content, and as a result it has high corrosion resistance in in-vivo conditions.

Cobalt alloys. Among cobalt alloys, Co–Cr based alloys are the most commonly used alloys in biomedical applications. The presence of Cr imparts corrosion resistance, and the addition of small amounts of other elements such as iron, molybdenum, or tungsten can provide very good high-temperature properties and abrasion resistance. Cobalt-based alloys include Haynes-Stellite 21 and 25 (ASTM F75 and F90), forged Co–Cr–Mo alloy (ASTM F799), and multiphase (MP) alloy MP35N (ASTM F562). The F75 and F799 alloys are virtually identical in composition, each being about 58–70 % Co and 26–30 % Cr, with the key difference in their processing history. The other two alloys, F90 and F562, have slightly less Co and Cr, but more Ni (F562) or more tungsten (F90).

Some important clinical applications of these alloys are in dentistry and maxillofacial surgery as partial dentures, dental implants, and maxillofacial implants, and in orthopaedics as fracture fixation plates and screws and hip and knee prostheses. Casting Co–Cr-based alloys for the fabrication of implants is not a preferred technique, as solidification during casting may result in large dendritic grains and thereby decrease the yield strength. For improving the mechanical properties of these alloys, powder metallurgical techniques such as hot isostatic pressing (HIP) followed by forging [4] have been used for such applications.

Titanium alloys. Commercially pure (CP) titanium (ASTM F67) and extra-low interstitial (ELI) Ti–6Al–4V alloy (ASTM F136) are the two most common titanium-based implant biomaterials. These alloys are suitable for load bearing implants due to their superior mechanical properties (tensile strength and fatigue strength), chemical stability (corrosion resistance), and biocompatibility under in vivo conditions [4–6]. Recent research has shown that the dissolution of aluminium and vanadium ions into the body fluid, caused by breakdown of the passivation layer during wear of Ti–6Al–4V, can have many side effects. Consequently, other titanium alloys such as Ti–6Al–7Nb and Ti–13Nb–13Zr are under study in terms of their corrosion rate, mechanical properties, and biocompatibility as compared to Ti–6Al–4V.
Table 1.3 lists titanium alloys and their applications in the biomedical field [1, 3].

Table 1.2 Types of stainless steels in use for medical applications [1]
- Martensitic stainless steel (10.5–18 % Cr): bone curettes, chisels and gouges, dental burs, dental chisels, curettes, explorers, root elevators and scalers, forceps, hemostats, retractors, orthodontic pliers and scalpels
- Ferritic stainless steel (11–30 % Cr): solid handles for instruments, guide pins and fasteners
- Austenitic stainless steel (16–26 % Cr): cannulae, dental impression trays, guide pins, hollowware, hypodermic needles, steam sterilizers, storage cabinets, hip implants and knee implants

The selection of a metallic biomaterial is related to the final application of the biomedical device, so it is necessary to have adequate information about these materials. Table 1.4 lists the mechanical properties of metallic biomaterials used as implants.

In the titanium-based alloys group, there is a special alloy known as Nitinol (Nickel-Titanium Naval Ordnance Laboratory). This class of titanium and nickel alloys, in addition to having titanium alloy properties (good mechanical qualities and biocompatibility), shows other unique behaviours such as shape memory, extremely high elasticity, force hysteresis, fatigue resistance, thermal deployment, and kink resistance. Clamps for orthopaedic and traumatological bone fixation, filters to retain emboli in vascular surgery, orthodontic wires and heart stents are some important applications of NiTi alloys [3, 7].

Table 1.3 Mechanical properties and clinical applications of Ti-based metallic materials
- Pure Ti: elastic modulus 102–110 GPa; 0.2 % offset yield strength 170–480 MPa; ultimate tensile strength 240–550 MPa; elongation 15–24 %. Clinical applications: pacemaker cases, housings for ventricular-assist devices, implantable infusion drug pumps, dental implants, maxillofacial and craniofacial implants, screws and staples for spinal surgery
- Ti–6Al–4V: 110 GPa; 860 MPa; 930 MPa; 10–15 %. Clinical applications: total joint replacement arthroplasty, primarily for hips and knees
- Ti–6Al–7Nb: 105 GPa; 795 MPa; 860 MPa; 10 %. Clinical applications: femoral hip stems, fracture fixation plates, spinal components, fasteners, nails, rods, screws and wire
- Ti–13Nb–13Zr: 79–84 GPa; 836–908 MPa; 973–1037 MPa; 10–16 %. Clinical applications: orthopaedic implants

Table 1.4 Typical mechanical properties of implant metals [3]
(columns: material; ASTM designation; condition; Young's modulus (GPa); yield strength (MPa); tensile strength (MPa); fatigue endurance limit strength at 10^7 cycles, R = −1 (MPa))
Stainless steel:
- F745, annealed: 190; 221; 483; 221–280
- F55, F56, F138, F139, annealed: 190; 331; 586; 241–276
- F55, F56, F138, F139, 30 % cold-worked: 190; 792; 930; 310–448
- F55, F56, F138, F139, cold forged: 190; 1213; 1351; 820
(continued)

1.2.2 Polymers

Polymers are the other category of materials which are used as biomaterials. Polymers are long chain molecules consisting of a large number of small repeating monomers (the composing units). Polymers can be derived either from natural sources or from synthetic organic sources.
These materials have already been used in surgical tools, implantable devices, device coatings, catheters, vascular grafts, injectable biomaterials and therapeutics. Examples of polymers used as biomaterials are listed in Table 1.5. Conducting polymers (CPs) are an important group of polymers that were first produced in the 1970s; they show electrical and optical properties similar to those of metals and inorganic semiconductors, but also exhibit the attractive properties associated with conventional polymers, such as ease of synthesis and flexibility in processing. Biosensors, neural probes, tissue engineering, drug delivery and bio-actuators are important applications of CPs [1, 8].

Table 1.4 (continued)
Co–Cr alloys:
- F75, as-cast/annealed: 210; 448–517; 655–889; 207–310
- F75, P/M HIP: 253; 841; 1277; 725–950
- F799, hot forged: 210; 896–1200; 1399–1586; 600–896
- F799, 44 % cold-worked: 210; 1606; 1896; 586
- F562, hot forged: 232; 965–1000; 1206; 500
- F562, cold-worked, aged: 232; 1500; 1795; 689–793 (axial tension, R = 0.05, 30 Hz)
Ti alloys:
- F67, 30 % cold-worked, Grade 4: 110; 485; 760; 300
- F136, forged annealed: 116; 896; 965; 620
- F136, forged, heat treated: 116; 1034; 1103; 620–689
Notes: P/M HIP = powder metallurgy product, hot-isostatically pressed; R is defined as σmin/σmax.

Some advantages of using polymers as biomaterials are the easy fabrication and the tuneable surface properties, which allow a wide range of composites to be produced. On the other hand, disadvantages of these biomaterials include difficulty in sterilization, easy water absorption, weak mechanical properties, and the release of harmful monomers in the human body.

1.2.3 Ceramics

Ceramics are inorganic, nonmetallic materials whose superior compressive strength and biological inertness make them useful for medical applications. These materials have interatomic bonds (ionic or covalent) which generally form at elevated temperatures. A class of such materials used for skeletal or hard tissue repair is commonly referred to as bioceramics. These bioceramics may be bioinert (alumina, zirconia), bioresorbable (tricalcium phosphate), bioactive (hydroxyapatite, bioactive glasses, and glass ceramics), or porous for tissue ingrowth (hydroxyapatite coatings and bioglass coatings on metallic materials). Their success depends on their ability to induce bone regeneration and bone ingrowth at the tissue–implant interface without an intermediate fibrous tissue layer [1, 7]. Depending on the biomaterial type and tissue, different responses occur; Table 1.6 describes these responses. Ceramics, glasses, and glass-ceramics are widely used in biomedical applications.
Drug delivery systems, tissue engineering, dental restoration, implants and implant coatings, and bone cements are the most important applications of bioceramics. Among the various bioceramics, bioactive ceramics such as hydroxyapatite and bioglass are the most interesting for biomedical applications.

Table 1.5 Polymers used as biomaterials [1]
- Ultrahigh-molecular-weight polyethylene (UHMWPE): knee, hip, shoulder joints
- Silicone: finger joints
- Polylactic and polyglycolic acid: sutures
- Silicone, acrylic, nylon: tracheal tubes
- Acetal, polyethylene, polyurethane: heart pacemakers
- Polyester, polytetrafluoroethylene, PVC: blood vessels
- Nylon, PVC, silicones: gastrointestinal segments
- Polydimethyl siloxane, polyurethane, PVC: facial prostheses
- Polymethylmethacrylate: bone cement

Table 1.6 Types of biomaterial–tissue response [3]
- If the material is toxic, the surrounding tissue dies
- If the material is nontoxic and biologically inactive (nearly inert), a fibrous tissue of variable thickness forms
- If the material is nontoxic and biologically active (bioactive), an interfacial bond forms
- If the material is nontoxic and dissolves, the surrounding tissue replaces it

Alumina. Bauxite(1) and native corundum(2) are the main sources of high-purity alumina. The most common refining process is the Bayer process, which yields alumina [12]. The Bayer process involves the dissolution of crushed bauxite in sodium hydroxide (NaOH) solution under pressure at high temperatures (up to 300 °C) to form a supersaturated sodium aluminate solution. The hydrated aluminum oxide is precipitated by seeding or as a metastable bayerite on reduction of the pH by carbon dioxide. Washing and dehydrating the precipitate at 1000–1200 °C turns it into a low-temperature form of "calcined" alumina. Depending on the source of the raw materials, other refining processes have been developed. The crystal structure of α-alumina is hexagonal close packed (a = 0.4758 and c = 1.299 nm) and belongs to space group D_3d^6.

The mechanical properties of polycrystalline alumina depend largely on grain size, grain distribution, and porosity. Most alumina used for implant fabrication is either a polycrystalline solid of high density and purity or an artificially grown colourless single crystal similar to sapphire or ruby. Alumina in general has a hardness of 20–30 GPa and a Mohs hardness of 9. The high hardness is accompanied by low friction and wear, which are major advantages in using alumina as a joint-replacement material, in spite of its brittleness. Inertness, biocompatibility, non-sensitization of tissues, excellent wear and friction properties, and a compressive strength that is higher than the tensile strength make alumina a good choice for artificial joints and teeth.

Hip joint replacements include three parts: the femoral head sphere, the stem or shaft, and the cup. The femoral head sphere was made of alumina ceramics and the stem of CoCrMo alloys. The acetabular cup was made of ultra-high-molecular-weight polyethylene (UHMWPE). There are some problems with this joint replacement technique, such as implant fixation, aseptic loosening, tissue response to wear particulates, infection, ectopic bone formation and pain. The use of alumina ceramics in other joints (for example in the knee) has been attempted but has not gained much popularity.
This is due to the much larger range of motion (ROM) in the knee than in the hip, as well as the much smaller surface contact area and greater incongruency. Again, fixation is much more difficult in the hip than in the knee joint. In alumina dental implants, fixation is the most difficult problem too. Increasing the surface area by rendering the surface porous has been tested to solve this problem [9].

(1) Hydrated aluminum oxide. (2) Aluminum oxide mineral (α-alumina).

Zirconia. Zirconium oxides (zirconia) have been used for the purpose of fabricating implants. Some of their mechanical properties are as good as or better than those of alumina ceramics. They are highly biocompatible, like other ceramics, and can be made into large implants, such as the femoral head of a hip joint replacement. Some of their drawbacks include the fact that they exhibit high density, low hardness, and phase transformations under stress in aqueous conditions, thus degrading their mechanical properties. It is noteworthy that zirconium–niobium metal can be used as an articulating material for joint implants. In bulk behavior this material is very similar to metallic zirconium.

Zircon (ZrSiO4) is the most commercially important zirconium mineral and is found mostly in the mineral baddeleyite. Zircon is a gold-coloured silicate of zirconium, a mineral found in igneous and sedimentary rock and occurring in multi-coloured tetragonal crystals. The transparent varieties are usually deposited in beach sand, and are used as gems. Zircon is first chlorinated to form ZrCl4 in a fluidized bed reactor in the presence of petroleum coke. A second chlorination is required for high-quality zirconium. Zirconium is precipitated with either hydroxides or sulfates, and then is calcined to its oxide [9].

Calcium phosphates. Calcium phosphate (CaP) salts are the major mineral constituents of vertebrate bone and tooth. Within the past 20–30 years interest has intensified in the use of calcium phosphates as biomaterials, but only certain compounds are useful for implantation in the body, since both their solubility and speed of hydrolysis increase with a decreasing calcium/phosphorus ratio. Driessens (1983) stated that those compounds with a Ca/P ratio of less than 1:1 are not suitable for biological implantation [3]. Calcium phosphates show different behavior against microbial attacks, changes in pH and solvent type. Depending on the solvent type, temperature, pressure and impurities, different types of calcium phosphates may appear. The most common calcium phosphates are listed in Table 1.7.

The biomineral phase, which is one or more types of calcium phosphate, comprises 65–70 % of bone, water accounts for 5–8 %, and the organic phase, which is primarily in the form of collagen, accounts for the remaining portion. The collagen, which gives the bone its elastic resistance, acts as a matrix for the deposition and growth of minerals. Among the CaP salts, hydroxyapatite (Ca10(PO4)6(OH)2, HAp), as the thermodynamically most stable crystalline phase of CaP in body fluid, possesses the most similarity to the mineral part of bone. It has been well documented that HA can promote new bone ingrowth through an osteoconduction mechanism without causing any local or systemic toxicity, inflammation or foreign body response.
Currently, HA is commonly the material of choice for various biomedical applications, for example as a replacement for bony and periodontal defects and the alveolar ridge, in middle ear implants and tissue engineering systems, as a drug delivery agent, in dental materials, and as a bioactive coating on metallic osseous implants [3, 10]. The main mechanical properties of HA are shown in Table 1.8.

Hydroxyapatite can be synthesized by different methods. Dry methods, wet methods and high-temperature methods are the most important categories of HA synthesis methods. Table 1.9 summarizes these methods in detail. Different methods yield different types of HA with different properties.

In dry methods there is no solvent during the synthesis, and the two main methods of this category are solid-state synthesis and the mechanochemical process. These methods have the convenience of producing highly crystalline HA from relatively inexpensive raw materials. The main disadvantage is the large size of the particles in the case of solid-state synthesis and the low phase purity of the HA in the case of the mechanochemical process.

In wet methods, various sources of calcium and phosphate ions are used. Conventional chemical precipitation, the hydrolysis method, the sol–gel method, the hydrothermal method, the emulsion method, and the sonochemical method are common wet routes. Wet methods give exact control over morphology and size, especially in the synthesis of nanosized particles, but there are some difficulties in controlling crystallinity and phase purity.

Table 1.7 Main calcium phosphate salts [10] (name; symbol(s); formula; Ca/P ratio)
- Monocalcium phosphate monohydrate; (MCPM), (MCPH); Ca(H2PO4)2·H2O; 0.5
- Monocalcium phosphate anhydrous; (MCPA), (MCP); Ca(H2PO4)2; 0.5
- Dicalcium phosphate dihydrate (brushite); (DCPD); CaHPO4·2H2O; 1.0
- Dicalcium phosphate anhydrous (monetite); (DCPA), (DCP); CaHPO4; 1.0
- Octacalcium phosphate; (OCP); Ca8(HPO4)2(PO4)4·5H2O; 1.33
- α-Tricalcium phosphate; (α-TCP); Ca3(PO4)2; 1.5
- β-Tricalcium phosphate; (β-TCP); Ca3(PO4)2; 1.5
- Amorphous calcium phosphate; (ACP); Cax(PO4)y·nH2O; 1.2–2.2
- Hydroxyapatite; (HA), (HAp); Ca10(PO4)6(OH)2; 1.67

Table 1.8 Mechanical properties of hydroxyapatite [3]
- Theoretical density: 3.156 g/cm3
- Hardness: 500–800 HV, 2000–3500 Knoop
- Tensile strength: 40–100 MPa
- Bend strength: 20–80 MPa
- Compressive strength: 100–900 MPa
- Fracture toughness: ~1 MPa·m^0.5
- Young's modulus: 70–120 GPa
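The Ca/P column in Table 1.7 follows directly from the chemical formulas. As a small arithmetic illustration (not part of the book), the ratios and Driessens' "Ca/P below 1:1 is unsuitable" rule quoted above can be checked like this:

```python
# Ca/P molar ratios implied by the formulas in Table 1.7 (quick arithmetic check).
salts = {
    "MCPM  Ca(H2PO4)2.H2O":        (1, 2),   # (Ca atoms, P atoms) per formula unit
    "DCPD  CaHPO4.2H2O":           (1, 1),
    "OCP   Ca8(HPO4)2(PO4)4.5H2O": (8, 6),
    "TCP   Ca3(PO4)2":             (3, 2),
    "HA    Ca10(PO4)6(OH)2":       (10, 6),
}
for name, (ca, p) in salts.items():
    ratio = ca / p
    # Driessens' rule of thumb from the text: Ca/P below 1.0 is unsuitable for implantation.
    flag = "unsuitable (Ca/P < 1)" if ratio < 1.0 else ""
    print(f"{name:30s} Ca/P = {ratio:.2f} {flag}")
```

Running this reproduces the table's values (0.5, 1.0, 1.33, 1.5 and 1.67) and flags only the monocalcium phosphates as falling below the 1:1 threshold.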
The Design of SMS Based Heterogeneous Mobile Botnet

Guining Geng, Guoai Xu, Miao Zhang and Yanhui Guo
Information Security Center, Beijing University of Posts and Telecommunications, Beijing, China
gengguining@, {xga, zhangmiao, yhguo}@

Guang Yang
China Information Technology Security Evaluation Center, Beijing, China
sunwina@

Wei Cui
Information Center of Ministry of Science and Technology of the People's Republic of China
cuiw@

Abstract—Botnets have become one of the most serious security threats in the traditional Internet world. Although mobile botnets have not yet caused major outbreaks in cellular networks, most traditional botnet experience can be transferred to mobile botnets on mobile devices, so mobile botnets may evolve faster since the techniques have already been explored. From the theoretical work of researchers and the reports of security companies, we can see that mobile botnet attacks and trends are quite real. In this paper, we propose an SMS-based heterogeneous mobile botnet and show how an SMS-based C&C channel structure can be exploited by mobile botnets. Finally, we analyze the connectivity, security and robustness of our model.

Index Terms—heterogeneous, mobile botnet, SMS, C&C, robustness

I. INTRODUCTION

Traditional botnets have become one of the most serious security threats to the Internet. The word "bot" refers to a victim machine controlled by an attacker and derives from the word "robot". A bot master can control a large number of bots at different locations to initiate attacks, and because of the complexity of the Internet such attacks can hardly be traced back. Researchers have carried out remarkable studies of traditional botnets, covering botnet threats [1], botnet models [2], control strategies [3] and botnet detection [4].

Compared with the evolution of traditional botnets in the Internet world, mobile botnets are 7-8 years behind [5]. Although mobile botnets have not yet caused major outbreaks in cellular networks, most traditional botnet experience can be transferred to mobile botnets. Norman [5] predicts that malware on mobile devices (smart phones) will evolve faster since the techniques have already been explored.

Early malware could only perform one or two tasks; an example is Cabir, detected in 2004, the earliest mobile botnet known. But in 2009 the situation changed dramatically. Mobile bots can connect back to a malicious bot server and transfer valuable information from the infected device, as SymbOS.Exy.C [6] and Ikee.B [7,8] do. According to the Symantec report [6] of 13 July 2009, SymbOS.Exy.C may be the first bot on the Symbian OS. It is a worm similar to other worms made for the Symbian OS, but the difference is that the bot node tries to contact a malicious bot server and transfer valuable information, such as the phone type, the International Mobile Equipment Identity (IMEI) and the International Mobile Subscriber Identity (IMSI) [9]. Symantec mentions that this may be the first true botnet to occur on a cellular mobile device.

BBOS_ZITMO.B [10] was detected in 2011. It receives commands via SMS, steals user information by forwarding SMS messages to a predefined admin phone number, and monitors incoming calls and SMS. Most importantly, it also has a stealth mechanism that prevents it from being seen as an installed app.

Figure 1 illustrates the simplified typical infection cycle of an SMS-based mobile botnet. There are four main steps in the botnet operation [11].
Vulnerable smart phones are first infected by bots of the original botnet, using existing techniques (smart phone OS vulnerabilities, Trojan horses, worms, etc.) during the infection period. After the infection, the infected node connects to a bot server to join the original botnet. All the bot servers are organized as the C&C network and controlled by the bot master, who uses the C&C network to issue commands and control the whole botnet. According to our analysis, the bots in the same tier or sub-network can launch the corresponding attack after receiving a command. Authorization is achieved via a channel password.

Figure 1 Typical infection cycle

Consider the potential threats of mobile botnets to the cellular network. Researchers have done remarkable work on attacks against cellular networks, such as DoS attacks [12], C&C channel studies [13] and paging channel overloads [14]. In this paper, we propose an SMS-based heterogeneous mobile botnet. The C&C channel is an SMS-based, heterogeneous, multi-tree structured network; that is, all C&C commands are transferred via SMS messages, since SMS is available on almost all mobile phones. We use a heterogeneous multi-tree topology to enhance the scalability and robustness of the botnet, and we encrypt all the bot lists and some of the important commands. Our research makes the following main contributions:

• We propose an improved SMS-based heterogeneous mobile botnet. The heterogeneous bot node types and network structure make the mobile botnet more efficient in communication and more secure.
• In contrast with the traditional botnets of the Internet world, we give a definition of the mobile botnet and illustrate its characteristics.
• We introduce a node degree threshold ⟨k⟩ and a mobile botnet height H to improve the security of the C&C channel. The improved C&C channel raises the bar for countermeasures by the mobile botnet defense community.
• Through the design of an eviction and replacement mechanism for failed or recovered bot server nodes, the robustness of the C&C channel is enhanced.

The remainder of the paper is organized as follows. Section II presents the characteristics of mobile botnets. Section III illustrates mobile botnet attacks. Section IV proposes our SMS-based heterogeneous cellular mobile botnet. Section V discusses our model from the aspects of connectivity, security and robustness. Section VI concludes the paper and outlines future work.

II. THE CHARACTERISTICS OF MOBILE BOTNETS

In this paper we define the mobile botnet as follows.

Definition 1 (mobile botnet): A network of compromised smart phones, controlled by an attacker ("bot master") through a command and control ("C&C") network for malicious purposes.

Mobile botnets differ from the traditional botnets of the Internet world. Since mobile botnets mostly communicate between mobile devices (smart phones), they have the following characteristics.

• Limited power resources. Mobile devices such as smart phones differ from PCs: their run time is limited by the battery.
• Communication costs. The communications of a mobile botnet consume the limited power resources, network traffic and, especially, the phone charges. These communications impose costs on the owner, and a significant rise in the phone bill would prompt an investigation of the cause and thus may lead to the exposure of the cellular bot.
For SMS-based mobile botnets, depending on the type of mobile phone contract, SMS messages can be completely free or cost very little.
• The connectivity changes constantly and can even be unstable. Connectivity may be affected both by the physical environment and by personal factors: the networks around the phone owner and the owner's actions, for example when the user is in a tunnel or turns the phone off at bed time. As shown in Table 1, the connectivity of a bot can change constantly during daily life.

Table 1 THE CONNECTIVITY AND TIME INTERVALS
Connectivity                Time interval
WiFi                        Morning (at home)
GSM/EDGE/3G                 Daytime (at work/school)
WiFi                        Daytime (at Starbucks)
No signal                   Daytime (in the wild)
WiFi                        Evening (at home)
Mobile phone turned off     Night (bed time)

• Lack of IP addresses. The lack of IP addresses causes the problem of indirect connections. Most mobile phones sit behind NAT gateways and are not directly reachable, so a traditional P2P-based C&C network may not suit a mobile botnet.
• Diversity of smart phone operating systems. The design of a mobile botnet has to consider the diversity of smart phone OS platforms.
• Mobile botnets are hard to detect. Evidence indicates that mobile botnets are becoming more and more sophisticated [10]. Valuable messages can be forwarded by SMS to a predefined server device, and the SMS messages are deleted immediately after being forwarded, so the botnet is hard to detect. It also has a stealth mechanism that prevents it from being detected as an installed app.

III. MOBILE BOTNET ATTACK

A. Information Leakage
One of the main targets of a mobile botnet is to retrieve sensitive information from the victims. The mobile bot can quickly scan the host node for significant corporate or financial information, such as usernames and passwords, address lists and text messages.

B. DoS Attack
Because most of the functionality of a cellular network relies on the availability and proper functioning of HLRs (Home Location Registers), a DoS attack could block the legitimate users of a local cellular network from sending or receiving text messages and calls [9,12,15,17]. In practice, the bot master of a mobile botnet could direct the compromised mobile phones to overwhelm a specific HLR with a large volume of traffic. Such a DoS attack affects all the legitimate users who rely on the same HLR; their requests will be dropped. By overloading (red arrow) the HLR, a central component of the local cellular network, the attack leaves the network unable to serve legitimate traffic (blue arrow) over very large geographic regions. Figure 2 shows how such a cellular network DoS attack works.

Figure 2 A cellular network DoS attack

C. Charge Loss
There exist services through which smart phones can give money to charity organizations [17]. If the smart phone calls or sends a text message to the specific service number, the subscriber pays a preset amount of money. A bot master can also create its own service number and program all the bots to call or send a text message to that number. Of course, the price should be low, so that the subscribers do not notice the extra charges and become suspicious.

IV. SMS BASED HETEROGENEOUS MOBILE BOTNET DESIGN

As is well known, the three main components of a botnet are the vectors, the C&C channel and the network topology.
The vector solves the problem of how to spread the bot code and is used in the propagation period; the C&C channel is used to issue command and control messages; and the network topology is used to manage the botnet and enhance its security and stealth.

Considering the problems encountered in the design of traditional botnets and mobile botnets, we think a bot master has to address the following challenges:
1. How to transmit command and control messages and control the traffic flow.
2. How to prevent the proposed mobile botnet from being detected by defenders.
3. How to monitor the status of bot nodes.
4. How to keep the mobile botnet robust even when some bot nodes have been removed by defenders.

In this section we propose the SMS-based heterogeneous mobile botnet. We briefly give an overview of the botnet and all of its node types, and then focus on the propagation, network topology, command and control mechanism, and node replacement mechanism. The designed mobile botnet model is shown in Figure 3.

Figure 3 The simplified SMS-based heterogeneous mobile botnet

A. Node Types
The infected nodes can be classified by differences in performance, battery resources, connectivity and the networks around them, so they can be divided into several types of bot nodes. As illustrated in Figure 3, our mobile botnet is made up of a bot master, a collection node, bot servers, region bot servers and a certain number of bot nodes. The topology of the mobile botnet is multi-tree structured from top to bottom, but the topology within a tier, such as the bot server tier, is P2P structured. So the heterogeneity exists both in the bot node types and in the network topology.

Bot master node: The bot master controls all the nodes of the mobile botnet and has direct contact with the bot servers. It holds the bot list of the whole botnet and stores every node's information, such as user name, phone number, International Mobile Subscriber Identity (IMSI) and International Mobile Equipment Identity (IMEI). Unless there is an emergency, the bot master does not communicate with bot nodes directly.

Collection node: It receives the valuable information from all the nodes of the botnet. The stored information can be fetched by the bot master. Before the collection node accepts information, the sending node must be authenticated.

Bot server node: The bot server node is one of the key nodes of our model. It has two main functions, search and forward. It has k' direct links with region bot servers, each of which controls a sub-network. It holds the bot list of all the nodes it communicates with, and the bot list is encrypted.

Region bot server node: The region bot server node is another key node of our model. It is both a bot server and a bot node. It receives commands from a bot server and forwards them to the bots of its sub-network. It executes the commands and transfers valuable information to the collection node. Its bot list is also encrypted.

Bot node: The bot node is a leaf node of the mobile botnet. It receives commands from a region bot server and executes them.

The communication relations and bot lists of the five node types described above are compared in Table 2.
Table 2 THE COMMUNICATION OF MODEL NODES
Node type | Bot master (Commu. / Node list) | Collection node (Commu. / Node list) | Bot server (Commu. / Node list) | Region bot server (Commu. / Node list) | Bot (Commu. / Node list)
Bot master:        yes yes yes yes yes
Collection node:   yes yes yes yes yes
Bot server:        yes yes yes yes yes yes
Region bot server: yes yes yes yes yes yes
Bots:              yes yes yes yes yes yes yes

B. Propagation
We divide the propagation phase into three steps.
In the first step, the bot master exploits operating system and configuration vulnerabilities to compromise mobile devices, install the bot software and collect valuable information. After the bot software is installed, we assume the bot node has a stealth mechanism that prevents it from being seen as an installed app, and the messages used to collect valuable information are deleted.
In the second step, the bot master chooses bot server nodes from the compromised nodes as the first-tier nodes according to the following criteria: the energy resources, the region a node belongs to and the connectivity. The remaining nodes that are not chosen as bot servers are assigned to the bot servers according to their regions, as the second-tier nodes.
In the third step, the nodes of the existing botnet continue infecting mobile devices under the control of the bot master. As the mobile botnet expands, the number of tiers grows.

C. Network Topology
Definition 2 (Botnet degree): The degree of our mobile botnet is K. It is the maximum value of ⟨k_i⟩, i ∈ N.
Definition 3 (Botnet height): The height of our mobile botnet is H. It is the number of tiers of the model.
Definition 4 (Atomic network): The atomic network is composed of a region bot server node and k' bot nodes; it is the smallest bot network, controlled by a region bot server at the lowest tier.
Definition 5 (k'): k' is the number of lower-tier nodes that communicate with a higher-tier node n_i.

In general, our proposed mobile botnet is made up of 1 bot master, k' bot servers, k' region bot servers and k' sub-networks. The maximum degree of a bot server n_i is ⟨k⟩ = 2k'; in Figure 3, k' = 3 and ⟨k⟩ = 6. For mobile botnets with a large number of bot nodes, it makes sense to keep the botnet degree K under a threshold. So, unless the bot nodes are high-capacity servers, bot masters should keep ⟨k⟩ small [14].

As stated above, the topology of our model is a heterogeneous multi-tree structure. From top to bottom the model is a multi-tree network, while the bot server tier and the region bot server tiers are P2P structured networks. The degree of the model is K and its height is H. The degree of node n_i is ⟨k_i⟩, and K is the maximum degree over all nodes, K = max(⟨k_1⟩, ⟨k_2⟩, ..., ⟨k_N⟩). Our botnet (see Figure 3) can be divided into H − 1 tiers and k' subnets. The lowest-tier network contains k' (k' = 3) subnets, and each subnet is composed of k' P2P structured atomic networks, as shown in Figure 4.

The topology between the bot server nodes is a P2P structured network (tier 1 in Figure 3); the distances between them are all 1 hop. The topology between region bot server nodes, and between bot nodes, is also P2P structured. This enhances the communication efficiency between the mentioned bot nodes.
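A minimal Python sketch of this topology arithmetic follows; the helper names are ours, and the degree decomposition is one reading consistent with the description above (one upstream link to the bot master, k' − 1 peer links inside the P2P bot server tier, and k' downstream links to the region bot servers).

```python
def bot_server_degree(k_prime):
    """Degree of a bot server under the reading above:
    1 upstream link + (k' - 1) peer links + k' downstream links = 2k'."""
    return 1 + (k_prime - 1) + k_prime


def tier_size(k_prime, tier):
    """Number of nodes at a given tier of an idealized full k'-ary
    expansion (tier 0 = bot master, tier 1 = bot servers, ...)."""
    return k_prime ** tier


if __name__ == "__main__":
    k_prime = 3                                        # the Figure 3 example
    print(bot_server_degree(k_prime))                  # 6, i.e. <k> = 2k'
    print([tier_size(k_prime, t) for t in range(4)])   # [1, 3, 9, 27]
```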
D. Command and Control Network
In our mobile botnet model we use SMS as the C&C channel. Most traditional botnets use centralized IRC, the HTTP protocol or P2P-based C&C channels, and all of these rely on IP-based C&C delivery. Unlike in the PC world, even with EDGE or 3G networks, smart mobile devices (smart phones) still cannot establish stable IP-based connections with each other. Given this limitation, in our work we use SMS as a decentralized, non-IP-based C&C channel. The advantages of an SMS-based C&C channel are as follows.
First, it is popular: most mobile phone subscribers send and receive text messages.
Second, it is easy to use: malicious content can be hidden in a message, and command and control messages can be disguised as spam-looking messages.
Third, it has high connectivity: even if a phone has signal problems or is powered off, the SMS messages sent to it will eventually arrive, because SMS messages can be stored in a service center and delivered once the signal becomes available or the phone is turned back on [19].

Our mobile botnet can be considered a "PUSH"-based botnet. A "PUSH"-based botnet delivers commands from the bot server to the bot nodes whenever the bot master wants; we call this process "PUSH". In contrast, the traditional botnets of [20] and [21] are "PULL"-based: their bot nodes periodically contact the bot server to get commands. "PULL"-based botnets have drawbacks, such as generating additional traffic and introducing unexpected latency in command delivery, and G. Gu et al. [18] have designed effective detection schemes for them. All of this makes "PULL"-based botnets unattractive in practice. "PUSH"-based delivery does not generate additional traffic and has little latency in command delivery, so in this paper we design our model as a "PUSH"-based botnet.

As illustrated in Figure 3, the basic design idea of our SMS-based mobile botnet is to use a heterogeneous multi-tree network as the command and control (C&C) network. The network acts as a command channel, and all the nodes of this network (black nodes) serve as bot servers. The C&C network of our proposed model is composed of the bot master, the bot servers and the region bot servers.

As in Kademlia [22], the absence of an authentication mechanism would mean that anyone could insert values into the commands; a defender could use this to launch index poisoning attacks and disrupt the C&C. So nodes are authenticated before communicating, and important commands are encrypted. For the efficiency of the C&C, only critical commands issued by the bot master that order bots to execute malicious tasks, such as "transmit valuable information", are encrypted, while commands for P2P communication purposes, such as "search node", are only disguised without encryption [19].

Figure 5 The C&C network

According to its bot list, a bot server node performs two main functions, search and forward. Some of the important commands are encrypted, and the encrypted commands are pushed from higher-tier nodes to lower-tier nodes: from the bot master to the bot server nodes, to the region bot server nodes and to the bot nodes, respectively. All of the bot lists are encrypted, so once a key node is captured, the defender cannot obtain the other nodes that communicate with it and cannot trace the bot master.

E. Node Replacement Mechanism
In some special circumstances a bot node may need to change its type, for example when one of the bot server nodes fails or is recovered. The bot server nodes and region bot server nodes are the key nodes of our model, so they must be replaced by other nodes if they go out of function, that is, if they fail or are recovered. As shown in Figure 6, in such circumstances a bot node's type may need to change.
Since the bot master holds the bot list of the failed key node, it sends a message containing the failed node's bot list and communication keys to the new key node, which reconnects the new key node to the bots.

Figure 6 The transfer between the three types of nodes

1) Node Eviction
Assumption 1: We assume that failed or recovered nodes can be detected by the bot master.
Assumption 2: We assume that every bot node has a self-destroy mechanism to delete the bot list stored in it.

Table 3 THE LIST OF NOTATIONS
Notation   Description
K_BB       Keys between bot server and bot master
K_BS       Keys between bot servers
K_BR       Keys between bot server and region bot server

First, the bot master detects the failed or recovered node n_i and then, using the phone number of n_i that it has stored, refreshes the corresponding keys, such as K_BB, K_BS and K_BR. Second, the bot master sends k' − 1 messages containing the new key K_BS to all the other bot servers, and k' messages containing K_BR to the region bot servers with which n_i communicated. Third, the phone number of n_i is deleted from the bot lists of the bot master, the bot servers and the region bot servers. After the eviction period, the failed or recovered node n_i can no longer communicate with the bot nodes mentioned above, and messages sent to them are discarded automatically. The eviction procedure is illustrated in Figure 7.

2) Node Replacement
In the replacement period, first, the bot master chooses a relatively powerful node from the lower-tier bot nodes or region bot servers as the new bot server, according to the criteria stated in the propagation section. Second, the bot master sends one message containing the new keys K_BB, K_BS, K_BR and the bot list to the newly assigned bot server. Third, the new bot server sends k' − 1 messages to establish communication with the remaining bot servers and k' messages to establish communication with the region bot servers that communicated with the captured bot server, so the new bot server sends a total of 2k' − 1 messages during the replacement period. The replacement procedure is illustrated in Figure 8.

Figure 8 The replacement of a bot server node

V. EVALUATION
In this section we analyze our proposed mobile botnet from the aspects of connectivity, security and robustness.

A. Connectivity
In this section we discuss the connectivity of our heterogeneous botnet at different node scales and compare it with that of a P2P structured botnet. We look at the 200-node botnet first, and then at the 1000-node and 2000-node botnets. In the 200-node botnet, when k' = 4 and k' = 5, all the nodes can be reached in 4 hops, and when k' = 3, all the nodes can be reached in 5 hops, as shown in Figure 9. In the 1000-node botnet, as illustrated in Figure 10, all the nodes can be reached in 5 hops when k' = 4 and k' = 5, and in 6 hops when k' = 3. Figure 11 shows the 2000-node botnet, where all the nodes are reached in 5, 6 and 7 hops, respectively. From Figures 9, 10 and 11 we can see that the larger k' is, the fewer hops the mobile botnet needs to reach all of its bot nodes; once the number of hops exceeds 3, the connectivity rises dramatically.

Figure 9 The relationship between connectivity and k' when N = 200
Figure 10 The relationship between connectivity and k' when N = 1000
Figure 11 The relationship between connectivity and k' when N = 2000
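The hop counts above follow from simple geometric expansion. The sketch below is an approximation we add for illustration; it assumes a command crosses one tier per hop and that every forwarding node reaches k' lower-tier nodes. Under these assumptions it reproduces the hop counts quoted above for the 200-, 1000- and 2000-node botnets.

```python
def hops_to_cover(n_nodes, k_prime, max_hops=20):
    """Smallest number of hops h such that a full k'-ary expansion from
    the bot master (1 + k' + k'^2 + ... + k'^h nodes) covers n_nodes."""
    covered = 1                          # the bot master itself
    for h in range(1, max_hops + 1):
        covered += k_prime ** h          # nodes first reached at hop h
        if covered >= n_nodes:
            return h
    raise ValueError("max_hops too small for this network size")


if __name__ == "__main__":
    for n in (200, 1000, 2000):
        print(n, {k: hops_to_cover(n, k) for k in (3, 4, 5)})
        # 200  -> {3: 5, 4: 4, 5: 4}
        # 1000 -> {3: 6, 4: 5, 5: 5}
        # 2000 -> {3: 7, 4: 6, 5: 5}
```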
We compare our proposed botnet with the P2P structured botnet. In the P2P structured botnet, the average path length between any two nodes can be as short as 1 hop.

(a) The shortest path length  (b) The longest path length
Figure 12 The path lengths of the P2P structured botnet

As illustrated in Figure 12(a), in the P2P structured botnet the commands from Bot 0 can be forwarded directly to Bot 1, Bot 2, Bot 3, Bot 4 and Bot 5. So the mean path length is

PL_{P2P}^{1} = \frac{1}{N-1} \sum_{i=1}^{N-1} l_i = \frac{1}{N-1} \sum_{i=1}^{N-1} 1 = 1    (1)

In Figure 12(b), the commands from Bot 0 finally reach Bot 5 through Bot 1, Bot 2, Bot 3 and Bot 4. So the mean path length is

PL_{P2P}^{2} = \frac{1}{N-1} \sum_{i=1}^{N-1} l_i = \frac{1}{N-1} \sum_{i=1}^{N-1} i = \frac{N}{2}    (2)

According to Formulas (1) and (2), the mean path length of the P2P structured botnet is

Mean_{P2P} = \frac{1 + N/2}{2} = \frac{N+2}{4}    (3)

So the mean number of connection hops between two nodes of the P2P structured botnet is (N + 2)/4. When N = 6 the mean is 2 hops and the network connectivity is good, but when N = 200 and N = 2000 the mean numbers of connection hops are approximately 50 and 500, respectively, larger than in our proposed botnet.

From this comparison we can see that our proposed mobile botnet has high connectivity, which can easily be adjusted by changing the height and scale of the mobile botnet. It therefore has high flexibility and scalability.

B. Security
As described in Section IV, the height of the model is H and k' is the number of lower-tier nodes that communicate with a higher-tier node. As illustrated in Figure 5, the maximum number of nodes at the lowest tier is (k')^H. Then the maximum number of nodes in our proposed botnet model is

N = \sum_{i=0}^{H-2} (k')^i + (k')^H    (1)

The larger H is, the smaller the communication load on the key nodes; but a large H leads to long communication paths and low connectivity. The relationship between the model degree K and the node degrees ⟨k_i⟩ is

K = \max(\langle k_1 \rangle, \langle k_2 \rangle, \ldots, \langle k_N \rangle)    (2)

\langle k \rangle \le 2k'    (3)

As we can see from Formulas (2) and (3), K increases with k'. During the C&C period the key nodes have to forward at least k' messages at a time, but a large k' means a large amount of abnormal data traffic, which jeopardizes the safety of the key nodes. The relationships of k', N and H are shown in Figure 9. When k' = 3 and H takes the values 2, 3, 4 and 5, the proposed botnet has better connectivity but lower capacity. When k' = 5, there is a better balance between connectivity and N. In this paper, considering the trade-off between k' and N, we set our mobile botnet to H = 5 and k' ≤ 5, so ⟨k⟩ ≤ 10 and K ≤ 10, and the scale of the botnet can reach up to 3281 bot nodes.

By the trade-off between k' and H, we can efficiently control the amount of traffic that travels between bot nodes; the communications between bot nodes will not catch the attention of defenders, which protects the mobile botnet, so our mobile botnet can have high security.
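Both closed-form results used in this evaluation are easy to check numerically. The following sketch (the function names are ours) evaluates the capacity formula of Section V.B, Formula (1), and the P2P mean path length of Section V.A, Formula (3); it returns the 3281-node capacity quoted above for H = 5 and k' = 5, and mean hop counts of roughly 50 and 500 for N = 200 and N = 2000.

```python
def botnet_capacity(k_prime, height):
    """Section V.B, Formula (1): N = sum_{i=0}^{H-2} (k')^i + (k')^H."""
    return sum(k_prime ** i for i in range(height - 1)) + k_prime ** height


def p2p_mean_hops(n_nodes):
    """Section V.A, Formula (3): the average of the best case (mean 1 hop)
    and the worst case (mean N/2 hops), i.e. (N + 2) / 4."""
    return (n_nodes + 2) / 4


if __name__ == "__main__":
    print(botnet_capacity(5, 5))    # 3281, as stated in the text
    print(p2p_mean_hops(6))         # 2.0
    print(p2p_mean_hops(200))       # 50.5  (~50)
    print(p2p_mean_hops(2000))      # 500.5 (~500)
```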