

BulletProof: A Defect-Tolerant CMP Switch Architecture

Kypros Constantinides, Stephen Plaza, Jason Blome, Bin Zhang, Valeria Bertacco, Scott Mahlke, Todd Austin, Michael Orshansky

Advanced Computer Architecture Lab, University of Michigan, Ann Arbor, MI 48109
Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX 78712

{kypros, splaza, jblome, valeria, bzhang, mahlke, austin}@umich.edu, orshansky@ece.utexas.edu

Abstract

As silicon technologies move into the nanometer regime, transistor reliability is expected to wane as devices become subject to extreme process variation, particle-induced transient errors, and transistor wear-out. Unless these challenges are addressed, computer vendors can expect low yields and short mean-times-to-failure. In this paper, we examine the challenges of designing complex computing systems in the presence of transient and permanent faults. We select one small aspect of a typical chip multiprocessor (CMP) system to study in detail, a single CMP router switch. To start, we develop a unified model of faults, based on the time-tested bathtub curve. Using this convenient abstraction, we analyze the reliability versus area tradeoff across a wide spectrum of CMP switch designs, ranging from unprotected designs to fully protected designs with online repair and recovery capabilities. Protection is considered at multiple levels, from the entire system down through arbitrary partitions of the design. To better understand the impact of these faults, we evaluate our CMP switch designs using circuit-level timing on detailed physical layouts. Our experimental results are quite illuminating. We find that designs are attainable that can tolerate a larger number of defects with less overhead than naive triple-modular redundancy, using domain-specific techniques such as end-to-end error detection, resource sparing, automatic circuit decomposition, and iterative diagnosis and reconfiguration.

1. Introduction

A critical aspect of any computer design is its reliability. Users expect a system to operate without failure when asked to perform a task. In reality, it is impossible to build a completely reliable system; consequently, vendors target design failure rates that are imperceptibly small [23]. Moreover, the failure rate of a population of parts in the field must not prove too costly to service. The reliability of a system can be expressed as the mean-time-to-failure (MTTF). Computing system reliability targets are typically expressed as failures-in-time, or FIT rates, where one FIT represents one failure in a billion hours of operation.

In many systems today, reliability targets are achieved by employing a fault-avoidance design strategy. The sources of possible computing failures are assessed, and the necessary margins and guards are placed into the design to ensure it will meet the intended level of reliability. For example, most transistor failures (e.g., gate-oxide breakdown) can be reduced by limiting voltage, temperature and frequency [8]. While these approaches have served manufacturers well for many technology generations, many device experts agree that transistor reliability will begin to wane in the nanometer regime. As devices become subject to extreme process variation, particle-induced transient errors, and transistor wearout, it will likely no longer be possible to avoid these faults. Instead, computer designers will have to begin to directly address system reliability through fault-tolerant design techniques.

Figure 1 illustrates the fault-tolerant design space we focus on in this paper. The horizontal axis lists the types of device-level faults that systems might experience. The sources of failures are widespread, ranging from transient faults due to energetic particle strikes [32] and electrical noise [28], to permanent wearout faults caused by electromigration [13], stress-migration [8], and dielectric breakdown [10]. The vertical axis of Figure 1 lists design solutions to deal with faults. Design solutions range from ignoring any possible faults (as is done in many systems today), to detecting and reporting faults, to detecting and correcting faults, and finally fault correction with repair capabilities. The final two design solutions are the only ones that can address permanent faults, with the final solution being the only approach that maintains efficient operation after encountering a silicon defect.

In recent years, industry designers and academics have paid significant attention to building resistance to transient faults into their designs. A number of recent publications have suggested that transient faults, due to energetic particles in particular, will grow in future technologies [5, 16].

A variety of techniques have emerged to provide a capability to detect and correct these types of faults in storage, including parity or error correction codes (ECC) [23], and in logic, including dual or triple-modular spatial redundancy [23], time-redundant computation [24], and checkers [30]. Additional work has focused on the extent to which circuit timing, logic, architecture, and software are able to mask out the effects of transient faults, a process referred to as "derating" a design [7, 17, 29].

In contrast, little attention has been paid to incorporating design tolerance for permanent faults, such as silicon defects and transistor wearout. The typical approach used today is to reduce the likelihood of encountering silicon faults through post-manufacturing burn-in, a process that accelerates the aging process as devices are subjected to elevated temperature and voltage [10]. The burn-in process accelerates the failure of weak transistors, ensuring that, after burn-in, devices still working are composed of robust transistors. Additionally, many computer vendors provide the ability to repair faulty memory and cache cells, via the inclusion of spare memory resources.

[Figure 1: The fault-tolerant design space: device-level fault types (e.g., manufacturing defects) versus design solutions, grouped into mainstream, high-end, and specialized solutions.]

... as a result of relentless down-scaling [27].

Transistor Infant Mortality. Scaling has had adverse effects on the early failures of transistor devices. Traditionally, early transistor failures have been reduced through the use of burn-in. The burn-in process utilizes high voltage and temperature to accelerate the failure of weak devices, thereby ensuring that parts that survive burn-in only possess robust transistors. Unfortunately, burn-in is becoming less effective in the nanometer regime, as deep sub-micron devices are subject to thermal run-away effects, where increased temperature leads to increased leakage current, and increased leakage current leads to yet higher temperatures. The end result is that aggressive burn-in will destroy even robust transistors. Consequently, vendors may soon have to relax the burn-in process, which will ultimately lead to more early failures of transistors in the field.

2.1 The Bathtub: A Generic Model for Semiconductor Hard Failures

To derive a simple architect-friendly model of failures, we step back and return to the basics. In the semiconductor industry, it is widely accepted that the failure rate for many systems follows what is known as the bathtub curve, as illustrated in Figure 2. We will adopt this time-tested failure model for our research. Our goal with the bathtub-curve model is not to predict its exact shape and magnitude for the future (although we will fit published data to it to create "design scenarios"), but rather to utilize the bathtub curve to illuminate the potential design space for future fault-tolerant designs. The bathtub curve represents device failure rates over the entire lifetime of transistors, and it is characterized by three distinct regions.

• Infant Period: In this phase, failures occur very early, and the failure rate declines rapidly over time. These infant mortality failures are caused by latent manufacturing defects that surface quickly if a temperature or voltage stress is applied.

• Grace Period: When early failures are eliminated, the failure rate falls to a small constant value where failures occur sporadically due to the occasional breakdown of weak transistors or interconnect.

• Breakdown Period: During this period, failures occur with increasing frequency over time due to age-related wearout. Many devices will enter this period at roughly the same time, creating an avalanche effect and a quick rise in device failure rates.

With respect to Figure 2, the model is represented by the following equations:

FailureRate(t) = F_G + (λ_L · 10^9) / (t + 1)^m,   if 0 ≤ t < t_B

FailureRate(t) = F_G + (t − t_B)^b,   if t_B ≤ t

(t is measured in hours)

where the parameters of the model are as follows:

• λ_L: average number of latent manufacturing defects per chip

• m: infant period maturing factor

• F_G: grace period failure rate

• t_B: breakdown period start point

• b: breakdown factor

Average Number of Latent Manufacturing Defects (λ_L): Prior work on burn-in fall-out [3] relates λ_L to the average number of "killer" defects per chip through γ, an empirically estimated parameter with typical values between 0.01 and 0.02. The same work provides formulas for deriving the maximum number of latent manufacturing defects that may surface during burn-in test. Based on these models, the average number of latent manufacturing defects per chip (140 mm²) for current technologies (λ_L) is approximately 0.005. In the literature, there are no clear trends for how this value changes with technology scaling, thus we use the same rate for projections of future technologies.

Grace Period Failure Rate (F_G): For the grace period failure rate, we use reference data from [26]. In [26], a microarchitecture-level model was used to estimate workload-dependent processor hard failure rates at different technologies. The model supports four main intrinsic failure mechanisms experienced by processors: electromigration, stress migration, time-dependent dielectric breakdown, and thermal cycling. For a predicted post-65nm fabrication technology, we adopt their worst-case failure rate (F_G) of 55,000 FITs.

Breakdown Period Start Point (t_B): Previous work [27] estimates the time to dielectric breakdown using extrapolation from the measurement conditions (under stress) to normal operating conditions. We estimate the breakdown period start point (t_B) to be approximately 12 years for 65nm CMOS at 1.0V supply voltage. We were unable to find any predictions as to how this value will trend for fabrication technologies beyond 65nm, but we conservatively assume that the breakdown period will be held to periods beyond the expected lifetime of the product. Thus, we need not address reliable operation in this period, other than to provide a limited amount of resilience to breakdown for the purpose of allowing the part to remain in operation until it can be replaced.

The maturing factor during the infant mortality period and the breakdown factor during the breakdown period are m = 0.02 and b = 2.5, respectively.
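To make the model concrete, the sketch below evaluates the piecewise failure-rate curve exactly as written above, using the parameter values quoted in this section; the function name and the printed sample points are illustrative assumptions rather than part of the paper's infrastructure.

```python
# Minimal sketch of the bathtub failure-rate model described above.
# Parameter defaults are the values quoted in the text (lambda_L = 0.005,
# m = 0.02, F_G = 55,000 FITs, t_B = 12 years, b = 2.5); the function
# itself is an illustrative helper, not the authors' code.

def failure_rate_fits(t_hours, lam_L=0.005, m=0.02, F_G=55_000.0,
                      t_B=12 * 365 * 24, b=2.5):
    """Failure rate in FITs at time t, measured in hours of operation."""
    if 0 <= t_hours < t_B:
        # Infant-mortality term decays as latent defects surface,
        # leaving the constant grace-period rate F_G.
        return F_G + lam_L * 1e9 / (t_hours + 1) ** m
    # Breakdown period: wearout failures accelerate past t_B.
    return F_G + (t_hours - t_B) ** b

if __name__ == "__main__":
    for years in (0.1, 1, 5, 11, 13):
        t = years * 365 * 24
        print(f"{years:5.1f} years -> {failure_rate_fits(t):15.1f} FITs")
```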

3. A Fault Impact Evaluation Infrastructure

In [7], we introduced a high-fidelity simulation infrastructure for quantifying various derating effects on a design's overall soft-error rate. This infrastructure takes into account circuit-level phenomena, such as timing-related, logic-related and microarchitecture-related fault masking. Since many tolerance techniques for permanent errors can be adapted to also provide soft error tolerance, the remainder of this work concentrates on the exploration of various defect tolerance techniques.

3.1 Simulation Methodology for Permanent Faults

In this section, we present our simulation infrastructure for evaluating the impact of silicon defects. We create a model for permanent fault parameters, and develop the infrastructure necessary to evaluate the effects on a given design.

Our fault-injection framework simulates two copies of the design: one copy is kept intact (golden model), while the other is subject to fault injection (defect-exposed model). The structural specification of our design was synthesized from a Verilog description using the Synopsys Design Compiler.

Our silicon defect model distributes defects in the design uniformly in time of occurrence and spatial location. Once a permanent failure occurs, the design may or may not continue to function, depending on the circuit's internal structure and the system architecture. The defect analyzer classifies each defect as exposed, protected, or unprotected but masked. In the context of defect evaluation, faults accumulate over time until the design fails to operate correctly. The defect that brings the system to failure is the last injected defect in each experiment, and it is classified as exposed. A defect may be protected if, for instance, it is the first one to occur in a triple-module-redundant design. An unprotected but masked defect is a defect that is masked because it occurs in a portion of the design that has already failed; any other defect in unprotected logic is considered exposed.

Finally, to gain statistical confidence in the results, we run the simulations described above many times in a Monte Carlo modeling framework.
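The sketch below conveys the flavor of one such Monte Carlo experiment: defects land uniformly over the design's fault sites and accumulate until one is classified as exposed, and many independent trials are averaged. The site count, trial count, and the toy protection predicate are stand-ins for the real netlist and defect analyzer, not the actual infrastructure.

```python
import random

# Illustrative Monte Carlo loop in the spirit of the methodology above.
# Defects are dropped uniformly over the design's fault sites and accumulate
# until one is classified as exposed, which ends the run; the trial count
# and is_protected predicate are placeholder assumptions.

def run_trial(num_sites, is_protected):
    hits = []                              # fault sites struck so far, in order
    while True:
        hits.append(random.randrange(num_sites))
        if not is_protected(hits[-1], hits):
            return len(hits)               # exposed defect: the design has failed

def mean_defects_to_failure(num_sites, is_protected, trials=1_000):
    return sum(run_trial(num_sites, is_protected) for _ in range(trials)) / trials

# Toy protection model: every site has one dedicated spare, so the first
# defect at a site is tolerated and a second defect at the same site is exposed.
one_spare = lambda site, hits: hits.count(site) == 1
print(mean_defects_to_failure(1_000, one_spare))
```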

3.2 Reliability of the Baseline CMP Switch Design

The first experiment is an evaluation of the reliability of the baseline CMP switch design. In Figure 4, we used the bathtub curve fitted for the post-65nm technology node as derived in Section 2. The FIT rate of this curve is 55,000 during the grace period, which corresponds to a mean time to failure (MTTF) of 2 years. We used this failure rate in our simulation framework for permanent failures and we plotted the results.

The baseline CMP design does not deploy any protection technique against defects, and one defect is sufficient to bring down the system. Consequently, the graph of Figure 4 shows that in a large parts population, 50% of the parts will be defective by the end of the second year after shipment, and by the fourth year almost all parts will have failed. In this experiment, we have also analyzed a design variant which deploys triple-module redundancy (TMR) at the full-system level (i.e., three CMP switches with voting gates at their outputs; designs with TMR applied at different granularities are evaluated in Section 5) to provide better defect tolerance.

The TMR model used in this analysis is the classical TMR model, which assumes that when a module fails it starts producing incorrect outputs, and if two or more modules fail, the output of the TMR voter will be incorrect. This model is conservative in its reliability analysis because it does not take into account compensating faults. For example, if two faults affect two independent output bits, then the voter circuit should be able to correctly mask both faults. However, the benefit gained from accounting for compensating faults rapidly diminishes with a moderate number of defects, because the probabilities of fault independence are multiplied.

Further, though the switch itself demonstrated a moderate number of independent fault sites, submodules within the design tended to exhibit very little independence. Also, in [21], it is demonstrated that even when TMR is applied to diversified designs (i.e., three modules with the same functionality but different implementations), the probability of independence is small. Therefore, in our reliability analysis, we choose to implement the classical TMR model, and for the rest of the paper, whenever TMR is applied, the classical TMR model is assumed.

From Figure 4, the simulation-based analysis finds that TMR provides very little reliability improvement over the baseline design, due to the small number of defects that can be tolerated by system-level TMR. Furthermore, the area of the TMR-protected design is more than three times the area of the baseline design. The increase in area raises the probability of a defect being manifested in the design, which significantly affects the design's reliability. In the rest of the paper, we propose and evaluate defect-tolerant techniques that are significantly more robust and less costly than traditional defect-tolerant techniques.

4. Self-Repairing CMP Switch Design

The goal of the BulletProof project is to design a defect-tolerant chip multiprocessor capable of tolerating significant levels of various types of defects. In this work, we address the design of one aspect of the system, a defect-tolerant CMP switch. The CMP switch is much less complex than a modern microprocessor, enabling us to understand the entire design and explore a large solution space. Further, this switch design contains many representative components of larger designs, including finite state machines, buffers, control logic, and buses.

4.1 Baseline Design

The baseline design consists of a CMP switch similar to the one described in [19]. This CMP switch provides wormhole routing pipelined at the flit level and implements credit-based flow control for a two-dimensional torus network. In the switch pipeline, head flits proceed through routing and virtual channel allocation stages, while all flits proceed through switch allocation and switch traversal stages. A high-level block diagram of the router architecture is depicted in Figure 5.

The implemented router is composed of four functional modules: the input controller, the switch arbiter, the crossbar controller, and the crossbar. The input controller is responsible for selecting the appropriate output virtual channel for each packet, maintaining virtual channel state information, and buffering flits as they arrive and await virtual channel allocation. Each input controller is enhanced with an 8-entry, 32-bit buffer. The switch arbiter allocates virtual channels to the input controllers, using a priority matrix to ensure that starvation does not occur. The switch arbiter also implements flow control by tallying credit information used to determine the amount of available buffer space at downstream nodes. The crossbar controller is responsible for determining and setting the appropriate control signals so that allocated flits can pass from the input controllers to the appropriate output virtual channels through the interconnect provided by the crossbar.

The router design is specified in Verilog and was synthesized using the Synopsys Design Compiler to create a gate-level netlist. A small number of components heavily dominate the design, accounting for approximately 84% of the technology-mapped area.

4.2 Defect Protection Techniques

A design that is tolerant to permanent defects must provide mechanisms that perform four central actions related to faults: detection, diagnosis, repair, and recovery. Fault detection identifies that a defect has manifested as an error in some signal. Normal operation cannot continue after fault detection, as the hardware is not operating properly. Often fault detection occurs at a macro level, thus it is followed by a diagnosis process to identify the specific location of the defect. Following diagnosis, the faulty portion of the design must be repaired to enable proper system functionality. Repair can be handled in many ways, including disabling, ignoring, or replacing the faulty component. Finally, the system must recover from the fault, purging any incorrect data and recomputing corrupted values. Recovery essentially makes the defect's manifestation transparent to the application's execution. In this section, we discuss a range of techniques that can be applied to the baseline switch to make it tolerant of permanent defects. The techniques differ in their approach and the level at which they are applied to the design.

In [9], the authors present the Reliable Router (RR), a switching element design for improved performance and reliability within a mesh interconnect. The design relies on an adaptive routing algorithm coupled with a link-level retransmission protocol in order to maintain service in the presence of a single node or link failure within the network. Our design differs from the RR in that our target domain involves a much higher fault rate and focuses on maintaining switch service in the face of faults, rather than simply routing around faulty nodes or links. However, the two techniques can be combined to provide a more reliable multiprocessor interconnection network.

4.2.1 General Techniques

The most commonly used protection mechanisms are dual and triple modular redundancy, or DMR and TMR [23]. These techniques employ spatial redundancy combined with a majority voter. With permanent faults, DMR provides only fault detection; hence, a single fault in either of the redundant components will bring the system down. TMR is more effective, as it provides solutions for detection, recovery, and repair. In TMR, the majority voter identifies a malfunctioning hardware component and masks its effects on the primary outputs. Hence, repair is trivial, since the defective component is simply outvoted whenever it computes an incorrect value. For the same reason, however, TMR is inherently limited to tolerating a single permanent fault: faults that manifest in either of the other two copies cannot be handled. DMR/TMR are applicable to both state and logic elements and thus are broadly applicable to our baseline switch design.
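As a concrete illustration of the majority voting TMR relies on, the bitwise voter below masks the output of any single faulty copy; it is a generic sketch rather than the switch's actual voter logic.

```python
# Bitwise 2-of-3 majority vote over the outputs of three redundant copies.
# A single faulty copy is outvoted on every bit; once a second copy fails,
# wrong values can win the vote, which is why classical TMR tolerates only
# one permanent fault per voted partition.

def tmr_vote(a: int, b: int, c: int) -> int:
    return (a & b) | (a & c) | (b & c)

good, corrupted = 0b1011_0110, 0b0000_1111
assert tmr_vote(good, good, corrupted) == good   # one bad copy is masked
```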

Storage or state elements are often protected by parity or error correction codes (ECC) [23]. ECC provides a lower-overhead solution for state elements than TMR. Like TMR, ECC provides a unified solution to detection and recovery. Repair is again trivial, as the parity computation masks the effects of permanent faults. In addition to the storage overhead of the actual parity bits, the computation of parity or ECC bits generally requires a tree of exclusive-ORs. This hardware has moderate overhead, but more importantly, it can often operate in parallel, thus not affecting latency. For our defect-tolerant switch, the application of ECC is limited due to the small fraction of area that holds state.

4.2.2 Domain-Specific Techniques

The properties of the wormhole router can be exploited to create domain-specific protection mechanisms. Here, we focus on one efficient design that employs end-to-end error detection, resource sparing, system diagnosis, and reconfiguration.

End-to-End Error Detection and Recovery Mechanism. Within our router design, errors can be separated into two major classes. The first class is comprised of data-corrupting errors, for example a defect that alters the data of a routed flit, so that the routed flit is permanently corrupted. The second class is comprised of errors that cause functional incorrectness, for example a defect that causes a flit to be misrouted to a wrong output channel, or to get lost and never reach any of the switch's output channels.

The first class of errors, the data-corrupting errors, can be addressed by adding Cyclic Redundancy Checkers (CRC) at each of the switch's five output channels, as shown in Figure 6(a). When an error is detected by a CRC checker, all CRC checkers are notified about the error detection and block any further flit routing. The same error detection signal used to notify the CRC checkers also notifies the switch's recovery logic. The switch's recovery logic logs the error occurrence by incrementing an error counter. In case the error counter surpasses a predefined threshold, the recovery logic signals the need for system diagnosis and reconfiguration.

In case the error counter is still below the predefined threshold, the switch recovers its operation from the last "checkpointed" state by squashing all in-flight flits and rerouting the corrupted flit and all following flits. This is accomplished by maintaining an extra recovery head pointer at the input buffers. As shown in Figure 6(a), each input buffer maintains an extra head pointer which indicates the last flit stored in the buffer that has not yet been checked by a CRC checker. The recovery head pointer is automatically incremented four cycles after the associated input controller grants access to the requested output channel, which is the latency needed to route the flit through the switch once access to the destination channel is granted. In case of a switch recovery, the recovery head pointer is assigned to the head pointer for all five input buffers, and the switch recovers operation by rerouting the flits pointed to by the head pointers. Further, the switch's credit backflow mechanism needs to be adjusted accordingly, since an input buffer is now considered full when the tail pointer reaches the recovery head pointer. In order for the switch's recovery logic to be able to distinguish soft from hard errors, the error counter is reset to zero at regular intervals.

[Figure 6(b): flits are split into two parts, which are independently routed through the switch pipeline.]
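To make the pointer discipline concrete, the following sketch models one input buffer with head, tail, and recovery-head pointers; the buffer size, method names, and the way CRC confirmation is signaled are simplifications assumed for illustration, not the actual switch implementation.

```python
# Simplified model of an input buffer with the extra recovery head pointer
# described above. Flits advance past the head when routed, but the recovery
# head only advances once the CRC checkers have validated them; on a detected
# error the head snaps back to the recovery head and the not-yet-verified
# flits are rerouted. Names and sizing here are assumptions.

class RecoverableInputBuffer:
    def __init__(self, size=8):
        self.size = size
        self.slots = [None] * size
        self.head = 0            # next flit to route
        self.recovery_head = 0   # oldest flit not yet CRC-verified
        self.tail = 0            # next free slot

    def full(self):
        # Fullness is judged against the recovery head: unverified flits
        # must stay in the buffer in case they need to be rerouted.
        return (self.tail + 1) % self.size == self.recovery_head

    def push(self, flit):
        assert not self.full()
        self.slots[self.tail] = flit
        self.tail = (self.tail + 1) % self.size

    def route_next(self):
        flit = self.slots[self.head]
        self.head = (self.head + 1) % self.size
        return flit

    def crc_verified_one(self):
        # Called after the switch-traversal latency, once the CRC checker
        # has validated the oldest in-flight flit.
        self.recovery_head = (self.recovery_head + 1) % self.size

    def recover(self):
        # Squash in-flight flits: rerouting restarts at the recovery head.
        self.head = self.recovery_head
```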

The detection of errors causing functional incorrectness is considerably more complicated, because of the need to detect misrouted and lost flits. A critical issue for the recovery of the system is to assure that there is at least one uncorrupted copy of each flit in flight in the switch's pipeline. This uncorrupted flit can then be used during recovery. To accomplish this, we add a Buffer Checker unit to each input buffer. As shown in Figure 6(b), the Buffer Checker unit compares the CRC-checked incoming flit with the last flit allocated into the input buffers (tail flit). Further, to guarantee the input buffer's correct functionality, the Buffer Checker also maintains a copy of the head and tail pointers, which are compared with the input buffer's pointers whenever a new flit is allocated. In the case that the comparison fails, the Buffer Checker signals an allocation retry, to cover the case of a soft error. If the error persists, this means that there is a potential permanent error in the design, and it signals the system diagnosis and reconfiguration procedures. By assuring that a correct copy of the flit is allocated into the input buffers and that the input buffer's head/tail pointers are maintained correctly, we guarantee that each flit entering the switch will correctly reach the head of the queue and be routed through the switch's pipeline.

To guarantee that a flit will get routed to the correct output channel, the flit is split into two parts, as shown in Figure 6(b). Each part gets its output channel requests from a different routing logic block, and accesses the requested output channel through a different switch arbiter. Finally, each part is routed through the crossbar independently. To accomplish this, we add an extra routing logic unit and an extra switch arbiter. The status bits in the input controllers that store the output channel reserved by the head flit are duplicated as well. Since the crossbar routes the flits at the bit level, the only difference is that the responses to the crossbar controller from the switch arbiter will not be the same for all the flit bits; the responses for the first and the second parts of the flit come from the first and second switch arbiters, respectively. If a defect causes a flit to be misrouted, it follows that a single defect can impact only one of the two parts of the flit, and the error will be caught later at the CRC check.

The area overhead of the proposed error detection and recovery mechanism is limited to only 10% of the switch's area. The area overhead of the CRC checkers, the Recovery Logic, and the Buffer Checker units is almost negligible. More specifically, the area of a single CRC checker is 0.1% of the switch's area, and the area of the Buffer Checker and the Recovery Logic is much less significant. The area overhead of the proposed mechanism is dominated by the extra Switch Arbiter (5.7%), the extra Routing Logic units (5 x 0.5% = 2.5%), and the additional CRC bits (1.5%). As we can see, the proposed error detection and recovery mechanism has roughly 10X less area overhead than a naive DMR implementation.

Resource Sparing. To provide defect tolerance for the switch design, we use resource sparing for selected partitions of the switch. During switch operation, only one copy of each distinct partition of the switch is active at a time. For each spare added to the design, there is additional overhead for the interconnection and the logic required to enable and disable the spare. For resource sparing, we study two different techniques, dedicated sparing and shared sparing. In the dedicated sparing technique, each spare is owned by a single partition and can be used only when that specific partition fails. When shared sparing is applied, one spare can be used to replace any one of a set of partitions. For the shared sparing technique to be applicable, it requires multiple identical partitions, such as the input controllers of the switch design. Furthermore, each shared spare requires additional interconnect and logic overhead, because it must be able to replace more than one possible defective partition.

System Diagnosis and Reconfiguration. As a system diagnosis mechanism, we propose an iterative trial-and-error method which recovers to the last correct state of the switch, reconfigures the system, and replays the execution until no error is detected. The general concept is to iterate through each spared partition of the switch and swap in the spare for the current copy. For each swap, the error detection and recovery mechanism described above is used to replay the execution and check whether the error persists; the copy that is thereby identified as defective is disabled.

In order for the system diagnosis to operate, it maintains a set of bit vectors as follows:

• Configuration Vector: indicates which spare partitions are enabled.

• Reconfiguration Vector: keeps track of which configurations have been tried and indicates the next configuration to be tried. It gets updated at each iteration of system diagnosis.

• Defects Vector: keeps track of which spare partitions are defective.

• Trial Vector: indicates which spare partitions are enabled for a specific system diagnosis iteration.

Figure 7 demonstrates an example where system diagnosis is applied on a system with four partitions and two copies (one spare) of each partition. The first copy of partition B has a detected defect (mapped in the Defects Vector). The defect in the first copy of partition D is a recently manifested defect and is the one that caused erroneous execution. Once the error is detected, the system recovers to the last correct state using the mechanism described in the previous section (see error detection and recovery mechanism), and it initializes the Reconfiguration Vector. Next, the Trial Vector is computed using the Configuration, Reconfiguration, and Defects vectors. In case the Trial Vector is the same as the Configuration Vector (attempt 2), due to a defective spare, the iteration is skipped. Otherwise, the Trial Vector is used as the current Configuration Vector, indicating which spare partitions will be enabled for the current trial. The execution is then replayed from the recovery point until the error detection point. In case the error is detected again, a new trial is initiated by updating the Reconfiguration Vector and recomputing the Trial Vector. In case no error is detected, meaning that the trial configuration is a working configuration, the Trial Vector is copied to the Configuration Vector and the Defects Vector is updated with the located defective copy. If all the trial configurations are exhausted, which are equal in number to the partitions, and no working configuration was found, then the defect was a fatal defect and the system will not be able to recover. The example implementation of the system diagnosis mechanism demonstrated in Figure 7 can be adapted accordingly for designs with more partitions and more spares.

We also consider the Built-In Self-Test (BIST) technique as an alternative for providing system diagnosis. For each distinct partition in the design, we store automatically generated test vectors in a ROM. During system diagnosis with BIST, these test vectors are applied to each partition of the system through scan chains to check its functional correctness and locate the defective partition.
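The sketch below shows the corresponding BIST-style flow under the same simplified partition model: stored test vectors are applied to each partition and compared against precomputed golden responses, and the first mismatching partition is reported as defective. The partition_eval callback is a hypothetical stand-in for driving a partition through its scan chain.

```python
# Hypothetical BIST-style diagnosis: ROM-resident test vectors are applied to
# each partition and the responses compared with golden values; the first
# partition that mismatches is reported as the defective one.

def bist_diagnose(partitions, test_vectors, golden, partition_eval):
    for name in partitions:
        for vec, expect in zip(test_vectors[name], golden[name]):
            if partition_eval(name, vec) != expect:
                return name          # defective partition located
    return None                      # every partition passed its self-test
```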

Both the iterative replay and BIST techniques can be implemented as a module separate from the switch, and the area overhead of their implementation can be shared by a large number of switches in a chip multiprocessor design.

4.2.3 Level of Protection

The error resiliency achieved by implementing one of the protection techniques (e.g., TMR or sparing) is highly dependent on the granularity of the partitions. In general, the larger the granularity of the partitions, the less robust the design. However, as the granularity of the partitions becomes smaller, more logic is required. For TMR, each output of a given partition requires a MAJORITY gate. Since each added MAJORITY gate is unshielded from permanent defects, poorly constructed small partitions can make a design less error resilient than designs with larger partitions.

To illustrate these trade-offs, consider the baseline switch again in Figure 5. Sparing and TMR can be applied at the system level, where the whole switch is replicated and each output requires some extra logic such as a MUX or MAJORITY gate. A single permanent error makes one copy of the switch completely broken. However, the area overhead beyond the spares is limited to only one gate for each primary output. A slightly more resilient design considers partitioning based on the components that make up the switch. For instance, each of the five input controllers can have a spare, along with the arbiter, the crossbar, and the crossbar controller. This partitioning approach leaves the design more protected, as a permanent defect in an input controller would break only that small partition and not the other four input controllers. There is a small area penalty for this approach, as the sum of the outputs of all partitions is greater than that of the switch as a whole. However, the added unprotected logic is still insufficient to worsen the error resiliency of the design. Finally, consider partitioning at the gate level. In this approach, each gate is in its own partition. In this scheme, the error resiliency of each partition is extremely high because the target is very small. However, this approach requires an extra gate for each gate in the switch design; thus for TMR, the area would be four times the original design. In addition, because each added gate is unprotected, the susceptibility of this design to errors is actually greater than with the larger partitions used in the component-based partitioning.

The previous analysis shows that the level of partitioning affects the error resiliency and the area overhead of the design. In this paper, we introduce a technique, called Automatic Cluster Decomposition, that generates partitions that minimize area overhead while maximizing error resiliency.

Automatic cluster decomposition builds on hypergraph partitioning, which has been studied extensively in fields like VLSI [2].

Figure 8 shows how these partitions are generated from the netlist of a design. First, the netlist pictured in part (a) is used to generate the hypergraph shown in part (b). A hypergraph is an extension of a normal graph where one edge can connect multiple vertices. In the figure, each vertex represents a separate net in the design. A hyperedge is drawn around each net and its corresponding fanout. If a net is placed in a different partition than one of its fanout nets, that net becomes an output of its partition, thus increasing the overhead of the partition. Thus, the goal of the partitioning algorithm is to minimize the number of hyperedges that are cut. For this example, we show a 3-way partitioning of the circuit. The algorithm in [12] performs a recursive min-cut operation, where the original circuit is bisected and then one of the resulting partitions is bisected again. In Figure 8(c), the hypergraph is bisected and the number of hyperedges cut is reported. Notice that one of the pieces is twice the size of the other one. Because 3-way partitioning is desired, one piece is left slightly larger so that the final partitions are fairly balanced, with each partition having about the same number of vertices/nets. Figure 8(d) shows the final partitioning assignment of the hypergraph, along with the number of hyperedges cut, which corresponds to the total number of outputs for all the partitions, not including the original outputs of the system. In practice, several iterations of [12] are run, as the algorithm is heuristically based and requires many runs for optimal partitions. Also, some imbalance in partition sizes is tolerated if the number of hyperedges cut is significantly smaller as a result.
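The cost function that the partitioner works against can be sketched as follows: a hyperedge (a net grouped with its fanout) is cut when its vertices do not all fall in the same partition, and each cut adds one boundary output. The tiny netlist and the 3-way assignment below are made-up examples, not the switch's netlist.

```python
# Sketch of the partitioning cost function described above: a hyperedge is
# "cut" when its vertices span more than one partition, which adds one output
# -- and therefore one extra MUX or MAJORITY gate -- at a partition boundary.

def cut_hyperedges(hyperedges, assignment):
    return sum(len({assignment[v] for v in edge}) > 1 for edge in hyperedges)

hyperedges = [("n1", "n2", "n3"),      # net n1 fans out to nets n2 and n3
              ("n2", "n4"),
              ("n3", "n4"),
              ("n4", "n5")]
assignment = {"n1": 0, "n2": 0, "n3": 1, "n4": 1, "n5": 2}   # a 3-way partition
print(cut_hyperedges(hyperedges, assignment))                # -> 3
```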

Once the hypergraph is partitioned, several different replication strategies can be used, such as sparing and TMR. In Figure 9, an example of performing 3-way partitioning over an arbitrary piece of logic using one spare per partition is shown. In part (a), sparing is performed at the system level.

Figure 9: Examples of one-spare systems. In part (a), sparing without any cluster decomposition is shown. In part (b), sparing is applied to a three-way partition. Cluster decomposition increases MUX interconnect overhead, but provides higher protection due to the smaller granularity of the sparing.

The outputs of both identical units are fed into a MUX, where a register value determines which unit is active and which one is inactive. In part (b), the circuit is partitioned into three pieces. Notice that each output at a partition boundary must now have a MUX associated with it. Also, each partition requires a register to determine which copy is active. Thus, 3-way partitioning requires a total of three registers and a number of MUXes corresponding to the number of outputs generated by the partitioning.
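A rough accounting of the overhead this implies for k-way partitioned sparing, under the assumptions stated in the comments, is sketched below; the area figures are placeholders rather than measured values from the switch.

```python
# Back-of-the-envelope overhead count for k-way partitioned sparing, as
# described above: one spare copy of every partition (roughly one extra copy
# of the whole design), one configuration register per partition, and one MUX
# per partition output (original design outputs plus boundary outputs created
# by cut hyperedges). All area figures here are placeholder assumptions.

def sparing_overhead(base_area, k_partitions, cut_outputs, design_outputs,
                     mux_area=1.0, reg_area=1.0):
    spare_area = base_area
    mux_count = design_outputs + cut_outputs
    return spare_area + mux_count * mux_area + k_partitions * reg_area

print(sparing_overhead(base_area=10_000, k_partitions=3,
                       cut_outputs=120, design_outputs=32))
```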

4.3 Switch Designs

Each configuration providing a defect-tolerant switch design is characterized by three parameters: the level of protection, the techniques applied, and the system diagnosis method. For each configuration, we use a naming convention of the form level_technique_diagnosis. The configurations using TMR as the defect tolerance technique do not use the end-to-end error detection, recovery, and system diagnosis techniques, since TMR inherently provides error detection, diagnosis, and recovery. All other configurations use the end-to-end error detection and recovery technique, along with either iterative replay or BIST for system diagnosis. Table 1 describes the choices that we considered in our simulated configurations for the three parameters, and it gives some example configurations along with their names.

5. Experimental Results

To evaluate the effectiveness of our domain-specific defect tolerance techniques in protecting the switch design, we simulated various design configurations with both traditional and domain-specific techniques. To assess the effectiveness of the various design configurations in protecting the switch design, we take into account the area and time overheads of the design, along with the mean number of defects that the design can tolerate. We also introduce a new metric, the Silicon Protection Factor (SPF), which gives a more representative notion of the amount of protection that a given defect tolerance technique offers to the system. Specifically, the SPF is computed by dividing the mean number of defects needed to cause a switch failure by the area overhead of the protection techniques. In other words, the higher the SPF, the more resilient each transistor is to defects. Since the number of defects in a design is proportional to the area of the design, this metric is a more appropriate way of assessing the effectiveness of the silicon protection techniques.
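The SPF arithmetic is straightforward; the sketch below applies the definition to the system-level TMR numbers reported in Table 2, purely to illustrate how the metric is computed.

```python
# Silicon Protection Factor: mean number of defects needed to cause failure,
# divided by the area factor of the protected design over the baseline.

def spf(mean_defects_to_failure, area_factor):
    return mean_defects_to_failure / area_factor

print(round(spf(2.49, 3.02), 2))   # S_TMR row of Table 2 -> 0.82
```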

In Table 2, we list the design configurations that we simulated. The naming convention used for representing each configuration is described in Table 1. For each simulated design configuration, we provide the area overhead needed to implement the specific design. This area overhead includes the extra area needed for the spare units, the majority gates, the logic for enabling and disabling spare units, and the logic for end-to-end error detection, recovery, and system diagnosis (different configurations have different requirements for the extra logic added). We notice that the design configurations with the highest area overheads are the ones applying BIST for system diagnosis. This is due to the extra area needed for storing the test vectors necessary for self-testing each distinct partition in the design, along with the additional interconnection and logic needed for the scan chains. Even though the area overhead for the test vectors can be shared over the total number of switches per chip, the area overhead of the BIST technique is still rather large. Another design configuration with high area overhead is the one where TMR is applied at the gate level, due to the extra voting gate needed for each gate in the baseline switch design. On the other hand, designs with shared spares achieve low area overhead (under 2X), since not every part of the switch is duplicated. The area overhead of the remaining design configurations depends on the number of spares per partition.

Table 1: Mnemonic table for design configurations. For each portion of the naming convention, we show the possible mnemonics with the related description. The last portion provides some example design configurations.

Level of applying the defect tolerance technique:
  S       System level
  C       Component level
  G       Gate level
  S+CL    System-level clusters
  C+CL    Component-level clusters

Defect tolerance techniques (can be applied in combinations):
  TMR     Triple Modular Redundancy
  #SP     # dedicated spares for each partition
  #SH(X)  # shared spares for partitions of type X
  ECC     Error Correction Codes applied to state

System diagnosis technique:
  IR      Iterative replay
  BIST    Built-In Self-Test

Example configurations:
  S+CL_1SP_IR         System-level clusters with 1 spare for each partition and iterative replay.
  C_2SH(IC)+1SP_BIST  Component level with 2 shared spares for the input controllers and one dedicated spare for the rest of the components; BIST for system diagnosis.
  C+CL_TMR+ECC        Component-level clusters with TMR and ECC-protected state.

In the fourth column of the table, we provide the mean number of defects to failure for each design configuration. The design configurations providing a high mean number of defects to failure are the ones employing the ACD (Automatic Cluster Decomposition) technique. Another point of interest is that techniques employing ECC, even when coupled with automatic cluster decomposition, perform poorly. Although state is traditionally protected by ECC, when a design is primarily combinational logic, like our switch, the cost of separating the state from the logic exceeds the protection given to the state elements. In other words, if the state is not considered in the ACD analysis and is therefore not part of any of the spared partitions, the boundary between the state and the spared partitions must have some unprotected interconnection logic. This added logic, coupled with the unprotected logic required by ECC, makes ECC undesirable in a logic-dominated design.

Table 2: Results of the evaluated designs. For each design configuration we report the mnemonic, the area factor over the baseline design, the number of defects that can be tolerated, the SPF, the number of partitions, and an estimate of the impact on the system delay.

Key  Design Configuration   Area O.head  Defects  SPF    #Part.  %Dly
1    S_TMR                  3.02         2.49     0.82   1       0.00
2    S+CL_TMR               3.08         16.78    5.45   241     22.22
3    S+CL_TMR+ECC           3.07         6.92     2.25   185     27.78
4    C_TMR                  3.04         4.68     1.54   12      0.00
5    C+CL_TMR               3.09         15.86    5.13   223     18.75
6    C+CL_TMR+ECC           3.11         6.25     2.01   298     25.00
7    G_TMR                  4.00         4.00     1.00   10540   100.00
8    S_1SP_IR               2.22         3.27     1.47   1       0.00
9    S+CL_1SP_IR            2.30         17.53    7.63   206     22.22
10   S+CL_1SP_BIST          3.16         17.53    5.54   206     22.22
11   S+CL_1SP+ECC_IR        2.48         5.96     2.41   183     27.78
12   S+CL_1SP+ECC_BIST      3.34         5.96     1.78   183     27.78
13   C_1SP_IR               2.24         5.87     2.62   12      0.00
14   C_1SP_BIST             2.79         5.87     2.62   12      0.00
15   C+CL_1SP_IR            2.33         16.04    6.88   223     18.75
16   C+CL_1SP_ECC_IR        2.51         5.34     2.13   138     25.00
17   S_2SP_IR               3.32         5.95     1.79   1       0.00
18   S+CL_2SP_IR            3.42         37.99    11.11  206     22.22
19   S+CL_2SP_BIST          4.29         37.99    8.86   206     22.22

The SPF values for each design are presented in the fifth column of Table 2. The highest SPFs are given by the design configurations that employ automatic cluster decomposition, with the highest being design S+CL_2SP_IR at 11.11. Even though design S+CL_2SP_BIST uses the same sparing strategies, the area overhead added by BIST decreases the design's SPF significantly. It is interesting that two design configurations have SPFs lower than 1. The first one is TMR applied at the system level, which can tolerate 2.5 defects, but its area overhead is more than triple; this makes the new design less defect tolerant than the baseline switch design by 18%. The second one is the design where only the state is protected by ECC. Since our design is logic dominated and the protected fraction of the design is very small, the extra logic required for applying ECC (which is itself unprotected) is larger than the actual protected area. Thus, this technique makes the specific design less defect tolerant than the baseline unprotected design by 2%.

The sixth column in the table shows the number of distinct partitions for each design configuration. This parameter is very important for the configurations employing ACD: the SPF of a given design configuration is greatly dependent on the number of partitions in the decomposed design. Figure 10 shows the dependency of the SPF on the number of decomposed partitions for the design configuration S+CL_1SP_IR. We can see that for the given design configuration the peak SPF occurs at around 200 partitions. As the per-partition size decreases, the SPF value increases, and as the number of cut edges per partition increases, the SPF value decreases. Therefore, the initial rise of the SPF occurs because the area per partition decreases as the number of decomposed partitions grows. After the optimal point of 200 partitions, the overhead of the extra unprotected logic required for each cut edge between partitions causes the SPF to start declining. For each design configuration employing automatic cluster decomposition, we ran several simulations with varying numbers of partitions to achieve an optimal SPF.

The final column in Table 2, %Delay, gives the percentage increase of the critical path delay in the switch. We produce a coarse, technology-independent approximation of the delay overheads by making the delay increase proportional to the number of interconnection gates added to the critical path. Thus, TMR-based designs achieve the same delay increase as spare-based designs, as multiplexers and majority gates are treated the same in the analysis. Our results show that for the best designs, we always achieve a delay increase of less than 25%. The designs that involve ACD show the greatest increase in delay, because the partitions generated frequently split up the critical paths. Designs with a minimal amount of clustering, such as the component-level configurations, incur little or no delay increase.

Figure 11 summarizes the area-versus-protection tradeoff across the evaluated configurations. The horizontal axis of the graph represents the defect tolerance provided by a design configuration in SPFs, and the vertical axis the area overhead of the design configuration. The further to the right a design configuration lies, the higher the defect tolerance it provides, while the lower it lies, the lower the implementation cost. At the lower left corner is the design configuration S_2SP_IR.

[Figure 11: Area overhead versus defect tolerance (in SPFs) for the evaluated design configurations.]

In Figure 12, we examine how the design configurations affect the lifetime of a population of parts, assuming the post-65nm bathtub curve derived in Section 2 with a grace-period failure rate of 55,000 FITs. For each design configuration, a line shows the fraction of the part population that has failed over time. We assume that parts go through a burn-in procedure and that shipped parts are already in their first year of lifetime, with a constant failure rate.

From the graph in Figure 12(a), we can observe that when applying TMR at the component level, 25% of the shipped parts will be defective by the first year and 75% by the end of the second year. If we consider the time at which a given fraction of the manufactured parts becomes defective, then the clustering design configuration S+CL_2SP_IR increases the switch's lifetime by 26X over the TMR design configuration C_TMR. For example, the design configuration S+CL_1SP_IR, where automatic cluster decomposition is applied at the system level with one dedicated spare for each partition and where 10% of the parts become defective only after 7 years, but which costs 48% less in area than design configuration S+CL_2SP_IR, might be a more attractive solution.

The same data as in Figure 12(a) is presented in Figure 12(b), with the difference that here we assume that the breakdown period for the switch design starts after 10 years of being shipped. For the first three design configurations, there is no difference, since by that time all of the parts have become defective. For the other two design configurations, what is interesting to observe is that even after the breakdown point, where the failure rates increase at an exponential rate, most of the parts will be able to provide the user a warning time window of a month before failure. This is a very important feature for a design configuration, especially for critical, highly dependable applications.

6. Conclusions and Future Directions

As silicon technologies continue to scale, transistor reliability is becoming an increasingly important issue. Devices are becoming subject to extreme process variation, transistor wearout, and manufacturing defects. As a result, it will likely no longer be possible to create fault-free designs. In this paper, we investigate the design of a defect-tolerant CMP network switch. To accomplish this design, we first develop a high-level, architect-friendly model of silicon failures based on the time-tested bathtub curve. Based on this model, we explore the design space of defect-tolerant CMP switch designs and the resulting tradeoff between defect tolerance and area overhead. We find that traditional mechanisms, such as triple modular redundancy and error correction codes, are insufficient for tolerating moderate numbers of defects. Rather, domain-specific techniques that include end-to-end error detection, resource sparing, and iterative diagnosis/reconfiguration are more effective. Further, decomposing the netlist of the switch into modest-sized clusters is the most effective granularity at which to apply the protection techniques.

This work provides a solid foundation for future exploration in the area of defect-tolerant design. We plan to investigate the use of spare components based on wearout profiles, to provide more sparing for the most vulnerable components. Further, a CMP switch is only a first step towards the overarching goal of designing a defect-tolerant CMP system.

Acknowledgments: This work is supported by grants from NSF and the Gigascale Systems Research Center. We would also like to acknowledge Li-Shiuan Peh for providing us access to CMP switch models, and the anonymous reviewers for providing useful comments on this paper.

7. References

[1] H. Al-Asaad and J. P. Hayes. Logic design validation via simulation and automatic test pattern generation. J. Electron. Test., 16(6):575-589, 2000.
[2] C. J. Alpert and A. B. Kahng. Recent directions in netlist partitioning: a survey. Integr. VLSI J., 19(1-2):1-81, 1995.
[3] T. S. Barnett and A. D. Singh. Relating yield models to burn-in fall-out in time. In Proc. of International Test Conference (ITC), pages 77-84, 2003.
[4] E. Bohl, et al. The fail-stop controller AE11. In Proc. of International Test Conference (ITC), pages 567-577, 1997.
[5] S. Borkar, et al. Design and reliability challenges in nanometer technologies. In Proc. of the Design Automation Conference, 2004.
[6] F. A. Bower, et al. Tolerating hard faults in microprocessor array structures. In Proc. of the International Conference on Dependable Systems and Networks (DSN), 2004.
[7] K. Constantinides, et al. Assessing SEU vulnerability via circuit-level timing analysis. In Proc. of the 1st Workshop on Architectural Reliability (WAR), 2005.
[8] J.E.D.E. Council. Failure mechanisms and models for semiconductor devices. JEDEC Publication JEP122-A, 2002.
[9] W. J. Dally, et al. The reliable router: A reliable and high-performance communication substrate for parallel computers. In Proc. of the International Workshop on Parallel Computer Routing and Communication (PCRCW), 1994.
[10] E. Wu, et al. Interplay of voltage and temperature acceleration of oxide breakdown for ultra-thin gate oxides. Solid-State Electronics, 2002.
[11] P. Gupta and A. B. Kahng. Manufacturing-aware physical design. In Proc. of the International Conference on Computer-Aided Design (ICCAD), 2003.
[12] hMETIS. http://www.cs.umn.edu/~karypis.
[13] C. K. Hu, et al. Scaling effect on electromigration in on-chip Cu wiring. In International Electron Devices Meeting, 1999.
[14] A. M. Ionescu, M. J. Declercq, S. Mahapatra, K. Banerjee, and J. Gautier. Few electron devices: towards hybrid CMOS-SET integrated circuits. In Proc. of the Design Automation Conference, pages 88-93, 2002.
[15] G. Karypis, et al. Multilevel hypergraph partitioning: Applications in VLSI domain. In Proc. of the Design Automation Conference, pages 526-529, 1997.
[16] S. Mukherjee, et al. The soft error problem: An architectural perspective. In Proc. of the International Symposium on High-Performance Computer Architecture, 2005.
[17] S. Mukherjee, et al. A systematic methodology to compute the architectural vulnerability factors for a high-performance microprocessor. In Proc. of the International Symposium on Microarchitecture (MICRO), pages 29-42, 2003.
[18] B. T. Murray and J. P. Hayes. Testing ICs: Getting to the core of the problem. IEEE Computer, 29(11):32-38, 1996.
[19] L.-S. Peh. Flow Control and Micro-Architectural Mechanisms for Extending the Performance of Interconnection Networks. PhD thesis, Stanford University, 2001.
[20] R. Rao, et al. Statistical estimation of leakage current considering inter- and intra-die process variation. In Proc. of the International Symposium on Low Power Electronics and Design (ISLPED), pages 84-89, 2003.
[21] N. R. Saxena and E. J. McCluskey. Dependable adaptive computing systems. In IEEE Systems, Man, and Cybernetics Conference, 1998.
[22] P. Shivakumar, et al. Exploiting microarchitectural redundancy for defect tolerance. In Proc. of the International Conference on Computer Design (ICCD), 2003.
[23] D. P. Siewiorek, et al. Reliable Computer Systems: Design and Evaluation, 3rd edition. A K Peters, Ltd., 1998.
[24] J. Smolens, et al. Fingerprinting: Bounding the soft-error detection latency and bandwidth. In Proc. of the Symposium on Architectural Support for Programming Languages and Operating Systems (ASPLOS), 2004.
[25] L. Spainhower and T. A. Gregg. G4: A fault-tolerant CMOS mainframe. In Proc. of the International Symposium on Fault-Tolerant Computing (FTCS), 1998.
[26] J. Srinivasan, et al. The impact of technology scaling on lifetime reliability. In Proc. of the International Conference on Dependable Systems and Networks (DSN), 2004.
[27] J. H. Stathis. Reliability limits for the gate insulator in CMOS technology. IBM Journal of Research and Development, 2002.
[28] S. B. K. Vrudhula, D. Blaauw, and S. Sirichotiyakul. Estimation of the likelihood of capacitive coupling noise. In Proc. of the Design Automation Conference, 2002.
[29] N. J. Wang, et al. Characterizing the effects of transient faults on a high-performance processor pipeline. In Proc. of the International Conference on Dependable Systems and Networks (DSN), pages 61-70, 2004.
[30] C. Weaver and T. Austin. A fault tolerant approach to microprocessor design. In Proc. of the International Conference on Dependable Systems and Networks (DSN), 2001.
[31] C. Weaver, et al. Techniques to reduce the soft error rate of a high-performance microprocessor. In Proc. of the Annual International Symposium on Computer Architecture, 2004.
[32] J. F. Ziegler. Terrestrial cosmic rays. IBM Journal of Research and Development, 40(1), 1996.


长期食用或用丝瓜液擦脸,还能使人皮肤变得光滑、细腻。另外,丝瓜还有清暑凉血、解毒通便、调理月经不顺等功效,月经不调者,身体疲乏,体质虚弱的女性更宜多食用。 1、丝瓜子性寒,味苦微甘,有清热化痰、解毒、润燥、驱虫等作用。 2、丝瓜络性平味甘,以通络见长,可以用于产后缺乳和气血淤滞之胸胁胀痛。 3、丝瓜花性寒,味甘微苦,有清热解毒之功,可以用于肺热咳嗽、咽痛、疔疮等。 4、丝瓜藤舒筋活络,并且可以祛痰。 5、丝瓜藤茎的汁液具有美容去皱的特殊功能。 6、丝瓜根可以用来消炎杀菌、去腐生肌。 7、月经不调者、身体疲乏者适宜多吃丝瓜。丝瓜还有清暑凉血、解毒通便、调理月经不顺等功效,月经不调者,身体疲乏,体质虚弱的女性更宜多食用。 1、看外观 不要选择瓜形不周正、有突起的丝瓜。然后,摸摸丝瓜的外皮,挑外皮细嫩些的,不要太粗,不然丝瓜很可能已老,有一颗颗的种。 2、掂重量

把丝瓜拿在手里掂量掂量,感觉整条瓜要有弹性;手指稍微用力捏一捏,如果感觉到硬硬的,就千万不要买,硬硬的丝瓜就非常有可能是苦的。 3、尝味道 如果卖丝瓜老板答应,还可以掰断丝瓜头,放在嘴里尝一尝,一尝就知道苦或者不苦啦。 4、看手感 好的丝瓜,粗细均匀,用手捏一下,很结实,这是新鲜的;如果软塌塌的,是放久了。皮的颜色发暗,不青翠且较粗就是老了,丝瓜一定要去皮吃,即使很嫩,也要削掉皮,不然很难吃。 1、保湿 丝瓜水含有果胶、植物黏液和多糖成分,涂抹在面部不易挥发,而且丝瓜水所含的维生素还能渗透我们的皮肤细胞,滋养我们的皮肤,从而达到保湿养肤的目的。 2、抗衰老,去皱纹 丝瓜水含有维生素E等抗氧化营养成分,这些抗氧化营养成分能直接作用于我们的脸部肌肤,使我们脸部皮肤长久的充满活力,衰老减慢,而且丝瓜水还有一定的粘附能力,用丝瓜水涂抹脸部洗净后,可以去除我们脸部皮肤的深层污垢,使我们的皮肤洁净健康,所以丝瓜水能从各个方面达到活肤去皱的目的。 3、美白嫩肤

六大均线使用技巧

现货产品投资中,除了运用基本面分析外,技术分析其地位也是是相当重要的,很多投资者热衷于技术分析,可以根据技术分析判断现货产品价格走势。下面小编给大家介绍下六大均线使用技巧。 一、攻击线,即5日均线 攻击线的主要作用是推动价格在短期内形成攻击态势,不断引导价格上涨或下跌。如果攻击线上涨角度陡峭有力(没有弯曲疲软的状态),则说明价格短线爆发力强。反之,则弱。同样,在价格进入下跌阶段时,攻击线也是重要的杀跌武器,如果向下角度陡峭,则杀跌力度极强。 在临盘实战中,当价格突破攻击线,攻击线呈陡峭向上的攻击状态时,则意味着短线行情已经启动,此时应短线积极做多。同理,当价格击穿攻击线,攻击线呈向下拐头状态时,则意味着调整或下跌行情已经展开,此时应短线做空。 二、操盘线,即10日均线 此线也有行情线之称。操盘线的主要作用是推动价格在一轮中级波段行情中持续上涨或下跌。如果操盘线上涨角度陡峭有力,则说明价格中期上涨力度强。反之,则弱。同样,在价格进入下跌波段时,操盘线同样可促使价格反复盘跌。 在临盘实战中,当价格突破操盘线,操盘线呈持续向上的攻击状态时,则意味着波段性中线行情已经启动,此时应短线积极做多。同理,当价格击穿操盘线,

操盘线呈向下拐头状态时,则意味着上涨行情已经结束,大波段性调整或下跌行情已经展开,此时应中线做空。 三、辅助线,即22日均线 辅助线的主要作用是协助操盘线,推动并修正价格运行力度与趋势角度,稳定价格趋势运行方向。同时,也起到修正生命线反应迟缓的作用。在一轮波段性上涨行情中,如果辅助线上涨角度较大并陡峭有力,则说明价格中线波段上涨力度极强。反之,则弱。同样,价格在下跌阶段时,辅助线更是价格反弹时的强大阻力,并可修正价格下跌轨道,反复促使价格震荡盘跌。 在临盘实战中,当价格突破辅助线,辅助线呈持续向上的攻击状态时,则意味着波段性中线行情已经启动,此时应短线积极做多。同理,当价格击穿辅助线,辅助线呈向下拐头状态时,则意味着阶段性中线上涨行情已经结束,而阶段性调整或下跌行情已经展开,此时应中线做空。 四、生命线,即66日均线 生命线的主要作用是指明价格的中期运行趋势。在一个中期波段性上涨趋势中,生命线有极强的支撑和阻力作用。如果生命线上涨角度陡峭有力,则说明价格中期上涨趋势强烈,主力洗盘或调整至此位置可坚决狙击。反之,则趋势较弱,支撑力也将疲软。同样,在价格进入下跌趋势时,生命线同样可压制价格的反弹行为,促使价格持续走弱。

物联网管理平台解决方案

物联网管理平台解决方案 行业背景 “物联网”时代,钢筋混凝土、电缆将与芯片、宽带整合为统一的基础设施,基础设施像是一块新的地球工地,世界的运转就在它上面进行,其中包括经济管理、生产运行、社会管理乃至个人生活。 物联网被广泛地应用到各行各业的服务中,打破物品之间的壁垒,使之日益为客户提供融合的服务体验。以无处不在的互联网和一系列高科技术为标志的数字经济积极地打破顾客、企业、竞争对手和合作伙伴之间的界限,提供数字化快速传输和大量数据存储,提高信息传播的安全性,实现商业流程的自动化。 随着互联网产业蓬勃发展,芯片、传感器、网络设备成本不断降低,自动化设备的网络通讯和信息处理能力日渐强大,加之宽带网络日益普及,奠定了产业发展的现实基础。 方案概述 XXXX物联网解决方案涵盖物联网的各个层次,研发范围包括物联网应用平台、多种物联终端以及应用系统。 物联网应用平台 XXXX研发的物联网应用平台是一个支撑物联网应用的核心支撑平台,是一个高效、稳定的物联网应用基础运行平台,向下提供连接各种物联终端等硬件设备的通信接口(包括各种有线和无线通信方式),方便和终端进行数据的交换;向上为智能交通、智能物流、城市管理、智能家居等应用提供一些业务基础服务(例如位置服务、工作流服务、电子地图显示服务等等)以及业务基础框架。 通过平台,上层物联网应用系统不用关心底层终端的工作方式,只要专注于功能的实现,促进应用系统的开发部署更加快捷高效。

平台采用模块化设计,包括以下功能组件: ●统一服务接口:平台为上层应用提供统一的服务接口,对于各种构件的调用采用 SOA服务模式。 ●统一安全认证:以用户信息、系统权限为核心,集成各业务系统的认证信息,提 供一个高度集成且统一的认证平台。其结构具有系统健壮、结构灵活、移动办公、 安全可靠等特点。 ●业务基础服务:提供物联网常用的GIS、定位、工作流、调度等基础服务,提供 图形报表、终端控制、数据加密、数据传输等统一的实现方法,实现系统的松散耦 合,提高系统的灵活性和可扩展性,保障快速开发、降低运营维护成本。 ●业务基础框架:支撑基础服务运行,提供统一的开发模式、数据持久化、系统权 限管理及其它一些公共模块(多语言、日志、缓存、配置管理等)。 ●通信层:物联应用平台和终端设备之间沟通桥梁,支持无线和有线的联系方式。 主要功能是提供平台和终端层之间安全高效的数据传输服务。 ?终端产品 终端层涵盖各行业涉及的相关终端设备,如个人使用的手机、PAD、PC、笔记本电脑;智能家居相关的IP可视门禁、IP可视电话、机顶盒;工控行业的RTU、DTU、温控传感器;

几种不同均线的设置.--转载新浪博客

几种不同均线的设置 (2008-10-08 23:17:37) 转载▼ 均线是研究大盘和股票最基本的要素,看均线也是股民的基本功,有的炒股高手就凭几条均线就能判断出大盘和股票的涨与跌,而且照样能赚到大钱。对于均线的设置各人都有不同的习惯,也都有各自的道理,关键是自己能够不能够把握,这一点十分重要。 均线的设置正常情况下,应当是5日、10日、20日、30日、60日、120日、250日、500日均线等,5日线实际上代表的是一周的价格水平,也可视为周线,以此类推,10日线是半月线,20日线是月线,30日线是1.5月线,60日线是三月线,120日线是半年线,250日线是年线,也是牛熊分界线,500日线是两年线。以此类推。 在益盟公司的几位老师中,在大盘上俞涌老师设一条38日均线和一条73 日均线,冯崎老师设一条18日均线。他们的设置都有各自的道理,38日线是差两天的两月线,也是差两天的8周线;73日线是比4月线早7天的一条均线。关于38日线,我经过验证,对于看大盘非常有价值,大盘在牛市时指数基本是不会破这条线的,而在这波熊市里,大盘的指数基本是在38日线以下运行,很难突破这条线,也可以说在这条均线下面买入股票,想赚钱是非常难的。而且,大盘每每反弹到 38日线这个阻力位时必定回调。现在大盘的指数正好是受阻于这条均线的,可见,用38日线看上证指数是有特别意义的。 冯崎老师设的18日线,是比月线早两天的一条线,冯崎老师把这条线称为大盘的生命线,也就是说指数在这条线以下不操作股票,在这条线以下买入股票不赚钱是正常的,赚了钱就是不正常的。在这次一年多的下跌行情中,大盘只有三次短暂地在18日均线上方运行,我们赚钱的时机也就这么三回,抓住了就可赚一笔,而在这条线以下想赚钱,都是侥幸的。 在个股上做短线和长线的人习惯设为3、5、18、30、62、133、250、500日线。做超短线的人习惯设为3、5、13、20、35、70、 120、250日线。做中线的习惯设为5、10、20、30、60、120、185日线。各有各的独到之处,这需要自己仔细体味和研究。一个比较正规的主力,一个老道的操盘手在拉升股价时,都可以会预设一条线,股价大多数时间是沿着这条线运行的,只要不破这条线,我们可以放心持有,破了这条线就要果断卖出。至于这条线是多少日线,需要自己去揣摩了。

物联网管理系统

1、简介 昆仑海岸物联网云服务平台是由北京昆仑海岸传感技术有限公司开发的面向物联网设备的数据服务平台。目前昆仑海岸物联网云服务平台需要和本公司自主研发的KL-H系列物联网网关产品配套使用,通过物联网网关可以实现对温度、湿度、照度、土壤温度、土壤水分、照度、二氧化碳、氧气等环境值的监控,同时可以下发控制命令,完成对一些设备的控制。通过这套系统,可以很好地实现智慧农业、智慧城市等一些项目。如图1: 图1

云服务平台功能分三个模块:应用模块、数据服务模块、单机版模块,如图2: 图2 应用模块:会员通过该模块,可直接享受数据服务、地图定位、历史记录、历史曲线等功能。 数据服务模块:会员通过该模块,可以将我公司服务器上的数据信息下载到本地计算机上,便于二次开发。单机版模块:会员通过该模块,可在自己的服务器上实现数据收发功能,便于二次开发。

2、申请账号与登录 用户通过浏览器(推荐使用非IE 内核的浏览器,如火狐浏览器)访问昆仑海岸物联网云服务平台(以下简称为“平台”),在浏览器地址栏中输入域名https://www.doczj.com/doc/8412533208.html, 进入平台主界面。 点击【注册】进入用户注册界面,填写相关信息并点击【确认提交用户信息】按键,如图3所示。 当用户申请账号操作完成后,需要等待账号被平台管理员授权后方可使用,若账 号未被授权,则登录时会出现如图4所示的提示。 会员通过浏览器(推荐使用非IE 内核的浏览器,如火狐浏览器)访问平台,在浏 览器地址栏中输入管理平台的域名https://www.doczj.com/doc/8412533208.html, 进入登陆界面。使用已被授权的账号 登陆管理平台,如图5 所示。 图 4 图 3 图5

登陆成功后,会员可【查看账户信息】,来查看账户信息、设备信息、以及联系 方式等。本平台一个账户最多可以提供5只物联网网关的服务,如果多于5只将不能 添加设备,会员可以拨打电话,来实现扩容服务。如图6: 图6

半导体各工艺简介5

Bubbler Wet Thermal Oxidation Techniques

Film Deposition Deposition is the process of depositing films onto a substrate. There are three categories of these films: * POLY * CONDUCTORS * INSULATORS (DIELECTRICS) Poly refers to polycrystalline silicon which is used as a gate material, resistor material, and for capacitor plates. Conductors are usually made of Aluminum although sometimes other metals such as gold are used. Silicides also fall under this category. Insulators refers to materials such as silicon dioxide, silicon nitride, and P-glass (Phosphorous-doped silicon dioxide) which serve as insulation between conducting layers, for diffusion and implantation masks,and for passivation to protect devices from the environment.

高手的均线总结及使用心得

高手的均线总结及使用心得 https://www.doczj.com/doc/8412533208.html,/post_46_908231_1_d.aspx 一:对于均线的支撑作用的论证 截止到目前为止,人们通常对于均线是否具有真正有效的支撑作用抱有疑问,笔者的心得如下:对于 一只股票,只要主力尚在其中,根据主力的类型和目标的不同,主力不会允许股价有效击破某条重要均线 ,因为通常主力会顾及到主力的成本区域,大众的心理层面以及主力的目标和操作手法等,此时股价往往 在重要均线获得支撑,而当主力已经基本离场出局,均线的支撑作用就自然消失。所以均线的支撑作用在 个股的行情展开过程中才会发生作用,如果主力出逃则失效。如果由于主力出逃导致有效杀破某条均线, 由于缺乏主力关照,日后在反弹到此均线时由于散户的抛压就会再次下跌,看起来就好象均线产生了压制 作用。下面我们依据主力运作的周期和目标把主力分为短线主力,中线主力和长线主力,除了控盘程度极 高的庄股,通常一支股票中上述三种类型的主力是共存的。另外对于日线图,通常定义的有效击破的概念 是连续三日在某均线下方运行,幅度超过3%。 二:通用重要均线 通用短期均线:5日,10日均线

心得:对于走主升浪的强势股,5日均线是低吸的重要位置,强势股在拉升的过程中经常会技术性的 缩量回挡5日线之后立刻拉起,判断的重要依据是缩量,时机最好在一波拉升的初中期也就是幅度不大的 时候。对于由短线主力发动的行情来说,通常主力会在回档到5日或者10日线的时候出来护盘或者向上拉 起。此种类型个股的典型例子是华夏银行。 通用中期均线:20日,30日,60日均线 心得:中线主力必须维护的均线,一般逐浪上攻的慢牛型股票会在20日或者30均线获得支撑继续向上 ,如果30日线跟60日线相距不远的话瞬时打到60日线也有可能。通常来说,30日线是中线的生命线,30日 线走平或者向上,股价在30日线上方运行是中线看多做多的标志,30日线向下,股价在30线下方运行是中 线看空做空的标志,此方法如果结合个股的涨幅和基本面来配合使用成功率会更高。依托中期均线逐浪上 攻型的个股例子可以参考1月5日之前皖通高速的技术形态。 通用长期均线:120,240日线 心得:长线主力需要维护的均线,如果中短线主力出逃,股价往往在长期均线处获得支撑,经过连续 的下跌后,由于止跌回升导致长期均线走平或者拐头向上是中长线开始走好的标志。股价有效突破原本反

半导体制程气体介绍

一、半導體製程氣體介紹: A.Bulk gas: ---GN2 General Nitrogen : 只經過Filter -80℃ ---PN2 Purifier Nitrogen ---PH2 Purifier Hydrgen (以紅色標示) ---PO2 Purifier Oxygen ---He Helium ---Ar Argon ※“P”表示與製程有關 ※台灣三大氣體供應商: 三福化工(與美國Air Products) 亞東氣體(與法國Liquid合作) 聯華氣體(BOC) 中普Praxair B.Process gas : Corrosive gas (腐蝕性氣體) Inert gas (鈍化性氣體) Flammable gas (燃燒性氣體) Toxic gas (毒性氣體) C.General gas : CDA : Compressor DryAir (與製程無關,只有Partical問題)。 ICA : Instrument Compressor Air (儀表用壓縮空氣)。 BCA: Breathinc Compressor Air (呼吸系統用壓縮空氣)。 二、氣體之物理特性: A.氣體分類: 1.不活性氣體: N2、Ar、He、SF6、CO2、CF4 , ….. (惰性氣體) 2.助燃性氣體: O2、Cl2、NF3、N2O ,….. 3.可燃性氣體: H2、PH3、B2H6、SiH2Cl2、NH3、CH4 ,….. 4.自燃性氣體: SiH4、SC2H6 ,….. 5.毒性氣體: PH3、Cl2、AsH3、B2H6、HCl、SiH4、Si2H6、NH3 ,…..

均线的设置和使用技巧详解

均线的设置和使用技巧详解 均线是最常用的分析指标之一,所反映的是过去一段时间内市场的平均成本变化情况。均线和多根均线形成的均线系统可以为判断市场趋势提供依据,同时也能起到支撑和阻力的作用。 一、均线的构成 均线,按照计算方式的不同,常见地可分为普通均线、指数均线、平滑均线和加权均线。这里主要介绍普通均线和指数均线。 普通均线:对过去某个时间段的收盘价进行普通平均。比如20日均线,是将过去20个交易日的收盘价相加然后除以20,就得到一个值;再以昨日向前倒推20个交易日,同样的方法计算出另外一个值,以此类推,将这些值连接起来,就形成一个普通均线。 指数均线:形成方式和普通均线完全一致,但在计算均线值的时候,计算方式不一样。比如20日均线,指数均线则采取指数加权平均的方法,越接近当天,所占的比重更大,而不是像普通均线中那样平均分配比重。所以指数均线大多数情况下能够更快地反映出最新的变化。 二者的优劣:没有绝对的优劣,在不同的运行阶段,二者可以体现出不同的效果,选取时的经验成分相对更大。个人更习惯于使用指数均线,用多根指数均线的结合来辅助判断市场趋势。 下图是欧元/美元日线图上所体现的两种不同均线的效果,这里的均线参数设置都是20,其中红色均线为普通20日均线,蓝色均线

为20日指数均线。从图片中可以看出,大多数情况下指数均线的拐头速度都相对早于普通均线,能相对更快地跟上市场的趋势。 说明:一般而言,均线的计算价格用收盘价,但也可根据需求的不同使用最低价、最高价或者开盘价。 二、均线系统的设置 如果将多根有规律的均线排列在一起,这是就可以形成一个对分析更有帮助的均线系统。一般来说,在用均线系统进行分析时,常见的参数排列方式为: A、5、10、20、30、40、50(等跨度排列方式) B、5、8、13、21、34、55(费波纳奇排列方式) 在这两种排列方式的基础上,增加125、200和250三个参数的均线三、利用均线系统排列状态判断市场节奏

洋丝瓜的功效与作用

洋丝瓜的功效与作用 洋丝瓜在生活中很受欢迎,也是比较常见的食材,其口感脆嫩,营养丰富,不仅仅含有诸多有益物质而且有保健的功效,所以也是餐桌上常见的食物。洋丝瓜的烹饪方法有很多,切片切丝均可,不仅可以烧炒,还可以做汤。下面就为大家详细介绍下洋丝瓜的功效和作用,大家可以了解下。 洋丝瓜的营养价值 洋丝瓜别名佛手瓜、梨瓜、福寿瓜、洋丝瓜、拳头瓜、合掌瓜、隼人瓜、菜肴梨,佛手瓜的功效:理气和中、疏肝止哎,佛手瓜的作用:治消化不良、胸闷气胀、呕吐、肝胃气痛、气管炎咳嗽多痰,佛手瓜食用禁忌:佛手瓜属于温性食物,阴虚体热和体质虚弱的人应少食。 别名:梨瓜、洋丝瓜、拳头瓜、合掌瓜、福寿瓜、隼人瓜、菜肴梨。 功效:理气和中,疏肝止哎。 主治:适宜于消化不良、胸闷气胀、呕吐、肝胃气痛以及气管炎咳嗽多痰者食用。 用法用量:佛手瓜的食用方法很多,鲜瓜可切片、切丝,作荤炒、素炒、凉拌,做汤、涮火锅、优质饺子馅等。还可加工成腌制品或做罐头。在国外,佛手瓜以蒸制、烘烤、油炸、嫩煎等方法食用。每次1个。 洋丝瓜的功效与作用

1、预防疾病:洋丝瓜含有的营养物质比较全面丰富,经常食用可以大大提高人体免疫力和预防疾病的入侵。 2、利尿排钠、降血压:洋丝瓜含有的蛋白质和钙元素是黄瓜的2-3倍,同时含有丰富的维生素和人体所需的各种矿物质,而且又是一种低热量和低钠的食物,对防治高血压和心脏病具有很好的保健作用。所以经常食用洋丝瓜具有利尿排钠、降血压的神奇功效。 3、提高智力:据现代医学研究发现洋丝瓜中含有大量的锌元素,而锌元素又是青少年儿童智力发育所必需的物质,所以经常食用洋丝瓜有助于提高智力。 4、理气和中、疏肝止咳:据中医的说法洋丝瓜具有理气和中、疏肝止咳的作用,常被中医用来治疗消化不良、胸闷气胀、呕吐、肝胃气痛以及气管炎咳嗽多痰等疾病症状。 5、抗氧化、防癌抗癌:洋丝瓜中含有的硒元素的含量是其他蔬菜不可比拟的,医学研究证明硒元素是人体不可缺少的一种微量元素,具有很好的抗衰老和防癌抗癌功效,可以有效的保护人体细胞膜结构和功能不被损害。 6、补肾壮阳 洋丝瓜还有一个奇妙的功效就是可以帮助男女改善因营养不良引起的不育症,特别是对男性性功能衰退具有显著的效果,因此被誉为男人的“天然伟哥”,因此长期食用洋丝瓜可以起到补肾壮阳的效果。

半导体全制程介绍

半导体全制程介绍 《晶圆处理制程介绍》 基本晶圆处理步骤通常是晶圆先经过适当的清洗 (Cleaning)之后,送到热炉管(Furnace)内,在含氧的 环境中,以加热氧化(Oxidation)的方式在晶圆的表面形 成一层厚约数百个的二氧化硅层,紧接着厚约1000到 2000的氮化硅层将以化学气相沈积Chemical Vapor Deposition;CVP)的方式沈积(Deposition)在刚刚长成的二氧化硅上,然后整个晶圆将进行微影(Lithography)的制程,先在晶圆上上一层光阻(Photoresist),再将光罩上的图案移转到光阻上面。接着利用蚀刻(Etching)技术,将部份未被光阻保护的氮化硅层加以除去,留下的就是所需要的线路图部份。接着以磷为离子源(Ion Source),对整片晶圆进行磷原子的植入(Ion Implantation),然后再把光阻剂去除(Photoresist Scrip)。制程进行至此,我们已将构成集成电路所需的晶体管及部份的字符线(Word Lines),依光罩所提供的设计图案,依次的在晶圆上建立完成,接着进行金属化制程(Metallization),制作金属导线,以便将各个晶体管与组件加以连接,而在每一道步骤加工完后都必须进行一些电性、或是物理特性量测,以检验加工结果是否在规格内(Inspection and Measurement);如此重复步骤制作第一层、第二层的电路部份,以在硅晶圆上制造晶体管等其它电子组件;最后所加工完成的产品会被送到电性测试区作电性量测。 根据上述制程之需要,FAB厂内通常可分为四大区: 1)黄光本区的作用在于利用照相显微缩小的技术,定义出每一层次所需要的电路图,因为采用感光剂易曝光,得在黄色灯光照明区域内工作,所以叫做「黄光区」。

股票均线的设置和使用技巧

股票均线的设置和使用技巧 2010-10-26 12:51:04| 分类:股票| 标签:买卖股票 macd 股价平均|举报|字号大中小订阅股票均线的设置和使用技巧(2010-09-21 23:00:28) 内容:均线是最常用的分析指标之一,所反映的是过去一段时间内市场的平均成本变化情况。均线和多根均线形成的均线系统可以为判断市场趋势提供依据,同时也能起到支撑和阻力作用 < XMLNAMESPACE PREFIX ="O" /> 一般来说,在用均线系统进行分析时,常见的参数排列方式为: A、5、10、20、30、40、50(等跨度排列方式) B、5、8、13、21、34、55(费波纳奇排列方式) 在这两种排列方式的基础上,增加120、200两个参数的均线 一、利用均线系统排列状态判断市场节奏 均线系统的排列状态,一般可分为股聚、发散、平行三种状态。其中股聚状态表明的是市场经历过一段单边走势后出现的对原有动能消化或积蓄新运行动能的过程,股聚的时间越长,产生的动能越大;发散状态则是股聚之后产生的爆发动能的前奏;平行状态则表明市场运行动能已经单方向爆发,引发了新的单边运行趋势。在单边市运行节奏当中,这三种状态的循环为:股聚-〉发散-〉平行-〉股聚而在盘整市中,则体现为股聚-〉发散-〉股聚的循直到突破的发生。 二、均线常规用法均线的两个基本功能;

1、“金叉”和“死叉”指示的基本信号 2、均线对股价的支撑和阻力作用当股价运行在均线系统的下方时,此时均线系统将起到明显的阻力作用,阻碍股价的反弹;同样地,当股价站稳于均线系统之上后,均线系统又将起到明显的支撑作用,这个作用的产生原因就在于均线计算方式所体现的市场成本变化,这种支撑和阻力效果也被称为“均线的斥力”。当然,均线也有吸引力,当股价单边运行过程中较远地离开了均线系统后,均线也会发出“引力”作用,均线本身跟随股价运行方向前进的同时,也会拉 动股价出现回调。 五、特殊均线的用法所谓特殊均线是指在特定阶段能起到明显多空分水岭的均线,一般来说我们将其运用于单边市的主趋势中。其中30周指数均线就体现出特殊的多空分水岭效果。 股票均线简单看 年线下变平,准备捕老熊。年线往上拐,回踩坚决买! 年线往下行,一定要搞清,如等半年线,暂做壁上观。 深跌破年线,老熊活年半。价稳年线上,千里马亮相。 要问为什么?牛熊一线亡!半年线下穿,千万不要沾。 半年线上拐,坚决果断买!季线如下穿,后市不乐观! 季线往上走,长期做多头!月线不下穿,光明就在前! 股价踩季线,入市做波段。季线如被破,眼前就有祸,长期往上翘,短期呈缠绕,平台一做成,股价往上跃

与均线相关的技术指标

第二章与均线相关的技术指标 前一章谈了均线,而许多传统经典指标与之息息相关,如BISA、MACD、DMA、TRIX、BOLL、ENV等,在它们共同的特点是在指标的设计中,都以价格平均线作为首要因素进行数学模型构建的。可见均线无论在指标设计还是在技术分析中都占有极重要的地位。 下面就上述指标进行逐一讲解,通过对指标的数学表达、曲线形态对比,以及指标间的关联分析,揭示指标本质与外在关系,并会发现一些有趣的现象。 一、乖离率BISA与变动速率ROC指标 先看乖离率BISA指标的公式源码: BIAS:(CLOSE-MA(CLOSE,N))/MA(CLOSE,N)*100; 算法:乖离率=〔当日收盘价-N日移动平均线〕×100/N日移动平均线 公式算法表明,以N日移动平均线为基准线(即指标中的零轴),计算价格上下波动远离基准线的幅度,其大小用乖离率表示。换句话说,BIAS实质上表示的是价格相对于N日均线的涨跌幅。所以当正乖离率愈大时,价格相对于均线的涨幅愈大,短期获利丰厚,回吐可能性愈高;反之负乖离率愈大,价格相对于均线的跌幅愈大,空头回补可能性愈高。至于乖离率的技术点怎么确定,除与设定的基准均线(参数N)有关外,还会因个股的股性以及所处行情性质相关,也因个人的操作风格不同而不同。比如以60平均线为参考,在一般震荡行情中,通常BIAS波幅在±25%左右,极端行情或主升段时也可达±60%或+100%以上;而对短线涨升行情,股价相对于10均线的乖离率通常为5%~8%。 现在来分析与BIAS原理很相近的变动速率ROC指标,原公式表达为: ROC:(CLOSE-REF(CLOSE,N))/REF(CLOSE,N)*100; ROCMA:MA(ROC,M); 即:变动速率=〔当日收盘价-N日前的收盘价〕×100/N日前的收盘价ROCMA是再计算变动速率ROC的M日移动平均。

物联网称重管理系统

物联网称重管理系统 Document number:NOCG-YUNOO-BUYTT-UU986-1986UT

物联智能计量管控平台-一体化解决方案 ?计重之星-防作弊智能称重管理系统(版本~ ?无人值守智能称重管控系统(单机/网络/WEB版本 ?基于物联网技术的集团级计量自动化管控平台 物联:支持从计量现场(RFID读写器、红外检测仪、光栅、计量仪器仪表、地感、LED屏、声波定 位、雷达测速装置、信号指示灯、抓拍相机(照片、车牌)、道匝/拦道机、喇叭/音箱/功放/语音对讲)到管控办公间(机柜、工控机、PLC主控柜、各类仪器仪表及传感装置、有线无线网络设备、UPS、条码打印机)、从控制办公间到管理办公间(PC、身份认证U盘)、再到客户终端 (PDA、手机、盘点机、扫描器)的整个信息网络共享接连,网络连接支持有线局域网、GPRS、WIFI、3G等. 智能:支持单片机、PLC数据采集与信号控制(外部设备),实现数据与设备的智能控制与自动化 计量:软件提供了对各类计重、计数、计温、计湿、计雾、计微量元素仪器仪表的数据采集 管控:实现对业务数据的管理与控制、对工作设备状态的管理与控制 平台:提供一体化集成化的WEB操作平台(个人电脑、手机、PDA) “计重之星-防作弊智能网络称重管理系统”是为适应信息化时代的发展要求,结合我国计重(衡器)行业的市场发展的需要和客户的实际需求而开发的一套现代计重、物流、智能控制管理系统。该系统对衡器行业在当前信息化普及时代刚刚起步的背景,为国内外衡器企业及相关行业用户在计重管理上的需要,为提高各行业用户在计重管理业务上的管理水平,实现其从当初习惯的手工操作模式有效的过渡到自动化,智能化的计算机管理模式,实现行业用户在计重管理全面信息化的完整解决方案。该系统可以运行于各种通用的操作系统及微机和服务部构建的各种单机、局域网、内部网(Intranet)或广域网等多种软硬件环境,适用于水泥、煤矿、钢材、化工、港口、采矿、电力、粮油、食

均线的12种形态

均线的12种形态

在技术分析中,市场成本原理非常重要,它是趋势产生的基础,市场中的趋势之所以能够维持,是因为市场成本的推动力,例如:在上升趋势里,市场的成本是逐渐上升的,下降趋势里,市场的成本是逐渐下移的。成本的变化导致了趋势的沿续。 均线代表了一定时期内的市场平均成本变化,我非常看重均线,本人在多年的实战中总结出一套完整的市场成本变化规律----均线理论。他用于判断大趋势是很有效果的。 内容包扩: 1,均线回补。 2,均线收敛。 3,均线修复。 4,均线发散。 5,均线平行。 6,均线脉冲。 7,均线背离。 8,均线助推。 9,均线扭转。 10,均线服从。 11,均线穿越。 12,均线角度。 今日,我首先给大家重点介绍一下均线回补和均线收敛,希望在和大家交流中得到完善和补充。

一,均线回补。 均线反应了市场在均线时期内的市场成本,例如:30日均线反应了30日内的市场平均成本。股价始终围绕均线波动,当股价偏离均线太远时,由于成本偏离,将导致股价向均线回补(回抽),在牛市里表现为上升中的回调,在熊市里表现为下跌中的反弹。二,均线收敛。 均线的状态是市场的一个重要信号,当多条均线出现收敛迹象时,表明:市场的成本趋于一致,此时是市场的关建时刻,因为将会出现变盘!大盘将重新选择方向。 当均线出现收敛时大家都能看出来,这并不重要,而重要的是:在变盘前,如何来判断变盘的方向呢?判断均线收敛后将要变盘的方向是非常重要的,他是投资成败的关建,判断的准,就能领先别人一步。 判断变盘方向时,应该把握另外的两条原则:均线服从原理和均线扭转扭转原理。 均线服从原理是:短期均线要服从长期均线,变盘的方向将会按长期均线的方向进行,长期均线向上则向上变盘,长期均线向下则向下变盘。日线要服从周线,短期要服从长期。利用均线服从原理得出的结论并不是一定正确的,它只是一种最大的可能性,而且他还存在一个重要确陷:也就是迟钝性,不能用来判断最高点和最底点,只能用来判断总体大方向。另外在实战中,市场往往会发生出人意料的逆转,此时均线服从原理就要完全失效了,

相关主题
文本预览
相关文档 最新文档