Business Process Coordination: State of the Art, Trends, and Open Issues

Umeshwar Dayal, Hewlett-Packard Labs., Palo Alto, California, USA
Meichun Hsu, CommerceOne Labs., Cupertino, California, USA
Rivka Ladin, Compaq Tandem Labs., Haifa, Israel

Abstract

Over the past decade, there has been a lot of work in developing middleware for integrating and automating enterprise business processes. Today, with the growth in e-commerce and the blurring of enterprise boundaries, there is renewed interest in business process coordination, especially for inter-organizational processes. This paper provides a historical perspective on technologies for intra- and inter-enterprise business processes, reviews the state of the art, and exposes some open research issues. We include a discussion of process-based coordination and event/rule-based coordination, and corresponding products and standards activities. We provide an overview of the rather extensive work that has been done on advanced transaction models for business processes, and of the fledgling area of business process intelligence.

1. Introduction

Competitive pressures are forcing organizations to increasingly integrate and automate their business operations such as order processing, procurement, claims processing, administrative procedures and the like. Such business processes are typically of long duration, involve coordination across many manual and automated tasks, require access to several different databases and the invocation of several application systems (e.g., ERP systems), and are governed by complex business rules. A typical business process may consist of up to 100 IT transactions. Coordinating the entire process correctly and efficiently places severe demands on the organization’s IT infrastructure.

In addition, with the growth of e-commerce and the trend toward increasing globalization of operations and outsourcing of functions to external service providers, there is an emerging need to integrate and automate processes that span organizational boundaries (the so-called B2C and B2B processes). One industry analyst predicts: “By 2003, more than 90 percent of e-businesses will be exploiting process automation technology” [Gartner2000]. These place additional demands on the IT infrastructure.

Different forms of middleware have been introduced to enable integration and automation of business processes, both within and across organizations. Message brokers, transactional queue managers, and publish/subscribe mechanisms provide means for automating processes by allowing the component applications to post events and to react to events posted by other component applications. The logic for how to react to events and chain the steps of the process typically resides in the applications themselves. This provides autonomy for the applications and flexibility in that the logic for reacting to events can be changed as business needs or policies change.

In contrast, a business process management system (or business process manager) is a middleware system that provides a central point of control for defining business processes and orchestrating their execution [WfMC, JB96, SGJR97]. The process manager records the execution state of the process and routes requests to component applications or human agents to execute tasks. Business process management systems have their roots in departmental workflow (or office automation) systems where the goal was to route work items or documents among human workers. Over the years, research prototypes and commercial systems have been developed to deal with enterprise-scale processes that include both human and automated tasks. Such systems typically provide transactional semantics and support for backward and forward recovery of business processes. Business process managers may (and, in fact, often do) rely on an underlying message broker, transactional queue manager, or publish/subscribe middleware layer to wrap participating applications, to detect business process-related events, and to guarantee reliable delivery of events and messages to applications.

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment.

Proceedings of the 27th VLDB Conference, Roma, Italy, 2001

In [DHL90, DHL91], we introduced two approaches to modelling and managing business processes (which we called long-running activities). The first approach used database rules (triggers) and transactions to model business processes [DHL90]. Subsequently, we realized that relying on triggers alone to chain the steps of a business process led to executions that might be difficult to understand or explain, because the logic was scattered over many rules. We, therefore, introduced a new model in which the “normal” logic of a business process was explicitly defined (scripted), and rules were used primarily for checking integrity constraints, handling exceptions, and so on [DHL91]. In practice, today, business processes are often modelled through a combination of scripts and rules. The script defines the process flow at a high level, and the rules are used to dynamically change the flow at execution time, determine the next step to be executed, or bind the next step to be executed to a specific resource (human or software); the decision may be based on the process execution state, other relevant data that might be retrieved from databases or passed in messages, the availability of resources, and business policies.

Over the last decade, there has been a lot more research in business process modelling, advanced transaction models for business processes, and architectures and implementation techniques for business process management systems, and numerous commercial products have appeared. There is also recent work in inter-enterprise collaborative business process management, and standards are being introduced by various consortia such as RosettaNet and ebXML.

Another recent area of research is that of business process intelligence. The motivation for business process automation is to improve operational efficiency and reduce errors, but commercial business process management middleware lacks tools for quantitatively tracking these business metrics. Business process intelligence aims to apply data warehousing, data analysis, and data mining techniques to process execution data, thus enabling the analysis, interpretation, and optimization of business processes.

In this paper, we provide a historical perspective and an overview of the state of the art in business process coordination, and we identify open research issues. In Section 2, we describe a framework that allows us to describe the space of models and approaches to business process management. We discuss both intra- and inter-enterprise business process management. In Section 3, we focus on publish/subscribe, messaging, and queuing services for implementing business processes. In Section 4, we discuss advanced transaction models for business processes, touching on both research and commercial practice. In Section 5, we give a brief introduction to the emerging area of business process intelligence. We conclude in Section 6 with the disclaimer that we could not possibly be comprehensive in our coverage of this vast area, so we have chosen to highlight the topics that reflect our own interests.

2. A Framework for Business Process Management

2.1 Approaches to Business Process Modelling and Coordination

A business process is a persistent unit of work started by a business event such as an invoice, request for proposal or a request for funds transfer. The process is driven by business rules that trigger tasks and sub-processes, with each state transition being executed within a transaction and audited for business reasons when required. Tasks and sub-processes are assigned to resources, which are organizational units that are capable and authorized to play specific roles in the processes.

The scripting of the rules, tasks, sub-processes, and resource policies constitutes a process description. An execution of a business process consists of invoking existing business services, which can reside anywhere. The context associated with the process is usually stored in a database and archived on completion. The process manager manages the state of a business process, and routes requests among participating applications.

Most commonly, the sequence of steps to be traversed in executing a business process is defined before the process instance is initiated. Most business process management systems allow branch logic to determine the next step after a step completes, so that different sequences of steps can be followed according to the outcome of the branch logic. Often a graphical tool is used to construct the paths, the decision points, and the branch logic. The branch logic is usually called 'rules'. These rules are evaluated based on workflow context, workflow history, current resource availability, database context, and business policies. Figure 1 illustrates a process description supported in a representative business process management system, ObjectFlow [HK96].

Figure 1: An example graphical representation of a process description

Instead of representing a process as a flow diagram, some systems simply use a collection of rules to represent the process without explicitly delineating the paths. Such systems are generally more flexible in capturing dynamic behavior.
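As a rough illustration of this rule-based style (the step and attribute names below are invented for illustration, not taken from any product discussed here), a process can be driven purely by a set of condition/next-task rules evaluated against the process context, with no explicit flow diagram:

```python
# Hypothetical sketch: a process represented purely as a collection of
# rules. Each rule maps a predicate over the process context to the next
# task to dispatch; the engine fires the first rule whose condition holds.

def make_rules():
    return [
        (lambda ctx: ctx["step"] == "start", "check_credit"),
        (lambda ctx: ctx["step"] == "check_credit" and ctx["credit_ok"], "ship_order"),
        (lambda ctx: ctx["step"] == "check_credit" and not ctx["credit_ok"], "reject_order"),
    ]

def run_process(ctx, rules, tasks):
    """Drive the process by repeatedly evaluating rules against the context."""
    while True:
        next_task = next((t for cond, t in rules if cond(ctx)), None)
        if next_task is None:
            return ctx          # no rule fires: the process is complete
        tasks[next_task](ctx)   # execute the task (in reality, invoke an application)
        ctx["step"] = next_task

# Stand-ins for the applications the steps would invoke.
tasks = {
    "check_credit": lambda ctx: ctx.update(credit_ok=ctx["amount"] < 5000),
    "ship_order":   lambda ctx: ctx.update(outcome="shipped"),
    "reject_order": lambda ctx: ctx.update(outcome="rejected"),
}
```

Note how the same rule set yields different paths for different contexts, which is exactly the flexibility (and the traceability cost) the text attributes to rule-only representations.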

Regardless of whether the process is described as a flow diagram, as a collection of rules, or some combination, the life cycle of automating a business process starts with a business analyst defining the process as a structured collection of abstract steps. The process description is further annotated with applications to be executed at each step. This stage most likely involves application development and/or wrapping of existing applications, and it produces executable process descriptions that can then be registered with a process management system. During execution time, an event triggers the process management system to create an instance of the process; the process management system then coordinates the step execution, and monitors and records the history. A business analyst, in turn, analyzes the history of process execution, potentially leading to improved definitions.

The long-running and distributed nature of a business process poses a challenge in enforcing transaction semantics over the entire process. Extending transaction models to support business processes has attracted a considerable amount of research attention; this subject is further discussed in Section 4.

Not all business processes are naturally supported by process management systems. It is useful to distinguish business processes along two dimensions: task automation and process structure (Figure 2). Tasks in a process may be application-centric (i.e., performed automatically by an application, typically some component of an ERP system or a software agent), or human-centric (i.e., manual tasks involving human judgment, manual information gathering, or manual processing of desktop documents). The process can be highly structured (the business rules and sequences that the tasks follow are predetermined and pre-scripted), semi-structured (parts of the rules are pre-scripted, and parts may be modified or scripted on the fly; often referred to as an ad hoc process), or unstructured (there are no repeatable patterns of rules or sequences among the tasks, and participants often need to meet at the same time to perform work). The design center for process management systems lies in the upper right-hand shaded corner, i.e., they are intended for managing processes whose tasks are more application-centric and more structured, although the business processes they support often contain both manual and automated tasks, and they often accommodate a certain degree of ad hoc scripting. In contrast, the lower left corner has been better supported by capabilities such as on-line shared space systems (e.g., Lotus Notes), or on-line meeting systems. One of the challenges in process management has been to provide end-to-end support for business processes that span both design centers: for example, a procurement process spans strategic sourcing, which tends to be less structured and involves human manipulation of documents, and order processing and payment processing, which tend to be more structured and involve mostly automated steps.

Figure 2: Framework for systems supporting business process management

2.2 Historical Perspective

Process management systems are often traced back to early office automation systems, document management systems, and workflow systems. [HK96] characterized the historical progression of the movement towards open process management as homegrown workflow (up to the 1980's), where the systems were monolithic in nature with all information and control flow hard-coded in the applications; object-routing workflow (late 1980's and early 1990's), where flexible scripting of workflow was offered in specific application packages such as document management or office automation systems; and open architected process managers (starting in the early to mid 1990's), where generic and open process management engines provide an infrastructure for an enterprise to integrate applications, data, and procedures from disparate systems and organizations. Open architected business process systems combine predefined work flow with ad hoc changes, use database and repository technologies for information sharing and persistence, use middleware technology for notification, distribution, and application invocation, and take advantage of object-oriented technologies to provide customization. They focus on means for optimizing resources, enforcing policies, and providing monitoring and audit trail services.

The movement towards an architected process management infrastructure that started in the early 1990's has resulted in a number of product offerings (e.g., Action Technology's Action Workflow [MWF93, Dunham91], Xsoft's InConcert (later acquired by Tibco) [MS93], DEC's ObjectFlow [HK96], IBM's FlowMark [LA93] (and subsequently, MQSeries Workflow), and HP's Changengine (now called Process Manager) [CS00, HP]). The Workflow Management Coalition [WfMC], founded in 1993, was the first industrial consortium aimed at promoting frameworks and interoperability for open architected process management. It published a reference model and a set of associated standards.

This generation of product offerings coincided with and amplified the wave of Business Process Reengineering (BPR) practice [HC93] that swept corporations in the early to mid-1990's. While many of the products have met commercial success, they, like BPR, have predominantly focused on intra-enterprise business process improvement and automation. The industry has not been as successful in driving pervasive standards in process-flow infrastructure as it has been in some other areas, such as message queues and object transaction management. The shift towards inter-enterprise business processes in the late 1990's has inevitably created new perspectives and challenges, as well as research and commercial opportunities.

2.3 Inter-enterprise (B2B) Collaborative Business Process Management

The potential business value of streamlining inter-enterprise business processes has fueled a renewed interest in process management technologies. However, the conventional intra-enterprise process management architecture faces a number of challenges. First of all, there must exist a mechanism to allow participating enterprises to reach agreement on the business process description and the data to be exchanged during process execution. To allow scalable B2B interoperation and alleviate the burden of pair-wise negotiation of integration points, a library of common, standard processes must exist, so that by binding to a common, standard process, an enterprise achieves the ability to collaborate with a large number of partners' processes. Second, the process management function needs to be carried out as a collaboration among multiple distributed process managers. In essence, in crossing enterprise boundaries, the technologies traditionally suited for central coordination and integration need to be fundamentally reworked. The CrossFlow project is one research effort aimed at collaborative business process management [CrossFlow].

In [CH01], a collaborative process framework is proposed to extend the centralized process management technology (Figure 3). A collaborative process involves multiple parties, each playing a role in the process. Two aspects distinguish the collaborative process model from the conventional centralized process model:

1. The process definition is based on a commonly agreed business interaction protocol, such as the protocol for on-line purchase or auction.

2. The process execution is not performed by a centralized workflow engine, but by multiple engines collaboratively.

Common Definition of Collaborative Processes

The starting point of the collaborative process framework is the common definitions of collaborative business processes. There are at least three enablers for effective use of common business process definitions:

- there must exist a common business process meta-model and its associated schema language, so that business processes can be codified in a standard way;

- there must also exist a mechanism for common process descriptions, including business documents associated with these processes, to be easily re-used as building blocks for more complex processes, or customized for the special needs of vertical industries or geographical segments;

- finally, there must exist a mechanism for enterprises to publish their ability to participate in specific roles of common business processes as process-flow-enabled web services, so that potential partners can automatically discover each other and engage in the process execution.

Several vendor- or consortium-based web service frameworks are emerging (e.g., World Wide Web Consortium [W3C], ebXML [ebXML], RosettaNet [RN], Open Buying Internet [OBI], Microsoft .Net/BizTalk [.Net], UDDI [UDDI], HP E-Speak [E-Speak]). RosettaNet's PIP (Partner Interface Process) is one of the earlier frameworks that made a significant contribution to the notion of business process-based e-commerce. The ebXML consortium, which is particularly interesting because it attempts to leverage the pre-existing industry experience in Electronic Data Interchange (EDI) in designing a new XML-based B2B framework, has several efforts that attempt to address the three issues listed above.1 In particular, it is working on the ebXML Business Process Specification Schema, as a proposal for a standard business process meta-model. This model attempts to unify document flow, sometimes also referred to as document choreography, with process collaboration, which is composed of document flows. Its proposal for XML Core Components, which addresses the issue of reusability of XML business documents, can potentially be extended to address the issue of reusability of business process schemas. Finally, its Registry & Repository proposal specifies how to access a registry of information covering, for example, Business Partners, Business Services, and XML Schema definitions.

Collaborative Execution

In the collaborative process framework, the common business process definitions drive the definition and development of a process-compliant service at an enterprise. This is illustrated in the two Role Spec boxes in Figure 3. An enterprise determines the role(s) it wishes to play in a process, and develops the role process specifications and the corresponding internal execution control, including invocation or dispatching of local services (e.g. wrapped legacy applications). Once the process-compliant role specification is developed, it can be published as a web service. Note that there is no requirement that local services be published as web services, i.e., such services may not be directly accessible from trading partners. Local services are accessed indirectly through the local collaborative process manager.

As shown in Figure 3, each execution of a collaborative process, or a logical process instance, consists of a set of peer process instances run by the Collaborative Process Managers (CPMs) of participating parties. These peer instances conform in behavior to the specification of the role set forth in the common process definition, but may have private process data and sub-processes. The CPM of each party is used to schedule, dispatch and control the tasks that that party is responsible for, and the CPMs interoperate through an inter-CPM protocol to exchange the process data (or documents) and to inform each other on the progress of the process execution.
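To make the notion of peer process instances concrete, the following toy sketch shows two role specifications from a common process definition being executed by two peers that advance only by exchanging documents. The role names, document names, and the round-based exchange loop are all invented for illustration; real inter-CPM protocols must additionally handle reliability, security, and exceptions:

```python
# Each role spec maps a local state to a transition: send a document and
# advance, or wait to receive a document and then advance. Both specs are
# derived from the SAME common process definition, one per role.
BUYER_ROLE  = {"start": ("send", "PurchaseOrder", "await_ack"),
               "await_ack": ("recv", "OrderAck", "done")}
SELLER_ROLE = {"start": ("recv", "PurchaseOrder", "confirm"),
               "confirm": ("send", "OrderAck", "done")}

class PeerInstance:
    """One peer instance of the logical process, run by one party's CPM."""
    def __init__(self, role_spec):
        self.spec, self.state = role_spec, "start"

    def step(self, inbox):
        """Advance one transition; return any document to send to the peer."""
        if self.state not in self.spec:
            return None
        kind, doc, nxt = self.spec[self.state]
        if kind == "send":
            self.state = nxt
            return doc
        if kind == "recv" and doc in inbox:   # block until the document arrives
            inbox.remove(doc)
            self.state = nxt
        return None

def run_collaboration(a, b, max_rounds=10):
    """Alternate the two peers, ferrying documents between their inboxes."""
    inbox_a, inbox_b = [], []
    for _ in range(max_rounds):
        doc = a.step(inbox_a)
        if doc: inbox_b.append(doc)
        doc = b.step(inbox_b)
        if doc: inbox_a.append(doc)
        if a.state == b.state == "done":
            return "completed"
    return "stuck"
```

The point of the sketch is that neither peer sees the other's internal states or sub-processes; conformance to the common role specification is the only coupling between them.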

The framework requires that local CPMs interoperate with one another; however, there is no requirement that each local CPM be identical. To the extent that local CPMs are capable of enforcing the common business process specifications, they can differ significantly in functionality, such as support for internal data flow, local service integration, and nested sub-flows. Therefore it is possible that, for example, one CPM is based on Microsoft's BizTalk Server [BizTalk], another is based on IBM's MQSeries [IBM] and WSFL (Web Service Flow Language) server [WSFL], and yet others can be based on NetFish's Process Manager [pHub], the APEx process engine [APEx], or HP Process Manager [HP]. The Business Process Management Initiative [BPMI] intends to complement the public process interface standards, such as what ebXML's BPSS and RosettaNet's PIP strive to be, by providing a standard way to describe their private implementations.

1 ebXML is an 18-month effort jointly sponsored by the United Nations body for Trade Facilitation and Electronic Business (UN/CEFACT) and the Organization for the Advancement of Structured Information Standards (OASIS) to produce a global standard for e-commerce.

Many challenging research issues remain in inter-enterprise collaborative business processes. They include: the ability to express and support transactional semantics, exception handling, and quality of service in the common process description; methods and tools conducive to achieving a library of reusable and composable common business documents and processes; efficient registration and searching/matching mechanisms; efficient and reliable inter-CPM execution protocols; and end-to-end process monitoring and analysis in a distributed environment. We also need to examine the requirements of the many inter-enterprise business processes that contain a significant amount of human interaction: how traditional collaboration tools that are built around intra-enterprise collaboration (i.e., within a firewall) can be extended to an inter-enterprise environment, and how they can interoperate with inter-enterprise collaborative process management. While the standards organizations and software vendors have made a great deal of progress, most of these issues are still not well understood, and will require joint efforts on the part of the research communities, standards organizations, software vendors, and leading-edge enterprise participants.

3. Business Process Implementation Based on Publish/Subscribe and Messaging

Traditionally, process coordination was achieved by using reliable queues [BHM90, MQSeries, MSMQ, BEA]. Message queuing technology enables applications running at different times to communicate across heterogeneous networks and systems that may be temporarily offline. Applications send messages to queues and read messages from queues. Queuing products usually provide guaranteed message delivery, routing, security, and priority-based messaging. They can be used to implement solutions for both asynchronous and synchronous scenarios requiring high performance. Queues were usually implemented by using files. To provide reliability and transactional semantics, queuing products had to implement many of the capabilities that databases traditionally implemented. Consequently, several vendors recognized the advantages to be gained by integrating databases and queues [Oracle, Sybase].
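The advantage of integrating queues with the database can be illustrated with a generic sketch (this is not any vendor's actual API; the table and account names are invented): when the queue lives in an ordinary database table, consuming a message and applying its business effect commit in a single local transaction, so no cross-resource-manager coordination is needed.

```python
# Illustrative sketch: a reliable message queue kept in a database table,
# so dequeue + business update are atomic in ONE local transaction.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE queue (id INTEGER PRIMARY KEY, body TEXT)")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('acme', 100)")
conn.commit()

def enqueue(body):
    conn.execute("INSERT INTO queue (body) VALUES (?)", (body,))
    conn.commit()

def consume_one():
    """Dequeue the oldest message and apply its effect in one transaction.
    On any failure, rollback restores both the queue and the data."""
    try:
        row = conn.execute("SELECT id, body FROM queue ORDER BY id LIMIT 1").fetchone()
        if row is None:
            return None
        msg_id, body = row
        conn.execute("DELETE FROM queue WHERE id = ?", (msg_id,))
        # Business update in the SAME transaction as the dequeue:
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = 'acme'",
                     (int(body),))
        conn.commit()
        return body
    except Exception:
        conn.rollback()
        raise
```

If the queue were a separate resource manager, the DELETE and the UPDATE would instead have to be coordinated by a two-phase commit, which is exactly the cost discussed later in this section.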

In the last decade, Publish/Subscribe (or Pub/Sub) became the preferred building block for complex process implementation and coordination. Publish/Subscribe is a simple, yet powerful, paradigm for implementing dynamic business processes over intranet and extranet networks. Through the use of information channels, Pub/Sub provides an easy facility for instantly disseminating business information and events to multiple destinations. Usually, events are classified through a set of subjects (also known as groups, channels, or topics). Publishers publish events, by tagging each event with an appropriate subject, over real time channels that are subscribed to by consumers. Pub/Sub supports coordination by facilitating asynchronous one to many dissemination of events.

An alternative to subject-based systems, known as content-based systems, allows information consumers to request events based on the content of published events. An event in this case has a schema and can be modelled as a tuple containing attributes; e.g., (company, price, volume). A subscription is then a predicate over these attributes; e.g., (company = “EMC”, price >100 and volume > 1000). This model is considerably more flexible than the subject-based model since it does not require predefinition of channels. Providing content-based filtering of messages is very powerful but more complex to implement.
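Continuing the (company, price, volume) example, a minimal sketch of content-based matching (the broker and subscription representation here are invented for illustration) treats each subscription as a set of per-attribute predicates that the broker evaluates against every published event:

```python
# A subscription is a mapping from attribute name to a predicate over its
# value; an event matches if every predicate holds.

def matches(subscription, event):
    """Return True if every constraint in the subscription holds for the event."""
    return all(pred(event.get(attr)) for attr, pred in subscription.items())

# Subscription corresponding to (company = "EMC", price > 100, volume > 1000).
sub = {
    "company": lambda v: v == "EMC",
    "price":   lambda v: v is not None and v > 100,
    "volume":  lambda v: v is not None and v > 1000,
}

def deliver(event, subscriptions):
    # The broker filters each published event against all subscriptions;
    # efficient implementations index the predicates instead of scanning.
    return [s for s in subscriptions if matches(s, event)]
```

The comment in `deliver` hints at why content-based filtering is "more complex to implement": with many subscribers, a naive per-event scan of all predicates does not scale.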

The pub/sub paradigm has proven very useful in implementing business processes in a dynamically evolving system where the publishers and subscribers had to be anonymous, and hence a business step need not be aware of its preceding or following steps. In the past decade, very powerful and flexible business process automation systems were built on top of this basic paradigm. TIBCO (through its Information Bus) [OPS93, Chan98] was the pioneer in the use of Pub/Sub to build loosely-coupled distributed applications, initially for the financial market, with other commercial companies following suit [Vitria, SQL/MX, MQSeries, Oracle, Sybase, Informix, NSSQL]. Pub/Sub is used by many of the world's largest financial institutions, deployed on the top semiconductor manufacturers' factory floors, utilized in the implementation of large-scale tracking and routing systems like FedEx, Internet services like Yahoo, Intuit, and ETrade, and chosen by many of the world's leading corporations as the enterprise infrastructure for integrating disparate applications. Implementations of Pub/Sub systems vary in their ability to guarantee reliable delivery of notifications, provide "at most once" semantics, support subject- or content-based routing, achieve scalability and availability, and in the extent of their transactional support.

Many processes require transactionally guaranteed delivery for those applications that must update databases, consume messages on one set of subjects, and publish messages on another set of subjects, all within properly bracketed atomic transactions. Transactions that have to access queuing and/or publish/subscribe resource managers and a SQL database system, are forced to pay a high performance penalty. All resource managers have to participate in an expensive two-phase commit protocol. Furthermore, their lack of integration does not allow the SQL compiler to optimize access to both notifications and SQL data. Consequently, vendors like Tibco as well as some SQL database vendors (e.g. Oracle, Sybase, Informix) have integrated transactional queuing and publish/subscribe services with database products.

While these implementations remove the need for the two-phase commit protocol, their implementations use special purpose objects for queues and publication channels. This prevents queues and publication channels from being accessed as part of SQL select and update statements. This also prevents the SQL compiler from optimizing access to notifications and SQL data.

The transactional queuing and publish/subscribe extensions added to NonStop SQL/MX [SQL/MX, KV99] are tightly integrated into the database infrastructure. The pub/sub extensions do not introduce any special objects. Applications access regular SQL tables and use SQL select statements to subscribe and/or dequeue notifications. Applications use SQL insert and update statements to publish events. The extensions remove the need for a two-phase commit protocol and allow the SQL compiler to optimize access to both notifications and normal SQL data. The implementation can accomplish powerful filtering based on the message content, possibly joined with other database tables, as well as fully leverage the fault-tolerance (i.e. process pairs) and scalability features (i.e. horizontal partitioning) of the database engine. The queuing and publish/subscribe services are made available through embedded SQL and ODBC.

4. Advanced Transaction Models for Business Processes

An important contribution that the database community has made to business process management is to marry transactional concepts with workflows. Quite early on, it was recognized that business processes contained activities that accessed shared, persistent data resources, and so had to be subject to transactional semantics. In particular, the properties of atomicity and isolation could be usefully applied to business processes (or at least to parts of a business process). For example, one might want to declare that the payment of an invoice and crediting the payee’s account were part of an atomic unit of work.

However, it was also recognized that it would be overkill to treat an entire business process as a single ACID transaction. First, since business processes typically are of long duration, treating an entire process as a transaction would require locking resources for long periods of time. Second, since business processes typically involve many independent database and application systems, enforcing ACID properties across the entire process would require expensive coordination among these systems. Third, since business processes almost always have external effects, guaranteeing atomicity using conventional transactional rollback mechanisms is infeasible, and may not even be semantically desirable. It became apparent that the existing database transaction models would have to be extended.

Several models for long-running transactions have been developed to allow the definition of ACID-like properties at the business process level and to handle task failures (see [Elmagarmid92], [JK97] for papers describing several of these models).

Perhaps the earliest of the long-running transaction models was the Saga model [GS87]. A saga was a chain of transactions that was itself atomic. Each transaction in the chain was assumed to have a semantic inverse, or compensation, transaction associated with it. If one of the transactions in the saga failed, the transactions were rolled back in the reverse order of their execution; committed transactions were rolled back by executing their corresponding compensation transactions.
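The saga execution rule can be sketched as follows (a toy rendering of the model with invented step names; in a real saga each step runs as a database transaction):

```python
# Each saga step is an (action, compensation) pair. If an action fails,
# the compensations of the already committed steps are run in reverse
# order of execution, semantically undoing the saga.

def run_saga(steps):
    """steps: list of (action, compensation) callables.
    Returns ('ok', log) on success, or ('compensated', log) after
    rolling back the committed prefix."""
    log, done = [], []
    for action, compensation in steps:
        try:
            log.append(action())          # commit this step
            done.append(compensation)     # remember how to undo it
        except Exception:
            for comp in reversed(done):   # undo in reverse execution order
                log.append(comp())
            return ("compensated", log)
    return ("ok", log)
```

For example, a saga that reserves stock and charges a card, then fails at shipping, would run the refund and un-reserve compensations in that order, restoring a semantically consistent state without holding locks across the whole chain.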

In the Activity-Transaction Model (ATM), we allowed long-running transactions to be both nested and chained [DHL91]. Nesting allowed concurrency within a transaction (so, for example, tasks that were triggered by some event occurring in a parent transaction could execute in concurrent subtransactions) and also provided fine-grained, hierarchical control for failure and exception handling. The original nested transaction model of [Moss85], which supported only closed subtransactions, was extended to include also open subtransactions [WS92]. A closed subtransaction commits its results tentatively to its parent; these partial results are externalized only after the top (root) transaction commits, thus ensuring atomicity and isolation of the whole transaction. In contrast, open subtransactions sacrifice isolation by directly externalizing their results. In ATM, failure handling was hierarchical. When a subtransaction failed, its parent was notified, and the parent could decide to execute an exception handler and retry the failed child, execute an alternate (contingency) task, or propagate the failure up the hierarchy. Propagating failures required compensating already committed subtransactions. Some tasks could be defined as vital, which meant that their failure caused the failure of the transaction hierarchy.
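The hierarchical failure-handling discipline might be sketched as follows. This is our simplified, sequential sketch (ATM allowed concurrent subtransactions, omitted here); the class and attribute names are ours, not ATM's. A failed leaf first tries its contingency; an unhandled failure of a vital child aborts the parent, which compensates its already-committed children in reverse order and propagates the failure upward:

```python
class Activity:
    """An activity is either a leaf task or a parent with children."""
    def __init__(self, name, action=None, vital=True,
                 contingency=None, compensation=None, children=()):
        self.name = name
        self.action = action
        self.vital = vital
        self.contingency = contingency    # alternate task tried on failure
        self.compensation = compensation  # semantic undo of committed work
        self.children = list(children)

    def run(self, trace):
        if self.action is not None:                  # leaf task
            try:
                self.action()
                trace.append(f"ok:{self.name}")
                return True
            except Exception:
                trace.append(f"fail:{self.name}")
                if self.contingency is not None:     # try the alternate task
                    self.contingency()
                    trace.append(f"contingency:{self.name}")
                    return True
                return not self.vital                # absorb non-vital failure
        committed = []
        for child in self.children:                  # parent: run children
            if child.run(trace):
                committed.append(child)
            else:                                    # propagate the failure up:
                for c in reversed(committed):        # compensate committed work
                    if c.compensation is not None:
                        c.compensation()
                        trace.append(f"undo:{c.name}")
                trace.append(f"abort:{self.name}")
                return False
        return True

def boom():
    raise RuntimeError("failure")

trace = []
root = Activity("order", children=[
    Activity("book", lambda: None, compensation=lambda: None),
    Activity("bill", boom, vital=True),   # vital task with no contingency
])
result = root.run(trace)
print(result, trace)
# → False ['ok:book', 'fail:bill', 'undo:book', 'abort:order']
```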

Subsequent work extended the ATM model to allow subtransactions whose commit scopes were in between the two extremes of closed and open nested transactions (essentially, a subtransaction could commit to any ancestor) [CD96, CD97]. Failure handling was hierarchical: the highest ancestor that needed to be aborted was identified, and then the subtree rooted at this ancestor was compensated or aborted. The model allowed compensation and contingencies to be associated with different levels of the hierarchy. Thus, sometimes it might be preferable to compensate an entire subtree instead of compensating every subtransaction in it. A further extension of this model applied to the case of propagating failures from one transaction hierarchy to another (a situation that might occur if data dependencies are established between two transactions in the same business process or between two business processes) [CD97, CD00].

Ideas similar to those in ATM also exist in other transactional workflow models. For example, ConTracts provide an execution and failure model for long-lived transactions and for workflow applications [Reuter92, RSS97]. A ConTract is a long-lived transaction composed of steps, whose order of execution is specified by a script. Isolation between steps is relaxed, so that the results of completed steps are visible to other steps; to guarantee semantic atomicity, each step is associated with a compensating step (or sub-script, if the compensation is a complex process) that semantically undoes the effect of the step. ConTracts provide both forward and backward recovery to manage failures. Backward recovery is achieved by compensating completed steps, typically in the reverse order of their execution. Compensation may be partial, meaning that it is performed up to a point in the contract from where forward execution can be resumed, possibly along a path that is different from the faulty one.

The basic ideas of transactional bracketing of parts of a business process, attaching compensation and contingency activities to the activities of the process, declaring some of the activities to be vital (or critical), and defining points in the process up to which rollback occurs on failure, followed by forward execution (roll forward) permeate many of the transactional models that were subsequently invented (e.g., WAMO [EL95, EL98], WIDE [GVBA97], CREW [KR98]). These models typically differ in how much flexibility the process designer has in specifying the backward compensation and forward execution process.

Commercial Products and Standards

A few commercial process management systems offer process meta-models that allow the definition of transactional properties (e.g., InConcert [MS93]). The transaction models are very similar to the ones described in the previous section.

The Exotica project [AKAE94, AAEK96] described methods and tools to implement advanced transaction models on top of Flowmark (predecessor of IBM MQ Series [IBM]). The basic idea was to provide the user with an extended workflow model that integrates advanced transaction concepts. The user could define a compensating task for each task of the workflow. A preprocessor would then translate these specifications into plain FDL (Flowmark Definition Language) by properly inserting additional “compensating” paths after each task or group of tasks, which are conditionally executed upon a task failure. In particular, it was shown how sagas and flexible transactions could be implemented in Flowmark.

In [KS00] a transactional model for HP Changengine (now called Process Manager) is presented. The model allows the definition of Virtual Transaction (VT) regions on top of a workflow graph. If a failure occurs during the execution of a task enclosed in a VT region, then all tasks in the region are compensated in the reverse order of their forward execution, until a compensation end point is reached. Then, the system can retry the execution (up to a maximum number of times), follow an alternate path, or terminate the entire process execution. The virtual transaction model also allows for different isolation levels for VT regions: serializable (needs shared locks for reads and exclusive locks for writes), read committed (like serializable, but releases shared locks after reading), read uncommitted (no locks needed for reads), and virtual isolation (get read locks and release after reading, and get write locks only at the end of the transaction to perform all the updates in one shot).
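The compensate-then-retry policy of a VT region might be sketched like this. This is our own simplification (it ignores the isolation levels and alternate paths; the function name and trace strings are invented for the example):

```python
def run_vt_region(tasks, compensations, max_retries=2, trace=None):
    """Run the tasks of a VT region in order. On a task failure,
    compensate the completed tasks in reverse order back to the start
    of the region (the compensation end point here), then retry the
    whole region, up to max_retries more times."""
    trace = trace if trace is not None else []
    for attempt in range(max_retries + 1):
        completed = []
        try:
            for i, task in enumerate(tasks):
                task()
                completed.append(i)
            trace.append("region-committed")
            return True, trace
        except Exception:
            for i in reversed(completed):     # undo back to the end point
                compensations[i]()
                trace.append(f"compensated:{i}")
            trace.append(f"retry:{attempt + 1}")
    return False, trace

# A task that fails on its first attempt and succeeds on the second.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise RuntimeError("transient failure")

ok, trace = run_vt_region([lambda: None, flaky],
                          [lambda: None, lambda: None])
print(ok, trace)
# → True ['compensated:0', 'retry:1', 'region-committed']
```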

Microsoft BizTalk Server 2000 includes a set of components, called Orchestration Services, which support the design and execution of business processes [Roxburgh01]. BizTalk processes, called schedules, are described by a graph whose nodes represent exchanges of BizTalk messages, typically corresponding to invocation of COM+ components. Process designers in BizTalk can associate transactional properties with subgraphs in a schedule. Three types of transactions are provided: short-lived (SLT), long-running (LRT), and timed. BizTalk also includes support for different levels of isolation, analogous to those of HP Changengine.

Standards bodies and industry consortia are also engaged in efforts to define transaction models at the business process level, both for processes within an organization and for inter-organization processes. In particular, OASIS has formed a Business Transaction Technical Committee (BTTC), with the goal of defining a transaction protocol for business processes that span organizational boundaries [BTP]. While the proposals introduced above aim at defining transactional semantics for business processes, BTTC aims at defining transactional semantics for B2B protocols, such as the RosettaNet standards. BTTC assumes that each party involved in a multi-party B2B interaction is responsible for supporting transactional properties for its internal business processes, and instead defines a coordination protocol to ensure that all or none of the involved parties “commit” the effects of the B2B interaction. This problem is similar to that of coordinating distributed transactions in database systems, although carried over to the context of business processes and long-running transactions.

5. Business Process Intelligence

Organizations automate their business processes with the objectives of improving operational efficiency, reducing costs, improving the quality of service offered to their customers, and reducing human error. However, apart from graphical tools for designing business processes, and some simulation tools for detecting performance bottlenecks, business process management systems today provide few, if any, analytic tools to quantify performance against specific business metrics.

There has been some research on defining formal Petri Net-based execution semantics for business processes (see, for example, [van der Alst98]). In these approaches, the process definition graph is translated into some form of Petri Net, enabling formal properties such as termination and freedom from deadlock to be proved. Other approaches use formal logic to specify and reason about workflows and transactions (see, for example, [Bonner99]). However, these approaches are focused on verifying process definitions, not on analyzing or optimizing process executions.

The emerging area of business process intelligence is aimed at applying data warehousing and data mining technologies to analyzing and optimizing business processes.

Business process management systems today collect a lot of data about process executions: They log significant events that occur during process execution, such as the start and completion times of activities, the input and output data of each activity, failure events or other exceptions, and the assignment of resources to each activity. The log data is typically stored in a relational database, which can be queried to produce basic reports such as the number of workflow instances executed in a given time period, the average execution time, resource utilization statistics, etc. Traditional graphical querying and reporting tools are used for this purpose. However, these tools are limited in the types of analysis supported.

A more promising approach is to build a business process data warehouse, which can be loaded with the log data suitably cleaned, transformed, and aggregated, and perhaps integrated with data from other data sources. The data in the warehouse can then be analyzed using multidimensional (OLAP) analysis tools and data mining tools to answer questions that a business manager or process analyst may be interested in:

• analyze the performance and quality of resources (e.g., resources of type A take 50% longer than average to execute activity B when the activity is initiated on Friday afternoons)

• understand and predict exceptions (e.g., what factors are strongly correlated with missed deadlines or other violations of service level agreements)

• discover conditions under which paths or subgraphs of the process are executed, so that the process can be redefined

• optimize process execution time and quality through optimal configuration of the system, assignment of resources, and dynamic adaptation of the process.
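As a toy illustration of the first kind of question, assume a log with a start and end time per activity execution; a rollup by activity and resource type (here in SQLite, standing in for a real warehouse and OLAP tool) yields average durations per cell. The table layout and the data are invented for the example:

```python
import sqlite3

# Invented toy log: (instance, activity, resource_type, start_h, end_h),
# with times in hours; a real warehouse would hold cleaned, aggregated facts.
rows = [
    (1, "approve", "A", 0.0, 2.0),
    (2, "approve", "A", 0.0, 3.0),
    (3, "approve", "C", 0.0, 2.0),
    (4, "archive", "A", 0.0, 1.0),
]
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log(instance, activity, rtype, start_h, end_h)")
conn.executemany("INSERT INTO log VALUES (?, ?, ?, ?, ?)", rows)

# Average duration per (activity, resource type) -- the sort of
# multidimensional rollup an OLAP tool runs against the warehouse.
cube = conn.execute("""
    SELECT activity, rtype, AVG(end_h - start_h) AS avg_hours
    FROM log GROUP BY activity, rtype ORDER BY activity, rtype
""").fetchall()
print(cube)
# → [('approve', 'A', 2.5), ('approve', 'C', 2.0), ('archive', 'A', 1.0)]
```

Adding further dimensions (day of week, resource identity, process outcome) turns the same query pattern into the kind of analysis sketched in the bullets above.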

Exactly how to answer these questions is a ripe area for research. Two accompanying papers in this conference describe challenges in designing a process data warehouse to support these kinds of analysis [BCDS01], and the application of data mining techniques for understanding, predicting, and preventing exceptions [GCDS01].

Another interesting area of research is business process discovery, where the goal is to learn the structure of a business process from workflow log data [AGL98]. This would be especially useful for semi-structured and unstructured business activities where the process is not defined a priori. Such a “process” is usually implemented via rules triggered by events such as the start and completion of tasks. In some situations, there may be a latent structure that occurs regularly and that can be discerned by analyzing the history of events and rules. This learned structure can then be scripted and implemented efficiently as a business process. For inter-enterprise business processes, in particular, it is very unlikely that a complete business process that spans activities in different enterprises will be easily defined and agreed to by all the participants. In such situations, it may be possible to learn the underlying process by analyzing the sequences of interactions among the participants. Although [AGL98] made a promising start in this direction, process discovery is very much an open research issue.
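At its simplest, discovery starts from the "directly follows" relation over logged task sequences, which algorithms such as that of [AGL98] then refine into a workflow graph. A minimal sketch, with invented trace data:

```python
from collections import defaultdict

def direct_follows(traces):
    """Collect, for each task, the set of tasks observed to directly
    follow it in at least one logged execution."""
    follows = defaultdict(set)
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            follows[a].add(b)
    return {a: sorted(bs) for a, bs in follows.items()}

traces = [
    ["receive", "check", "approve", "archive"],
    ["receive", "check", "reject", "archive"],
]
graph = direct_follows(traces)
print(graph)
```

The resulting edge sets (e.g., "check" is followed by either "approve" or "reject") suggest a branch point in the latent process, which a full discovery algorithm would confirm against frequencies and concurrency information.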

6. Summary

Business process integration and automation have become high priorities for enterprises to achieve operational efficiency. With the burgeoning of e-commerce, there is a renewed interest in technologies for coordinating and automating intra- and inter-enterprise business processes. In this paper, we reviewed the progress that has been made in the last decade and the state of the art today, in research, commercial products, and standards. We identified open research issues, especially in collaborative, inter-enterprise process management, transaction models for business processes, and in the emerging area of business process intelligence.

We could not possibly be comprehensive in our coverage of this vast area, so we have chosen to highlight the topics that reflect our own interests and ongoing research activities.

Acknowledgment

Umesh and Mei wish to thank their colleagues at HP, especially Fabio Casati (who provided much of the material on transaction models, and who is leading the work on business process intelligence), Qiming Chen (who did much of the work on extensions to ATM and on collaborative process models), and Ming-Chien Shan (who leads the Business Process Management team at HP Laboratories).

References

[AAEK96] G. Alonso, D. Agrawal, A. El Abbadi, M. Kamath, R. Gunthor, and C. Mohan. Advanced transaction models in workflow contexts. Proceedings of the 12th International Conference on Data Engineering, New Orleans, Louisiana, February 1996.

[AGL98] R. Agrawal, D. Gunopulos, and F. Leymann. Mining Process Models from Workflow Logs. Proc. of the Sixth International Conference on Extending Database Technology (EDBT), Valencia, Spain, 1998.

[AKAE94] G. Alonso, M. Kamath, D. Agrawal, A. El Abbadi, R. Gunthor, and C. Mohan. Failure handling in large scale workflow management systems. Technical Report RJ9913, IBM Almaden Research Center, November 1994.

[APEx] https://www.doczj.com/doc/5117690353.html,/solutions/apex.htm

[BCDS01] A. Bonifati, F. Casati, U. Dayal, and M.-C. Shan. Warehousing Workflow Data: Challenges and Opportunities. Proc. 27th Intl. Conference on Very Large Data Bases, September 2001.

[BizTalk] http://www.microsoft.com/technet/biztalk/btsdocs/

[Bonner99] A. Bonner. Workflow, Transactions, and Datalog. Proc. of the Intl. Conference on Principles of Database Systems, May 1999.

[BPMI] http://www.bpmi.org/index.esp

[BTP] Business Transaction Protocol. Information available from http://www.oasis-open.org/committees/business-transactions/

[CD96] Q. Chen and U. Dayal. A Transactional Nested Process Management System. Proc. 12th Intl. Conf. on Data Engineering, February 1996.

[CD97] Q. Chen and U. Dayal. Failure Handling for Transaction Hierarchies. Proceedings of the 13th Intl. Conference on Data Engineering, IEEE Computer Society, April 1997.

[CD00] Q. Chen and U. Dayal. Multi-Agent Cooperative Transactions for E-Commerce. Proc. 7th Intl. Conference on Cooperative Information Systems, Lecture Notes in Computer Science 1901, Springer Verlag, 2000.

[CH01] Q. Chen and M. Hsu. Inter-Enterprise Collaborative Business Process Management. Proc. International Conference on Data Engineering (ICDE), Germany, 2001.

[Chan] Arvola Chan. Transactional Publish/Subscribe: The Proactive Multicast of Database Changes. Proceedings of the ACM SIGMOD Intl. Conference on Management of Data, May 1998.

[CrossFlow] http://www.crossflow.org

[CS00] F. Casati and M.-C. Shan. Process Automation as the Foundation for E-Business. Proc. 26th Intl. Conference on Very Large Data Bases, September 2000.

[DHL90] U. Dayal, M. Hsu, R. Ladin. Organizing Long-Running Activities with Triggers and Transactions. Proc. ACM SIGMOD Intl. Conference on Management of Data, May 1990.

[DHL91] U. Dayal, M. Hsu, R. Ladin. A Transactional Model for Long-Running Activities. Proc. 17th Intl. Conference on Very Large Data Bases, September 1991.

[Dunham91] Dunham R., "Business Design Technology: Software Development For Customer Satisfaction", Proc. 24th Annual Hawaii International Conference on Systems Sciences, 1991.

[ebXML] http://www.ebxml.org/

[EL95] J. Eder and W. Liebhart. The workflow activity model WAMO. Proceedings of the 3rd International Conference on Cooperative Information Systems (CoopIs), Vienna, Austria, May 1995.

[EL98] J. Eder and W. Liebhart. Contributions to exception handling in workflow management. Proceedings of the Sixth International Conference on Extending Database Technology, Valencia, Spain, March 1998.

[Elmagarmid92] A.K. Elmagarmid (editor). Database Transaction Models for Advanced Applications. Morgan-Kaufmann, 1992.

[E-Speak] http://www.e-speak.hp.com

[GS87] H. Garcia-Molina and K. Salem. Sagas. Proc. ACM SIGMOD Intl. Conference on Management of Data, May 1987.

[Gartner00] R. Casonato. Application Integration: Making E-Business Work. Gartner Report, September 2000.

[GCDS01] D. Grigori, F. Casati, U. Dayal, M.-C. Shan. Improving Business Process Quality through Exception Understanding, Prediction, and Prevention. Proc. 27th Intl. Conference on Very Large Data Bases, September 2001.

[GVBA97] P. Grefen, J. Vonk, E. Boertjes, P. Apers. Two-layer Transaction Management for Workflow Management Applications. Proceedings of the Eighth International Conference on Database and Expert Systems Applications, Toulouse, France, 1997.

[HC93] Hammer M., and Champy J., Reengineering the Corporation, A Manifesto for Business Revolution, HarperBusiness, Harper Collins Publishers, 1993.

[HK96] M. Hsu and K. Kleissner. "ObjectFlow: Towards an Infrastructure for Process Management," Journal of Distributed and Parallel Databases, Vol. 4, pp. 169-194, April 1996.

[HP] Hewlett-Packard. HP Process Manager Product Overview. May 2001. Available from http://www.hp.com/go/e-process

[IBM] MQSeries. https://www.doczj.com/doc/5117690353.html,/

[JB96] S. Jablonski and C. Bussler. Workflow Management: Modeling, Concepts, Architecture, and Implementation. International Thomson Computer Press, 1996.

[JK97] S. Jajodia and L. Kerschberg (editors), Advanced Transaction Models and Architectures. Kluwer Academic Publishers, New York, 1997.

[KR98] M. Kamath and K. Ramamritham. Failure handling and coordinated execution of concurrent workflows. Proceedings of the 14th International Conference on Data Engineering, Orlando, Florida, February 1998.

[KS00] V. Krishnamoorthy and M.-C. Shan. Virtual Transaction Model for Workflow Applications. Proceedings of SAC'00, Como, Italy, March 2000.

[KV98] J. Klein, R. Van der Linde. NonStop SQL/MX Transactional Queuing and Publish/Subscribe Services. Workshop on High Performance Transaction Systems, 1998.

[LA93] Leymann F., Altenhuber W., Managing Business Processes As Information Resource, ITL IRM Conference, October 1993.

[Moss85] E. Moss. Nested Transactions. MIT Press, 1985.

[MS93] D. McCarthy and S. Sarin. Workflow and Transactions in InConcert. Bulletin of the Technical Committee on Data Engineering, IEEE Computer Society, 16(2), 1993.

[MWF93] Medina-Mora R., Wong H.T., Flores P., ActionWorkflow as the Enterprise Integration Technology, IEEE Data Engineering, 16(2), June 1993.

[.Net] http://www.microsoft.com/net/defining.asp

[OBI] http://www.openbuy.org/

[OMG] Object Management Group. CORBAservices: Common Object Services Specification. Technical report, OMG, July 1998.

[OPS93] Brian Oki, Manfred Pfluegl, Alex Siegel, Dale Skeen. The Information Bus – An Architecture for Extensible Distributed Systems. Operating Systems Review, 27(5), Dec. 1993.

[Oracle] Oracle 8i. http://www.oracle.com/

[pHub] https://www.doczj.com/doc/5117690353.html,/products/b2bihome.htm

[Roxburgh01] Ulrich Roxburgh. BizTalk Orchestration: Transactions, Exceptions, and Debugging. Microsoft Corporation, February 2001.

[Reuter92] A. Reuter. ConTracts. In A. Elmagarmid, ed., Transaction Models for Advanced Database Applications. Morgan Kaufmann, 1992.

[RosettaNet] http://www.rosettanet.org/

[RSS97] A. Reuter, K. Schneider, and F. Schwenkreis. ConTracts revisited. In S. Jajodia and L. Kerschberg, editors, Advanced Transaction Models and Architectures. Kluwer Academic Publishers, New York, 1997.

[SGJR97] A. Sheth, D. Georgakopoulos, S.M.M. Joosten, M. Rusinkiewicz, W. Scacchi, J. Wileden, A. Wolf. Report from the NSF Workshop on Workflow and Process Automation in Information Systems. ACM SIGSOFT Software Engineering Notes, 22(1), 1997.

[SQL/MX] SQL/MX User Manual, Compaq Computer Corp.

[UDDI] http://www.uddi.org/

[van der Alst98] W.M.P. van der Aalst. The Application of Petri Nets to Workflow Management. The Journal of Circuits, Systems, and Computers, 8(1), 1998.

[Vitria] http://www.vitria.com

[W3C] http://www.w3.org/

[WfMC] http://www.wfmc.org/

[WS92] G.Weikum and H. Schek. Concepts and Applications of Multi-level Transactions and Open Nested Transactions. In A. Elmagarmid, ed., Transaction Models for Advanced Database Applications. Morgan Kaufmann, 1992.

[WSFL] http://www.ibm.com/software/solutions/webservices/pdf/WSFL.pdf

Figure 3: Collaborative Process Framework: Process Definition and Execution

助焊剂技术标准

助焊剂技术标准 免清洗液态助焊剂——————————————————————————————————————— 1 范围 本标准规定了电子焊接用免清洗液态助焊剂的技术要求、实验方法、检验规则和产品的标志、包装、运输、贮存。 本标准主要适用于印制板组装及电气和电子电路接点锡焊用免清洗液态助焊剂(简称助焊剂)。使用免清洗液态助焊剂时,对具有预涂保护层印制板组件的焊接,建议选用与其配套的预涂覆助焊剂。 2 规范性引用文件 下列文件中的条款通过本标准的引用而成为本标准的条款。凡是注日期的引用文件,其随后所有的修改单(不包括勘误的内容)或修订版均不适用于本标准,然而,鼓励根据本标准达成协议的各方研究是否使用这些文件。凡是不注日期的引用文件,其最新版本适用于本标准。GB 190 危险货物包装标志 GB 2040 纯铜板 GB 3131 锡铅焊料 GB 电工电子产品基本环境试验规程润湿称量法可焊性试验方法 GB 2828 逐批检查计数抽样程序及抽样表(适用于连续批的检查) GB 2829 周期检查计数抽样程序及抽样表(适用于生产过程稳定性的检查) GB 4472 化工产品密度、相对密度测定通则 GB 印制板表面离子污染测试方法 GB 9724 化学试剂PH值测定通则 YB 724 纯铜线 3 要求 外观 助焊剂应是透明、均匀一致的液体,无沉淀或分层,无异物,无强烈的刺激性气味; 在——————————————————————————————————————— 中华人民共和国信息产业部标 200X-XX-XX发布200X-XX-XX实施 SJ/T 11273-2002 ——————————————————————————————————————— 一年有效保存期内,其颜色不应发生变化。 物理稳定性 按试验后,助焊剂应保持透明,无分层或沉淀现象。 密度 按检验后,在23℃时助焊剂的密度应在其标称密度的(100±)%范围内。

中介效应和调节效应分析方法论文献解读

中介效应和调节效应分析方法论文献 1. 温忠麟,张雷,侯杰泰,刘红云.(2004.中介效应检验程序及其应用. 心理学报,36(5,614-620. 2. 温忠麟,侯杰泰,张雷.(2005.调节效应与中介效应的比较和应用. 心理学报,37(2,268-274. 3. 温忠麟,张雷,侯杰泰.(2006.有中介的调节变量和有调节的中介变量. 心理学报,38(3,448-452. 4. 卢谢峰,韩立敏.(2007.中介变量、调节变量与协变量——概念、统计检验及其比较. 心理科学,30(4,934-936. 5. 柳士顺,凌文辁.(2009.多重中介模型及其应用. 心理科学,32(2,433-435. 6. 方杰,张敏强,邱皓政.(2010.基于阶层线性理论的多层级中介效应. 心理科学进展,18(8,1329-1338. 7. 刘红云,张月,骆方,李美娟,李小山.(2011.多水平随机中介效应估计及其比较. 心理学报,43(6,696-709. 8. 方杰,张敏强,李晓鹏.(2011.中介效应的三类区间估计方法. 心理科学进展,19(5,765-774. 9. 方杰,张敏强.(2012.中介效应的点估计和区间估计:乘积分布法、非参数 B ootstrap 和MCMC 法. 心理学报,44(10,1408-1420. 10. 方杰,张敏强.(2013.参数和非参数Bootstrap 方法的简单中介效应分析比较. 心理科学,36(3,722-727. 11. 叶宝娟,温忠麟.(2013.有中介的调节模型检验方法:甄别和整合. 心理学报,45(9,1050-1060.

12. 刘红云,骆方,张玉,张丹慧.(2013.因变量为等级变量的中介效应分析. 心理学报,45(12,1431-1442. 13. 方杰,温忠麟,张敏强,任皓.(2014.基于结构方程模型的多层中介效应分析. 心理科学进展,22(3,530-539. 14. 方杰,温忠麟,张敏强,孙配贞.(2014.基本结构方程模型的多重中介效应分析. 心理科学,37(3,735-741.

中介效应重要理论及操作务实

中介效应重要理论及操作务实一、中介效应概述中介效应是指变量间的影响关系(X→Y)不是直接的因果链关系而是通过一个或一个以上变量(M)的间接影响产生的,此时我们称M为中介变量,而X通过M对Y产生的的间接影响称为中介效应。中介效应是间接效应的一种,模型中在只有一个中介变量的情况下,中介效应等于间接效应;当中介变量不止一个的情况下,中介效应的不等于间接效应,此时间接效应可以是部分中介效应的和或所有中介效应的总和。在心理学研究当中,变量间的关系很少是直接的,更常见的是间接影响,许多心理自变量可能要通过中介变量产生对因变量的影响,而这常常被研究者所忽视。例如,大学生就业压力与择业行为之间的关系往往不是直接的,而更有可能存在如下关系:eq \o\ac(○,1)就业压力→个体压力应对→择业行为反应。此时个体认知评价就成为了这一因果链当中的中介变量。在实际研究当中,中介变量的提出需要理论依据或经验支持,以上述因果链为例,也完全有可能存在另外一些中介因果链如下:eq \o\ac(○,2)就业压力→个体择业期望→择业行为反应;eq \o\ac(○,3)就业压力→个体生涯规划→择业行为反应;因此,研究者可以更具自己的研究需要研究不同的中介关系。当然在复杂中介模型中,中介变量往往不止一个,而且中介变量和调节变量也都有可能同时存在,导致同一个模型中即有中介效应又有调节效应,而此时对模型的检

验也更复杂。以最简单的三变量为例,假设所有的变量都已经中心化,则中介关系可以用回归方程表示如下: Y=cx+e1 1) M=ax+e2 2) Y=c’x+bM+e3 3) 上述3个方程模型图及对应方程如下:二、中介效应检验方法中介效应的检验传统上有三种方法,分别是依次检验法、系数乘积项检验法和差异检验法,下面简要介绍下这三种方法:1.依次检验法(causual steps)。依次检验法分别检验上述1)2)3)三个方程中的回归系数,程序如下: 1.1首先检验方程1)y=cx+ e1,如果c显著(H0:c=0被拒绝),则继续检验方程2),如果c不显著(说明X对Y无影响),则停止中介效应检验; 1.2 在c显著性检验通过后,继续检验方程2)M=ax+e2,如果a显著(H0:a=0被拒绝),则继续检验方程3);如果a不显著,则停止检验;1.3在方程1)和2)都通过显著性检验后,检验方程3)即y=c’x + bM + e3,检验b的显著性,若b显著(H0:b=0被拒绝),则说明中介效应显著。此时检验c’,若c’显著,则说明是不完全中介效应;若不显著,则说明是完全中介效应,x对y的作用完全通过M 来实现。评价:依次检验容易在统计软件中直接实现,但是这种检验对于较弱的中介效应检验效果不理想,如a较小而b较大时,依次检验判定为中介效应不显著,但是此时ab 乘积不等于0,因此依次检验的结果容易犯第二类错误(接受虚无假设即作出中介效应不存在的判断)。2.系数乘积项

焊接质量检验标准

JESMAY 培训资料 焊接质量检验标准焊接在电子产品装配过程中是一项很重要的技术,也是制造电子产品的重要环节之一。它在电子产品实验、调试、生产中应用非常广泛,而且工作量相当大,焊接质量的好坏,将直接影响到产品的质量。电子产品的故障除元器件的原因外,大多数是由于焊接质量不佳而造成的。因此,掌握熟练的焊接操作技能对产品质量是非常有必要的。(一)焊点的质量要求:保证焊点质量最关键的一点,就是必应该包括电气接触良好、机械接触牢固和外表美观三个方面,对焊点的质量要求,须避免虚焊。1.可靠的电气连接锡焊连接不是靠压力而是靠焊接过程形成牢固连接的合金层达到电焊接是电子线路从物理上实现电气连接的主要手段。气连接的目的。如果焊锡仅仅是堆在焊件的表面或只有少部分形成合金层,也许在最初的测试和工作中不易发现焊点存在的问题,这种焊点在短期内也能通过电流,但随着条件的改变和时间的推移,接触层氧化,脱离出现了,电路产生时通时断或者干脆不工作,而这时观察焊点外表,依然连接良好,这是电子仪器使用中最头疼的问题,也是产品制造中必须十分重视的问题。2.足够机械强度为保证被焊件在受振动或冲击时不至脱落、同时也是固定元器件,保证机械连接的手段。焊接不仅起到电气连接的作用,松动,因此,要求焊点有足够的机械强度。一般可采用把被焊元器件的引线端子打弯后再焊接的方法。作为焊锡材料的铅锡2。要想增加强度,就要有足够的,只有普通钢材的合金,本身强度是比较低的,常用铅锡焊料抗拉强度约为3-4.7kg/cm10% 连接面积。如果是虚焊点,焊料仅仅堆在焊盘上,那就更谈不上强度了。3.光洁整齐的外观并且不伤及导线的绝缘层及相邻元件良好桥接等现象,良好的焊点要求焊料用量恰到好处,外表有金属光泽,无拉尖、的外表是焊接质量的反映,注意:表面有金属光泽是焊接温度合适、生成合金层的标志,这不仅仅是外表美观的要求。 主焊体所示,其共同特点是:典型焊点的外观如图1①外形以焊接导线为中心,匀称成裙形拉开。 焊接薄的边缘凹形曲线焊料的连接呈半弓形凹面,焊料与焊件交界处平② 滑,接触角尽可能小。③表面有光泽且平滑。1图④无裂纹、针孔、夹渣。焊点的外观检查除用目测(或借助放大镜、显微镜观测)焊点是否合乎上述标准以外,还包括以下几个方面焊接质量的;导线及元器件绝缘的损伤;布线整形;焊料飞溅。检查时,除检查:漏焊;焊料拉尖;焊料引起导线间短路(即“桥接”)目测外,还要用指触、镊子点拨动、拉线等办法检查有无导线断线、焊盘剥离等缺陷。(二)焊接质量的检验方法:⑴目视检查目视检查就是从外观上检查焊接质量是否合格,也就是从外观上评价焊点有什么缺陷。目视检查的主要内容有: 是否有漏焊,即应该焊接的焊点没有焊上;① ②焊点的光泽好不好; ③焊点的焊料足不足;(a)(b) ④焊点的周围是否有残留的焊剂;正确焊点剖面图2图6-1 JESMAY 培训资料

电子技术基础1.4(半导体器件)

场效应管是利用电场效应来控制电流的一种半导体器件,它的输出电流决定于输入电压的大小,基本上不需要信号源提供电流,所以输入电阻高,且温度稳定性好。 绝缘栅型场效应管 MOS管增强型NMOS管耗尽型NMOS管增强型PMOS管耗尽型PMOS管 1.4 绝缘栅场效应管(IGFET)

1. G 栅极D 漏极 S 源极B 衬极 SiO 2 P 型硅衬底耗尽层 N + N + 栅极和其它电极之间是绝缘的,故称绝缘栅场效应管。 MOS Metal oxide semiconductor 1.4.1 N 沟道增强型绝缘栅场效应管(NMOS)电路符号 D G S

G D S B P N + N + 2. 工作原理 (1) U GS 对导电沟道的控制作用(U DS =0V) 当U GS ≥U GS(th)时,出现N 型导电沟道。 耗尽层 开启电压:U GS(th) U GS N 型沟道 U GS 值越大沟道电阻越小。

G D S B P N + N + (2) U DS 对导电沟道的影响(U GS >U GS(th)) U GS U DD R D U DS 值小,U GD >U GS(th),沟道倾斜不明显,沟道电阻近似不变,I D 随U DS 线性增加。 I D U GD =U GS -U DS 当U DS 值增加使得U GD =U GS(th),沟道出现预夹断。U DS =U GS -U GS(th) 随着U DS 增加,U GD

1 234 U GS V 2 4 6I D /mA 3. 特性曲线 输出特性曲线:I D =f (U DS ) U GS =常数 转移特性曲线:I D =f (U GS ) U DS =常数 U GS =5V 6V 4V 3V 2V U DS =10V 恒流区 U GS(th) U DS /V 5 10 151 234 I D /mA 可变电阻区 截止区 U GD =U GS(th) 2 GS D DO GS(th)1U I I U ?? =- ? ??? I DO 是U GS =2U GS(th)时的I D 值 I DO U GD >U GS(th) U GD

产品质量检验标准

CaiNi accessories factory
第 1 页 共 1 页
采 妮 饰 品 厂
产品品质检验标准
一、目的: 产品及产品用料检验工作,规范检验过程的判定标准和判定等级,使产品出货的品质满足顾 确保本工厂产品和产品用料品质检验工作的有效性。 二、本标准的适用范围: 本标准适用采妮工厂所有生产的产品及产品用料的品质检验。 三、权责: 1、品质部:检验标准及检验样本的制定,产品检验及判定、放行。 2、生产车间、物控部:所有产品及产品用料报检和产品品质异常的处理。 3、总经理:特采出货及特采用料的核准。 四、定义 1、首饰类: A、项链/手链/腰链 F、发夹 G、手表带 B、耳环 H、领夹 C、胸针 D、介子 E、手镯 J、皮带扣
规范工厂
客需求,
I、袖口钮/鞋扣钮
K、其他配件类(服装、皮包、眼镜等等) 2、产品用料: A、铝质料类 F、钛金属 B、铜料类 G、皮革类 C、铅锡合金 H、不锈钢类 D、锌合金 E、铁质料类 J、包装用料类
I、水晶胶类
K、硅料类(玻璃珠、玻璃石、宝石、珍珠、玛瑙) 3、客户品质等级分类及说明: 1)客户品质等级分类: 品质部根据客户订单注明的: “AAA”、“AA”、“A”三个等级分别对客户品质标准进 行分类 。 2)客户品质等级说明: A、 “AAA”: 品质标准要求比较严格,偏高于正常标准和行业标准。 B、 “AA”: 品质标准要求通用国际化标准和行业标准吻合。 C、 “A”: 品质要求为一般市场通用品质标准。
第 1 页 共 1 页

半导体照明技术作业答案

某光源发出波长为460nm 的单色光,辐射功率为100W ,用Y 值表示其光通量,计算其色度坐标X 、Y 、Z 、x 、y 。 解:由教材表1-3查得460nm 单色光的三色视觉值分别为0.2908X =,0.0600Y =, 1.6692Z =,则对100W P =,有 4356831000.2908 1.98610lm 6831000.0600 4.09810lm 683100 1.6692 1.14010lm m m m X K PX Y K PY Z K PZ ==××=×==××=×==××=× 以及 )()0.144 0.030x X X Y Z y Y X Y Z =++==++=

1. GaP绿色LED的发光机理是什么,当氮掺杂浓度增加时,光谱有什么变化,为什么?GaP红色LED的发光机理是什么,发光峰值波长是多少? 答:GaP绿色LED的发光机理是在GaP间接跃迁型半导体中掺入等电子陷阱杂质N,代替P原子的N原子可以俘获电子,又靠该电子的电荷俘获空穴,形成束缚激子,激子复合发光。当氮掺杂浓度增加时,总光通量增加,主波长向长波移动,这是因为此时有大量的氮对形成新的等电子陷阱,氮对束缚激子发光峰增加,且向长波移动。 GaP红色LED的发光机理是在GaP晶体中掺入ZnO对等电子陷阱,其发光峰值波长为700nm的红光。 2. 液相外延生长的原理是什么?一般分为哪两种方法,这两种方法的区别在哪里? 答:液相外延生长过程的基础是在液体溶剂中溶质的溶解度随温度降低而减少,而且冷却与单晶相接触的初始饱和溶液时能够引起外延沉积,在衬底上生长一个薄的外延层。 液相外延生长一般分为降温法和温度梯度法两种。降温法的瞬态生长中,溶液与衬底组成的体系在均处于同一温度,并一同降温(在衬底与溶液接触时的时间和温度上,以及接触后是继续降温还是保持温度上,不同的技术有不同的处理)。而温度梯度法则是当体系达到稳定状态后,整个体系的温度再不改变,而是在溶液表面和溶液-衬底界面间建立稳定的温度梯度和浓度梯度。 3. 为何AlGaInP材料不能使用通常的气相外延和液相外延技术来制造? 答:在尝试用液相外延生长AlGaInP时,由于AlP和InP的热力学稳定性的不同,液相外延的组分控制十分困难。而当使用氢化物或氯化物气相外延时,会形成稳定的AlCl化合物,会在气相外延时阻碍含Al磷化物的成功生长。因此AlGaInP 材料不能使用通常的气相外延和液相外延技术来制造。

中介效应分析方法

中介效应分析方法 1 中介变量和相关概念 在本文中,假设我们感兴趣的是因变量(Y) 和自变量(X) 的关系。虽然它们之间不一定是因果关系,而可能只是相关关系,但按文献上的习惯而使用“X对的影响”、“因果链”的说法。为了简单明确起见,本文在论述中介效应的检验程序时,只考虑一个自变量、一个中介变量的情形。但提出的检验程序也适合有多个自变量、多个中介变量的模型。 中介变量的定义 考虑自变量X 对因变量Y 的影响,如果X通过影响变量M来影响Y,则称M 为中介变量。例如“, 父亲的社会经济地位”影响“儿子的教育程度”,进而影响“儿子的社会经济地位”。又如,“工作环境”(如技术条件) 通过“工作感觉”(如挑战性) 影响“工作满意度”。在这两个例子中,“儿子的教育程度”和“工作感觉”是中介变量。假设所有变量都已经中心化(即均值为零) ,可用下列方程来描述变量之间的关系: Y = cX + e 1 (1) M = aX + e 2 (2) Y = c’X + bM + e 3 (3) 1 Y=cX+e 1 M=aX+e 2 e 3 Y=c’X+bM+e 3 图1 中介变量示意图 假设Y与X的相关显着,意味着回归系数c显着(即H : c = 0 的假设被拒绝) ,在这个前提下考虑中介变量M。如何知道M真正起到了中介变量的作用,

或者说中介效应(mediator effect ) 显着呢目前有三种不同的做法。 传统的做法是依次检验回归系数。如果下面两个条件成立,则中介效应显着: (i) 自变量显着影响因变量;(ii) 在因果链中任一个变量,当控制了它前面的变量(包括自变量) 后,显着影响它的后继变量。这是Baron 和Kenny 定义的(部分) 中介过程。如果进一步要求: (iii) 在控制了中介变量后,自变量对因变量的影响不显着, 变成了Judd和Kenny 定义的完全中介过程。在只有一个中介变量的情形,上述条件相当于(见图1) : (i) 系数c显着(即H : c = 0 的假设被拒绝) ; (ii) 系数a 显着(即H 0: a = 0 被拒绝) ,且系数b显着(即H : b = 0 被拒绝) 。 完全中介过程还要加上: (iii) 系数c’不显着。 第二种做法是检验经过中介变量的路径上的回归系数的乘积ab是否显着,即检验H : ab = 0 ,如果拒绝原假设,中介效应显着 ,这种做法其实是将ab作为中介效应。 第三种做法是检验c’与c的差异是否显着,即检验H : c - c’= 0 ,如果拒绝原假设,中介效应显着。 中介效应与间接效应 依据路径分析中的效应分解的术语 ,中介效应属于间接效应(indirect effect) 。在图1 中, c是X对Y的总效应, ab是经过中介变量M 的间接效应(也就是中介效应) , c’是直接效应。当只有一个自变量、一个中介变量时,效应之间有如下关系 c = c’+ ab (4) 当所有的变量都是标准化变量时,公式(4) 就是相关系数的分解公式。但公式(4) 对一般的回归系数也成立)。由公式(4) 得c-c’=ab,即c-c’等于中介效 应,因而检验H 0 : ab = 0 与H : c-c’= 0 是等价的。但由于各自的检验统计量 不同,检验结果可能不一样。 中介效应都是间接效应,但间接效应不一定是中介效应。实际上,这两个概念是有区别的。首先,当中介变量不止一个时,中介效应要明确是哪个中介变量的中介效应,而间接效应既可以指经过某个特定中介变量的间接效应(即中介效应) ,也可以指部分或所有中介效应的和。其次,在只有一个中介变量的情形,虽然中介效应等于间接效应,但两者还是不等同。中介效应的大前提是自变量与因变量相关显着,否则不会考虑中介变量。但即使自变量与因变量相关系数是零,仍然可能有间接效应。下面的人造例子可以很好地说明这一有趣的现象。设Y是装配线上工人的出错次数, X 是他的智力, M 是他的厌倦程度。又设智力(X) 对厌倦程度(M) 的效应是 ( =a) ,厌倦程度(M) 对出错次数( Y ) 的效应也是 ( = b) ,而

可焊性试验规范标准

. '. 检验规范 INSPECTION INSTRUCTION 第1页 / 共2页 版本 变更内容 日期 编写者 名称 A 新版 可焊性试验规范 设备 EQUIPMENT 熔锡炉,温度计,显微镜 1.0 目的: 阐述可焊性试验的方法及验收标准 2.0 范围: 适用于上海molex 组装产品的针/端子的可焊性试验 3.0 试验设备与材料: 3.1 试验设备 熔锡炉`温度计`显微镜 3.2 试验材料 无水酒精`助焊剂(液体松香)`焊锡(Sn60或Sn63) 4.0 定义: 4.1 沾锡—--焊锡在被测金属表面上形成一层均匀`光滑`完整而附着的锡层状态,具体见图片A. 4.2 缩锡—--上锡时熔化焊锡覆盖了整个被测表面,试样产品离开熔炉后, 在被测表面上形成形状不规则的锡块,基底金属不暴露, 具体见图片B. 4.3不粘锡—试样产品离开熔锡炉后,被测表面仍然暴露,未形成锡层, 具体见图片C. 4.4 针孔----穿透锡层的小孔状缺陷, 具体见图片D 。 图片A (焊接测试合格) 图片B(表面形成不规则的锡块) 编写者: 校对: 批准: 缩锡 表面形成均匀`光滑`完整而附着的锡层状态

. '. 检验规范 INSPECTION INSTRUCTION 第2页 / 共2页 图片C (铜基底未被锡层覆盖) 图片D (表面有小孔缺陷) 5.0 程序: 5.1试样准备 应防止试样产品沾染油迹,不应刻意的对试样进行清洗`擦拭等清洁工作,以免影响试验的客观性. 5.2熔锡 打开熔锡炉,熔化焊锡,并使熔锡温度保持在245?C ±5?C. 5.3除渣 清除熔锡池表面的浮渣或焦化的助焊剂. 5.4上助焊剂 确保试样产品直立浸入助焊剂中5-10sec,再取出使其直立滴流10-20sec,使的被测部位不会存在多余助焊剂.浸入深度须覆盖整个待测部分. 5.5 上锡 确保试样产品直立浸入熔剂池中5±0.5sec ,以25±6mm/sec 的速度取出,浸入深度须覆盖整个待测 部分. 5.6 冷却 上锡完成后,置放自然冷却. 5.7 清洗 将冷却后的试样产品浸入无水酒精中除去助焊剂,清洗完成后,置于无尘纸上吸干溶液. 6.0 验收标准 在30倍的显微镜下观察,针孔`缩锡`不沾锡等缺陷不得集中于一处,且缺陷所占面积不得超过整个测试面积的5%, 不沾锡 针孔

助焊剂的检验方法(依据标准)

助焊剂的檢驗方法(依據標准) 项目 规格 测试标准 助焊剂分类 ORM0 J-STD-004 物理状态(20℃) 液体 目测 颜色 无色 目测 比重(20℃) 0.822±0.010 GB611-1988 酸价(mgKOH/g) 49.00±5.00 J-STD-004 固态含量(w/w%) 7.50±1.00 JIS-Z-3197 卤化物含量 (w/w%) 无 J-STD-004/2.3.35 吸入容许浓度 (ppm) 400 WS/T206-2001 助焊剂检测方法 6.1助焊剂外观的测定 目视检测成品外观应均匀一致,透明,无沉淀、分层现象,无异物。 6.2助焊剂固体含量的测定 6.2.1(重量分析法) A)原理 将已称重的助焊剂样品先后在水浴及烘箱中除去挥发性物质,冷却后再称重。 助焊剂的固体含量由以上所得到的数值计算而得。 B)仪器 A.实验室常规仪器 B.水浴 C.烘箱 D.电子天平:灵敏度为0.0001g C)步骤 A.有机溶剂助焊剂(沸点低于100℃): a.将烧杯放入恒温110℃± 5℃的烘箱中烘干,放入干燥器中,冷却至室温, 称重(精确至0.001g)。重复以上操作直至烧杯恒重(两次称量相差不超过 0.001g)。 b.移取足量的样品1.0±0.1入烧杯,称重(精确至0.001g)。 c.将烧杯放入110 ± 2℃烘箱中烘1小时,取出后在干燥器中冷却至室温称重 (精确至0.001g) 。 B.水溶剂助焊剂: a.将烧杯放入恒温110°± 2℃的烘箱中烘干,放入干燥器中,冷却至室温,称 重(精确至0.001g)。重复以上操作直至烧杯恒重(两次称量相差不超过 0.001g)。 b.移取足量的样品1.0±0.1入烧杯,称重(精确至0.001g)。 c.将烧杯放入110 ±2℃烘箱中烘3小时,取出后在干燥器中冷却至室温称重 (精确至0.001g) 。

LED半导体照明的发展与应用

LED半导体照明的发展与应用 者按:半导体技术在上个世纪下半叶引发的一场微电子革命,催生了微电子工业和高科技IT产业,改变了整个世界的面貌。今天,化合物半导体技术的迅猛发展和不断突破,正孕育着一场新的革命——照明革命。新一代照明光源半导体LED,以传统光源所没有的优点引发了照明产业技术和应用的革命。半导体LED固态光源替代传统照明光源是大势所趋。1、LED半导体照明的机遇 (1)全球性的能源短缺和环境污染在经济高速发展的中国表现得尤为突出,节能和环保是中国实现社会经济可持续发展所急需解决的问题。作为能源消耗大户的照明领域,必须寻找可以替代传统光源的新一代节能环保的绿色光源。 (2)半导体LED是当今世界上最有可能替代传统光源的新一代光源。 其具有如下优点: ①高效低耗,节能环保; ②低压驱动,响应速度快安全性高; ③固体化封装,耐振动,体积小,便于装配组合; ④可见光区内颜色全系列化,色温、色纯、显色性、光指向性良好,便于照明应用组合; ⑤直流驱动,无频闪,用于照明有利于保护人眼视力; ⑥使用寿命长。 (3)现阶段LED的发光效率偏低和光通量成本偏高是制约其大规模进入照明领域的两大瓶颈。目前LED的应用领域主要集中在信号指示、智能显示、汽车灯具、景观照明和特殊照明领域等。但是,化合物半导体技术的迅猛发展和关键技术的即将突破,使今天成为大力发展半导体照明产业的最佳时机。2003年我国人均GDP首次突破1000美元大关,经济实力得到了进一步的增强,市场上已经初步具备了接受较高光通量成本(初始成本)光

Over the next 10–20 years, solid-state lamps using semiconductor LEDs as the light source will gradually replace traditional lamps.
(4) Governments attach great importance to this field and have successively launched semiconductor lighting programs, forming a worldwide drive toward a technological breakthrough:
① USA: the "Next-Generation Lighting Initiative", 2000–2010, with US$500 million of investment; the US semiconductor lighting roadmap is shown in Table 1;
② Japan: the "Light for the 21st Century" plan, spending ¥6 billion to promote semiconductor lighting, with the goal of replacing 50% of traditional lighting with white LEDs by 2006;
③ EU: the "Rainbow Project", launched in July 2000, promoting the application of white-LED lighting with European Community funding;
④ China: on June 17, 2003, the Ministry of Science and Technology led the creation of the cross-department, cross-region, cross-industry "National Semiconductor Lighting Project Coordination Leading Group". The period from the group's founding to the end of 2005 is the urgent start-up phase of the semiconductor lighting project; from the start of the 11th Five-Year Plan in 2006, the state will promote semiconductor lighting as a major project.
(5) China's semiconductor LED industry chain, after years of development, is relatively complete and has a certain foundation for growth. China is also a major producer of lighting fixtures; with good coordination between government and industry, the semiconductor LED lighting industry has great potential.

2. The development history of LEDs (see Figure 1)
2.1 The history of LED technology breakthroughs

General Specification for Flux

General Specification for Flux — issued 2014-08-15, implemented 2014-09-01. Issued by xxx Electronics Branch Plant.

General Specification for Flux — No-Clean Liquid Flux

1 Scope
This standard specifies the technical requirements, test methods, and inspection rules for no-clean liquid flux used in electronic soldering, and the marking, packaging, transport, and storage of the product. It applies mainly to no-clean liquid flux (hereafter "flux") for printed-board assembly and for tin soldering of electrical and electronic circuit joints. When a no-clean liquid flux is used to solder printed-board assemblies carrying a pre-coated protective layer, a matching pre-coat flux is recommended.

2 Normative references
The provisions of the following documents become provisions of this standard through reference. For dated references, subsequent amendments (excluding corrigenda) or revisions do not apply to this standard; however, parties reaching agreement on the basis of this standard are encouraged to investigate whether the latest editions may be used. For undated references, the latest edition applies.
GB 190 Packaging marks for dangerous goods
GB 2040 Pure copper plate
GB 3131 Tin-lead solder
GB 2423.32 Basic environmental testing procedures for electric and electronic products — wetting balance solderability test method
GB 2828 Sampling procedures and tables for lot-by-lot inspection by attributes (for continuous lots)
GB 2829 Sampling procedures and tables for periodic inspection by attributes (for checking the stability of production processes)
GB 4472 General rules for the determination of density and relative density of chemical products
GB 4677.22 Test method for ionic contamination on printed-board surfaces
GB 9724 General rules for pH determination of chemical reagents
YB 724 Pure copper wire

3 Requirements
3.1 Appearance: The flux shall be a transparent, uniform liquid, without precipitate or stratification, without foreign matter, and without a strongly irritating odor; within the one-year shelf life its color shall not change.
3.2 Physical stability: After the test of 5.2, the flux shall remain transparent, with no stratification or precipitation.
3.3 Density: After the test of 5.3, the density of the flux at 23 °C shall be within (100 ± 1.5)% of its nominal density.
3.4 Non-volatile content: After the test of 5.4, the non-volatile content of the flux shall meet the requirements of Table 1.
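Clause 3.3 expresses the density requirement as a percentage band around the nominal value. A small illustrative check, assuming g/mL densities (the function name and units are my own, not part of the standard):

```python
def density_in_spec(measured_g_per_ml, nominal_g_per_ml, tol_pct=1.5):
    """Clause 3.3: density at 23 degC must lie within (100 +/- 1.5)%
    of the nominal density declared for the flux."""
    low = nominal_g_per_ml * (1 - tol_pct / 100.0)
    high = nominal_g_per_ml * (1 + tol_pct / 100.0)
    return low <= measured_g_per_ml <= high

# Example against the 0.822 g/mL nominal from the inspection table above.
print(density_in_spec(0.825, 0.822))
```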


Mediation Effect Analysis Methods

Mediation Effect Analysis Methods

1 Mediator variables and related concepts
In this paper, suppose we are interested in the relationship between a dependent variable (Y) and an independent variable (X). Although the relationship between them is not necessarily causal and may only be correlational, following the convention of the literature we use phrases such as "the effect of X on Y" and "causal chain". For simplicity and clarity, in presenting the testing procedure for mediation effects we consider only the case of one independent variable and one mediator variable; however, the proposed procedure also suits models with multiple independent variables and multiple mediators.

1.1 Definition of a mediator variable
Consider the effect of an independent variable X on a dependent variable Y: if X influences Y by influencing a variable M, then M is called a mediator variable. For example, "father's socioeconomic status" affects "son's educational attainment", which in turn affects "son's socioeconomic status". Similarly, "work environment" (e.g., technical conditions) affects "job satisfaction" through "work perceptions" (e.g., sense of challenge). In these two examples, "son's educational attainment" and "work perceptions" are the mediator variables. Assuming all variables have been centered (i.e., have zero mean), the relationships among the variables can be described by the following equations:

Y = cX + e1  (1)
M = aX + e2  (2)
Y = c'X + bM + e3  (3)

(Figure 1 shows the path diagram: X → M with coefficient a, M → Y with coefficient b, and the direct path X → Y with coefficient c'.)

Figure 1: Schematic of the mediator variable (equations (1)–(3)).

Suppose the correlation between Y and X is significant, meaning the regression coefficient c is significant (i.e., the hypothesis H0: c = 0 is rejected); on this premise we consider the mediator M. How do we know that M truly acts as a mediator — in other words, that the mediation effect is significant? There are currently three different approaches.

The traditional approach is to test the regression coefficients in sequence. The mediation effect is significant if the following two conditions hold: (i) the independent variable significantly affects the dependent variable; and (ii) each variable in the causal chain significantly affects its successor after controlling for the variables before it (including the independent variable). This is the (partial) mediation process defined by Baron and Kenny. If we further require: (iii) after controlling for the mediator, the effect of the independent variable on the dependent variable is not significant, this becomes the complete mediation process defined by Judd and Kenny. With a single mediator, the above conditions amount to (see Figure 1): (i) the coefficient c is significant (H0: c = 0 is rejected); (ii) the coefficient a is significant (H0: a = 0 is rejected) and the coefficient b is significant (H0: b = 0 is rejected). Complete mediation additionally requires: (iii) the coefficient c' is not significant.

The second approach is to test whether the product ab of the regression coefficients along the path through the mediator is significant, i.e., to test H0: ab = 0; if the null hypothesis is rejected, the mediation effect is significant. This approach in effect takes ab as the mediation effect.

The third approach is to test whether the difference between c' and c is significant, i.e., to test H0: c − c' = 0; if the null hypothesis is rejected, the mediation effect is significant.

1.2 Mediation effects and indirect effects
In the terminology of effect decomposition in path analysis, a mediation effect is an indirect effect. In Figure 1, c is the total effect of X on Y, ab is the indirect effect through the mediator M (that is, the mediation effect), and c' is the direct effect. When there is one independent variable and one mediator, the effects are related as follows:

c = c' + ab  (4)

When all variables are standardized, formula (4) is the decomposition formula for correlation coefficients; however, formula (4) also holds for ordinary regression coefficients. From formula (4), c − c' = ab, i.e., c − c' equals the mediation effect, so testing H0: ab = 0 is equivalent to testing H0: c − c' = 0. But because the two tests use different statistics, their results may differ.

All mediation effects are indirect effects, but indirect effects are not necessarily mediation effects. In fact, these two concepts
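The second approach — testing H0: ab = 0 — is commonly carried out with the Sobel test, whose statistic is z = ab / sqrt(b²·sa² + a²·sb²). A minimal sketch with simulated data, using only NumPy; the helper `ols` and the chosen true coefficients (a = 0.5, b = 0.4, c' = 0.3) are my own illustration, not part of the text:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)            # M = aX + e2, true a = 0.5
y = 0.3 * x + 0.4 * m + rng.normal(size=n)  # Y = c'X + bM + e3

def ols(cols, y):
    """Least-squares fit with an intercept; returns coefficients and
    their standard errors (classical OLS covariance)."""
    X = np.column_stack([np.ones(len(y))] + list(cols))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta, np.sqrt(np.diag(cov))

# Equation (2): regress M on X -> a.  Equation (3): regress Y on X, M -> c', b.
b2, se2 = ols([x], m)        # coefficients [intercept, a]
b3, se3 = ols([x, m], y)     # coefficients [intercept, c', b]
a, sa = b2[1], se2[1]
b, sb = b3[2], se3[2]

# Sobel test of H0: ab = 0 (the second approach described above).
z = (a * b) / np.sqrt(b**2 * sa**2 + a**2 * sb**2)
print(f"ab = {a * b:.3f}, Sobel z = {z:.2f}")
```

With a nonzero true indirect effect and n = 500, the z statistic comfortably exceeds conventional critical values, so H0: ab = 0 is rejected.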

Semiconductor Lighting Technology and Applications

Syllabus for "Semiconductor Lighting Technology and Applications" (fall term)
1. Course title: Semiconductor Lighting Technology and Applications
2. Course code:
3. Hours / credits: 32 / 2
4. Prerequisites: calculus, university physics, solid-state physics, semiconductor physics, microelectronic devices and IC design
5. Course objectives: Semiconductor lighting means lighting that uses all-solid-state LED devices as the light source. It is highly efficient, energy-saving, environmentally friendly, long-lived, and easy to maintain; it is one of the most promising high-tech fields worldwide in recent years, and, after the incandescent and fluorescent lamps, another light-source revolution in the history of human lighting. This course emphasizes systematic theory, scientific structure, and practical content. After covering in depth the materials, mechanisms, and fabrication technology of light-emitting diodes, it presents in detail the measurement of device optoelectronic parameters, device reliability analysis, drive and control methods, and the various application techniques of semiconductor lighting, so that on completing the course students have a deep and comprehensive understanding of semiconductor lighting.
6. Intended discipline: Electronic Science and Technology
7. Contents and hours:
Introduction (1 hour): overview of semiconductor lighting; aims and requirements of the course
Chapter 1 Light, vision, and color (2 hours): 1 the nature of light; 2 the generation and propagation of light; 3 the spectral sensitivity of the human eye; 4 photometry and its measurement; 5 the eye as an optical system; 6 characteristics and functions of vision; 7 the properties of color; 8 the CIE colorimetric system; 9 colorimetry and its measurement
Chapter 2 Light sources (1 hour): 1 the sun; 2 the moon and planets; 3 the invention and development of artificial light sources; 4 incandescent lamps; 5 tungsten-halogen lamps; 6 fluorescent lamps; 7 low-pressure sodium lamps

8 high-pressure discharge lamps; 9 electrodeless discharge lamps; 10 light-emitting diodes; 11 the economics of lighting
Chapter 3 Introduction to semiconductor luminescent crystals (2 hours): 1 crystal structure; 2 band structure; 3 electrical properties of semiconductor crystal materials; 4 conditions for semiconductor luminescent materials
Chapter 4 Excitation and emission in semiconductors (1 hour): 1 the PN junction and its characteristics; 2 recombination of injected carriers; 3 competition between radiative and non-radiative recombination; 4 heterostructures and quantum wells
Chapter 5 Semiconductor luminescent material systems (2 hours): 1 GaAs; 2 GaP; 3 GaAsP; 4 AlGaAs; 5 AlGaInP; 6 InGaN
Chapter 6 Development and characteristic parameters of semiconductor lighting sources (1 hour): 1 the development of LEDs; 2 LED material growth methods; 3 high-brightness LED chip structures; 4 characteristic parameters and requirements of LEDs for lighting
Chapter 7 Growth of GaAsP, GaP, and AlGaAs materials (3 hours): 1 hydride vapor-phase epitaxy (HVPE) of GaAsP; 2 thermodynamic analysis of the hydride epitaxy system; 3 principles of liquid-phase epitaxy; 4 liquid-phase epitaxy of GaP; 5 liquid-phase epitaxy of AlGaAs
Chapter 8 AlGaInP light-emitting diodes (2 hours): 1 general MOCVD of AlGaInP; 2 volume production of epitaxial material; 3 current spreading; 4 current-blocking structures; 5 light extraction; 6 chip fabrication technology

PCB.A Soldering Work Standard and General Inspection Standard

PCB.A Soldering Work Standard and General Inspection Standard
[Training material]
Department: Process Engineering
Prepared by: Dai Jianping
※ Operating principle of the soldering iron
※ Classification of soldering irons and their applicable ranges
※ Preparation before using the soldering iron
※ Use and operation of the soldering iron
※ Reference temperatures and times for soldering typical lead-free components
※ Precautions and maintenance requirements for the soldering iron
※ Solder-joint soldering standards and joint judgment
※ Analysis of the causes of defective solder joints
※ Soldering operation sequence
Reviewed by:　Approved by:　Version:

Tip-temperature tester legend: A — sliding cover; B — sliding post; C, D — ends; E — sensor; F — measurement point.

1. Purpose: To ensure operators use electric soldering irons correctly for hand soldering and that the irons are used effectively; this training material is prepared for that purpose.
2. Scope: All personnel of Shengbang Clock & Watch Products Factory who use soldering irons and iron stands.
3. Definition: A temperature-controlled (constant-temperature) iron is a hand-soldering tool whose tip heating temperature can be freely adjusted within a given range and held within a small band — generally adjustable over 100 °C–400 °C with a temperature stability of ±5 °C or better.
4. Operating principle: Through energy conversion, the solder alloy is melted and an appropriate amount is transferred to the pre-solder position on the workpiece, where it solidifies, achieving the intended result.
5. Content:
5.1 Classification of soldering irons and their applicable ranges:
5.1.1 By power, irons are generally classed as 30 W, 40 W, 60 W, and temperature-controlled; by heating method, as internally heated or externally heated. A 30 W iron can hold 250 °C–300 °C; a 40 W iron 280 °C–360 °C; a 60 W iron 350 °C–480 °C. A temperature-controlled iron has good temperature stability; because its temperature is adjustable and stable over 100 °C–400 °C it suits all soldering environments. Internally heated irons heat stably and are not easily damaged, so most irons in use today are internally heated.
5.1.2 Iron tips are generally made of red copper or similar alloys, which conduct heat quickly and wet solder alloys readily. Tip shapes vary with the size and shape of the work: conical, beveled, flat, chisel, and so on. An iron is normally fitted with a tip of matching power. A conical tip suits every soldering environment — especially densely pitched component leads, ordinary electronic components, tightly spaced pads, temperature-sensitive components, and other components and pads easily damaged by heat — and is irreplaceable there, e.g., for soldering packaged IC pins, ordinary capacitors, resistors, diode ribbon leads, and crystals; it is therefore the most widely used in the factory.
5.2 Preparation before using the iron:
5.2.1 A temperature-controlled iron is connected to the soldering station by a power lead, generally a main plug plus an anti-static line: the main plug is the power lead, the other line the anti-static connection. When soldering components with special requirements, anti-static protection must be in place (the iron's ground wire must be grounded). Before use, production supervisors should therefore check the iron's power lead and ground wire for exposed copper or stripped insulation, the plug wiring for looseness, and the tip for looseness or damage; any abnormality should be referred promptly to Process Engineering for repair or replacement.
5.2.2 An iron stand must be drawn together with the iron; the sponge should be wetted, wrung out, and placed in the stand's recess for wiping the tip.
5.2.3 Plug the iron in; in about 2 minutes the tip rises rapidly to the set temperature. Wipe the tip clean on the damp sponge, coat it with a thin layer of solder from the solder wire, and wipe it on the sponge again until bright before use.
5.2.4 When the iron is first used and during use, IPQC must monitor the tip temperature and induced voltage — temperature generally with an iron-tip temperature tester, induced voltage with a digital multimeter — to ensure both stay within process limits; check at least once every 4 hours.
Tip-temperature measurement steps:
(1) Open the battery compartment, check the batteries, and confirm correct polarity;
(2) Fit the red side of the sensor to the red end C and the blue side to the blue end D; slide cover A and fit the other side onto sliding post B;
(3) Switch on and check that the display works. When it shows the room temperature, the instrument is ready: coat the working tip with a thin layer of solder and press it firmly against measurement point F; the display shows the tip temperature within 2–3 s.
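The power-to-temperature ranges in 5.1.1 map naturally onto a lookup table. A small illustrative helper (the table values come from the text; the function itself is my own sketch, not part of the standard):

```python
# Tip-temperature control ranges from 5.1.1, keyed by iron power in watts.
TEMP_RANGE_C = {30: (250, 300), 40: (280, 360), 60: (350, 480)}

def tip_temp_ok(power_w, measured_c):
    """Check an IPQC tip-temperature reading against the control
    range stated for that iron's power class."""
    lo, hi = TEMP_RANGE_C[power_w]
    return lo <= measured_c <= hi

# Example: a 30 W iron measured at 275 degC is within range.
print(tip_temp_ok(30, 275))
```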

Solder Joint Process Standard and Inspection Specification

Solder Joint Process Standard and Inspection Specification — document no. / pages / effective date

Cold joint (OK / NG illustrations)
Characteristics: the joint surface is not smooth; in severe cases wrinkles or cracks form around the lead.
1. The solder surface is rough, dull, and granular.
2. The solder surface is dull and lusterless or coarsely granular; the lead and the copper foil are not fully fused.
Acceptance standard: accept only if this condition is absent; if found, the joint must be reworked (re-soldered).
Impact: the joint has a shorter life; soldering defects tend to appear after a period of use, leading to functional failure.
Causes: 1. the joint was subjected to improper vibration while solidifying (e.g., conveyor-belt vibration); 2. oxidation of the parts to be soldered (lead, pad); 3. insufficient wetting time.
Remedies: 1. eliminate sources of vibration during soldering; 2. check the oxidation of leads and pads and, if severe, remove the oxide beforehand; 3. adjust the soldering speed to lengthen the wetting time.
Prepared / Reviewed / Approved; dates.

Pinhole (OK / NG illustrations)
Characteristics: small pinhole-sized holes appear on the joint surface.
Acceptance standard: accept only if this condition is absent; if found, the joint must be reworked.
Impact: poor appearance and reduced joint strength.
Causes: 1. the PCB contains moisture; 2. the component leads are contaminated (e.g., by silicone oil); 3. air in the through-hole is blocked by the component and cannot escape.
Remedies: 1. bake the PCB at 80–100 °C for 2–3 hours before it passes through the solder wave; 2. strictly require that no one touch the PCB surface by hand at any time, to avoid contamination; 3. change the lead-forming method so that coating does not fall into the hole, or check whether the hole-to-lead fit gives rise to blowholes.
Prepared / Reviewed / Approved; dates.

Short circuit (OK / NG illustrations)
Characteristics: the solder on the pads of two or more adjacent joints on different nets is connected.
1. The solder between two leads is closer than 0.6 mm — a near-short.
2. Two nearby traces are bridged by solder or a bent component lead, causing a short.
Acceptance standard: accept only if this condition is absent; if found, the joint must be reworked.
Impact: severely affects electrical characteristics and causes serious component damage.
Causes: 1. insufficient board preheat temperature; 2. insufficient flux activation; 3. board immersed too deeply in the solder; 4. excessive oxide on the wave surface; 5. components spaced too closely; 6. the board's travel direction does not match the wave direction.
Remedies: 1. raise the preheat temperature; 2. renew the flux; 3. confirm the wave height is half the board thickness; 4. remove oxide from the solder-pot surface; 5. redesign to increase component spacing; 6. confirm the travel direction to avoid rows of parallel leads crossing the wave simultaneously, or redesign so that parallel leads cross the wave in the same direction.
Prepared / Reviewed / Approved; dates.
